Major Data Leaks Caused by AI Apps: A Growing Risk for Users

By Thomas | Published on January 29, 2026


AI-powered apps are becoming more popular, but research from both Cybernews and CovertLabs is shedding light on a troubling trend: poor security in these apps is putting millions of users at risk. From hardcoded secrets in Android apps to exposed chat logs in iOS apps, the lack of proper data protection is alarmingly widespread. The scale of these breaches suggests that AI app developers need to rethink their approach to security.

AI Apps Exposing Sensitive Data

Cybernews analyzed over 1.8 million Android apps and found that AI apps were the worst offenders for poor security. Around 72% of the apps contained hardcoded secrets, including API keys, passwords, and encryption keys embedded directly in the code. This simple mistake makes it easy for attackers to extract those credentials and gain access to sensitive data.
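
To illustrate the pattern the researchers describe, here is a minimal, hypothetical Kotlin sketch of what a hardcoded secret looks like in an Android client; the key and URL are invented for illustration. Because string constants survive compilation, anyone who downloads the APK and runs a decompiler can read them.

```kotlin
// Hypothetical example of the anti-pattern described above: a secret compiled
// straight into the client. Decompilers such as jadx or apktool recover these
// string constants from any published APK in seconds.
object ApiConfig {
    // BAD: this "secret" ships to every device that installs the app.
    const val BACKEND_API_KEY = "sk_live_EXAMPLE_DO_NOT_DO_THIS"
    const val FIREBASE_DB_URL = "https://example-ai-app.firebaseio.com" // invented
}

fun main() {
    // Anything referenced here is equally visible to an attacker inspecting
    // the compiled app, since the constants are embedded in the bytecode.
    println("Key an attacker can read: ${ApiConfig.BACKEND_API_KEY}")
    println("Database an attacker can target: ${ApiConfig.FIREBASE_DB_URL}")
}
```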

The research revealed that these hardcoded secrets often linked directly to Google Cloud services, leading to serious breaches. Over 200 million files were exposed across Firebase and Google Cloud Storage, totaling nearly 730TB of data. Many of these breaches were the result of misconfigured databases, which had no authentication, leaving sensitive information vulnerable to automated exploits. While not all the exposed credentials were immediately dangerous, they significantly increased the risk of further attacks. The fact that many of these issues remained unaddressed points to a lack of oversight and concern from developers.
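
To make the "automated exploits" concrete: a Firebase Realtime Database whose rules allow public reads will answer an unauthenticated HTTPS request to its REST endpoint, which is exactly what mass scanners look for. The sketch below, using an invented project URL, shows how little effort that probe takes; it is an illustration of the misconfiguration, not a description of either research team's tooling.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical probe illustrating the misconfiguration described above: a
// Firebase Realtime Database with open rules serves data to anyone who sends
// a plain GET to its REST endpoint, so a single request reveals whether the
// whole database is world-readable.
fun main() {
    val databaseUrl = "https://example-ai-app-default-rtdb.firebaseio.com" // invented
    val request = HttpRequest.newBuilder()
        .uri(URI.create("$databaseUrl/.json?shallow=true"))
        .GET()
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    when (response.statusCode()) {
        200 -> println("Database is publicly readable (misconfigured rules).")
        401, 403 -> println("Access denied: authentication rules are in place.")
        else -> println("Unexpected status: ${response.statusCode()}")
    }
}
```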

On the iOS side, CovertLabs’ Firehound repository has cataloged 198 apps with similar security issues. The worst example is "Chat & Ask AI by Codeway," which exposed over 406 million files, including the chat histories of 18 million users. These records, totaling 380 million messages, contained highly personal and potentially sensitive data, putting users at serious risk.

Firehound doesn’t make all this data public. Instead, it offers registered users access, with priority given to law enforcement and security experts. The goal is to expose vulnerabilities without causing further harm, and to encourage developers to fix the issues by offering guidance on how to do so. The repository serves as a wake-up call to developers who continue to overlook basic security practices in their rush to release AI-driven apps.

Gemini Reveals User Data

Researchers were recently able to trick Google’s Gemini AI into leaking sensitive user data by exploiting a flaw in its integration with Google Calendar. This vulnerability allowed them to gain unauthorized access to private calendar information, including event names, locations, and other details tied to users' Google accounts. The incident underscores that even advanced AI systems like Gemini can be manipulated if security measures are not properly implemented, putting users’ personal data at risk.

Financial and Personal Risks

The dangers of these leaks go beyond just exposed conversations. In some cases, attackers could exploit API keys linked to payment systems, like Stripe, to manipulate transaction histories, issue refunds, or reroute funds to malicious accounts. Other APIs connected to marketing platforms or analytics services could allow attackers to alter app data, hijack user profiles, and tamper with performance metrics. These breaches are a serious threat to both personal and financial security.
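
As a rough illustration of why a leaked payment key is so dangerous: Stripe secret keys authenticate over plain HTTPS with a Bearer header, so whoever holds the string effectively holds the account's API access. The sketch below (with a fake key) only retrieves the account balance, but the same credential would authorize other operations through other endpoints.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Illustration only: a secret key found in a decompiled app authenticates API
// calls exactly as the legitimate backend would. The key below is fake.
fun main() {
    val leakedKey = "sk_live_EXAMPLE_FAKE_KEY"
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.stripe.com/v1/balance"))
        .header("Authorization", "Bearer $leakedKey")
        .GET()
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    // With a real (leaked) key this returns the account balance; with the fake
    // key above, Stripe responds 401 -- the behavior a developer should verify
    // after rotating an exposed credential.
    println("${response.statusCode()}: ${response.body()}")
}
```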

Developers Must Step Up

The Cybernews and CovertLabs findings both point to the same critical issue: many developers aren't prioritizing security, leaving widespread vulnerabilities in place. These lapses result in massive leaks of sensitive data and expose users to a range of attacks, from identity theft to financial fraud. The responsibility lies not only with developers but also with platform providers like Google and Apple, who should enforce stricter app review processes so that insecure apps never reach their stores in the first place.

The Broader Impact

The implications of these security lapses in AI apps are far-reaching. According to Cybernews’ research, over 200 million files, totaling nearly 730TB of data, were exposed due to misconfigured Firebase and Google Cloud Storage instances. This isn't just about a few users' information being at risk—these breaches have affected millions. The exposed data includes personal conversations, user profiles, and sensitive credentials, which could be exploited by malicious actors for anything from identity theft to financial fraud.

Even tech giants like Google are not immune to these vulnerabilities. Researchers found that Google's own AI system, Gemini, was among the affected services: the Calendar flaw described above allowed them to expose user data. This shows that even well-established companies like Google have security flaws, underscoring the scale and seriousness of the problem across the AI landscape.

A Call for Better Security

As AI apps continue to grow in popularity, the need for stronger security becomes critical, for developers and users alike. Developers must secure their code, configure cloud storage correctly, and conduct regular audits to protect data. Users, for their part, need to be more cautious, treating AI apps with the same scrutiny as banking or social media apps: consider the risks before sharing personal information, download only from trusted developers, and assume that these apps may carry significant security vulnerabilities. Both sides must take responsibility for keeping AI apps secure, minimizing risks and protecting privacy. It is also worth remembering that these companies usually train their models on user-provided data, including any sensitive information users choose to share.
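
For developers, the most reliable fix for the hardcoded-secret problem is architectural: the secret never ships in the app at all, but lives on a backend that the app calls instead. The sketch below, with invented names and endpoints, shows the shape of that pattern on the server side, reading the real key from the environment rather than from source code.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical server-side helper illustrating the recommended pattern: the
// third-party API key is read from the server's environment and is never
// compiled into the mobile client, which only ever talks to this backend.
object PaymentProxy {
    private val apiKey: String =
        System.getenv("PAYMENT_API_KEY")          // set on the server, not in source
            ?: error("PAYMENT_API_KEY is not configured")

    private val client = HttpClient.newHttpClient()

    // The mobile app would call an authenticated endpoint on this backend;
    // only the backend attaches the secret when talking to the provider.
    fun fetchBalance(): String {
        val request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.payment-provider.example/v1/balance")) // invented
            .header("Authorization", "Bearer $apiKey")
            .GET()
            .build()
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body()
    }
}

fun main() {
    println(PaymentProxy.fetchBalance())
}
```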
