A groundbreaking study just exposed a staggering truth: thousands of AI-built apps are sitting wide open on the internet right now, quietly leaking your medical records, company secrets, and financial data. And the people who built them have no idea.
What the RedAccess Study Actually Found
The numbers are hard to ignore. Israeli cybersecurity firm RedAccess scanned the open web and found a crisis hiding in plain sight.
RedAccess discovered roughly 380,000 applications created with vibe coding tools that were publicly accessible on the web, with about 5,000 of them leaking corporate and private data. Around 40 percent of those exposed apps contained genuinely sensitive material: medical records, financial information, corporate presentations and strategy documents, and detailed logs of customer conversations with chatbots.
The way researchers found these apps was almost embarrassingly simple. Lovable, Replit, Base44, and Netlify all let users host their web apps on the platforms' own domains. So the researchers ran straightforward Google and Bing searches for those domains, combined with other search terms, and identified thousands of exposed apps.
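To make that concrete, here is a minimal Python sketch of the kind of search-engine "dorking" the study describes. The hosting domains and keywords below are assumptions for illustration only; RedAccess has not published its exact query list.

```python
# Illustrative sketch only: the domains and keywords are assumptions,
# not the actual queries RedAccess used in its scan.
PLATFORM_DOMAINS = [
    "lovable.app",   # assumed Lovable hosting domain
    "replit.app",    # assumed Replit deployment domain
    "base44.app",    # assumed Base44 hosting domain
    "netlify.app",   # Netlify's default subdomain for hosted sites
]

SENSITIVE_KEYWORDS = ['"patient"', '"invoice"', '"internal use only"']

def build_dorks(domains, keywords):
    """Combine site: operators with keywords into search-engine queries."""
    return [f"site:{domain} {keyword}" for domain in domains for keyword in keywords]

for query in build_dorks(PLATFORM_DOMAINS, SENSITIVE_KEYWORDS):
    print(query)  # each line is a query you could paste into Google or Bing
```

No scanner, no exploit, no credentials: just search queries against domains that the platforms themselves advertise.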
The vulnerabilities were not subtle. The unprotected apps did not require a clever attacker. They required a browser.
The Real Cost of Building Without Knowing What You Are Building
The term "vibe coding" was coined by OpenAI co-founder Andrej Karpathy in February 2025. The idea was simple and genuinely exciting: describe what you want in plain English, and the AI builds it for you. Collins English Dictionary named it Word of the Year for 2025. But the rapid adoption has come at a steep cost.
The core problem is not the tools themselves. It is what happens when people with no security training ship real software to real users.
- Between 40 and 62% of AI-generated code contains security vulnerabilities. AI-written code produces flaws at 2.74 times the rate of human-written code, according to an analysis of 470 GitHub pull requests.
- A first-quarter 2026 assessment of more than 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination.
- GitGuardian’s State of Secrets Sprawl 2026 report documented 28.65 million new hardcoded secrets in public GitHub commits during 2025, a 34% year-over-year increase representing the largest single-year jump ever recorded.
- CVE counts attributed to AI-generated code climbed from 6 in January 2026 to 15 in February and 35 in March 2026.
What makes this particularly dangerous: if you don’t have security expertise on your team, you won’t recognize these patterns in the generated code. The app runs, the feature works, and the vulnerability sits in production until someone finds it.
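To show what that looks like in practice, here is a hypothetical sketch in Python (all names, routes, and values are invented, not drawn from any app in the study) of generated code that works fine in a demo while shipping two classic flaws: a hardcoded secret and an unauthenticated admin route.

```python
# Hypothetical example of code that "works" yet is unsafe. Everything here is invented.
from flask import Flask, jsonify

app = Flask(__name__)

API_KEY = "sk-live-1234567890"  # flaw 1: a secret pasted straight into source, committed with every push

USERS = [{"email": "jane@example.com", "ssn": "***-**-1234"}]  # stand-in for real records

@app.route("/admin/users")  # flaw 2: no authentication or authorization check at all
def list_users():
    return jsonify({"users": USERS})  # anyone who finds the URL can read the data

if __name__ == "__main__":
    app.run()
```

The page loads, the demo works, and nothing in the output hints that the route is wide open.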
The Real-World Breach That Should Have Been a Warning
The Moltbook case from early 2026 is the clearest example of what happens when vibe coding skips security entirely.
Moltbook launched on January 28, 2026 as an AI social network where autonomous agents could interact. Its founder publicly stated he “didn’t write a single line of code,” relying entirely on AI tools to build the platform.
Within three days, security researchers at Wiz discovered that the application had exposed its entire production database, including 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents. The root cause was a misconfigured Supabase deployment.
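For readers unfamiliar with how a misconfigured Supabase project leaks, here is an illustrative sketch with a placeholder project URL, table name, and key rather than Moltbook's real deployment. Supabase's public "anon" key ships inside the client-side app, so if Row Level Security is never enabled on a table, anyone who pulls that key from the page source can read the whole table through the project's REST endpoint.

```python
# Illustrative only: the project URL, table name, and key below are placeholders.
import requests

SUPABASE_URL = "https://example-project.supabase.co"    # hypothetical project
ANON_KEY = "public-anon-key-from-the-frontend-bundle"   # the key every visitor already has

# With Row Level Security off, this request returns every row in the table.
resp = requests.get(
    f"{SUPABASE_URL}/rest/v1/private_messages",  # hypothetical table name
    params={"select": "*"},
    headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
)
print(resp.status_code)
```

The fix is equally unglamorous: turn on Row Level Security for every table and write explicit access policies before the app goes live.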
And Moltbook is not alone. Lovable, a $6.6 billion vibe coding platform, left every user’s source code, database credentials, and AI chat histories accessible for 48 days through a basic API flaw. Lovable hit $4 million in annual recurring revenue in its first four weeks and raised $200 million at a $1.8 billion valuation in July 2025, then $330 million at $6.6 billion in December, more than tripling its valuation in five months. Growth moved at lightning speed. Security did not keep up.
“Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check. People can just start using it in production without asking anyone. And they do.” — Dor Zvi, RedAccess Co-founder
Platforms Push Back, But Experts Say That Misses the Point
The companies named in the RedAccess study were quick to respond, and none of them accepted blame easily.
Replit CEO Amjad Masad stated, “Replit allows users to choose whether apps are public or private. Public apps being accessible on the internet is expected behavior. Privacy settings can be changed at any time with a single click.” Lovable’s spokesperson said, “Lovable gives builders the tools to build securely, but how an app is configured is ultimately the creator’s responsibility.”
Security experts see this response as a structural dodge, not a real answer. Platforms that market to nontechnical builders are shifting security responsibility to users who do not know it exists. That is not a user error. That is a design failure.
RedAccess CEO Dor Zvi put it bluntly: “I don’t think it’s feasible to educate the whole world around security. My mother is vibe coding with Lovable, and no offense, but I don’t think she will think about role-based access.”
Privacy settings on some of the more popular vibe coding tools are automatically set to make apps publicly accessible unless users manually change them to private. Well-meaning and driven employees are inadvertently exposing corporate secrets.
What Needs to Change Before the Next Breach Happens
Enterprise adoption of vibe coding grew 340% year over year. Non-technical user adoption surged 520%. Eighty-seven percent of Fortune 500 companies have adopted at least one vibe coding platform. That scale makes the security gap a business-critical emergency, not a technical footnote.
The head of the UK’s National Cyber Security Centre said during the 2026 RSA Conference that the cybersecurity industry should seize the opportunity to develop vibe coding safeguards that would allow well-trained AI tooling to write software that is secure by design.
Some companies are now reacting. Escape raised $18 million to replace manual penetration testing with AI agents that scan vibe-coded applications, citing over 2,000 high-impact vulnerabilities and hundreds of exposed secrets found in live production systems. Lovable itself partnered with Aikido to bring automated pentesting to its platform.
Experts and researchers largely agree on what the minimum safeguards should look like:
- Private by default: All new apps should require authentication unless a user actively opts out (a minimal sketch of this fail-closed default follows the list)
- Automated security scanning: Every app should be scanned before it goes live, not after
- Clear data-access warnings: If an app touches real user data, a visible alert should appear
- Corporate governance policies: Organizations need clear rules on who can build and deploy AI apps internally
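As a rough illustration of the first item, here is a minimal Python sketch, using Flask and invented route names, of what "private by default" means at the code level: every route demands a valid token unless it has been explicitly opted out, so a forgotten check fails closed rather than open.

```python
# Minimal sketch of a fail-closed default; names and the token check are placeholders.
from flask import Flask, request, abort

app = Flask(__name__)
PUBLIC_ROUTES = {"/health"}  # explicit opt-out list: everything else requires auth

def token_is_valid(token):
    """Placeholder check; a real app would verify a session or JWT here."""
    return token == "expected-demo-token"

@app.before_request
def require_auth_by_default():
    if request.path in PUBLIC_ROUTES:
        return  # this route explicitly opted out of protection
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if not token_is_valid(token):
        abort(401)  # default outcome: deny unauthenticated access

@app.route("/health")
def health():
    return "ok"

@app.route("/reports")
def reports():
    return "only reachable with a valid token"
```

The point is the direction of the default: a builder who forgets to think about access control ends up with a locked app, not an open one.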
One finding that gets less attention than it deserves: security degrades with iteration. A controlled experiment measured a 37.6% increase in critical vulnerabilities after just five rounds of AI-assisted code refinement. Iterating on AI output does not self-correct security flaws. It compounds them.
The vibe coding revolution is not going to slow down. Gartner forecasts that 60% of all new code will be AI-generated by the end of 2026. The window to fix the defaults, build the guardrails, and force this industry to take security seriously is closing fast. Right now, thousands of apps are sitting on the open web with no lock on the door and no one watching the data walk out. The question is not whether the next big breach will happen. The question is how many people it will hurt before the platforms stop pointing fingers and start building walls.