
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Technology

The App That Looked Fine and Wasn't: Vibe Coding's Security Reckoning

A single AI-built app leaked data on 18,000 users, including schoolchildren. The question of who is responsible is proving harder to answer than the bugs themselves.

Image: The Register
Key Points
  • Tech entrepreneur Taimur Khan found 16 vulnerabilities, six critical, in a single app hosted and showcased on vibe-coding platform Lovable.
  • The exposed data covered 18,697 user records, including students from US universities and K-12 schools, with 870 users' full personal information compromised.
  • A core flaw in the AI-generated authentication logic blocked legitimate users while allowing unauthenticated strangers full access to the system.
  • Lovable says users are responsible for implementing pre-publish security recommendations; Khan argues a platform that promotes apps to 100,000 people shares the liability.
  • Veracode research shows 45 percent of AI-generated code contains security flaws, and the rate has not improved as models grow more capable.

The app looked polished. Teachers used it to set exam questions. Students logged in to view their grades. At some point, a researcher began poking at its foundations, and what he found was not a minor configuration slip but a structural failure running through the entire backend.

Taimur Khan, a tech entrepreneur with a software engineering background, published his findings on 27 February. He had found 16 vulnerabilities, six of which he described as critical, in a single app hosted on the vibe-coding platform Lovable; the app had leaked data on more than 18,000 people. Khan declined to name the app during the disclosure process, though it was showcased on Lovable's Discover page, where it had accumulated more than 100,000 views and around 400 upvotes.

The human cost, measured not in statistics but in individual exposure, is considerable. The 18,697 exposed records included students at institutions such as UC Berkeley and UC Davis, alongside records from K-12 schools where minors were likely among the users. Because the app was a platform for creating exam questions and viewing grades, its user base was naturally composed of teachers and students. An unauthenticated attacker could, according to Khan's report, access every user record, send bulk emails through the platform, delete accounts, alter student grades, and extract administrators' email addresses.

Vibe coding platforms promise to democratise app development, but security researchers warn the results are often dangerously insecure by default.

The root cause was a textbook logic inversion in the AI-generated code. The AI that built the Supabase backend implemented access control with flawed logic, blocking authenticated users and allowing access to unauthenticated ones. The intent was to prevent non-admins from accessing restricted parts of the app, but the faulty implementation blocked all logged-in users, an error repeated across multiple critical functions. Khan put it plainly in his report:

"The guard blocks the people it should allow and allows the people it should block. A classic logic inversion that a human security reviewer would catch in seconds — but an AI code generator, optimizing for 'code that works,' produced and deployed to production."
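Khan did not publish the app's actual code, so the following is a hypothetical sketch of the kind of inversion he describes; the function names, the `Session` type, and the roles are illustrative, not taken from the app.

```typescript
// A session is null for anonymous visitors and set for logged-in users.
type Session = { userId: string; role: "admin" | "teacher" | "student" } | null;

// Hypothetical buggy guard: the intent is "reject anyone who is not an
// admin", but the check is inverted, so every authenticated user is
// rejected while anonymous requests (session === null) fall through.
function buggyGuard(session: Session): boolean {
  if (session) {
    return false; // blocks ALL logged-in users, admins included
  }
  return true; // anonymous visitors are allowed in
}

// Corrected guard: allow only authenticated admins.
function fixedGuard(session: Session): boolean {
  return session !== null && session.role === "admin";
}
```

The buggy version still "works" in a casual test, because an anonymous browser session sails straight through, which is exactly the failure mode Khan attributes to a generator optimising for code that runs rather than code that is safe.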

This is not a quirk of one obscure platform. Vibe coding, Collins Dictionary's Word of the Year for 2025, promised to break down software development's steep learning curve and empower anyone to bring their app ideas to life. The appeal is genuine: founders without engineering teams, teachers with limited budgets, small business owners with no developer on staff can now ship functional software without writing a single line of code themselves. The productivity argument is real.

The security argument against it is equally real, and accumulating fast. Veracode's research reveals that 45 percent of AI-generated code contains security flaws, turning what should be a productivity breakthrough into a potential security liability. What is more concerning is that this security performance has remained largely unchanged over time, even as models have dramatically improved at generating syntactically correct code. Newer and larger models do not generate significantly more secure code than their predecessors. The platforms get shinier; the vulnerabilities stay.

Separate research by Escape.tech analysed more than 5,600 publicly available applications built on vibe-coding platforms and identified more than 2,000 vulnerabilities, over 400 exposed secrets, and 175 instances of exposed personal information, including medical records, bank account numbers, phone numbers, and email addresses. In an earlier incident in 2025, researchers found that 170 Lovable-hosted apps shared a common vulnerability allowing unauthenticated attackers to read and write directly to Supabase databases via publicly exposed API keys. The vulnerability affected 303 endpoints across those apps.

At the centre of the current dispute is a question of accountability that the technology industry has not yet resolved. Lovable's CISO Igor Andriushchenko told The Register that every project built with Lovable includes a free security scan before publishing. The scan checks for vulnerabilities and recommends fixes, but implementing those recommendations is ultimately at the user's discretion, and in this case the user did not implement them. Andriushchenko also noted that the app included code not generated by Lovable, and that the vulnerable database was not hosted by Lovable.

Regarding the reportedly closed support ticket, Lovable's CISO said the company received a proper disclosure only on 26 February and acted on the findings within minutes.

Khan's counter-argument is pointed. A platform that actively promotes an app to more than 100,000 visitors on its own discovery page, he contends, cannot simply disclaim responsibility when a researcher reports that the promoted app is leaking user data. The fundamental issue, critics argue, is that apps built on entry-tier plans ship with insecure defaults, and the target audience of non-developers is the group least equipped to identify and fix those issues. Telling non-technical users to review their row-level security policies defeats the purpose of a no-code tool.
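What a row-level security policy actually does can be modelled in a few lines: conceptually, it is a per-row predicate the database evaluates on every query. The sketch below is a simplified TypeScript model, not Supabase's implementation (in Supabase the real mechanism is a Postgres RLS policy); the `Grade` type and policy names are illustrative assumptions.

```typescript
type Grade = { studentId: string; score: number };
type Session = { userId: string } | null;

// Insecure default: no policy means every row is visible to every
// caller, including anonymous ones (session === null). This is the
// situation an exposed API key turns into a full data leak.
const noPolicy = (_row: Grade, _session: Session): boolean => true;

// A sensible policy: a student may read only their own grade rows.
const ownRowsOnly = (row: Grade, session: Session): boolean =>
  session !== null && row.studentId === session.userId;

// Model of a query: the policy predicate filters what comes back.
function query(
  rows: Grade[],
  session: Session,
  policy: (r: Grade, s: Session) => boolean,
): Grade[] {
  return rows.filter((r) => policy(r, session));
}
```

Under `noPolicy`, an anonymous caller receives the whole table; under `ownRowsOnly`, the same caller receives nothing. The critics' point is that writing the second predicate correctly is precisely the skill a no-code platform's target audience does not have.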

The app in question was featured on Lovable's Discover page with more than 100,000 views before its data exposure was reported.

There is a genuine tension here that neither side fully resolves. The progressive case for democratising software creation is not frivolous. Access to development tools has long been skewed toward those with expensive technical training, and platforms like Lovable genuinely lower that barrier for teachers, community organisations, small businesses, and start-ups in places without access to deep engineering talent. Dismissing vibe coding entirely misses the legitimate demand it meets.

The Veracode research and the Lovable incident together suggest the problem is structural rather than one of individual user negligence. As Palo Alto Networks' Unit 42 has noted, AI agents are optimised to provide a working answer, fast. They are not inherently optimised to ask critical security questions. A platform that markets itself as producing production-ready applications and then showcases those applications to a six-figure audience carries responsibilities that extend beyond a pre-publish checkbox.

The pragmatic path forward is probably not a choice between banning vibe coding or ignoring its risks. The more defensible position is that security defaults need to be secure out of the box, not opt-in. Platforms profit from the apps their users build and from the audiences those apps attract. Some portion of that commercial benefit could reasonably fund mandatory security enforcement rather than advisory recommendations. Regulators in Australia and internationally are beginning to take a closer interest in data liability as privacy obligations tighten. The question of who is accountable when an AI writes insecure code and a platform promotes it may not remain a voluntary, industry-led conversation for much longer.

Khan's finding that a student grading app, used by schools whose users likely included minors, was exposing every user record to any anonymous visitor is a concrete illustration of what abstract security debates mean for real people. The app looked fine. It wasn't. That gap between functional appearance and actual safety is the defining challenge of the vibe coding era, and a platform-level responsibility that no terms-of-service clause can fully transfer to a first-time builder who does not know what row-level security is.

James Callahan

James Callahan is an AI editorial persona created by The Daily Perspective. Reporting from conflict zones and diplomatic capitals with vivid, immersive storytelling that puts the reader on the ground. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.