Picture this: you run a small software startup, your monthly Google Cloud bill sits reliably at $180, and then you wake up to find someone has spent $82,314.44 on your account in two days. That is not a rounding error. That is bankruptcy territory.
That is precisely what happened to a three-developer startup in Mexico, according to a distressed Reddit post that has since gone viral in developer circles. As reported by The Register, the company's Google Cloud API key was compromised between 11 and 12 February 2026. During that window, unknown attackers used the stolen key to spend $82,314.44, primarily on Gemini 3 Pro Image and Gemini 3 Pro Text. The team's usual monthly spend of $180 makes that a roughly 46,000 per cent increase in a single billing cycle.
After deleting the compromised key, disabling the Gemini APIs, rotating credentials, and taking other security precautions, the developer opened a support case with Google and got nowhere. A Google representative allegedly cited the company's shared responsibility model, under which Google secures its platform while users must secure their own tools. Google has not confirmed whether it will waive the charge.
Not a Freak Accident
Here is what that actually means for any developer using Google Cloud: this incident is almost certainly not a one-off. Researchers at Truffle Security scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. The precise count: 2,863 live keys, sitting in public JavaScript on websites across the internet.
According to a Common Crawl scan carried out in November 2025, those 2,863 live keys left a broad range of organisations exposed, including major financial institutions, security companies, global recruiting firms, and Google itself. That last detail is what finally prompted Google to take the report seriously.
When the Gemini API is enabled on a Google Cloud project, every existing API key in that project silently inherits access to sensitive Gemini endpoints, with no warning, no confirmation dialog, and no email notification to the developer. In other words, you could have created a Maps key three years ago, embedded it in your website's source code exactly as Google's documentation instructed, and had a colleague quietly enable Gemini on the same project last month. Your previously harmless public key is now a live Gemini credential.
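One practical consequence: you can test whether any existing public key now authenticates to Gemini by probing the Generative Language API's documented models-list endpoint. A minimal sketch in Python (the endpoint is real; the status-code interpretation is an assumption about how live, restricted, and invalid keys respond):

```python
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def gemini_probe_url(api_key: str) -> str:
    """Build the models-list URL the probe will request with the key."""
    return f"{GEMINI_MODELS_URL}?key={api_key}"

def classify_status(status: int) -> str:
    """Rough interpretation of the HTTP status (an assumption, not documented)."""
    if status == 200:
        return "LIVE: key authenticates to the Gemini API"
    if status in (401, 403):
        return "RESTRICTED: key rejected or blocked for Gemini"
    if status == 400:
        return "INVALID: malformed or unknown key"
    return f"UNKNOWN: HTTP {status}"

def probe_key(api_key: str) -> str:
    """Call the endpoint with the key and classify the outcome."""
    try:
        with urllib.request.urlopen(gemini_probe_url(api_key)) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)
```

A key that was created years ago for Maps and returns a 200 here is exactly the silent privilege upgrade described above.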
Truffle Security notes that, depending on the model and context window, a threat actor maxing out API calls could generate thousands of dollars in charges per day on a single victim account. The $82,000 figure bears that out in practice.
Google's Slow Response
Truffle Security submitted its report to Google's Vulnerability Disclosure Programme on 21 November 2025. Google initially determined the behaviour was intended and pushed back. It was only after Truffle provided examples from Google's own infrastructure, including keys on Google product websites, that the issue gained traction internally. Google then reclassified the report from "Customer Issue" to "Bug" and upgraded the severity.
By 2 February 2026, Google confirmed the team was still working on the root-cause fix. The 90-day disclosure window closed on 19 February 2026, with no architectural solution in place. That window expiring is precisely why the Truffle research became public before a comprehensive fix was deployed.
Google's public response has been measured. A spokesperson said the company has "already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API," and confirmed that new AI Studio keys will default to Gemini-only scope, leaked API keys will be blocked from accessing Gemini, and proactive notifications will be sent when leaks are detected. Those are real steps forward. The underlying architectural flaw, a single key format serving both public identification and sensitive authentication, remains unresolved.
The Shared Responsibility Problem
Google's invocation of the shared responsibility model deserves scrutiny, even if the principle itself is sound. Shared responsibility works when both parties understand the rules. In this case, keys that Google's own documentation treats as public billing identifiers were silently turned into Gemini authentication credentials without developers being informed. Asking developers to secure a secret they were explicitly told was not a secret is a different proposition entirely.
Cloud providers including Google, AWS, and Azure all operate on the same model: usage is metered, billing is retroactive, and there is no hard spending cap that kills the service when a threshold is crossed. Billing alerts are notifications, not circuit breakers. By the time you read the email, the meter has been running for hours. Credit card networks have blocked suspicious charges in real time for decades; the cloud billing model has not caught up.
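Google does document a pattern for building your own circuit breaker: route budget notifications to Pub/Sub and have a Cloud Function disable billing on the project when spend crosses a cap. The decision logic can be sketched like this (the `costAmount` and `budgetAmount` fields follow Google's documented budget-notification format; the `kill_ratio` threshold and function names are illustrative):

```python
import base64
import json

def should_cut_billing(pubsub_message: dict, kill_ratio: float = 1.0) -> bool:
    """Decide whether to unlink billing, given a budget Pub/Sub message.

    The message's 'data' field is base64-encoded JSON carrying, per
    Google's budget-notification format, 'costAmount' (spend so far)
    and 'budgetAmount' (the configured budget).
    """
    payload = json.loads(base64.b64decode(pubsub_message["data"]))
    cost = float(payload["costAmount"])
    budget = float(payload["budgetAmount"])
    # A notification is just information; this comparison is the
    # circuit breaker the platform itself does not provide.
    return cost >= kill_ratio * budget
```

In the full pattern, a `True` result would trigger a call to the Cloud Billing API's `projects.updateBillingInfo` with an empty billing account, detaching the project and stopping the meter. It is a blunt instrument, since it takes your own services down too, but for a three-person startup that is a far better failure mode than an $82,000 invoice.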
To be fair to Google, administering retroactive audits of millions of legacy keys across its enormous developer base is genuinely complex. Truffle Security itself acknowledged that a full retroactive audit of existing impacted keys would be "a monumental task." Scale creates real operational constraints, not just convenient excuses.
What Developers Should Do Now
Truffle Security recommends that all developers check each Google Cloud Platform project to see whether the Generative Language API is enabled. In the GCP console, navigate to APIs and Services, then Enabled APIs and Services, and look for the "Generative Language API". Any keys that appear in public-facing code, website JavaScript, or public repositories should be rotated immediately, oldest first.
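The console check can also be scripted. A hedged sketch wrapping the `gcloud` CLI (requires an installed, authenticated `gcloud`; the `config.name` filter field is taken from the Service Usage API and may vary across gcloud versions):

```python
import subprocess

def gemini_enabled_cmd(project_id: str) -> list[str]:
    """gcloud invocation that prints the Generative Language API if enabled."""
    return [
        "gcloud", "services", "list", "--enabled",
        "--project", project_id,
        "--filter=config.name=generativelanguage.googleapis.com",
        "--format=value(config.name)",
    ]

def gemini_enabled(project_id: str) -> bool:
    """True if the given project has the Generative Language API enabled."""
    result = subprocess.run(
        gemini_enabled_cmd(project_id),
        capture_output=True, text=True, check=True,
    )
    return "generativelanguage.googleapis.com" in result.stdout
```

Run across every project in the organisation, this turns a one-off console check into something you can schedule, which matters given how quietly the API can be switched on.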
Developers can also use TruffleHog, Truffle Security's open-source secrets scanning tool, to check codebases, CI/CD pipelines, and web assets for exposed keys. It verifies whether discovered keys are live and carry Gemini access, which is meaningfully more useful than a simple pattern match.
The real question is not whether this incident could have been avoided by the affected startup. It probably could have. The more important question is whether a platform that silently upgrades the privilege level of a public credential, without notifying anyone, has genuinely upheld its half of the shared responsibility bargain. Reasonable people will land in different places on that. But the $82,000 bill sitting on a three-person startup's desk is evidence that the current answer is not good enough for the developers taking the risk.