
Archived Article — The Daily Perspective is no longer active. This article was published on 10 March 2026 and is preserved as part of the archive.


Amazon's Blame Game: When Internal Memos and Public Statements Don't Align

Internal memos acknowledge that AI-assisted coding caused outages, even as the company's public statements deny it

Image: The Register
Key Points
  • Amazon's internal meeting notes state recent outages were caused by 'Gen-AI assisted changes', yet publicly the company blames 'user error' and access control issues
  • Two AWS incidents in December reportedly involved the Kiro AI coding tool, including a 13-hour outage, which Amazon characterises as coincidental and unrelated to AI
  • The company requires 80 percent of developers to use AI coding tools at least weekly while simultaneously restricting deployment permissions for AI-generated code

Amazon faces a credibility problem that goes beyond the usual corporate double-speak. Internal briefing notes acknowledge a troubling pattern. Public statements deny the same thing. And somewhere in between sits a company scrambling to explain how its infrastructure became less stable precisely as it pushed AI coding tools across its workforce.

According to a briefing note for an internal meeting seen by the Financial Times, Amazon said there had been a "trend of incidents" in recent months, characterised by a "high blast radius" and "Gen-AI assisted changes." The implication was clear. AI-assisted code changes had made things fragile.

Publicly, Amazon tells a different story. The same briefing note pointed to "GenAI tools supplementing or accelerating production change instructions, leading to unsafe practices" among other contributing factors, yet the company's external messaging has spent considerable effort reframing the problem as one of access control, not AI.

The divergence matters. When a company's internal assessment conflicts with its external messaging, that gap itself becomes the story. Either Amazon's internal leaders understand something they are not sharing, or they do not understand their own systems well enough to speak authoritatively about what went wrong.

The Financial Times report follows coverage last month that AWS's Kiro AI tool made system changes that affected the availability of AWS Cost Explorer in the Mainland China partition. In one incident in December, engineers at Amazon Web Services allowed the company's in-house Kiro "agentic" coding tool to make changes that sparked a 13-hour disruption. This was not a test environment. It was production.

Amazon's position is that these incidents represent coincidence and misconfigured permissions, not AI failure. The company said the engineer involved in the December incident had "broader permissions than expected, a user access control issue, not an AI autonomy issue." By this logic, the AI did not fail; the permissions structure failed. The AI simply did what it was instructed to do.

Yet there is a harder question buried in this framing. The company launched Kiro in July and has since pushed employees into using the tool, with leadership setting an 80 percent weekly use goal and closely tracking adoption rates. Amazon has effectively mandated AI coding. At the same time, it is now restricting which engineers can deploy AI-generated changes without peer review.

That is instructive. If AI tools are as reliable as Amazon claims, why restrict them more than human-written code at deployment? The company plans to require more senior engineers to review "GenAI-assisted" production changes made by lower-level staffers. This is not treating AI as equivalent to human-written software. This is treating it as higher-risk.

Amazon's own employees appear to share the skepticism. One senior AWS employee told the newspaper that "we've already seen at least two production outages" and that "the engineers let the AI agent resolve an issue without intervention. The outages were small but entirely foreseeable." Foreseeable is a word that should worry any executive. It suggests the problem was not unknown; it was merely accepted.

There are legitimate debates to be had about AI deployment, safety guardrails, and how much responsibility belongs with the tool versus the operator. Some observers argue that headcount reductions compound the risks AI introduces. James Gosling, the lead designer of Java who left AWS in 2024, wrote in a LinkedIn post that the company's focus on revenue at the expense of everything else resulted in layoffs to teams important for infrastructure stability. "These systems are complex interconnected structures," he wrote. "Unless the whole ecosystem is comprehended in total, bad decisions are made."

But the immediate problem is not complex. It is straightforward. Amazon's internal documents say one thing. Its public statements say another. Readers and employees deserve consistency. Accountability means being honest about what went wrong so it can be fixed. Right now, Amazon appears to be doing the opposite.

Andrew Marsh

Andrew Marsh is an AI editorial persona created by The Daily Perspective, making economics accessible to everyday Australians with conversational explanations and relatable analogies. Articles under this byline are generated using artificial intelligence with editorial quality controls.