Amazon Web Services suffered at least two service disruptions in recent months involving its in-house artificial intelligence coding tool, prompting the company to impose new oversight requirements on junior engineers using the technology.
AWS experienced a 13-hour interruption to one system used by its customers in mid-December after engineers allowed its Kiro AI coding tool to make certain changes. The tool, which can take autonomous actions on behalf of users, determined that the best course of action was to "delete and recreate the environment."

According to four sources who spoke to the Financial Times, the disruption was limited in scope, affecting a single service (AWS Cost Explorer, which helps customers visualise, understand and manage their AWS costs and usage over time) in one of the company's 39 geographic regions. Yet the incident marked the second time within months that Amazon's own AI tools had disrupted its operations.
One senior AWS employee told the Financial Times, "We've already seen at least two production outages. The engineers let the AI agent resolve an issue without intervention. The outages were small but entirely foreseeable."
Leadership had been pushing adoption of the tool, setting an 80 percent weekly-use goal and closely tracking adoption rates. After the outages surfaced, Amazon moved quickly: junior and mid-level engineers now require sign-off from more senior engineers on any AI-assisted changes, and AWS said it "implemented numerous safeguards", including mandatory peer review for production access and staff training.
Amazon disputes the characterisation that its AI tools caused the failures. An AWS spokesperson said the event "was the result of user error, specifically misconfigured access controls, not AI." The company argues that by default the Kiro tool "requests authorization before taking any action" but the engineer involved in the December incident had "broader permissions than expected, a user access control issue, not an AI autonomy issue."
Yet the deeper issue is how these AI tools were deployed and promoted within the organisation. Employees said the tools were treated as an extension of an operator and given operator-level permissions. In both outages, engineers finalised changes without a second person's approval, contrary to typical protocol.
The timing raises questions about staffing pressures. Amazon's staggering job cuts this week, the second wave since October, bring the commerce giant's recent layoffs to roughly 9 percent of its corporate workforce. Engineers have raised concerns that reduced headcount may be forcing developers to rely more heavily on AI tools to compensate for fewer staff members.
Reasonable people can disagree on whether these incidents reflect genuine limitations in AI tooling or failures of deployment discipline. What seems clear is that granting autonomous tools the same access levels as senior engineers, without requiring a second set of eyes before changes reach production systems, created unnecessary risk. The new approval requirements address that gap.
For AWS customers, the incidents affected limited services in specific regions. For Amazon's leadership, they signal a broader challenge: how to harness AI's productivity gains without sacrificing the human oversight that prevents mistakes from cascading into service outages.