The story unfolding in artificial intelligence development right now is not the one we were promised. Two years ago, the prevailing narrative centred on responsible development, industry self-regulation, and a race to the top where companies would compete on safety as much as capability. Today that narrative has fractured. What we are seeing instead is companies taking measured steps backward on safety when faced with real competitive pressure.
The shift is visible across multiple fronts. In July 2025, the White House released America's AI Action Plan, which lays out the Trump administration's approach to AI and the steps it deems necessary for the United States to win the race for global AI dominance: three pillars of action comprising 90 specific policy recommendations aimed at removing regulatory barriers to AI infrastructure development. The policy message from Washington is clear: American competitiveness comes first. Safety frameworks are viewed as friction, not foundation.
This creates a genuine dilemma for companies caught between competing pressures. California's latest AI regulations, signed into law after another year of aggressive lobbying by tech companies, merely require large AI companies to publish safety frameworks and create a pathway for reporting safety incidents. That outcome reveals something important: companies are not passive recipients of regulation. They are actively shaping it to reduce their own costs and complexity. Some of this lobbying is reasonable industry engagement. Some of it is a deliberate strategy to water down meaningful oversight.
Military applications expose the stakes more clearly. The U.S. and China both treat lethal autonomous weapons as assets crucial to strategic superiority. At the 2023 San Francisco bilateral meeting, the two countries initially agreed to hold their first AI arms control talks in 2024, but China later announced the suspension of those talks. When national security concerns dominate, safety considerations take a distant second place. And because AI picks up inadvertent biases from the data sets used to train it, the criteria an autonomous weapon uses to decide who is and who is not a combatant will likely encode factors such as gender, age, race, and ability. AI drone swarms striking many targets simultaneously would strain the principles of proportionality and precaution under international humanitarian law.
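To make the dataset-bias mechanism concrete, consider a deliberately toy sketch in Python. Everything in it is invented for illustration (the features, the skew, the scikit-learn model choice); it depicts no real targeting system. It shows only that a classifier trained on skewed historical labels will reproduce that skew for otherwise identical inputs:

```python
# Illustrative only: a toy classifier trained on skewed labels.
# All data and feature names are invented; no real system is depicted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two hypothetical demographic features.
age_group = rng.integers(0, 2, n)  # 0 = adult, 1 = minor
group = rng.integers(0, 2, n)      # an arbitrary protected attribute

# Skewed historical labels: the positive label was recorded far more often
# for group == 1, independent of any real behaviour. This is the bias
# baked into the training data.
label = (rng.random(n) < np.where(group == 1, 0.60, 0.05)).astype(int)

X = np.column_stack([age_group, group])
model = LogisticRegression().fit(X, label)

# The model faithfully reproduces the skew: two inputs identical except
# for `group` receive very different risk scores.
probe = np.array([[0, 0], [0, 1]])
print(model.predict_proba(probe)[:, 1])  # roughly 0.05 vs 0.60
```

Nothing in the training step pushes back against the skew. Unless someone audits the data upstream, the bias travels straight into the deployed system, which is precisely the worry when the outputs are targeting decisions rather than recommendations.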
The case for moving fast is not frivolous. American technological leadership in AI genuinely matters for economic growth and national security. Excessive regulation could slow development and cede ground to competitors less constrained by safety concerns. That is a legitimate policy consideration. Yet the costs of safety shortcuts have become real, not hypothetical. Leaked Meta documents revealed that executives signed off on allowing AI to have "sensual" conversations with children; in Baltimore, an AI-powered security system mistook a student's bag of Doritos for a gun; an AI-enabled teddy bear was yanked from store shelves after reports that it discussed sexual topics and encouraged children to harm their parents; and psychiatrists across the United States warned of a growing problem of AI "psychosis," while OpenAI was sued for allegedly coaching a teenager toward suicide.
The path forward requires acknowledging the genuine tension rather than pretending it does not exist. Safeguards such as post-training alignment can reduce certain risks, but applied too aggressively they degrade model performance. The practical consequence is that AI companies may need to invest more heavily in upstream data curation, validation, and management rather than relying on downstream corrective mechanisms. That investment costs money and time. For companies racing against rivals, those costs bite hard.
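What upstream curation looks like in practice is unglamorous: vetting and filtering training examples before a model ever sees them, instead of patching behaviour afterwards. Here is a minimal sketch of that idea; the screening heuristic, thresholds, and source names are all hypothetical stand-ins, not any company's actual pipeline:

```python
# Illustrative sketch of upstream data curation.
# The filter, threshold, and source names below are hypothetical.
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    source: str

def toxicity_score(text: str) -> float:
    """Stand-in for a real content classifier; here, a crude keyword heuristic."""
    flagged = {"attack", "harm"}
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def curate(dataset: list[Example],
           allowed_sources: set[str],
           max_toxicity: float = 0.1) -> list[Example]:
    """Upstream filter: drop examples from unvetted sources or above a risk threshold."""
    return [
        ex for ex in dataset
        if ex.source in allowed_sources and toxicity_score(ex.text) <= max_toxicity
    ]

raw = [
    Example("How do I bake bread?", "vetted_forum"),
    Example("Ways to harm someone", "scraped_web"),
    Example("Explain photosynthesis", "vetted_forum"),
]
clean = curate(raw, allowed_sources={"vetted_forum"})
print([ex.text for ex in clean])  # only vetted, low-risk examples survive
```

The downstream alternative, filtering model outputs after the fact, is cheaper to bolt on, but it leaves the underlying behaviour in the weights; that is the trade-off described above.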
Yet commercial viability depends on public trust, and public trust dissolves when harm accumulates. Technology companies know that their business models rely on public cooperation, particularly when it comes to access to data, and that this cooperation will evaporate if people lose confidence that their data are safe and used responsibly, or learn that AI products are harming people. This is not sentiment. It is economics. The most competitive AI company in the long run is not the one that cuts corners fastest; it is the one the public trusts.
What this means in practice: reasonable people can disagree on the pace of development. But that disagreement should not obscure what is actually happening. Companies are lobbying to weaken safety requirements. Governments are prioritising national competitiveness over international agreements. And the promised race to the top has become, in several important cases, a race to the bottom. A pragmatic regulatory approach would acknowledge both the genuine need for innovation speed and the genuine need for meaningful guardrails, enforced not through self-regulation alone but through independent accountability. That requires accepting that some regulatory overhead is the price of sustainable competitive advantage.