When OpenAI released GPT-5.3-Codex in early February, the timing looked deliberate. Anthropic had launched Claude Opus 4.6 that same day. Both companies released benchmarks claiming superiority. Both promised their model would reshape how developers work. Yet the developer community did not wait around to debate the merits; it had already made its choice.
Anthropic's Claude Code has already won the hearts of working engineers. It now holds over half of the AI coding market and reached a $2.5 billion annualized run rate by early 2026. OpenAI's Codex, by contrast, captures roughly 20 to 21% of enterprise coding workloads. This is not a close race.
The gap reflects two fundamental realities. First, Anthropic moved faster. Claude Code reached a $1 billion annualized run rate just six months after launch, a velocity that even ChatGPT did not match. OpenAI, despite inventing the category of AI coding assistants years earlier, fell behind. Second, developers chose a different tool. They were not waiting for OpenAI's next model; they were already working productively with Claude Code.
The architectural difference matters here. Claude Code emphasises a developer-in-the-loop, local workflow using the terminal, while OpenAI's Codex agent is designed for both local and autonomous, cloud-based task delegation that can handle asynchronous work. Developers working alone prefer Claude's interactive model. Teams managing multiple parallel tasks prefer Codex's cloud-based delegation. But individual developers dominate the current market.
OpenAI's new release does carry genuine improvements. GPT-5.3-Codex improves on both the frontier coding performance of its predecessor and the reasoning capabilities of GPT-5.2, and runs 25% faster, enabling it to take on long-running tasks that involve research, tool use, and complex execution. It is also the first model to have been instrumental in its own creation: the Codex team used early versions to debug its training, manage its deployment, and diagnose test results.
Yet speed and capability alone do not win markets. On the metrics that actually matter (enterprise adoption, path to profitability, product quality, developer loyalty), Anthropic is pulling away, and the gap is widening. Anthropic now commands 32% of the enterprise LLM API market while OpenAI has dropped to 25%.
The reversal is striking. Anthropic displaced OpenAI as the dominant enterprise player by late 2025; OpenAI's early advantage has steadily eroded, falling from 50% of the enterprise market to today's 25%, half its previous share. This happened not because OpenAI stopped improving its models, but because developers made different bets about which tools fit their workflows.
OpenAI faces a genuine dilemma. Its ChatGPT platform remains the consumer gold standard. Its broader business remains larger. But in the specific category where developers write code every day, where the value is tangible and measurable, Anthropic has built the more trusted tool. Neither company can claim victory across the industry; instead, the market appears to be fragmenting into specialised domains where different tools dominate.
For developers choosing between these tools, the decision hinges on workflow. Claude Code rewards engineers who treat AI as a collaborative thinking partner, reviewing changes incrementally. Codex excels at handling a backlog of discrete tasks in parallel, with engineers reviewing results asynchronously. As benchmarks converge and both tools improve at a rapid pace, the differences that matter most are practical: execution environment, interaction style, context management, and cost at scale. The best choice is the one that fits how you actually work, not the one with the highest benchmark score.
OpenAI's challenge is not building a better model; it may already be doing that. The challenge is shifting developer habits after Anthropic has already won their trust. Loyalty in software development runs deep, and switching costs are real. OpenAI will continue to invest in Codex and ship impressive capabilities. But catching up to a competitor who has already captured the market requires more than technical superiority. It requires developers to believe that the new tool will serve them better than the one they are already using. That is a much harder argument to win.