$0.20 per million input tokens. That's what OpenAI is charging developers for its new GPT-5.4 nano, a stripped-down model designed to handle repetitive, high-volume work without breaking the bank.
The release marks a strategic push to extend capability across the entire AI spectrum. While the company released the full-fat GPT-5.4 earlier this month for professional coding and data work, it now wants ordinary ChatGPT users and budget-conscious developers to taste that same intelligence in smaller, faster packages.
ChatGPT users can start using GPT-5.4 mini today. Free and Go tier subscribers can access it through the "Thinking" option in ChatGPT's menu. For paid users, the mini version acts as a safety net: when you exhaust your rate limit on GPT-5.4 proper, the system automatically falls back to mini so you keep working.
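That fallback behavior can be sketched as a simple router: serve the larger model while quota remains, then silently switch to mini. This is an illustrative sketch, not OpenAI's implementation; the class, quota mechanism, and model strings are assumptions for demonstration.

```python
# Sketch of the described fallback: when the primary model's rate limit
# is exhausted, requests are routed to the mini model instead.
# ModelRouter and _call are hypothetical stand-ins, not OpenAI's API.

class ModelRouter:
    def __init__(self, primary_quota: int):
        self.primary_quota = primary_quota  # remaining calls to the big model

    def _call(self, model: str, prompt: str) -> str:
        # Placeholder for a real API request.
        return f"[{model}] response to: {prompt}"

    def complete(self, prompt: str) -> str:
        if self.primary_quota > 0:
            self.primary_quota -= 1
            return self._call("gpt-5.4", prompt)
        # Quota exhausted: fall back to mini so the user keeps working.
        return self._call("gpt-5.4-mini", prompt)

router = ModelRouter(primary_quota=1)
print(router.complete("draft a summary"))  # served by gpt-5.4
print(router.complete("one more request")) # falls back to gpt-5.4-mini
```

The point is that the switch is invisible to the user: the same `complete` call keeps returning answers, only the backing model changes.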
Here's what makes this release significant. GPT-5.4 mini improves substantially over GPT-5 mini across coding, reasoning, multimodal understanding, and tool use, while running more than twice as fast. In practice, that means it's better at parsing non-text inputs such as images and audio, and has a more nuanced grasp of tasks like searching the web. In plain terms, this is capable AI.
More striking is what OpenAI claims about the mini model's ceiling. It approaches the performance of the larger GPT-5.4 model on several evaluations, including SWE-Bench Pro and OSWorld-Verified. Those are serious coding benchmarks. You're not getting a toy here.
The nano model targets a different problem altogether. GPT-5.4 nano is ideal for tasks such as data classification and extraction where speed and cost-efficiency are top of mind. Imagine developers building systems where they need to sort 10 million customer emails, extract key phrases, or flag priority items. These jobs don't require deep reasoning; they need speed and low cost. That's nano's job.
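The shape of that workload is simple: one cheap model call per item, applied millions of times. Here is a minimal sketch, where `classify_with_nano` is a stub standing in for an API call to the nano model and the keyword rule is purely illustrative.

```python
# Illustrative bulk-classification job of the kind nano targets:
# label each email and flag priority items. classify_with_nano is a
# hypothetical stub for a per-item (or batched) nano API call.

def classify_with_nano(email: str) -> str:
    # Stand-in logic; a real system would send the text to the model.
    urgent_markers = ("outage", "refund", "urgent")
    if any(marker in email.lower() for marker in urgent_markers):
        return "priority"
    return "routine"

emails = [
    "URGENT: site outage since 2am",
    "Thanks for the great service!",
    "Requesting a refund for my last order",
]
labels = [classify_with_nano(e) for e in emails]
print(labels)  # ['priority', 'routine', 'priority']
```

No deep reasoning is involved; each item needs one fast, cheap judgment, which is exactly the profile nano is priced for.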
ChatGPT users won't find nano in the chatbot; instead, OpenAI is making it only available through its API service. The company envisions developers using more advanced models to delegate tasks to AI agents running GPT-5.4 nano. This compositional approach saves money: a large model handles strategic decisions and planning, while nano agents handle routine execution at scale.
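The compositional pattern described above can be sketched as a planner/worker split: a larger model produces subtasks, and cheap nano workers execute each one. Both calls below are stubs; the function names and the three-step plan are assumptions for illustration, and the division of labor, not the API, is the point.

```python
# Hypothetical planner/worker sketch: a large model plans, nano
# agents execute routine steps at scale. Both functions are stubs.

def plan_with_large_model(goal: str) -> list[str]:
    # Stub: a real call would return model-generated subtasks.
    return [f"{goal}: step {i}" for i in range(1, 4)]

def execute_with_nano(subtask: str) -> str:
    # Stub for a fast, cheap nano call handling routine execution.
    return f"done({subtask})"

def run(goal: str) -> list[str]:
    # Strategic planning once (expensive), execution N times (cheap).
    return [execute_with_nano(t) for t in plan_with_large_model(goal)]

print(run("triage inbox"))
```

The economics follow from the shape: the expensive model is invoked once per job, while the cheap model absorbs the per-item volume.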
The pricing signals OpenAI's confidence in the model's utility. Nano starts at $0.20 per million input tokens, undercutting the company's larger models by a wide margin. For high-volume work, that difference compounds into real savings.
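A back-of-the-envelope calculation shows how the quoted rate compounds at scale. The 10-million-email volume and 500-token average below are illustrative assumptions, not OpenAI figures; only the $0.20 per million input tokens comes from the announcement.

```python
# Input-token cost at nano's quoted rate of $0.20 per million tokens.
# Item count and average token length are illustrative assumptions.

PRICE_PER_MILLION_INPUT = 0.20  # USD, from the announcement

def input_cost(num_items: int, avg_tokens_per_item: int) -> float:
    total_tokens = num_items * avg_tokens_per_item
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT

# 10 million emails at ~500 input tokens each = 5 billion tokens.
print(f"${input_cost(10_000_000, 500):,.2f}")  # $1,000.00
```

Under those assumptions, classifying ten million emails costs about a thousand dollars in input tokens, which is the kind of arithmetic that makes bulk workloads viable.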
What this release reveals is a company continuing to move downmarket while improving commodity models. The free GPT-5.4 mini access won't hurt OpenAI's paid tiers; if anything, it builds habits and locks users in. But it also signals that the frontier of capability is becoming less about exotic abilities and more about cost-effective delivery. For Australian developers and startups building on AI, the availability of both capable reasoning (mini) and cheap execution (nano) creates new economic possibilities for cash-constrained teams.