Amazon Web Services has marked the 20th birthday of its Simple Storage Service (S3), which launched on March 14, 2006. Two decades later, the service has become so foundational to modern computing that its success obscures a more complex reality: AWS's grip on the market rests on physical infrastructure that is under growing strain, and its competitors are closing in faster than many realise.
Today, S3 stores more than 500 trillion objects and serves more than 200 million requests per second globally, holding hundreds of exabytes of data across 123 Availability Zones in 39 AWS Regions. The scale is genuinely staggering: by AWS's own playful calculation, a stack of all the hard drives S3 uses would reach the International Space Station and almost all the way back.
What makes S3's dominance particularly noteworthy is not just its size but its economic model. Even as S3 has grown to this incredible scale, the price customers pay has dropped. Today, AWS charges slightly over 2 cents per gigabyte per month for standard storage, a reduction of approximately 85% since the 2006 launch. This combination of massive scale, reliability, and falling cost created a business moat that few technologies achieve.
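As a rough illustration, the headline storage cost scales linearly with data held. The sketch below assumes the roughly $0.023/GiB-month S3 Standard list price mentioned above; real bills add request, transfer, and tiering charges, and actual rates vary by region and storage class.

```python
def monthly_storage_cost(gib: float, price_per_gib_month: float = 0.023) -> float:
    """Estimate the storage-only portion of a monthly bill in USD.

    price_per_gib_month is an assumed list price (~2.3 cents);
    real S3 pricing is tiered and varies by region and class.
    """
    return round(gib * price_per_gib_month, 2)

# 10 TiB (10,240 GiB) of standard storage:
print(monthly_storage_cost(10_240))  # 235.52 (USD/month at the assumed rate)
```

At that rate, a 10 TiB archive costs on the order of a few hundred dollars a month to hold, which is the economics underpinning the "store everything" behaviour the rest of this piece describes.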
Yet beneath the 20-year success story lie questions about lock-in, competitive sustainability, and physical resource constraints that will shape the cloud industry's next phase.
The API that became an industry standard
One of S3's most durable achievements has little to do with storage capacity and everything to do with interface design. According to Amazon's birthday post by principal developer advocate Sébastien Stormacq, S3 initially offered "approximately one petabyte of total storage capacity across about 400 storage nodes in 15 racks spanning three data centers, with 15 Gbps of total bandwidth." That modest beginning belies the interface's impact.
S3 is also taking on a larger role in AI systems; as AWS puts it, "S3 is absolutely critical as part of that fabric of AI going forward." What began as a simple object storage system for developers has become the reference architecture for cloud storage itself. Competitors do not merely offer alternatives; they offer "S3-compatible" interfaces, a tacit admission that AWS won the standards battle.
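That compatibility runs deeper than URL paths: third-party object stores typically implement AWS's published Signature Version 4 request-signing scheme so that existing S3 clients work against them unchanged. A minimal sketch of the documented SigV4 signing-key derivation, using only the Python standard library (the credentials and string-to-sign here are illustrative placeholders, not real values):

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signature(secret_key: str, date: str, region: str,
                    service: str, string_to_sign: str) -> str:
    """Derive the SigV4 signing key and sign, per AWS's documented scheme.

    The key is derived through a fixed HMAC-SHA256 chain:
    date -> region -> service -> the literal "aws4_request".
    """
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    k_signing = hmac_sha256(k_service, "aws4_request")
    return hmac.new(k_signing, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Placeholder inputs, for illustration only:
sig = sigv4_signature("EXAMPLE-SECRET-KEY", "20250101", "us-east-1", "s3",
                      "example-string-to-sign")
print(len(sig))  # 64: a 64-character hex signature
```

Any storage vendor that reproduces this derivation, plus S3's canonical-request format, can accept unmodified traffic from standard S3 clients pointed at its own endpoint, which is what "S3-compatible" means in practice.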
The competitive pressure beneath the surface
AWS still dominates, but Synergy's figures show Amazon held a 30% market share in Q2 2025, down two points from the same quarter in 2024. That slip appears modest until you note that Microsoft Azure grew its revenue by 39% in its most recent quarter, while Google Cloud's sales leaped 48%, more than double AWS's growth, though from a much smaller base.
The competitive dynamics are sharpening in ways that favour AWS's rivals. Enterprises already hold much of their data on AWS, but they often need a wide variety of external tools, which means slow and expensive movement of that data to the tools. "Data migration is very hard," said Mai-Lan Tomsen Bukovec, AWS's vice president of technology for data and analytics. This stickiness is real but fragile; it depends on AWS continuing to innovate faster than competitors can erode its lead.
Infrastructure stress and the AI gold rush
The most immediate vulnerability is physical. The surging infrastructure demands of artificial intelligence and cloud computing have led Western Digital to report that its hard drive manufacturing capacity is fully allocated through 2026. This unprecedented demand is reshaping the storage industry: cloud operators favour high-capacity platters and sign multiyear orders to secure them. Seagate CEO Dave Mosley confirmed, "Our nearline capacity is fully allocated through calendar year 2026."
This constraint affects all players, but it creates real tension between AWS's stated vision and its operational reality. AWS pitches S3 as more than a storage service: the universal foundation for all data and AI workloads. "You store any type of data one time in S3, and you work with it directly, without moving data between specialized systems." Executing that vision requires hardware AWS cannot yet obtain in the quantities it needs.
Engineering consistency as a business advantage
What AWS does control is software excellence. Over the past eight years, AWS has progressively rewritten performance-critical code in the S3 request path in Rust. Blob movement and disk storage have been rewritten, and work continues across other components. Beyond raw performance, Rust's type system and memory-safety guarantees eliminate entire classes of bugs at compile time. This is unglamorous infrastructure engineering that delivers tangible reliability.
The broader cloud storage market is expanding at rates that mask competitive shifts. One forecast values the global cloud storage market at USD 161.28 billion in 2025, growing from USD 197.8 billion in 2026 to USD 809.99 billion by 2034, a CAGR of roughly 19.3% over the forecast period. Rising tides lift all boats, but the question for AWS is whether its boat is rising faster than its competitors', and whether supply constraints will eventually throttle growth across the sector.
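That headline growth rate is easy to sanity-check: compounding from the 2026 figure to the 2034 figure over eight years recovers roughly the quoted CAGR.

```python
# Forecast figures quoted above, in USD billions
start, end, years = 197.8, 809.99, 8  # 2026 -> 2034

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 19.3%
```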
For enterprises considering their storage strategy, S3's 20 years of operational success and API standardisation represent genuine value. The question is whether that advantage will persist once Microsoft, Google, and alternative vendors mature their own offerings and the hard drive shortage eases. AWS built something lasting, but markets are rarely forgiving of complacency, and price wars have not begun in earnest yet.
From Singapore: the question facing Australian enterprises is whether S3's dominance insulates them from tech-industry consolidation risks, or amplifies them. For exporters and data-intensive operations in APAC, these dynamics matter increasingly as cloud bills mount and vendor lock-in becomes a material cost.