
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Technology

A New Chip Standard Could Reshape the AI Memory Race

SK hynix and SanDisk's High Bandwidth Flash partnership promises to bridge a critical gap in AI inference infrastructure, with implications for the data centres Australia is rapidly building.

Image: Tom's Hardware
Key Points
  • SK hynix and SanDisk signed a Memorandum of Understanding to jointly develop and standardise High Bandwidth Flash (HBF) memory for AI inference servers.
  • HBF is designed to sit between expensive High Bandwidth Memory and conventional SSD storage, offering up to 16 times the capacity of HBM at comparable cost.
  • The standard will be governed through the Open Compute Project, with demand for such technology expected to accelerate around 2030.
  • SanDisk targets first HBF samples in the second half of 2026, with AI inference devices featuring the technology expected in early 2027.
  • The development matters for Australia, which is rapidly expanding its data centre capacity and has attracted more than $100 billion in announced AI infrastructure investment.

The global race to build faster and cheaper artificial intelligence infrastructure took a significant step forward this week, when South Korean chipmaker SK hynix and US storage company SanDisk formalised an agreement to develop and standardise a new class of memory technology. The announcement, made at a consortium kick-off event at SanDisk's headquarters in Milpitas, California, has implications that extend well beyond Silicon Valley. For a country like Australia, which is staking considerable economic and sovereign ambitions on becoming a regional AI hub, the underlying technology choices being locked in by the world's largest memory makers will shape what its data centres can actually do.

SanDisk Corporation signed a Memorandum of Understanding with SK hynix to jointly establish the specification for High Bandwidth Flash, described as a new technology designed to deliver breakthrough memory capacity and performance for the next generation of AI inference. The two companies held what they called an "HBF Spec. Standardisation Consortium Kick-Off" event and plan to launch a dedicated workstream under the Open Compute Project to begin standardisation work.

To understand why this matters, it helps to understand the problem HBF is trying to solve. AI inference involves serving trained models to millions of users in real time, requiring rapid data access, high memory capacity and strict power efficiency. Conventional memory hierarchies face architectural limitations: High Bandwidth Memory delivers high bandwidth but is constrained in capacity and cost, while SSDs provide density but cannot match HBM-level latency and throughput. In other words, today's AI server designers are caught between two imperfect options.
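To make that squeeze concrete, the toy Python model below compares the two existing options for a hypothetical inference working set. Every number in it is an illustrative assumption, not a figure from the announcement.

    # Toy model of the two existing memory options for AI inference.
    # All figures are illustrative assumptions, not vendor specifications.
    import math
    from dataclasses import dataclass

    @dataclass
    class Tier:
        name: str
        bandwidth_gbps: float  # sustained read bandwidth, GB/s per device
        capacity_gb: float     # capacity per device (stack or drive), GB

    hbm = Tier("HBM", bandwidth_gbps=1600, capacity_gb=32)   # assumed
    ssd = Tier("SSD", bandwidth_gbps=28, capacity_gb=4000)   # assumed

    model_gb = 400  # hypothetical model working set that must be served

    for tier in (hbm, ssd):
        devices = math.ceil(model_gb / tier.capacity_gb)
        # Naive upper bound: every token streams all weights from this tier.
        tokens_per_s = tier.bandwidth_gbps / model_gb
        print(f"{tier.name}: {devices} device(s) to hold the model, "
              f"~{tokens_per_s:.2f} tokens/s ceiling from one device")

In this sketch, HBM needs thirteen expensive stacks just to hold the weights, while a single SSD holds them easily but caps throughput at a fraction of a token per second. HBF's pitch is the gap between those two rows.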

Designed for AI inference workloads in large data centres, small enterprises and edge applications, HBF is targeted to offer bandwidth comparable to High Bandwidth Memory while delivering 8 to 16 times the capacity of HBM at a similar cost. That is a substantial claim. SanDisk is targeting 1.6 TB/s read speeds and 512 GB memory stack capacities. For context, contemporary server-grade NAND chips are capable of reaching 28 GB/s per unit, and even that has proven insufficient for the most demanding AI workloads.
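Those figures can be sanity-checked with the short Python sketch below; note that the 32 GB reference stack is inferred from the stated 16-times ratio rather than quoted in the announcement.

    # Back-of-envelope check on the quoted HBF targets.
    hbf_read_gbps = 1600   # SanDisk's 1.6 TB/s read-speed target
    nand_chip_gbps = 28    # contemporary server-grade NAND, per unit
    hbf_stack_gb = 512     # targeted HBF stack capacity

    # Conventional NAND chips needed to match the HBF bandwidth target.
    print(hbf_read_gbps / nand_chip_gbps)  # ~57.1 chips

    # If 512 GB is "up to 16 times" HBM capacity, the implied HBM
    # reference point is a 32 GB stack (derived, not stated).
    print(hbf_stack_gb / 16)  # 32.0 GB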

Enabled by SanDisk's advanced BiCS technology and proprietary CBA wafer bonding, and developed over the past year with input from leading AI industry players, SanDisk's HBF technology was awarded "Best of Show, Most Innovative Technology" at FMS: the Future of Memory and Storage 2025. SK hynix, meanwhile, brings its dominant position in HBM production to the partnership, giving the consortium credibility across both sides of the memory hierarchy.

SanDisk targets delivery of the first samples of its HBF memory in the second half of calendar 2026 and expects samples of the first AI inference devices with HBF to be available in early 2027. Full commercial production is a longer prospect: the announcement notes that "demand of complex memory solutions, including HBF, will pick up around 2030", which is the best available guide to a broader production timeline.

From a market structure perspective, the decision to pursue standardisation through the Open Compute Project rather than a proprietary format is telling. Standardisation can make the technology more widely adopted but also limits the scope for proprietary differentiation, meaning execution around product design, packaging and long-term supply agreements will matter more than the specification itself. There is a reasonable argument that an open standard serves data centre operators and AI companies better than a walled-garden approach; lower lock-in risk typically attracts broader adoption. The counterpoint is that Samsung and Micron, both capable of building to the same specification, could use their scale to compress margins for SanDisk and SK hynix over time.

In large-scale AI deployments, memory subsystem design directly influences total cost of ownership. Introducing a dedicated intermediate tier can reduce pressure on expensive HBM resources while maintaining performance levels suitable for inference tasks. That cost-efficiency argument will resonate with enterprise customers who are increasingly scrutinising the economics of AI at scale, particularly as energy costs become a central concern. Power efficiency is a stated priority for the standard's developers, a reasonable one given data centres' substantial power demands.
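A minimal sketch of that cost argument, using purely hypothetical relative cost-per-gigabyte figures:

    # Toy memory-cost comparison for a fixed inference working set.
    # Cost-per-GB ratios are hypothetical, chosen only to illustrate
    # how an intermediate tier changes the bill.
    working_set_gb = 1024
    hbm_cost_per_gb = 10.0   # assumed relative cost units
    hbf_cost_per_gb = 2.0    # assumed relative cost units

    # Option A: hold the entire working set in HBM.
    all_hbm = working_set_gb * hbm_cost_per_gb

    # Option B: keep only the hot fraction in HBM, the rest in HBF.
    hot_fraction = 0.25      # assumed share needing full HBM bandwidth
    tiered = (working_set_gb * hot_fraction * hbm_cost_per_gb
              + working_set_gb * (1 - hot_fraction) * hbf_cost_per_gb)

    print(f"all-HBM: {all_hbm:.0f} units, tiered: {tiered:.0f} units")
    # -> all-HBM: 10240 units, tiered: 4096 units in this toy model

The saving hinges entirely on the assumed hot fraction and cost ratio; the point is the shape of the trade, not the specific numbers.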

The relevance for Australia is not abstract. Between 2023 and 2025, companies announced plans to invest in Australian data centres that could scale up to more than $100 billion, with both international and domestic operators investing heavily to expand Australian capacity. Data centres consumed around 4 TWh of electricity across the National Electricity Market in 2024, about 2 per cent of grid-supplied power, and the Australian Energy Market Operator expects electricity demand from these users to triple by 2030. If the AI inference workloads those data centres handle continue to intensify, the memory architecture underpinning them becomes an infrastructure question with real energy and cost consequences for Australian operators.
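Those consumption figures translate into grid terms straightforwardly; the arithmetic below uses only the numbers cited above.

    # Rough grid-scale arithmetic for the figures cited above.
    twh_2024 = 4.0          # NEM data centre consumption, 2024
    hours_per_year = 8760

    # Average continuous draw implied by annual consumption.
    avg_gw_2024 = twh_2024 * 1000 / hours_per_year  # TWh -> GWh -> GW
    print(f"2024: ~{avg_gw_2024:.2f} GW average draw")  # ~0.46 GW

    # AEMO's planning assumption of a tripling by 2030.
    twh_2030 = twh_2024 * 3
    avg_gw_2030 = twh_2030 * 1000 / hours_per_year
    print(f"2030: {twh_2030:.0f} TWh, ~{avg_gw_2030:.1f} GW average draw")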

The Australian government's National AI Plan positions the country as a prospective regional AI hub, with abundant renewable energy potential, robust privacy protections and a strategic Indo-Pacific location cited as differentiating factors, and frames ambitious AI infrastructure investment as an opportunity to accelerate the renewables transition. Technologies like HBF that promise to reduce per-watt compute costs align directly with those ambitions, even if Australia has no seat at the table where such standards are being written.

That asymmetry is worth pausing on. Australia's data centre investment boom is largely dependent on hardware specifications developed in California, South Korea, and Taiwan. The Australian Bureau of Statistics does not yet track domestic semiconductor research investment in a way that allows direct comparison, but the structural reality is clear: Australia is a consumer of the technologies being standardised elsewhere, not a contributor to their design. Whether that is a problem depends on one's view of the division of labour in global technology supply chains. The pragmatic case is that concentrating on where Australia has comparative advantages, such as energy, geography, and rule-of-law stability for data sovereignty, may be more productive than attempting to replicate East Asian semiconductor capacity.

What SK hynix and SanDisk have announced is, in technical terms, a promising but still early-stage initiative. The specification is not yet written, the first samples are still months away, and the commercial ramp is projected nearly four years out. The collaboration is real, the technology rationale is sound, and the Open Compute Project provides a credible governance framework. Whether HBF becomes the dominant intermediate memory tier for AI inference, or whether a competing approach from Samsung, Micron, or an as-yet-unknown entrant takes that ground, remains genuinely open. For now, the industry has a new vocabulary for a problem it very much needs to solve.

Aisha Khoury

Aisha Khoury is an AI editorial persona created by The Daily Perspective, covering AUKUS, Pacific security, intelligence matters, and Australia's evolving strategic posture. Articles under this byline are generated using artificial intelligence with editorial quality controls.