Australian organisations deploying artificial intelligence are entering a new phase where success will depend far less on the power of the underlying models and far more on the quality of the data feeding them. This shift marks a critical transition from the enthusiasm of early experimentation toward the harder work of building reliable, production-grade AI systems at enterprise scale.
At a Sydney technology conference, Ken Exner, chief product officer at data platform company Elastic, described this evolution through four distinct phases. Companies first experienced excitement about generative AI technology, followed by urgency as boards pressured teams to build something quickly. Then came disillusionment when early pilots failed to deliver the return on investment executives expected. Now, Exner argued, the industry is entering an acceleration phase as organisations learn what actually works.

The core insight is straightforward: a powerful artificial intelligence model without access to the right information cannot produce useful answers. Organisations cannot simply deploy a chatbot interface and expect transformation. Instead, they must design systems that connect AI to operational data across the entire business, pulling information from both structured databases and unstructured sources such as emails, PDFs, and messaging platforms.
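To make the idea concrete, here is a minimal sketch of what "connecting AI to operational data" can mean in practice: normalising structured database rows and unstructured text into one list of documents that a retrieval layer can index. All names here (`Document`, the helper functions, the sample records) are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # where the record came from (e.g. "orders", "email")
    text: str     # searchable text content
    meta: dict    # structured fields preserved for later filtering

def row_to_document(table: str, row: dict) -> Document:
    """Flatten a structured database row into searchable text."""
    text = "; ".join(f"{k}: {v}" for k, v in row.items())
    return Document(source=table, text=text, meta=row)

def text_to_document(source: str, body: str) -> Document:
    """Wrap unstructured content (an email, a PDF extract, a chat message)."""
    return Document(source=source, text=body.strip(), meta={})

# Hypothetical records from one structured and one unstructured source.
corpus = [
    row_to_document("orders", {"id": 1042, "status": "delayed", "customer": "Acme"}),
    text_to_document("email", "Acme asked why order 1042 has not shipped yet."),
]
```

Once both kinds of source land in the same shape, the same retrieval machinery can rank and serve them together.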
This emerging discipline, called context engineering, focuses on delivering relevant enterprise data to AI systems in real time. It represents a fundamental shift in how developers spend their time. Rather than crafting perfect prompts, engineers now focus on figuring out what data to retrieve, how to integrate it, and how multiple AI agents should collaborate.
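The retrieval side of context engineering can be sketched in a few lines. Production systems use embeddings and a vector store; the term-overlap scorer below is a deliberately simplified stand-in to keep the example self-contained, and the sample documents are invented.

```python
import re

def _terms(text: str) -> set[str]:
    """Lowercase, punctuation-free word set for crude overlap scoring."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(question: str, doc: str) -> float:
    """Fraction of question terms that appear in the document."""
    q = _terms(question)
    return len(q & _terms(doc)) / (len(q) or 1)

def build_context(question: str, docs: list[str], top_k: int = 2) -> str:
    """Rank documents by relevance and assemble a prompt for the model."""
    ranked = sorted(docs, key=lambda d: score(question, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Order 1042 for Acme is delayed at the Sydney warehouse.",
    "The cafeteria menu changes every Monday.",
    "Acme's support contract renews in March.",
]
prompt = build_context("Why is the Acme order delayed?", docs)
```

The shape is the point: the engineering effort goes into deciding what to retrieve and how to rank it, and the model only ever sees the assembled context.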
The challenge mirrors a principle that has always underpinned search technology: relevance. In a traditional search engine, users can review multiple results and apply their own judgment. Generative AI systems typically produce a single answer, which makes the quality of the underlying data even more critical. If an AI system sees only incomplete data, it tells only part of the story, and a wrong answer erodes trust quickly.

Early disappointments stemmed from a fundamental misunderstanding. Many organisations treated AI deployment as a problem of implementing ChatGPT-style assistants for internal functions such as HR, sales support, or customer service. While useful, these tools rarely delivered the operational transformation that executives anticipated. That gap between expectation and reality contributed to widespread scepticism about whether AI could deliver genuine return on investment.
However, recent advances in AI models, particularly those designed for software development and reasoning tasks, have changed the equation. Developers have recognised that these tools represent a dramatic leap forward, not merely incremental improvement. That realisation has shifted organisational conversations from whether AI will create value to how fast it will reshape core workflows.
The next frontier is agentic AI: systems that execute tasks autonomously rather than simply generating text. Building these requires new application architecture. Organisations must retrieve information from multiple data sources, generate embeddings to represent meaning, apply ranking and retrieval techniques, and orchestrate AI agents alongside traditional workflows.
For Australian enterprises, this means the real work is only beginning. Success depends not on adopting the latest model but on building the infrastructure to deliver precise, contextual data to AI systems reliably at scale. The organisations that solve this problem, and build platforms capable of retrieving and analysing data across infrastructure, applications, and operational systems, will be the ones that unlock genuine value from AI.