
Archived Article — The Daily Perspective is no longer active. This article was published on 6 March 2026 and is preserved as part of the archive.

Business

Data Quality Crisis Emerges as Firms Rush to Deploy Autonomous AI Agents

Chief data officers are increasing investment in data infrastructure, but half of agentic AI adopters cite quality issues as a major deployment barrier.

Key Points
  • Half of agentic AI adopters cite data quality and retrieval issues as deployment barriers
  • 86% of CDOs plan to increase data management investments to support autonomous AI systems
  • Poor data governance creates risk: autonomous agents can amplify errors at scale without human intervention
  • Companies with strong data strategies report 71% higher trust in AI outputs and faster returns on investment
  • The gap between AI ambition and execution hinges on unglamorous data work, not algorithm development

The promise of autonomous artificial intelligence agents sounds almost frictionless: systems that reason independently, execute tasks without prompting, and continuously optimise their own performance. The reality, according to new research from data leaders surveyed by Informatica and Deloitte, is far messier.

A survey of 600 chief data officers shows that 47% of companies have already adopted agentic AI, and 86% plan to increase data management investments in the coming years. Yet the enthusiasm masks a fundamental problem: half of leaders cite data quality as the top challenge in deploying agentic AI.

This is not an abstract technical worry. MIT-led analysis of autonomous agents observed that they can behave "fast and loose," especially when context is incomplete or retrieval is noisy. When an algorithm makes a decision based on flawed information, a human operator might catch the error. When an autonomous agent does the same thing repeatedly across thousands of transactions, the mistake compounds into a business crisis.

Consider what it takes to deploy agentic AI responsibly. Research has found that roughly 80% of the work is consumed by unglamorous tasks: data engineering, stakeholder alignment, governance, and workflow integration, rather than the machine learning itself. For every hour spent perfecting a model, organisations should expect roughly four hours of implementation work.

What are these investments targeting? Data leaders cite improving data privacy and security (43%), strengthening data and AI governance (41%), and upskilling the workforce in data and AI literacy (39%) as top priorities. Many are deploying data observability tools to detect drift and anomalies, unified metadata catalogues to trace answers to their sources, and PII scanning to prevent sensitive data leaks.
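To make the observability piece concrete, here is a minimal sketch of the kind of PII scan such tooling runs before records reach an autonomous agent. The patterns, field names, and two PII types covered are illustrative assumptions, not any vendor's implementation; production scanners cover far more categories:

```python
import re

# Illustrative patterns for two common PII types (assumed for this sketch;
# real scanners use much broader, locale-aware rule sets and ML detectors).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_record(record: dict) -> dict:
    """Return {field: [pii_types]} for any string fields matching a pattern."""
    findings = {}
    for field, value in record.items():
        hits = [name for name, pattern in PII_PATTERNS.items()
                if isinstance(value, str) and pattern.search(value)]
        if hits:
            findings[field] = hits
    return findings

record = {"note": "contact jane.doe@example.com", "amount": "42.50"}
print(scan_record(record))  # {'note': ['email']}
```

A gate like this would sit in the ingestion pipeline: records with findings are quarantined or masked before an agent can read, act on, or leak them.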

The financial stakes are substantial. Some 42% of enterprises plan to build more than 100 AI agent prototypes, and 68% budget $500,000 or more annually for AI agent initiatives. Yet Gartner predicts that by the end of 2027, more than 40% of agentic AI projects will fail or be cancelled owing to escalating costs, unclear business value, or inadequate risk controls.

The differentiator between success and failure appears to be organisational discipline. Some 61% of CDOs say better data makes AI adoption easier, evidence that trustworthy data is the lever for faster, safer rollout. Organisations with a strong data strategy and governance report higher trust in their data (71% versus 50% without) and faster AI ROI, with 32% expecting positive returns within 6 to 11 months.

This creates a genuine tension between two legitimate values. On one hand, there is clear fiscal logic to investing in autonomous systems: they can reduce operational costs and accelerate decision-making. This appeals to organisations concerned about competitive advantage and efficiency.

On the other hand, the risks of deploying autonomous systems on poor foundations are real and material. IDC warns that by 2027, companies that fail to establish high-quality, AI-ready data foundations will suffer a 15% productivity loss as generative and agentic systems falter. The risk is not merely technical; it is organisational and regulatory.

The pragmatic path forward is neither to halt agentic AI deployment nor to pursue it recklessly. Firms that treat data management as a foundational prerequisite, not an afterthought, are seeing tangible returns. Leaders moving fastest are treating agentic AI like a data product: they define "golden" datasets, establish ground-truth labels, and stand up offline evaluation harnesses to track retrieval precision and recall, answer accuracy, and hallucination rates by domain. They measure data freshness, set SLAs for upstream sources, run regular PII and policy compliance tests, adopt human-in-the-loop checkpoints for high-impact actions, implement rollbacks for bad agent states, and maintain immutable logs for forensics.
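The retrieval metrics such an evaluation harness tracks are straightforward to compute. The sketch below, using hypothetical document IDs and a made-up evaluation set, shows how per-query precision and recall against ground-truth labels might be scored:

```python
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant items that were retrieved."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical evaluation set: each query pairs the agent's retrieved
# documents with human-labelled ground-truth relevant documents.
eval_set = [
    {"retrieved": {"doc1", "doc2", "doc3"}, "relevant": {"doc1", "doc3", "doc4"}},
    {"retrieved": {"doc5"}, "relevant": {"doc5"}},
]

scores = [precision_recall(q["retrieved"], q["relevant"]) for q in eval_set]
avg_p = sum(p for p, _ in scores) / len(scores)
avg_r = sum(r for _, r in scores) / len(scores)
print(f"precision={avg_p:.2f} recall={avg_r:.2f}")  # precision=0.83 recall=0.83
```

Run offline against a frozen "golden" dataset, numbers like these give teams a regression signal: a retrieval or data change that drops precision or recall is caught before it reaches an autonomous agent in production.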

The lesson is clear: autonomous AI is not a shortcut to efficiency. It is a different kind of system that demands a different rigour. The organisations investing in unglamorous data work now will be the ones realising value from autonomous agents later.

Priya Narayanan

Priya Narayanan is an AI editorial persona created by The Daily Perspective, analysing the Indo-Pacific, geopolitics, and multilateral institutions with scholarly precision. Articles under this persona are generated using artificial intelligence with editorial quality controls.