
Archived Article — The Daily Perspective is no longer active. This article was published on 11 March 2026 and is preserved as part of the archive.

Technology

When AI agents go rogue, vendors race to build the kill switch

Enterprise software firms are scrambling to sell recovery tools as autonomous AI systems move into production

Key Points
  • Cohesity, ServiceNow and Datadog partnered to create AI agent recovery systems that detect anomalies and automatically restore corrupted data
  • Gartner predicts 40% of enterprise applications will include task-specific AI agents by end of 2026, up from 5% in 2025
  • AI agents operate at machine speed; errors can propagate in seconds, making traditional human-led rollbacks too slow
  • Competing vendors including Rubrik and Veeam already launched similar recovery platforms, signalling a growing market

Cohesity announced a strategic integration with Datadog and ServiceNow to deliver enterprise-grade AI Agent Resilience, combining continuous observability with rapid, automated data recovery for production AI environments. The partnership reflects an uncomfortable reality now facing large enterprises: the artificial intelligence systems they are racing to deploy can and will cause significant damage when they fail.

The business case is straightforward. As AI agents move into mission-critical workflows, they increasingly interact directly with enterprise data stores, APIs, and infrastructure. AI systems operate at machine speed, and when errors occur, the impact can propagate in seconds. Gartner predicts up to 40 percent of enterprise applications will include integrated task-specific agents in 2026, up from less than five percent in 2025. At that scale, even a small percentage of errors becomes catastrophic.

The scenario the vendors describe feels almost routine in its specificity. In a representative production incident, an AI agent operating within its permissions boundary mistakenly deletes critical records from a cloud object store after misinterpreting a new data schema. Datadog detects an unexpected drop in object count within the past minute and automatically initiates a Cohesity recovery workflow. The affected records are restored from an immutable snapshot within minutes, without overwriting unaffected data and without manual intervention.
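The detect-then-restore loop in that scenario can be sketched in a few lines. This is an illustrative assumption about how such a workflow might be wired together, not the actual Datadog or Cohesity APIs; the function names, threshold, and snapshot shape are all hypothetical.

```python
# Illustrative sketch only: the monitoring and restore logic below is
# hypothetical and does not use the real Datadog or Cohesity SDKs.
from dataclasses import dataclass

@dataclass
class Snapshot:
    taken_at: str       # timestamp of the immutable snapshot
    keys: frozenset     # object keys captured in the snapshot

def detect_anomaly(baseline: int, current: int, threshold: float = 0.05) -> bool:
    """Flag a drop in object count larger than the allowed fraction
    of the behavioral baseline (here, a hypothetical 5%)."""
    if baseline == 0:
        return False
    return (baseline - current) / baseline > threshold

def keys_to_restore(snapshot: Snapshot, current_keys: set) -> set:
    """Restore only the objects missing relative to the snapshot,
    leaving unaffected data untouched (no blanket overwrite)."""
    return set(snapshot.keys) - current_keys

# Example: 1,000 objects at baseline, 900 after the agent's faulty delete.
snap = Snapshot("2026-03-11T09:00Z", frozenset(f"obj-{i}" for i in range(1000)))
live = {f"obj-{i}" for i in range(900)}

if detect_anomaly(baseline=1000, current=len(live)):
    missing = keys_to_restore(snap, live)
    print(f"restoring {len(missing)} objects from snapshot {snap.taken_at}")
```

The point of the sketch is the shape of the pipeline the vendors sell: a baseline comparison triggers the workflow, and recovery is scoped to the delta rather than a full rollback.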

These are not hypothetical concerns. Recent incidents of AI agent errors span a spectrum from technical malfunctions and legal issues to the deletion of entire production databases. When humans make mistakes, organisations summon teams of engineers to painstakingly rebuild damaged systems. When AI agents make mistakes, there is no one to blame and no time to react.

The infrastructure for detecting and reversing agent mistakes is becoming the primary sales pitch. Datadog provides real-time monitoring across cloud infrastructure, object stores, AI workloads, and application services, enabling enterprises to establish behavioral baselines and detect anomalous activity such as unexpected data deletions. Cohesity extends those insights into automated, API-driven recovery actions, allowing organisations to restore affected data assets to verified point-in-time states with speed and precision.

Cohesity is far from alone in sensing market opportunity. Cohesity will find itself in competition with Rubrik, which introduced a similar tool in August 2025, and native rollback capabilities that the likes of Cisco have built into their agentic tools. Veeam's Agent Commander unifies control across production and backup to detect toxic combinations, enforce granular policy, and precisely reverse AI-driven actions. Each vendor markets a slightly different architectural approach, but the underlying message is identical: organisations cannot safely delegate critical tasks to AI agents without simultaneously investing in the infrastructure to undo their mistakes.

Yet this raises a larger question about the sequencing of technological deployment. Users could, of course, wait for the market to mature, until AI is less likely to make mistakes that need rolling back and less susceptible to attack. Vendors are not making such caution easy: they keep adding agentic automation to their own products, often in the form of tools that diagnose problems and then offer to fix them. The economic incentive to deploy AI agents now is overwhelming, and the prudent response of waiting until the technology is safer runs counter to every vendor roadmap and investor expectation.

From a risk management perspective, the sophistication of these recovery platforms is reassuring but also revealing. Enterprise cybersecurity frameworks such as ISO 27001 and the NIST Cybersecurity Framework focus on systems, processes, and people. They do not yet fully account for autonomous agents that can act with discretion and adaptability. Organisations are deploying agents into production environments whose governance structures were designed for human workers and algorithmic automation, not for semi-autonomous systems that can make context-dependent decisions at machine speed.

The vendors are positioning recovery tools as a risk-mitigation layer, not a substitute for proper governance. As AI agents take on more consequential work across the enterprise, resilience can no longer be an afterthought. In the partners' framing, ServiceNow and Cohesity are setting a new standard for deploying AI responsibly at scale: every agent is governed, every action is auditable, and every disruption has a fast path to recovery.

The practical reality, however, suggests that most organisations will deploy now and strengthen governance later. The recovery tools allow companies to move faster, knowing they have a digital escape hatch if something goes wrong. Whether that escape hatch will be fast enough, or sophisticated enough to handle the cascading failures that autonomous systems at scale might produce, remains an open question that no vendor can yet answer with confidence.

Sophia Vargas

Sophia Vargas is an AI editorial persona created by The Daily Perspective. She covers US politics, Latin American affairs, and the global shifts emanating from the Western Hemisphere. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.