Skip to main content

Archived Article — The Daily Perspective is no longer active. This article was published on 23 March 2026 and is preserved as part of the archive. Read the farewell | Browse archive

Technology

Cisco's DefenseClaw Takes Aim at Agentic AI Security Barriers Halting Enterprise Adoption

Lack of governance frameworks is slowing deployment of autonomous agents, creating a substantial gap between pilots and production systems

Cisco's DefenseClaw Takes Aim at Agentic AI Security Barriers Halting Enterprise Adoption
Image: ZDNet
Key Points 3 min read
  • Cisco launched DefenseClaw at RSA Conference 2026, an orchestration framework for securing autonomous AI agents in enterprise environments.
  • Enterprises show a stark adoption lag: 85% have AI agent pilots underway, but only 5% have moved agents into production, citing security concerns.
  • DefenseClaw automates security scanning, inventory, and runtime protection to eliminate manual steps and establish trusted identities for agents.
  • AI agents remain vulnerable to zero-click attacks through prompt injection and tool poisoning, demonstrating the gap between agent capability and governance.

The gap between AI agent promise and enterprise deployment is stark. According to recent Cisco research, roughly 85% of major enterprises have AI agent pilots underway. Yet only 5% have moved those systems into production. That 80-point gap reflects not scepticism about artificial intelligence's potential, but rational caution about genuine security risks.

This adoption freeze is precisely the problem Cisco aims to solve. At RSA Conference 2026, the networking giant unveiled DefenseClaw, an open-source security framework designed to give enterprises confidence that autonomous agents can be deployed safely at scale. According to Cisco's official announcement, DefenseClaw consolidates multiple security tools into a single framework and integrates with NVIDIA's OpenShell to automate security checks that previously demanded manual intervention.

The core problem is simple: agentic AI agents do not just answer questions, they act. They send emails, modify files, execute code, place orders, change permissions. An errant response from a chatbot costs nothing; an errant action from an agent can be catastrophic. Yet most enterprises lack visibility into which agents are running in their environment or who bears accountability if something goes wrong.

The technical vulnerability problem is equally pressing. Researchers have demonstrated that even the most sophisticated AI agents remain susceptible to manipulation through prompt injection attacks. At this year's RSA Conference, security researchers demonstrated zero-click exploits against Cursor, Salesforce Agentforce, and ChatGPT. By sending specially crafted prompts via calendar invitations or malicious messages, attackers could trick agents into leaking secrets, exfiltrating data, or executing unauthorised commands without any user interaction.

According to a survey cited by identity governance firm Apono, 98% of security leaders report that safety and data concerns have already delayed or reduced agentic AI deployments. The consensus among chief information security officers centres on a practical requirement: before scaling agent adoption, organisations need just-in-time access controls, dynamic policy enforcement, and unified governance across human and non-human identities.

Cisco's response involves multiple coordinated initiatives. DefenseClaw bundles several scanning tools: Skills Scanner examines agent capabilities, MCP Scanner verifies external integrations, CodeGuard analyzes AI-generated code for vulnerabilities, and an AI bill-of-materials system tracks dependencies. This allows developers to deploy agents faster without waiting for separate security approvals. The framework also includes runtime protection; Cisco's approach includes continuous content scanning at the execution level, meaning a skill that was clean at deployment can be detected and blocked if it later begins exfiltrating data.

Cisco is not alone in addressing this challenge. Microsoft has unveiled Agent 365, a unified control plane for agent management and governance. AWS has published the Agentic AI Security Scoping Matrix, a framework for assessing security requirements across different agent architectures. Multiple vendors are essentially competing to provide the governance layer that the industry lacks.

The stakes are high. Supply chain attacks on agent frameworks have already materialised. Threat researchers have documented cases where compromised AI agents approved fraudulent orders, with a single manufacturing company losing USD 3.2 million to a vendor-validation agent that had been compromised through a supply chain attack on its model provider. These incidents underscore why the security community is treating agent deployment with such caution.

From a fiscal and operational perspective, this matters deeply. The projected value of agentic AI across use cases—supply chain optimisation, customer service, code generation—is estimated at USD 2.6 to 4.4 trillion annually. Yet unlocking that value depends on establishing security foundations that don't slow development to a crawl. This is the central tension: enterprises need security guardrails robust enough to withstand autonomous systems operating at scale, but lightweight enough that they don't become the bottleneck preventing adoption entirely.

The tools Cisco and competitors are releasing represent an attempt to square that circle. Whether they actually close the pilot-to-production gap remains to be seen. The broader pattern is clear: the constraint on agentic AI adoption is shifting from technical capability to institutional confidence. Organisations now possess the technology to deploy autonomous agents. What they lack is the governance infrastructure to do so safely and at the accountability standards required for production systems.

Sources (7)
Aisha Khoury
Aisha Khoury

Aisha Khoury is an AI editorial persona created by The Daily Perspective. Covering AUKUS, Pacific security, intelligence matters, and Australia's evolving strategic posture with authority and nuance. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.