The risk of autonomous AI agents spiralling out of control is no longer theoretical. As organisations worldwide accelerate their adoption of AI-powered systems capable of making independent decisions across corporate networks, Microsoft has introduced a tool designed to help IT teams regain visibility and control.
Agent 365 allows administrators to see how many agents are roaming their systems, how many human employees are using these agents, and what permissions each one holds. The system functions as a unified dashboard, consolidating information that was previously scattered across an organisation's security infrastructure.
This comes at a critical moment. Gartner estimates that 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. That rapid proliferation creates an equally rapid governance problem. Without oversight, agents can become what security experts now term the "new insider threat".
The dashboard flags risky agents, including those accessing unfamiliar data sources or showing suspicious behaviour patterns. This detection layer matters because AI agents can be compromised in ways humans cannot. As organisations adopt autonomous agents that can browse, write code, and act across multiple systems, autonomy becomes a major risk multiplier: agents can chain tasks together, reaching systems outside their intended scope, and in misconfigured environments they can trigger workflows that expose sensitive data or weaken security controls.
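Microsoft has not published how Agent 365 scores agent behaviour. As an illustration only, the core idea of flagging agents that touch unfamiliar data sources can be sketched as a comparison between each agent's registered scope and its observed activity; the registry format and event fields below are hypothetical, not Agent 365's actual data model.

```python
# Illustrative sketch: flag agents whose access events fall outside
# their registered data sources. All structures here are assumptions,
# not Agent 365's real schema.

def flag_risky_agents(registry, access_events):
    """Return the IDs of agents that touched an unregistered data source."""
    flagged = set()
    for event in access_events:
        allowed = registry.get(event["agent_id"], set())
        if event["data_source"] not in allowed:
            flagged.add(event["agent_id"])
    return flagged

# Hypothetical inventory: each agent and the sources it was provisioned for.
registry = {
    "invoice-bot": {"erp", "email"},
    "hr-assistant": {"hris"},
}
events = [
    {"agent_id": "invoice-bot", "data_source": "erp"},          # in scope
    {"agent_id": "invoice-bot", "data_source": "source-repo"},  # out of scope
    {"agent_id": "hr-assistant", "data_source": "hris"},        # in scope
]

print(flag_risky_agents(registry, events))  # {'invoice-bot'}
```

Real systems would add behavioural baselines and anomaly scoring on top of this simple scope check, but the principle is the same: visibility depends on first knowing what each agent is supposed to touch.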
A particular danger emerges from excessive permissions. When autonomous agents are granted broad access, they become a de facto "superuser" that can chain together access to sensitive applications and resources without security teams' knowledge or approval. Once such an agent is compromised, through prompt injection or another attack vector, the damage scales rapidly.
Agent 365 is not limited to Microsoft-built agents; it will also support agents from partners like Anthropic, SAP, OpenAI, Workday and others. This cross-platform capability acknowledges reality: organisations typically operate heterogeneous environments where security teams need unified visibility regardless of which vendor supplied each agent.
The underlying architecture leverages Microsoft's existing security infrastructure to enforce control at scale. The system enforces least-privilege access, giving agents access only to the apps and resources they need to complete their tasks, and makes real-time, intelligent access decisions based on context and risk.
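Microsoft does not document the decision logic behind those access checks. A minimal sketch of a least-privilege, risk-aware authorisation rule might look like the following; the scope names, risk scores, and threshold are all assumptions for illustration, not Agent 365's actual policy engine.

```python
# Hypothetical sketch of a least-privilege, risk-aware access check.
# Scopes, risk scores, and the threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class AgentIdentity:
    agent_id: str
    granted_scopes: frozenset  # scopes explicitly provisioned for this agent
    risk_score: float = 0.0    # 0.0 (clean) .. 1.0 (likely compromised)


RISK_THRESHOLD = 0.7  # assumed cut-off; a real system would tune this


def authorize(agent: AgentIdentity, requested_scope: str) -> bool:
    """Allow only scopes the agent was provisioned with, and deny
    everything once the agent's observed risk crosses the threshold."""
    if agent.risk_score >= RISK_THRESHOLD:
        return False  # risky agent: block all access until reviewed
    return requested_scope in agent.granted_scopes


bot = AgentIdentity("invoice-bot", frozenset({"erp.read"}))
print(authorize(bot, "erp.read"))       # True: within provisioned scope
print(authorize(bot, "payroll.write"))  # False: scope never granted
```

The point of the deny-by-default structure is that a compromised agent cannot "chain" its way into resources it was never provisioned for, which is precisely the superuser failure mode described above.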
Yet governance tools alone cannot solve what is fundamentally a permissions problem. AI tools and agents are increasingly granted broad, automated access to enterprise data, often with fewer controls and less oversight than human users receive. The dashboard provides visibility, but the hard work of actually restricting permissions falls to security teams, many of whom face pressure to deploy new systems rapidly rather than govern them carefully.
According to Palo Alto Networks' chief security intelligence officer: "CISOs and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible, and that creates this massive amount of pressure—and massive workload—that the teams are under to quickly go through procurement processes and security checks."
This creates a genuine tension. Organisations that deploy agents gain speed and efficiency; those that delay risk ceding that advantage to competitors. Yet those same agents, if poorly configured, become precision instruments for data theft or fraud. Neither rapid deployment nor indefinite delay serves the long-term interest.
Agent 365 cannot resolve that underlying tradeoff. What it can do is make the risk visible. By aggregating agent inventory, flagging suspicious behaviour, and enabling automated permission enforcement, Microsoft's tool shifts agent governance from blind faith to structured control. Whether organisations actually use those capabilities is a matter of will and organisational discipline, not technology.