Organisations are deploying AI agents faster than their security teams can govern them. These autonomous systems invoke APIs, move data across systems, and execute tasks without direct human intervention, yet most lack basic identity controls. For identity access management vendor Okta, this gap represents both a crisis and an opportunity.
The company announced this week a security framework for AI agents and a platform to implement it, with general availability set for 30 April 2026. The announcement targets a genuine problem: 88% of organisations report suspected or confirmed AI agent security incidents, yet only 22% treat AI agents as independent, identity-bearing entities.
Traditional identity systems were built for humans and static software. AI agents differ from conventional user accounts: they operate autonomously, trigger workflows across multiple systems, run commands on user machines, interact with file systems, and pass data between applications. An agent deployed in development might spin up without IT approval, run continuously with legitimate credentials, and have nobody tracking its lifecycle. Employees can connect tools independently, creating "shadow" deployments that security teams may not know about.
Okta's answer centres on three operational questions: where are the agents, what systems can they access, and what can they do? The company demonstrated importing AI agents and their metadata from Salesforce, ServiceNow, Google and AWS with one click. From the same dashboard, Okta's agent discovery tool lets users find unmanaged agents and assign them owners and governing policies, running continuously in the background to keep the inventory current.
The governance component is critical. Okta for AI Agents can trigger a universal logout if an agent starts accessing things it shouldn't. Each agent receives a unique identity in Okta's Universal Directory rather than hiding behind a single service account that multiple systems might share, which matters for accountability. The directory is positioned as the place to assign clear human ownership for each agent, an increasingly important requirement when autonomous systems act on behalf of staff.
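To make the accountability point concrete, here is a minimal sketch of the one-identity-per-agent model in Python. None of these class or method names are Okta's API; they are hypothetical, chosen to show why a shared service account loses attribution while per-agent identities preserve it.

```python
from dataclasses import dataclass, field
import uuid


@dataclass
class AgentIdentity:
    """One identity per agent, with a named human owner (hypothetical model)."""
    name: str
    owner: str  # an accountable person, not a shared service account
    agent_id: str = field(default_factory=lambda: f"agent-{uuid.uuid4().hex[:8]}")


class AgentDirectory:
    """Minimal registry: every recorded action maps back to exactly one agent."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentIdentity] = {}
        self.audit_log: list[tuple[str, str]] = []

    def register(self, name: str, owner: str) -> AgentIdentity:
        agent = AgentIdentity(name=name, owner=owner)
        self._agents[agent.agent_id] = agent
        return agent

    def record_action(self, agent_id: str, action: str) -> None:
        # An unregistered ID is a shadow agent: it acted without an identity.
        if agent_id not in self._agents:
            raise KeyError(f"unknown agent: {agent_id}")
        self.audit_log.append((agent_id, action))

    def owner_of(self, agent_id: str) -> str:
        return self._agents[agent_id].owner
```

Under a single shared service account, two agents' actions collapse into one principal in the audit trail; here, each log entry resolves to a distinct agent and a named owner.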
Regional adoption shows the same pattern. Stephanie Barnett, Vice President of Presales APJ at Okta, said organisations in the region are adopting AI agents faster than their governance and security practices can keep pace. "Across Asia Pacific, organisations are moving quickly to embed AI agents into everyday business processes, from customer engagement to internal operations. The pace of adoption is accelerating faster than most governance and security frameworks can evolve."
For Australian technology leaders and security teams, the implications are direct. Many Australian enterprises already rely on Okta for identity management; those running AI agents without governance frameworks now face visibility blind spots and compliance exposure. The announcement legitimises agent identity as a distinct governance domain, not merely a feature of a model or an API permission.
Other vendors including SailPoint and emerging specialists like Token Security and Aembit are moving in parallel. The market signal is clear: identity is becoming the control plane for AI systems, not an afterthought. Organisations that act now to bring unmanaged agents into governance frameworks will move faster later. Those that delay risk accumulating technical debt and compliance complications as regulators catch up to agentic AI deployment.
What this means for your security posture
A central element of the plan is discovering both sanctioned and unsanctioned agents. Organisations may approve certain agent platforms through IT, while employees can also connect tools independently. The directory is designed to provide a searchable inventory and lifecycle management, from onboarding to decommissioning.
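The sanctioned-versus-unsanctioned distinction reduces to a reconciliation problem: compare what discovery observes running against the approved inventory. The sketch below is purely illustrative (not an Okta function); it assumes agents are identified by string IDs and shows the two lists any such reconciliation produces.

```python
def reconcile_inventory(
    discovered: set[str], sanctioned: set[str]
) -> tuple[set[str], set[str]]:
    """Compare observed agents against the approved inventory (illustrative).

    Returns (shadow, stale):
      - shadow: agents seen running but never approved through IT
      - stale:  approved entries no longer observed, candidates for
        decommissioning at the end of their lifecycle
    """
    shadow = discovered - sanctioned
    stale = sanctioned - discovered
    return shadow, stale
```

Shadow agents feed the onboarding workflow (assign an owner and policies); stale entries feed decommissioning, closing the lifecycle loop the directory is designed to manage.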
The transparency piece has teeth. The approach includes detecting employee connections between AI agents and enterprise applications, plus views into permissions and "blast radius" to help security teams assess the likely impact if an agent is compromised. This contextual risk assessment is harder than traditional permission audits because agents move faster and act continuously.
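A blast-radius view is, at its core, a reachability computation over granted access: if an agent is compromised, everything it can reach transitively is exposed. This sketch models that with a breadth-first traversal; the graph structure and function are assumptions for illustration, not Okta's implementation.

```python
from collections import deque


def blast_radius(grants: dict[str, set[str]], start: str) -> set[str]:
    """Return every resource reachable if `start` is compromised.

    `grants` maps an identity (agent or system) to the resources it can
    access; a compromised agent inherits the reach of each system it can
    touch, so we follow grants transitively. Illustrative only.
    """
    reached: set[str] = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for target in grants.get(node, set()):
            if target not in reached:
                reached.add(target)
                queue.append(target)
    return reached
```

The difference from a traditional permission audit is the transitive step: an agent with one CRM grant may, through that system's own integrations, reach billing or HR data, and a continuously acting agent can traverse that chain far faster than a human account would.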
For organisations not yet managing agent identity explicitly, the Okta framework provides a blueprint. Okta is adding agent-related integrations to the Okta Integration Network, extending its catalogue of more than 8,200 integrations with support for AI agent platforms including Boomi, DataRobot, and Google Vertex AI. Integration breadth matters: shadow agents multiply when governance tools don't talk to the platforms where agents actually run.