$99 per month. That's what Microsoft will charge enterprises for its new flagship productivity bundle, complete with AI automation, security governance, and what the company is framing as the best-in-class agent technology.
On Monday, Microsoft announced Copilot Cowork, a cloud-based automation tool that integrates Anthropic's Claude models directly into Microsoft 365. Unlike Anthropic's standalone Claude Cowork, which lives on a user's desktop and accesses local folders, Microsoft's version runs in the cloud within Microsoft 365 infrastructure, drawing on what the company calls Work IQ—a layer of enterprise context harvested from emails, meetings, files, and chat history across the organisation.
The pitch is straightforward: delegate multi-step knowledge work to an AI agent that already understands your organisation's data, operates within your security perimeter, and integrates seamlessly with tools your team already uses. Microsoft describes it as "Wave 3" of Microsoft 365 Copilot, a shift from conversational assistance toward autonomous execution.
Pragmatism or Desperation?
The move is notable for what it signals about Microsoft's relationship with rival AI companies. Rather than betting exclusively on OpenAI, Microsoft is making Claude models available alongside OpenAI's latest offerings in mainline Copilot Chat for Frontier program users. The company frames this as "model diversity by design"—avoiding vendor lock-in while letting enterprise customers benefit from competing innovations. Anthropic's reasoning capabilities, the pitch suggests, excel at complex multi-step tasks that might trip up other models.
For Microsoft, the strategy makes commercial sense. Copilot adoption has remained modest relative to the company's $13 billion investment in Anthropic and OpenAI infrastructure. Microsoft said Copilot paid seats have grown 160 per cent year on year, with daily active usage up 10-fold, but that still leaves enormous upside if the company can convince enterprise customers that Copilot has evolved beyond a chatbot. Offering Claude Cowork's capabilities within Microsoft's walled ecosystem—complete with governance tools—is a direct challenge to Anthropic's own product ambitions.
Yet there is an unresolved tension at the heart of this rollout: Microsoft is integrating technology that carries known, unpatched security vulnerabilities into enterprise environments.
The Vulnerability That Preceded the Launch
Security researchers at PromptArmor demonstrated that Claude Cowork is vulnerable to file exfiltration attacks via indirect prompt injection. Within 48 hours of Claude Cowork's public launch in January, researchers showed that a malicious document containing hidden instructions—formatted as invisible 1-point white text—could trick Claude into uploading sensitive files to an attacker's Anthropic account. The attack required no user approval and exploited the fact that Claude's virtual machine environment restricts network access except to Anthropic's own API endpoints.
The files exfiltrated in the proof-of-concept included financial documents with partial Social Security numbers. The vulnerability worked against both Claude Haiku and Claude Opus 4.5, Anthropic's flagship model, meaning it crosses capability tiers.
More damaging still: security researcher Johann Rehberger had reported this flaw to Anthropic via HackerOne in October 2025, three months before Claude Cowork launched. Anthropic acknowledged the report but did not issue a fix before releasing the product to millions of users.
Microsoft claims that Copilot Cowork is "prevented from doing harm" by Microsoft 365's security and governance controls, operating in a protected, sandboxed cloud environment. The company says it will not make the same choices Anthropic made. But the underlying architectural flaw—the ability of prompt injection to manipulate file operations—remains unresolved in the Claude models themselves. Microsoft's infrastructure may mitigate exposure, but it cannot eliminate the risk entirely.
The Broader Bet
Microsoft's announcement also includes a new Agent 365 platform—marketed as the "control plane" for managing AI agents across enterprises—launching May 1 at $15 per user per month. The full Enterprise E7 suite, bundling Copilot, Agent 365, and identity and security tools, will cost $99 per user per month.
This is a clear attempt to own the governance layer for enterprise AI agents as adoption accelerates. Microsoft is betting that organisations will pay premium prices for unified management, complaining IT departments, and integrated security. The strategy mirrors how the company built dominance in productivity software: not by leading in innovation, but by bundling, integrating, and offering a complete governance story that larger organisations demand.
Whether that strategy works depends on whether customers view Copilot Cowork's conveniences as worth the security trade-offs. Microsoft's infrastructure may provide better visibility and controls than Anthropic's desktop approach, but it does not provide guarantees, because the underlying problem—prompt injection in foundational models—remains an active area of AI research with no complete solution in sight.
The honest conclusion: you are paying for integration, governance, and convenience. The security dividend is real but incomplete. Reasonable organisations might make that trade-off. But they should do so with clear eyes about what they are and are not buying.