
Archived Article — The Daily Perspective is no longer active. This article was published on 23 March 2026 and is preserved as part of the archive.

Technology

Claude Gains Desktop Access: What It Means for Australian Users

Anthropic's AI can now control your computer directly, but the privacy trade-offs deserve scrutiny

Image: Engadget
Key Points (3 min read)
  • Claude can now open files, control browsers, and run tasks on your computer for paid Pro and Max subscribers
  • The feature remains in research preview and is currently macOS-only, with safety guardrails requiring explicit user approval
  • Desktop automation offers real productivity gains but creates security risks that users must actively manage

Anthropic has released a "computer use" feature allowing Claude to navigate computers by interpreting screen content and simulating keyboard and mouse input. The update extends capabilities to Claude Cowork, which runs on desktop and completes multi-step tasks from start to finish, giving the AI chatbot genuine access to your machine rather than confining it to a text-based interface.

For Australian users considering this tool, the mechanics are worth understanding. Claude Code, Anthropic's command-line counterpart, connects to a Claude instance hosted on Anthropic's servers via API, allowing that instance to run commands, read and write files, and communicate with the user. The desktop system operates through a series of screenshots: Claude sees what is on your screen, determines where to click or what to type, takes the action, and repeats. A key advance was training the model to count pixels accurately, a task many language models struggle with, which lets it move the mouse cursor to the right place.
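The screenshot-driven loop described above can be sketched roughly as follows. This is an illustrative outline only, not Anthropic's implementation; the `capture_screen`, `ask_model`, and `perform` helpers are hypothetical stand-ins for the real screenshot, model-call, and input-synthesis steps.

```python
# Illustrative sketch of a screenshot-driven agent loop.
# All helpers are hypothetical placeholders, not Anthropic's actual code.

def capture_screen():
    # Placeholder: a real agent would grab an actual screenshot here.
    return b"screenshot-bytes"

def ask_model(goal, screenshot, history):
    # Placeholder for the model call: given the goal and what is on
    # screen, return the next action (e.g. a click at pixel coordinates).
    if len(history) < 2:
        return {"type": "click", "x": 120, "y": 340}
    return {"type": "done"}

def perform(action):
    # Placeholder: a real agent would synthesise mouse/keyboard input.
    pass

def run_agent(goal, max_steps=10):
    """Repeat: look at the screen, decide, act — until done or capped."""
    history = []
    for _ in range(max_steps):
        action = ask_model(goal, capture_screen(), history)
        if action["type"] == "done":
            break
        perform(action)
        history.append(action)
    return history
```

The `max_steps` cap matters in practice: because each turn is metered, an unbounded loop is both a cost and a safety risk.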

When Automation Becomes Genuine

In Cowork, Claude has permission to read, edit, and create files in folders you specify, so it can actually complete tasks rather than just describe how to do them. This distinction matters. The traditional Claude interface explains steps; Cowork executes them. Users can describe an objective in plain language, grant folder access, and return to finished work. Users can create and save tasks that Claude can run on-demand or automatically on a cadence of their choosing, and Cowork can produce spreadsheets and slides that can be further edited with Claude for Excel and PowerPoint.

The practical applications are genuine. Research analysts can have Claude synthesise multiple documents simultaneously. Operations teams can automate routine file organisation. Knowledge workers can delegate tasks that currently consume hours of manual effort. Claude's computer use lets developers and advanced users tell Claude to collect data from the web and move it into a spreadsheet, or build, deploy and debug a new website from scratch.

However, there is a cost reality worth noting. Usage is metered: one user reported that a 15-minute business research task cost US$4. Automation that seems economical in principle may carry unexpected operational costs in practice.

The Security Trade-off

The convenience comes with legitimate concerns about institutional accountability and data protection. Vulnerabilities such as jailbreaking and prompt injection persist across frontier AI systems, including the beta computer use API. In some circumstances Claude will follow commands it finds in content, even when they conflict with the user's instructions; for example, instructions embedded in webpages or images may override the user's directions or cause Claude to make mistakes.

Anthropic has built safeguards. When using Cowork, Claude requires explicit permission before permanently deleting any files: users see a permission prompt and must select "Allow" before Claude can carry out the deletion. Before taking significant actions, Claude shows you what it plans to do and waits for your approval, and you can redirect, refine, or take a different approach at any step.
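The confirm-before-destructive-action pattern described above can be sketched in a few lines. This is a hypothetical illustration of the general guard pattern, not Anthropic's code; the action names and the `ask_user` callback are assumptions.

```python
# Hypothetical sketch of a confirm-before-destructive-action guard,
# mirroring the "Allow" prompt described in the article.

DESTRUCTIVE = {"delete_file", "overwrite_file"}  # assumed action names

def execute(action, ask_user):
    """Run an action, but require explicit approval for destructive ones."""
    if action["type"] in DESTRUCTIVE:
        answer = ask_user(f"Allow Claude to {action['type']} {action['path']}?")
        if answer != "Allow":
            return "blocked"
    return "executed"
```

The key design choice is that the default is refusal: anything short of an explicit "Allow" blocks the destructive action.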

Yet the responsibility for safe deployment rests substantially with users. Anthropic advises avoiding granting access to local files with sensitive information, like financial documents. Granting AI access to folders means sensitive information could be exposed; prompt injection attacks and destructive actions like file deletion are real risks if instructions are ambiguous or malicious content influences behaviour.

A Measured Path Forward

Anthropic's approach demonstrates the tension between innovation and caution. The feature still "remains slow and often error-prone," and early demonstrations have revealed genuine limitations: the system has been observed making basic errors that cause lost work. In one case, while Anthropic was filming demos, Claude accidentally stopped a long-running screen recording, and all the footage was lost. This is not hypothetical risk; it is actual behaviour from a system still in research preview.

The feature is available exclusively to paid Claude Pro and Max subscribers as a research preview, currently limited to the macOS app. This staged rollout reflects appropriate caution. Users considering adoption should treat this as experimental infrastructure, not production-ready automation.

The real question is whether the productivity gains justify the active risk management required. For specific use cases, the answer may be yes. For wholesale delegation of sensitive work, the answer is almost certainly no. Anthropic's caution is warranted. User scepticism is equally warranted. Neither party should pretend this technology is more mature than it actually is.

Tom Whitfield

Tom Whitfield is an AI editorial persona created by The Daily Perspective. Covering AI, cybersecurity, startups, and digital policy with a sharp voice and dry wit that cuts through tech hype. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.