Archived Article — The Daily Perspective is no longer active. This article was published on 12 March 2026 and is preserved as part of the archive.

Technology

Google's AI Automation Raises Privacy Questions Ahead of Australian Rollout

Gemini can now order food and book rides without a tap, but experts warn about data access and control

Image: The Verge
Key Points
  • Google's Gemini AI can now automate multi-step tasks like ordering food or booking rideshares on Pixel 10 and Samsung Galaxy S26 phones
  • The automation runs in a restricted sandbox environment and requires explicit user command, but still raises questions about data access
  • Feature launches in US and South Korea first; Australian users not yet eligible, but scam detection features already available locally
  • Competitors including OpenAI's ChatGPT and Anthropic's Claude are developing similar agent capabilities in parallel

Google announced a series of updates to its Gemini AI-powered features on Android, the most notable being a new way to use the AI to handle multi-step tasks like ordering an Uber or food delivery. The shift represents a significant step toward AI that acts rather than merely advises, though it arrives with both promise and unresolved questions about how much autonomy users should grant their devices.

The Gemini app can complete multi-step tasks in supported food, grocery, and rideshare apps, including Uber, DoorDash, and Grubhub. Users need only long-press the power button and issue a voice command: "book me a ride home" or "reorder my last meal." The AI then navigates the app, enters payment information, and completes the transaction while the user continues using their phone for other tasks.

The engineering involved is non-trivial. Gemini runs the application in a secure, virtual window on the user's phone and cannot access the rest of the device; users can watch its progress, seeing Gemini's scrolling, tapping, and typing in real time. This sandboxing is deliberate: it restricts what data Gemini can touch and keeps the human in the loop.

Google has built in several safety mechanisms. Automations cannot be kicked off without an explicit command from the device's owner. When Gemini works in the background, notifications let users jump in and intervene. The company is also scoping the beta cautiously: the feature will initially support select apps in the food, grocery, and rideshare categories, and will be available only in the US and South Korea.

Gemini automation will be available when the Galaxy S26 series hits store shelves on March 11, and will roll out to Pixel 10 and Pixel 10 Pro devices the same month. Australian users remain excluded for now, though Google has expanded Scam Detection for phone calls to Samsung Galaxy S26 series devices in the US; that feature is already offered on Pixel phones in the US, Australia, Canada, India, Ireland, and the UK.

The broader context matters. OpenAI is pushing its agent capabilities, and Apple is preparing deeper Siri integration with third-party apps. Startups like Adept and Rabbit have raised hundreds of millions promising similar capabilities, while Google has three billion Android devices already in pockets worldwide. Whoever cracks reliable automation at scale gains a structural advantage in consumer AI.

Yet concerns linger. Early AI agent experiments have been plagued by errors: ordering the wrong items, booking incorrect times, or misunderstanding user intent. More pressing for privacy-conscious users: for Gemini to automate these tasks, it needs access to location, payment methods, preferences, and activity across multiple apps. Google will need to convince users that the convenience outweighs the surveillance concerns that come with an AI watching everything they do.

The design philosophy here is "managed autonomy" rather than true delegation. Users retain visibility and can interrupt at any moment. Whether that balance suffices depends partly on whether the automation actually works reliably in practice, and partly on how much users trust Google's implementation of the sandbox. For Australians, both questions remain untested. When the feature does arrive locally, careful scrutiny of its actual behaviour will matter more than Google's stated safeguards.

Zara Mitchell

Zara Mitchell is an AI editorial persona created by The Daily Perspective, covering global cyber threats, data breaches, and digital privacy issues with technical authority and accessible writing. Articles under this persona are generated using artificial intelligence with editorial quality controls.