From London: As Australians woke on Monday morning, one of the sharpest competitive moves in the short history of consumer AI was already in motion. Anthropic's new memory import tool went live over the weekend, offering users of ChatGPT, Google Gemini, Microsoft Copilot, and other chatbots a way to carry months of accumulated personal context directly into Claude, without starting from scratch.
The mechanics are deliberately simple: visit Anthropic's dedicated page, copy the provided prompt, paste it into whichever chatbot you are leaving, then paste the resulting text block into Claude's memory settings. Imported memories can take up to 24 hours to process fully, since Claude handles memory updates in daily synthesis cycles rather than in real time. Users can then review and edit what Claude has retained under "Manage memory" in the app's settings.
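For readers who want to picture the flow, here is a minimal sketch. Everything in it is an invented placeholder: Anthropic's actual export prompt is whatever its import page provides, and the memory block each chatbot returns will differ.

```python
# Illustrative only: the prompt text and memory block below are hypothetical
# placeholders, not Anthropic's real export prompt or any chatbot's output.

# Step 1: the user copies a prompt like this from Anthropic's import page.
EXPORT_PROMPT = (
    "Summarise everything you remember about me that is relevant to my work: "
    "my role, ongoing projects, preferences and recurring topics. Format the "
    "answer as a plain-text block I can paste into another assistant."
)

# Step 2: the outgoing chatbot replies with a text block along these lines,
# which the user then pastes by hand into Claude's memory settings (step 3).
EXAMPLE_MEMORY_BLOCK = """\
Role: policy analyst at a Canberra-based research institute
Projects: quarterly AI-procurement briefing; data-sovereignty paper
Preferences: plain English, Australian spelling, concise summaries
"""

if __name__ == "__main__":
    print(EXPORT_PROMPT)
    print(EXAMPLE_MEMORY_BLOCK)
```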
The feature is not available to everyone. Cross-chatbot memory transfer is limited to paid users, potentially creating a new incentive to upgrade while keeping data migration under tighter control. Anthropic has also been clear about scope: the tool is oriented toward professional context, focused on work-related topics rather than the full breadth of personal detail a user might have shared with a rival chatbot over many months.
The timing is no accident. Anthropic lost its Pentagon contract on 28 February 2026 after refusing to loosen safeguards for military use of Claude, and OpenAI moved quickly to secure the deal instead. The two conditions Anthropic would not budge on were prohibitions on mass domestic surveillance of American citizens and on the use of AI in fully autonomous weapons systems without human oversight.
Anthropic said it would challenge the supply-chain risk designation in court, calling it both legally unsound and a dangerous precedent for any American company that negotiates with the government. The company's publicly stated position was that current frontier AI models are not reliable enough for use in fully autonomous weapons, and that allowing such use would endanger America's warfighters and civilians.
OpenAI's Sam Altman moved fast after Anthropic walked away. Altman said he had reached an agreement with the Department of Defense that reflects the firm's principles: prohibiting domestic mass surveillance and keeping humans responsible for the use of force, including for autonomous weapon systems. By Altman's own admission, the deal was "definitely rushed" and "the optics don't look good".
Critics of OpenAI's deal have not been satisfied by those assurances. Techdirt's Mike Masnick claimed the deal "absolutely does allow for domestic surveillance" because it states that the collection of private data will comply with Executive Order 12333. OpenAI has disputed that reading, arguing that its cloud-only deployment model and multi-layered safety controls provide stronger protections than any previous classified AI agreement. The debate about where the real safeguards sit remains genuinely unresolved.
The public reaction, whatever one makes of the technical details, was swift. The backlash against OpenAI's Pentagon partnership fuelled social media campaigns and has helped Claude's user base grow by more than 60 per cent since January 2026. The Instagram account "quitGPT" gained 10,000 followers in the wake of the news, and a Reddit post urging users to cancel and delete ChatGPT drew 30,000 upvotes. Claude overtook ChatGPT in Apple's US App Store on the Saturday following the Pentagon announcement.
For Canberra, the implications are worth watching. Australian government agencies and major corporations have been rapidly integrating AI tools into their operations, and questions about data sovereignty and the military applications of commercial AI are not uniquely American concerns. The Anthropic episode provides a live case study in what happens when an AI company draws hard ethical lines against a government client, and whether commercial markets will reward or punish that stance.
There is a reasonable counter-argument to the wave of Claude enthusiasm. ChatGPT remains a close second on the app store charts and retains what many consider a first-mover advantage in the AI space. OpenAI has not announced a comparable import feature, and as the market leader it benefits from high switching costs; making it easy to leave is not in its interest. The memory import tool is clever, but it works precisely because users must actively choose to switch; inertia remains OpenAI's most durable competitive advantage.
Memory portability challenges the idea that stored context permanently locks users into one platform, and the focus of AI competition is moving beyond raw performance toward ecosystem control and data sovereignty. That is a genuinely significant shift. Whether Anthropic's ethical stance and its clever product response translate into a lasting change in market share, or merely a spike driven by a specific political moment, is a question the coming months will answer. The honest position is that both outcomes are plausible, and reasonable observers disagree about which is more likely.