The Australian government is deploying Microsoft Copilot across departments to assist with policy and administrative work in 2026, while simultaneously imposing strict transparency requirements on private businesses that do not take effect until December 2026. This timing gap reveals an institutional blind spot: regulators are using AI before the safeguards they demand of business are in place.
Treasury and other Australian Public Service agencies are rolling out Copilot to help draft content, summarise information, and manage knowledge. In a 2024 trial involving 218 Treasury staff, almost two-thirds of users reported that Copilot was beneficial for administrative tasks. The government is now pursuing whole-of-government adoption, with plans to appoint a Chief AI Officer by July 2026.
But starting 10 December 2026, private sector companies will face mandatory transparency obligations that government agencies do not yet have to meet publicly. Any business using AI to influence customer or employee decisions must disclose this in its privacy policy. Non-compliance carries fines up to A$62,600 per offence, or up to 30 per cent of annual turnover for serious breaches. The Australian Privacy Commissioner has indicated the office will conduct proactive compliance scans of private sector privacy policies.
The Welsh government example shows why this timing matters. In March 2026, it emerged that officials had used Copilot to analyse interviews and process evidence for a review that led to the closure of Industry Wales, a state-owned body employing dozens of staff. Copilot transcribed and grouped comments from 28 interviews, yet the government did not disclose this to the public or to the organisation's leadership until after the decision was announced. The organisation's chair described the process as lacking proper validation. A Senedd member called using AI to decide the fate of a government-owned organisation "bonkers".
For Australian public servants and citizens, the sequence matters. Government is getting an AI deployment head start: Copilot is being used now to assist policy work and administrative decisions affecting public services. Yet there is no public requirement for agencies to disclose which specific decisions have been AI-assisted, and no mandatory timeline matching the December 2026 deadline imposed on private business.
The Australian government has published an internal AI governance policy requiring agencies to assess and register AI use cases. But this is internal process, not public accountability. The policy requires explainability and human accountability, but does not mandate that Australians be told whether AI assisted in decisions affecting them or their services.
The institutional question is straightforward: if private businesses must disclose AI-assisted decisions by December 2026 under penalty of significant fines, should not government agencies—which make decisions with far greater consequences for citizens—face the same transparency requirement on the same timeline? The Welsh precedent suggests government agencies may not apply safeguards to themselves that they demand from the private sector.