From Washington: In the hours following the US and Israeli strike on Iran, one of the most consequential military actions in the Middle East in years, the information environment on X collapsed into disorder. According to a review by Wired, hundreds of posts promoting misleading claims about the locations and scale of the attack spread rapidly across the platform, reaching large audiences before any meaningful correction could take hold.
The episode is a pointed reminder of what happens when a major geopolitical event collides with a social media platform that has significantly wound back its content moderation infrastructure. Since Elon Musk acquired X, formerly Twitter, the platform has reduced its trust and safety workforce, relaxed enforcement of its misinformation policies, and restructured its Community Notes fact-checking system in ways that critics argue have slowed its response to fast-moving crises.
The disinformation surge on X is not merely an American problem, and its effects will reverberate across the Pacific. Australian policymakers, defence analysts, and members of the public who turned to X for real-time information about the strike would have encountered a chaotic mix of verified reporting, unverified claims, and outright fabrication. For a nation deeply embedded in the AUKUS alliance, with strategic equities of its own tied to Middle East stability, the integrity of public information in such moments is not a trivial concern.
The centre-right case for treating this seriously is straightforward. Functioning markets, sound policy decisions, and effective democratic deliberation all depend on a reasonably accurate shared information base. When that base is corrupted at speed, the costs are real: investors make decisions on false premises, governments face pressure based on inaccurate public understanding, and allies struggle to coordinate messaging. This is not a question of political censorship. It is a question of basic institutional reliability.
That said, the counter-argument from free-speech advocates deserves honest engagement. Heavy-handed platform moderation during breaking news events carries its own risks. Governments and institutions have their own interests in controlling narratives, and platforms that act too quickly to suppress content have historically made errors that silenced legitimate journalism and eyewitness reporting. The ACCC's Digital Platform Services Inquiry has grappled with exactly this tension in the Australian context, noting that interventions intended to reduce harm can, if poorly designed, create new vectors for censorship.
The Community Notes model that X has relied on since scaling back its moderation teams is premised on crowd-sourced correction. In theory, it is a decentralised, less paternalistic alternative to top-down fact-checking. In practice, Wired's review suggests the model is too slow for fast-moving crises, where false claims can be seen by millions before a correction is appended. Speed, in these situations, is not a detail. It is the whole ballgame.
On Capitol Hill, legislators on both sides of the aisle have been circling the question of platform accountability for years without producing durable legislation. The Australian Parliament's review of the Online Safety Act offers a more structured domestic framework, though its reach over a US-based platform in real-time crisis conditions is limited.
Australia's approach to combating online misinformation has leaned toward regulatory frameworks and codes of conduct rather than direct government intervention in content decisions, a balance that reflects genuine uncertainty about where the line between protecting public discourse and policing it should fall.
The honest conclusion here is that no single model resolves the tension cleanly. Platforms like X will always face pressure to act fast during breaking events, and they will always risk getting it wrong in both directions. What the Iran strike disinformation surge shows is that X's current settings are poorly calibrated for crisis moments. Whether the answer is better platform design, stronger regulatory pressure, or greater media literacy among users, the problem is real, and the costs of ignoring it accumulate with every major event.
Reasonable people disagree about how much responsibility platforms should bear for the content that flows through them. But the evidence from this week suggests that disagreement should be an informed one, not one resolved by pretending the problem does not exist.