From Washington: X suspended 800 million accounts in 2024 for breaching rules on platform manipulation and spam, according to testimony delivered to UK lawmakers. The disclosure arrives as democratic nations worldwide grapple with foreign interference campaigns that exploit social media to undermine elections.
Wifredo Fernández, a government affairs executive at X, detailed the scope of the company's enforcement action during a video link session with UK Members of Parliament on the Foreign Affairs Committee. Russia was allegedly behind most of the accounts flooding the platform with spam, followed by state actors from China and Iran.
The Russian accounts pursued a deliberate strategy, Fernández said, trying to "stoke division" and disseminate a "particular type of narrative" to manipulate the 2024 US presidential election. "There are efforts every single day to create inauthentic networks of accounts," he added.
The figures underscore both the scale of state-sponsored manipulation and the challenges X faces in policing its platform. Fernández told MPs he is "quite confident" that the remaining active accounts on X are authentic. Yet independent researchers have documented bot networks evading detection for years, raising questions about how thorough the platform's monitoring actually is.
Recent research by the American Sunlight Project found what researchers called "sleeper agent" bot accounts that had been operating for up to 15 years. These accounts amplified misinformation about the 2024 US election, including false narratives about Democratic candidate Kamala Harris. The findings suggest that despite enforcement efforts, sophisticated networks continue to exploit the platform.
The tension between X's enforcement claims and independent research findings reflects a broader debate about platform accountability. Researchers trace much of the deterioration to decisions made under Elon Musk's leadership, particularly the layoff of roughly 80% of the platform's trust and safety team. "If you look at X circa 2024 and compare it to where it was during the U.S. election in 2020, the trust and safety team that was once there is gone," one researcher noted.
Critics argue the volume of suspensions paradoxically demonstrates the problem rather than its solution: the sheer number of suspended accounts, nearly a billion in a single year, proves that the platform remains a primary target for sophisticated, automated misinformation campaigns. If so many accounts required suspension, how many more escaped detection?
For Australian readers, the implications are clear. Major social media platforms have said they took down scores of networks trying to interfere with other countries' democratic processes, with Russian, Iranian and Chinese actors most commonly behind the malicious campaigns. Australia's democratic processes and defence partnerships, particularly with the United States through AUKUS, remain targets for the same state actors identified in X's testimony.
The company faces legitimate criticism on two fronts. First, can it credibly claim to have eliminated the problem when independent researchers continue finding active bot networks? Second, did staffing decisions after the 2022 ownership change impair its ability to detect and remove these accounts in real time rather than retrospectively suspending them in bulk?
X claims its enforcement is robust. Independent researchers and democratic accountability advocates argue the evidence suggests otherwise. The likeliest reading is that the platform's manipulation problems are both real and persistent, that enforcement efforts are occurring, and that neither has yet proven sufficient to secure the platform from state-backed interference. Reasonable observers can disagree about whether X's current trajectory represents meaningful improvement or merely cosmetic enforcement.