
Archived Article — The Daily Perspective is no longer active. This article was published on 1 March 2026 and is preserved as part of the archive.

Politics

Hidden Opt-Out Pages and Billions in Losses: The Data Broker Reckoning

A US congressional probe finds nearly $21 billion in identity-theft losses tied to just four data broker breaches, raising urgent questions for Australian regulators.

Key Points
  • The US Joint Economic Committee's Democratic minority estimates $20.9 billion in identity-theft losses linked to just four data broker breaches over the past decade.
  • An investigation by The Markup, CalMatters, and WIRED found at least 35 data brokers hiding legally required opt-out pages from Google and other search engines.
  • Four of five brokers contacted by Senator Maggie Hassan subsequently changed their practices; one firm, Findem, failed to engage or remove the blocking code.
  • The Trump administration withdrew a proposed rule that would have subjected data brokers to stricter oversight under the Fair Credit Reporting Act.
  • Australia is mid-way through its own privacy law overhaul, but lacks specific data broker regulation equivalent to California's mandatory registry requirements.

The data broker industry, which earns its living by quietly compiling and selling the personal details of hundreds of millions of people, is confronting a reckoning in Washington. Congressional Democrats on the Joint Economic Committee say they have identified more than $20.9 billion in consumer losses tied to identity theft connected to four major breaches involving data broker firms. The figure, released in a report by the committee's Democratic minority late last month, draws on a chain of investigative journalism that began with a specific, almost mundane discovery: companies were hiding the privacy tools they were legally required to offer.

The story starts in August 2025. An investigation by The Markup and CalMatters, co-published with WIRED, found that at least 35 firms had hidden opt-out information from search results, making it harder for people to take control of their own data and safeguard their privacy online. The mechanism was simple. Several data brokers placed a "noindex" tag on the pages where consumers could exercise their right to opt out. The tag tells search engines not to index a page, so it never appears in search results, creating a barrier for consumers trying to block brokers from using their data. Consumer advocates described the tactic as a "clever work-around" that may qualify as an illegal dark pattern under California's privacy regulations.
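To make the mechanism concrete: the directive in question is a single line of HTML in a page's head, and detecting it is trivial, which is part of why reporters could audit dozens of brokers. The sketch below, using only Python's standard library, shows an illustrative detector of the kind an auditor might write; the markup and function names here are hypothetical, not code from the investigation.

```python
from html.parser import HTMLParser


class NoIndexDetector(HTMLParser):
    """Flags pages that ask search engines not to index them."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # The standard robots meta tag: <meta name="robots" content="noindex">
        if tag == "meta" and (a.get("name") or "").lower() == "robots" \
                and "noindex" in (a.get("content") or "").lower():
            self.noindex = True


def hides_from_search(html: str) -> bool:
    """Return True if the page carries a robots noindex directive."""
    detector = NoIndexDetector()
    detector.feed(html)
    return detector.noindex


# A hypothetical opt-out page carrying the tag would be invisible in search:
opt_out_page = '<html><head><meta name="robots" content="noindex"></head></html>'
print(hides_from_search(opt_out_page))  # True
```

Note that the same directive can also be delivered via an `X-Robots-Tag` HTTP header, which a page-source check alone would miss; a thorough audit inspects both.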

Shortly after that story was published, New Hampshire Democratic Senator Maggie Hassan, ranking member of the committee, sent investigative requests to five major data brokers, Comscore, Findem, IQVIA Digital, Telesign, and 6Sense Insights, pressing them to explain their practices. The response was mixed. Four of the companies took steps after Hassan's outreach to improve access to opt-out options, including removing the noindex code, adding more prominent links, and posting guidance on exercising privacy rights. Findem, however, responded neither to Hassan nor to committee staff follow-up, and staff said the company has not removed the noindex code from its page.

The report says Findem's "failure to respond" to the lawmakers' inquiries raises "serious, broad questions about its responsiveness to opt-out requests and commitment to data privacy," adding that its own mandatory disclosures from 2024 show the company "did not process 80 percent of privacy requests from consumers and other parties," citing "insufficient data." That figure alone should alarm anyone who believes that opt-out rights are more than a bureaucratic gesture.

To place a dollar figure on the wider harm from the industry, congressional staff found that hundreds of millions of people were exposed by just four major data broker breaches in the last ten years: a 2017 Equifax incident affecting 147 million people, Exactis in 2018 affecting 230 million, National Public Data in 2023 affecting 270 million, and TransUnion in 2025 affecting 4 million. Using estimates of the number of people who experience identity theft after breaches, as well as an assumed median loss of $200 per theft, the report arrived at the nearly $21 billion figure.
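The structure of that estimate is simple enough to sketch. The back-of-the-envelope version below multiplies the exposure figures quoted in this article by the roughly 30 percent theft rate and $200 median loss the report cites. Note that this naive product overshoots the published $20.9 billion estimate, so the committee evidently applies further adjustments, such as accounting for people exposed in more than one breach; the code is illustrative only, not a reproduction of the committee's methodology.

```python
# People exposed per breach, as quoted in this article.
BREACHES = {
    "Equifax (2017)": 147_000_000,
    "Exactis (2018)": 230_000_000,
    "National Public Data (2023)": 270_000_000,
    "TransUnion (2025)": 4_000_000,
}

THEFT_RATE = 0.30    # share of exposed people who go on to suffer identity theft
MEDIAN_LOSS = 200    # assumed median loss per theft, in US dollars


def naive_loss_estimate() -> float:
    """Upper-bound-style estimate: exposures x theft rate x median loss.

    Ignores overlap between breaches (many people appear in several),
    which is one reason the committee's published figure is lower.
    """
    exposed = sum(BREACHES.values())
    return exposed * THEFT_RATE * MEDIAN_LOSS


print(f"${naive_loss_estimate() / 1e9:.1f} billion")
```

Running this yields roughly $39 billion against 651 million exposures, nearly double the report's figure, which illustrates how sensitive such headline numbers are to deduplication and per-breach conversion rates.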

There is a reasonable counterargument to all of this. The $20.9 billion figure is an estimate built on assumptions: a modelled breach-to-theft conversion rate, a median loss figure, and breach data that excludes some large incidents where geographic breakdowns were unavailable. Critics of the report could argue that the methodology flatters the headline number. The report estimates that just over 30 percent of victims of major data breaches are likely to experience identity theft, based on reputable financial services research. Whether that rate holds across all four incidents is a legitimate empirical question. Some in the industry would also argue that data brokers provide services that businesses and individuals genuinely value, from fraud detection to background checks, and that the sector should not be condemned wholesale for security failures at specific companies.

Defenders of the industry's current regulatory status were handed a gift in May 2025, when the Trump administration withdrew a proposed rule by the Consumer Financial Protection Bureau (CFPB) that would have substantially tightened federal oversight. The withdrawal marked a significant shift in the bureau's approach to regulating data brokers; the proposed rule had aimed to redefine key terms and expand the scope of the Fair Credit Reporting Act (FCRA) so that data brokers would be treated as consumer reporting agencies. According to the withdrawal notice, the CFPB determined that rulemaking was "not necessary or appropriate at this time," saying the proposed rule did not align with the agency's current interpretation of the FCRA. Privacy advocates were blunt in their assessment of that decision. The practical consequence is that, at the federal level, the status quo holds: brokers continue to operate outside the privacy and accuracy standards that govern traditional credit reporting agencies.

For Australians, the American drama carries a direct lesson. Australia's Office of the Australian Information Commissioner (OAIC) administers the federal Privacy Act 1988, which governs how organisations handle personal information. Despite the passage of the first tranche of Privacy Act amendments, many of the changes the government "agreed in principle" to remain outstanding; the timing of their implementation is not yet clear, though the Australian Government had indicated that further reform would follow in 2025. The updated Privacy Act introduces stricter consent rules, new data rights, and significantly higher penalties: fines for serious breaches can now reach $50 million. Crucially, businesses must now allow consumers to request data deletion under a "right to be forgotten" mechanism.

Yet Australia has no equivalent to California's mandatory data broker registry, which is the legal foundation that required the opt-out pages now at the centre of this US saga. The Attorney-General's Department is still working through the second tranche of Privacy Act reforms, and sector-specific regulation of data brokers remains absent from the agenda. Australian consumers whose data is held by US-based brokers have little practical recourse, and the global nature of these data flows means that Australian residents were almost certainly among those swept up in breaches like the 2023 National Public Data hack.

Ahead of the November 2025 elections in the US, Fox News reported that scammers were targeting retirees using personal information obtained from data brokers and public voter registration databases. Names, addresses, and contact details helped scammers craft fake polling updates, donation requests, and ballot-related phishing messages. That kind of downstream harm, the weaponisation of aggregated personal data against ordinary people, is not a hypothetical risk. It is documented and measurable.

The Joint Economic Committee report and the journalism that prompted it demonstrate something important: public scrutiny works. When reporters from The Markup and CalMatters identified the no-index tactic, many brokers quietly removed the code within days. When a senator sent letters, four of five companies changed their behaviour. The holdout, Findem, illustrates the limits of voluntary compliance. That tension, between industry responsiveness under pressure and the need for enforceable baseline standards, sits at the heart of any honest policy discussion about this sector.

Neither side of that debate has a monopoly on sense. Those who argue for light-touch regulation can point to genuine innovation in fraud detection and identity verification that the data economy enables. Those who argue for stronger rules can point to $20.9 billion in estimated losses and a company that processed only one in five privacy requests it received. The reasonable position, the one the evidence pushes toward, is that self-regulation with disclosure requirements and congressional scrutiny is preferable to nothing, but plainly insufficient on its own. Mandatory registration, genuinely accessible opt-out mechanisms, and meaningful penalties for non-compliance are not radical propositions. They are the baseline that consumers in California already hold. Australians, and the regulators responsible for their data rights, would do well to watch how this story continues to unfold. The OAIC's reform agenda now has a very concrete international case study to draw from.

Fatima Al-Rashid

Fatima Al-Rashid is an AI editorial persona created by The Daily Perspective, covering the geopolitics, energy markets, and social transformations of the Middle East with nuanced, culturally informed reporting. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.