
Archived Article — The Daily Perspective is no longer active. This article was published on 18 March 2026 and is preserved as part of the archive.

Opinion | Politics

Why Blackburn's AI Bill Threatens to Undo Federalism

A comprehensive federal framework could solve patchwork regulation but raises hard questions about state authority

Key Points
  • Blackburn's TRUMP AMERICA AI Act aims to replace dozens of state AI laws with a single federal framework to reduce regulatory fragmentation
  • The bill includes child safety protections, copyright safeguards for creators, and a duty of care requirement for AI developers
  • It sunsets Section 230 immunity and allows private lawsuits for unauthorised data use, creating expanded liability for AI companies
  • The legislation preempts state authority even where governors oppose federal interference; this raises federalism concerns despite bipartisan child protection measures

Senator Marsha Blackburn of Tennessee unveiled a framework this week that may represent the first serious congressional attempt to establish uniform federal rules for artificial intelligence. The measure, formally titled the Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act (or TRUMP AMERICA AI Act), seeks to replace a patchwork of state AI regulations with one federal standard.

The impulse behind this effort is sensible enough. As states have begun implementing their own AI safeguards for minors and consumers, they have created a fragmented regulatory landscape, making it genuinely difficult for technology companies to operate across multiple jurisdictions. A single rulebook, in theory, reduces compliance costs and allows startups to scale without navigating fifty different regimes.

But before celebrating federal clarity, consider what Blackburn is actually proposing to do: centralise authority over technology regulation at the federal level whilst simultaneously stripping states of their traditional role as a "laboratory of democracy" on consumer protection. Blackburn has stated that prohibiting states from enforcing laws they have on the books whilst waiting on Congress to act was a "really bad idea", yet her own bill does precisely that.

The legislative framework would codify President Trump's executive order to create one rulebook for artificial intelligence that protects children, creators, conservatives, and communities from harm. The specifics matter. The legislation would require AI platforms to conduct regular risk assessments of how their algorithms contribute to psychological, physical, financial and exploitative harms, and impose a "duty of care" provision to ensure AI developers mitigate against foreseeable harms.

On copyright, the bill takes a firm stance. It creates a federal right for individuals to sue companies that use their data for AI training without explicit consent, and it requires affirmative consent for data use in AI models. This addresses a genuine concern. The Copyright Office and multiple federal courts have begun signalling that training which reproduces a copyrighted work's market function will fail fair use, whilst analytical training that uses works as data rather than expression may pass. Blackburn's approach cuts the Gordian knot by effectively declaring that unauthorised training does not qualify as fair use.

The bill also sunsets Section 230, the long-contentious provision that has shielded online platforms from liability for harmful content. The move appeals across the political spectrum: conservatives who believe the shield enables censorship and progressives who worry it enables harmful content have both lobbied for this change.

Yet the trade-offs embedded in this approach reveal genuine complexity. The bill has generated opposition from both technology industry advocates concerned about regulatory overreach and progressive groups concerned about preemption of state consumer protections, an unusual coalition that reflects its comprehensive scope. Some of that opposition deserves credence. Critics argue the bill broadly preempts bipartisan state AI laws addressing real and well-documented harms, from consumer fraud to algorithmic discrimination, and that these state laws exist because Congress has failed to pass enforceable federal safeguards.

This last criticism identifies a genuine accountability problem. When federal preemption succeeds, Congress becomes the sole arbiter of AI regulation. If Congress later fails to update the law, or responds sluggishly to new harms, there is no fallback. States cannot experiment with different approaches. This is not theoretical; Blackburn has worked for more than a decade to establish federal privacy and safety standards, and hopes this year, which may be her last in Congress as she runs for governor, will be the year something is accomplished. Thirteen years of effort, and success remains uncertain. What happens to AI regulation if Congress again fails?

The strongest case for federal preemption rests on competitiveness. The argument that comprehensive federal regulation is needed to secure American supremacy, and to streamline the industry's development so the United States can be the global winner, has real force. The European Union is moving toward stricter regulation; the United States cannot afford to move more slowly. Yet this argument also reveals the true priority: not consumer protection so much as industrial policy. The rhetoric about protecting "conservatives and communities" obscures the deeper aim, which is to prevent the European approach from constraining American AI development.

The practical question becomes: who should bear the risk of federal regulatory failure? If Blackburn's bill becomes law and then Congress proves unable to adapt it as technology evolves, the cost falls on consumers in states that might otherwise have enacted their own safeguards. That is a trade-off reasonable people can reject, particularly when governors from both parties have indicated opposition to federal preemption.

This is not an argument against federal AI regulation. Some issues genuinely demand a national standard. But Blackburn's approach chooses completeness over humility. It assumes Congress will act with the speed and wisdom the issue demands, and that federal rules, once set, can be updated as quickly as technology moves. History suggests scepticism is warranted.

Daniel Kovac

Daniel Kovac is an AI editorial persona created by The Daily Perspective, providing forensic political analysis with sharp rhetorical questioning and a cross-examination style. As an AI persona, his articles are generated using artificial intelligence with editorial quality controls.