
Archived Article — The Daily Perspective is no longer active. This article was published on 20 March 2026 and is preserved as part of the archive.

Politics

Trump's AI Framework Aims to Weaken State Laws, But Congress and Courts Stand in the Way

The White House released a seven-point legislative blueprint calling for federal preemption of state AI rules, though previous efforts failed decisively.

Key Points
  • The White House released a seven-point AI legislative framework calling for federal preemption of state rules to protect U.S. innovation and global competitiveness.
  • The framework proposes child safety protections, copyright limits, and data centre regulations while blocking states from regulating AI development.
  • Congress rejected similar preemption language with a 99-1 Senate vote in 2025, signalling strong bipartisan opposition to centralising AI regulation.
  • Key legal experts warn the framework faces constitutional challenges and courts have traditionally allowed states to regulate interstate commerce.
  • Until Congress acts and courts rule, states can continue enforcing AI laws including rules on algorithmic discrimination and developer transparency.

The White House on Friday released its long-awaited national artificial intelligence legislative framework, a move to prevent states from enacting their own laws and enforce the Trump administration's light-touch approach to AI regulation. The seven-point plan signals the administration's determination to sideline state-level AI oversight, but faces formidable obstacles in Congress and the courts.

The framework calls for a single national policy on artificial intelligence, aiming to create uniform safety and security guardrails around the nascent technology while preempting states from enacting their own AI rules. It argues that Congress should "preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones."

The case for centralisation rests on economic efficiency. State-by-state regulation creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups. The framework also contends that state laws are increasingly requiring entities to embed ideological bias within models, citing as an example a new Colorado law banning "algorithmic discrimination," which the administration argues may force AI models to produce false results in order to avoid a "differential treatment or impact" on protected groups. In the administration's view, this fragmented landscape undermines American competitiveness in global AI development.

Yet here lies the central tension. The framework follows a series of failed legislative attempts in 2025, including a proposed 10-year moratorium on state AI laws that collapsed in a 99-1 Senate vote, a result that highlights just how contested preemption remains, even within the GOP. Congress has repeatedly rejected preemption language, even when it was presented within bipartisan defence and appropriations bills. This suggests that lawmakers across the political spectrum regard state AI authority as legitimate, or at least fear the political cost of eliminating it.

The framework proposes genuine safeguards in specific areas. The White House led off its recommendations with children's protections, a consistent theme dating back to the administration's executive order. Privacy and data security dominate the children's proposals: the administration called for affirmation that "existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising." It also pitched "robust tools" for parents and guardians to help "manage their children's privacy settings, screen time, content exposure, and account controls." Age verification to ensure age-appropriate use of AI was also specifically mentioned. These provisions reflect genuine bipartisan concern and may prove sustainable even if broader preemption fails.

On copyright, the administration has signalled it prefers to let courts decide rather than legislate. The framework says the Trump administration "believes that training of AI models on copyrighted material does not violate copyright laws" and recommends that Congress stay out of the legal fights between creators and AI companies. The White House wants the judiciary to ultimately decide what is and isn't legal around AI and copyright. This defers a contentious issue but leaves creators uncertain about their rights.

The constitutionality of federal preemption remains contested among legal scholars. Although the President's AI advisor, David Sacks, suggested that the federal government may override state AI laws under its authority to regulate interstate commerce, others disagree. "States are, in fact, allowed to regulate interstate commerce," John Bergmayer, legal director of the nonprofit Public Knowledge, told NPR. "They do it all the time." Bergmayer referenced the 2023 U.S. Supreme Court decision, National Pork Producers Council v. Ross, in which the Court held that a California law restricting the sale of pork did not impermissibly regulate commerce between the states.

The concept of widespread preemption of state AI law has faced strong bipartisan pushback, and there have been no public indications that state governors and lawmakers will cede their ground. As a result, AI developers and deployers should assume that existing state AI laws will remain in effect in the short term and plan their compliance regimes accordingly, while closely monitoring how states and courts react to the framework's initiatives in the coming months.

The administration's framework contains genuine proposals worth debating. Child safety, data centre energy costs, and workforce training deserve federal attention. Whether those goals require dismantling state authority is a separate question, and one Congress has answered with scepticism. Until the courts rule on federal preemption and Congress acts, state AI laws remain enforceable.

Rachel Thornbury

Rachel Thornbury is an AI editorial persona created by The Daily Perspective, specialising in breaking political news with tight, attribution-heavy reporting and insider sourcing. As an AI persona, her articles are generated using artificial intelligence with editorial quality controls.