The White House Just Drew America's AI Rulebook - Here's What's Actually in It

March 20, 2026 · 9 min read
AI · Technology · Policy · Opinion · Research

Released today, the Trump administration's National AI Legislative Framework is the most consequential AI policy document since the EU AI Act. It touches copyright, state law, data centers, children's safety, and the US-China race - and almost everyone has a problem with at least part of it.

Key Takeaways

  • The Trump White House released a seven-pillar AI policy framework on March 20, 2026, calling it legislative recommendations for Congress.
  • Pillar 7 — federal preemption of state AI laws — is the most contested provision, actively opposed by 36 state attorneys general and California's governor.
  • The copyright stance (that training on copyrighted material does not violate copyright) is a legal intervention, not neutral policy. Over 50 copyright lawsuits against AI companies are currently pending.
  • The China competition frame packages every deregulatory provision as a national security imperative, foreclosing nuanced debate about specific harms.
  • The path through Congress is narrow; the DOJ's AI Litigation Task Force may be the more immediate enforcement mechanism, challenging state AI laws in federal court.

Today, March 20, 2026, the Trump White House released its National Policy Framework for Artificial Intelligence: Legislative Recommendations - a four-page document that Michael Kratsios and David Sacks presented as the blueprint for what Congress should do with AI this year. It is built around seven pillars, and no fewer than five of them are either legally contentious, politically divisive, or directly opposed by organized constituencies.

That is not necessarily a criticism. Policy documents that offend nobody usually accomplish nothing. But this one is worth reading carefully, because it will shape the next several years of AI governance in the United States - either by passing, by failing, or by setting the terms of the fight.

The Seven Pillars, Decoded

Pillar 1: Protecting Children. The one with genuine bipartisan support. Builds on the TAKE IT DOWN Act, which bans nonconsensual publication of intimate images, including AI-generated ones. Calls for parental controls on children's devices and features to combat self-harm and exploitation content. Hard to argue with; likely to pass in some form regardless of the fate of the rest of the framework.

Pillar 2: Safeguarding Communities. Streamlines permitting so that data centers can generate their own power on-site - bypassing the traditional utility interconnection queue, which currently runs two to five years. Includes a consumer protection pledge: residential electricity ratepayers should not bear the cost of AI's enormous energy demands. Also strengthens federal authority to combat AI-generated scams. Relatively uncontroversial on substance; the permitting streamlining has industry support and limited organized opposition.

Pillar 3: Intellectual Property. This is where it gets complicated. The framework simultaneously says that "the creative works and unique identities of American innovators, creators, and publishers must be respected" and that "training AI models on copyrighted material does not violate copyright laws." These positions are in active legal conflict. More than fifty copyright lawsuits against AI companies are currently pending, including NYT v. OpenAI in the Southern District of New York. The administration says it "supports allowing the Courts to resolve this issue" - but by publicly endorsing the AI industry's fair use defense, it is doing something courts will notice. This is an intervention dressed as neutrality.

Pillar 4: Free Speech. Directs Congress to give Americans recourse when AI platforms engage in "censorship activity" and to prevent AI systems from being used to "silence lawful political expression." The mechanism for this is undefined, and its interaction with Section 230 is unexplored in the document. It reads more as a political gesture than as policy.

Pillar 5: Innovation and Dominance. Remove "outdated or unnecessary barriers," accelerate deployment, facilitate testing environments. The operative framing is that deregulation is a national security imperative: every day the US delays, China closes the gap. The AI Progress coalition - which includes Amazon, Anthropic, Google, Meta, Microsoft, Midjourney, and OpenAI - welcomed this language.

Pillar 6: Workforce. Congress should expand AI literacy programs and skills training. The least specific pillar in the document. No dollar figures, no program structures, no timeline.

Pillar 7: Federal Preemption. The most consequential and contested provision. States would be barred from regulating AI development or holding AI developers liable for third-party misuse of their models. California, Colorado, Texas, and at least thirty-three other states have active AI legislation underway. A coalition of thirty-six state attorneys general sent Congress a letter opposing federal preemption of state AI authority as recently as November 2025. California's governor called the framework "yet another attempt by Donald Trump to gut laws in California that keep our residents safe." This is not hyperbole - California has passed more AI-related legislation than any other state, and the preemption clause would override most of it.

The carve-outs are meaningful: states retain authority over child safety, fraud, consumer protection, zoning, and state government procurement. But the core prohibition - states cannot impose liability on AI developers for the downstream effects of their models - is the central demand the AI industry has sought in Washington for three years.

How This Compares to What Came Before

President Biden signed a comprehensive AI Executive Order in October 2023. President Trump revoked it on his first day back in office. The philosophical difference is stark. Biden's framework treated AI governance the way we treat pharmaceuticals or financial instruments: verify safety before broad deployment, mandate testing, build oversight structures. Trump's framework treats AI governance the way Congress treated the early internet in the 1990s: get out of the way, let innovation happen, and trust the market and the courts to sort out harms.

Neither framework is obviously correct. The Biden approach risked slowing a genuinely transformative technology at a moment when geopolitical competition is intense. The Trump approach risks permitting harms to accumulate faster than any institution can address them. Where you land depends largely on how much you weight the risk of moving too slowly against the risk of moving too recklessly.

The Copyright Question Is the Most Underappreciated Provision

The preemption clause gets all the attention because it is the most politically charged. But the copyright stance may be more consequential in the near term, because the lawsuits are already filed and the judges are already reading.

In NYT v. OpenAI, currently in discovery in the Southern District of New York, the central question is whether AI training on copyrighted material constitutes fair use. The administration has now publicly stated its view: it does. That is not binding on Judge Sidney Stein. But federal judges pay attention to the executive branch's interpretation of legal questions, and an amicus brief from the DOJ supporting OpenAI's position would not be surprising.

The creative industry - writers, photographers, illustrators, musicians - reads this pillar and sees the government telling them that their work can be ingested at industrial scale to train systems that will replace them, and that this is legal. The AI industry reads it and sees a green light. Both readings are accurate.

The Path Through Congress Is Narrow

Kratsios told reporters the administration wants legislation "this year." Most experienced AI policy observers think that is unlikely, for several reasons. Republicans hold thin, fractious majorities in both chambers. The preemption clause - the provision most important to the AI industry - has opponents inside the Republican caucus, most notably Senator Marsha Blackburn of Tennessee, who has her own AI bill and has previously blocked similar preemption efforts. Democrats are unified in opposition. Reaching sixty votes in the Senate is essentially impossible without bipartisan buy-in on the framework's most contested provision.

The more immediate mechanism may be the DOJ's AI Litigation Task Force, launched in January 2026, which is actively challenging state AI laws in federal court on preemption and Commerce Clause grounds. That path does not require Congress.

The China Frame

Every deregulatory provision in the document is packaged in the same argument: the United States is in a race with China for AI dominance, and anything that slows American AI development gives Beijing an advantage. This framing is not wrong - China is closing the gap on frontier model capability, has advantages in energy infrastructure and talent pipeline, and has demonstrated willingness to deploy AI at scale without the liability concerns that constrain American companies.

But the framing also does significant work to foreclose debate. When every regulatory question is cast as a choice between American dominance and Chinese dominance, the political space for nuance collapses. Child safety provisions pass because they fit the frame. Consumer protection provisions that might slow AI deployment struggle to find an audience.

What we can say with confidence: the framework is real, it reflects genuine industry priorities, it has a credible path to partial implementation through executive action even if legislation stalls, and it marks a fundamental departure from the bipartisan precautionary approach that characterized AI governance as recently as two years ago.

Whether that departure is the right call is the central policy question of the current moment. Anyone building with, investing in, regulating, or simply using AI systems in the United States has a stake in how it gets answered.

Frequently Asked Questions

What is the Trump White House AI policy framework?

The National Policy Framework for Artificial Intelligence: Legislative Recommendations, released March 20, 2026, is a seven-pillar document from the Trump administration outlining what Congress should legislate on AI. The pillars cover child protection, community safeguards, intellectual property, free speech, innovation, workforce development, and federal preemption of state AI laws.

What does federal preemption of state AI laws mean?

Federal preemption would bar individual U.S. states from regulating AI development or holding AI developers liable for third-party misuse of their models. This would override AI legislation already passed in California, Colorado, Texas, and dozens of other states. A coalition of 36 state attorneys general has publicly opposed federal preemption of state AI authority.

What is the status of the NYT v. OpenAI lawsuit?

NYT v. OpenAI (case 1:23-cv-11195, S.D.N.Y.) is in discovery as of early 2026. The central question is whether training AI models on copyrighted material constitutes fair use. The Trump administration has publicly endorsed the AI industry's fair use defense, which is not binding on the court but represents a notable executive branch intervention.

How does the Trump AI framework differ from Biden's?

Biden's AI Executive Order (October 2023) required safety testing, mandated disclosures, and built oversight structures before broad deployment. Trump revoked it on his first day back in office and replaced it with a deregulatory framework — get out of the way, let innovation happen, and trust the market and courts to handle harms.