What the Headlines Get Wrong About Trump’s AI Plan — And What the Plan Gets Wrong About AI Risk

March 24, 2026

On March 20, the White House released its long-awaited national policy framework for artificial intelligence—a seven-pillar blueprint for future federal-level legislation. Within hours, the coverage had congealed around a familiar headline: deregulation and preemption. The Trump administration wants to strip states of their power to regulate AI and leave the industry to police itself. That framing is not completely wrong, but it is incomplete in ways that matter.

Look past the headlines and you will find that five of the framework’s seven pillars are calls for new federal action. The framework asks Congress to legislate on child safety, AI-enabled fraud, intellectual property licensing, workforce training, and energy infrastructure. Even the preemption pillar is more nuanced than the headlines suggest, explicitly carving out state authority over child protection, consumer fraud, infrastructure zoning, and a state’s own procurement and use of AI.

The framework’s deregulatory core is narrow but consequential: the regulation of frontier AI development itself. In this specific area, which concerns the training of AI systems at the cutting edge of capability, the administration wants simultaneously to preempt state regulation and to ensure that the federal government does not step in to fill the gap. The argument for preemption seems plausible on its face: frontier AI development is, as the document notes, “an inherently interstate phenomenon with key foreign policy and national security implications.” But that is precisely the kind of issue the federal government should oversee.

Yet the administration also asks Congress not to create “any new federal rulemaking body to regulate AI.” Instead, the plan would empower existing sector-specific regulators who understand the domains where AI is deployed. Regulatory sandboxes, meanwhile, would let those regulators learn alongside developers rather than write rules from ignorance. And industry-led standards could evolve faster than notice-and-comment rulemaking, which a fast-moving technology can render obsolete before it takes effect.

While this recipe may work reasonably well for overseeing familiar applications of AI, it would leave a governance vacuum around some of the most consequential, even if still hypothetical, AI risks: bioweapons assistance, autonomous cyber offense, and unintended or uncontrollable model behavior. Under Trump’s plan, no “sector-specific” regulator would oversee general-purpose models or set mandatory guardrails for them; there would be only industry-led standards, which could disappear at the whim of a handful of increasingly powerful CEOs. Anthropic has already overhauled its Responsible Scaling Policy, and OpenAI dissolved its Superalignment team entirely. The framework offers no plan for when the next safety commitment quietly disappears.

Whether that gap should be filled by a new agency (which the administration disfavors), an existing agency, a coordination body modeled on the National Institute of Standards and Technology, or something else entirely is a debate worth having. A recent report from Georgetown’s Center for Security and Emerging Technology offers a useful framework for policymakers to evaluate competing proposals by stress-testing their underlying assumptions. But the Trump framework forecloses that debate before it begins. For all its specificity on child safety, intellectual property, and workforce training, it offers no answer to the one question where the stakes may be highest. That silence is itself a policy choice, and a consequential one.
