Why the Trump Administration’s Latest Approach to AI Deregulation is Dangerous
November 26, 2025
The battle over federal artificial intelligence regulation has heated up again. While some lawmakers, especially at the state level, are advocating for stronger oversight to protect citizens, the Trump administration is signaling a dramatically different approach—one that prioritizes corporate-friendly deregulation over consumer protection.
Back in May, during the budget reconciliation negotiations, Republicans attempted to push through an amendment that would have banned any individual US state from regulating AI for a decade. The amendment was eventually struck after Senator Marsha Blackburn (R-TN), who has introduced her own Big Tech legislation, withdrew her support for it.
“Until Congress passes federally preemptive legislation like the Kids Online Safety Act [introduced by Blackburn] and an online privacy framework, we can’t block states from making laws that protect their citizens,” Blackburn said in a statement.
Examples of such laws include California’s SB 243, introduced in October, which is the first state-level attempt to regulate chatbots by requiring AI companies to implement safety protocols for them.
But the Trump administration is now back with new plans to deregulate AI.
On November 19, The Information reported that the White House was working on an executive order that would direct the Justice Department to sue states that passed their own AI regulations. Earlier in the week, Representative Steve Scalise (R-LA) had said that the GOP was considering adding an amendment to the National Defense Authorization Act (NDAA) that would override states’ ability to legislate AI.
Both the executive order and the NDAA amendment face significant implementation challenges. Senator Brian Schatz (D-HI) promised that Democrats would block the NDAA if the amendment was included. It’s also unclear whether the executive order will ever be signed and—even if it is—how it would survive the likely flood of lawsuits from state attorneys general.
The plans are nonetheless ominous, as they reaffirm the Trump administration’s approach to AI: eliminate as many regulations as possible, whether current or proposed. Two news stories over the weekend underscored why these regulations are in fact needed, as Big Tech continues to prioritize commercial growth over user safety.
On November 22, TIME reported on a newly-unsealed court filing against Meta, which claimed that sex trafficking on Instagram was endemic, and that features designed to help limit use by teenagers were shelved as they would negatively impact growth. On November 23, the New York Times reported that OpenAI, despite repeated evidence of dangerous attachments between chatbots and users, was committed to letting users control the chatbots’ personality in a bid to increase daily active users.
The “metric still matters, maybe more than ever,” the article noted.
There’s also the fact that AI companies’ increasingly unpredictable finances mean that they could rapidly find themselves in a position where they need federal government support. Earlier in November, OpenAI CFO Sarah Friar floated the possibility of a federal backstop as the company builds out the computing infrastructure necessary for AI development. The comments were quickly walked back. But the expenditures from AI companies remain massive: Bank of America recently estimated that Amazon, Microsoft, and Google are on track to spend 94% of operating cash flow on AI hardware. There’s also increasing speculation from investors that an AI “bubble” is forming.
This creates a scenario in which AI companies are asking to have their cake and eat it too. On the one hand, they want to be free from any regulation—however minor—in order to pursue their growth-at-all-costs mindset. On the other hand, they may seek guarantees from the federal government that, if their spending proves unsustainable, they can be bailed out on the taxpayer’s dime.
These developments illustrate why AI regulation remains critical. The tension between innovation and protection continues to define policy debates, with real, ever-evolving consequences for public safety and privacy. The administration’s plans for an executive order and the prospect of an amendment to the NDAA show that the federal government has no coherent approach to AI apart from exponential growth, whatever the results might be.
Bearing in mind the documented harms, states should continue to fight to improve AI regulation—and not be deterred by this latest push from the Trump administration. The need for thoughtful oversight that balances technological advancement with citizen protection has never been more urgent.
Technology & Democracy