Trump's Effort to Ban State AI Laws
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning again, everyone. Coming up later this hour, your stories and feelings about self-checkout. There's an article in The Atlantic that asks why some people use it even when the line for human cashiers is shorter. We'll talk to the author of that article and take your calls. Next up, the emerging strange bedfellows politics of artificial intelligence. The news hook is an announcement from President Trump on Monday that he plans to sign an executive order placing a moratorium on all state AI legislation. He wants only the federal government to make the AI rules of the road. Here's some confounding context.
As Puck News reports today, late last week, Florida Governor Ron DeSantis revealed a citizen bill of rights for artificial intelligence, which he calls a stocking stuffer list of proposals aimed at protecting consumers from every anxiety-inducing aspect of AI: deep fakes, data collection, chatbots engaging with minors in a creepy way, other chatbots posing as mental health experts, Chinese AI tools, the fraudulent use of names and likenesses, noisy and grid-straining data centers, and so on. That from Puck News.
That's a lot, right? A lot of concerns that Democrats and Republicans apparently share, because contrary to Trump's wishes, all 50 states, all 50 states plus Puerto Rico have either introduced, adopted, or passed some form of AI regulation. The polls also show that Americans of all generations share AI concerns. Here's a little text of what the president said at the US Saudi Investment Forum a couple of weeks ago-- Oh, do we have this? We have an audio clip of this? All right. Sorry about that. Here's Trump himself.
President Trump: We are going to work it so that you'll have a one approval process, not have to go through 50 states in the United States. You can't go through 50 states. You have to get one approval. 50 states is a disaster.
Brian Lehrer: His Truth Social post on Monday, telegraphing the executive order to come, echoed that idea, obviously. Who pushed Trump toward this step, toward a federal takeover of AI regulation? Well, according to the news stories we've seen, primarily, as you might expect, big tech. Why is the allegedly anti-elite president also aligned with them? There's a lot to unpack. We're joined by Tina Nguyen, who is a senior reporter at The Verge and author of their Regulator newsletter. She's all over this. Hi, Tina. Welcome back to WNYC.
Tina Nguyen: Great being here. Thank you so much.
Brian Lehrer: I want to start by asking, can Trump just do this with such a proposed executive order? Because he's not saying he's proposing a bill for the Congress to consider. He's saying executive order. Can he just do that?
Tina Nguyen: That's a giant legal question for-- Everyone I've talked to who studies tech policy and also the Constitution has said he really can't just dictate what states should do when it comes to AI. A leaked executive order that came out a couple of weeks ago laid out what they plan to do in order to punish states that already have their own AI laws on the books. California has one of the strictest laws on AI safety. This Florida Bill of Rights you just mentioned is really focused on child safety, national security, anti-China stuff.
What the President would like to do is find a way to make it very, very, very difficult for states to make those laws without running afoul of the federal government. In this executive order that leaked a while ago, it gave a couple of examples, like they would have the attorney general create an AI litigation task force to sue these states. They wanted Congress to draft up a list of states that would seem to be in violation of those laws.
The biggest threat that they have over these states is being able to withhold federal discretionary funding. The big one is rural broadband funding, which is, I think, like $50 billion off the top of my head. For a lot of smaller states and red states, that's a critical lifeline to the rest of the world. That's obviously a massive threat. No, Trump cannot actually say what AI companies can and cannot do, but he can make it really, really hard for other states to put guardrails around those AI companies.
Brian Lehrer: Let me play a clip of Florida Governor Ron DeSantis. We've been talking a little bit already about his Citizens' Bill of Rights regarding AI, which addresses that whole slew of things that I mentioned in the intro. Here is DeSantis responding to Trump's potential executive order last week.
Ron DeSantis: This is just basically putting every state in handcuffs, not letting them do anything. Here's the thing, they say, "Well, Congress can do some type of standard." Well, how good has that worked out for you lately, right? I mean, come on.
Brian Lehrer: Then there's this clip from Democratic Governor Jared Polis of Colorado.
Jared Polis: When I look to the future, I'm excited by the many opportunities that AI and emerging tech brings, including supporting small businesses and making government more efficient, and making sure that Colorado continues to be a hub for growth and activity in this truly transformational sector. To do this right, we really need federal action to establish a national regulatory framework for AI to preempt the states and avoid a patchwork of state laws that would deter from innovation and make it less efficient for consumers as well.
Brian Lehrer: Wait, what? Tina, if we listen carefully to these clips, we have the Republican DeSantis saying that what Trump wants is basically against Republican values, having states be able to do stuff on their own, while the Democratic Colorado governor is basically agreeing with Trump that the framework has to be national. What gives?
Tina Nguyen: This is actually a quasi-complicated argument, but I'll try to distill it for general listeners. They both essentially do want their-- Hold on, let me dial it back. Preemption is a fairly common concept in the government where laws that are written on the federal level supersede all the laws that are written at the state level. It would be fantastic if Congress were to pass a framework at the top of the pyramid, saying, "Here's how we would like the nation to go about making AI laws and regulating AI."
However, Congress has not been doing that. Instead, what they've been attempting to pass is a moratorium on laws, a blanket ban on any state writing or enforcing their own state laws. They argue that it's for the companies to be able to innovate at the speed that they need to beat China, and all these onerous 50-state regulations will get in the way. However, the first time they tried to get a moratorium passed through Congress, they asked for a 10-year blanket ban. That's so long. Who knows what could happen in the span of 10 years? That got defeated by not just the entire Democratic Senate caucus but also several Republicans supporting the Democrats on this.
Then there's a couple other attempts to get it through Congress, where the overall ban, not a law, but a ban, was pushed again and again and again by the AI industry and people in Congress who supported them. This is a really crucial thing that listeners and people following this need to understand. There is a difference between writing a law at the top that sets regulations on AI and saying in a law, "No, no state can regulate AI. There will be no regulation on AI at the state level. Give us 10 years to maybe put something together." Look, you saw how fast AI came into our lives within the past two years. What the heck's going to happen in 10?
Brian Lehrer: Right. So many concerns. Listeners, we can take a few phone calls on this and text. What AI legislation might you like to see out of that long laundry list of concerns about AI that I read out in the intro? Maybe I should go over them again. What are your primary concerns about AI, and do you want to see that legislation at the state or federal level, or is that too bureaucratic a question for you to really care about? 212-433-WNYC for Tina Nguyen from The Verge. 212-433-9692, call or text. I think Joan in Montclair is on that track. Joan, you're on WNYC. Hi there.
Joan: Hi there. Good morning. I was just curious if there's any thought around legislation to regulate what the HR technology industry is referring to as digital twins, which is, as I understand it, the ability to essentially copy a person's work product, the ability for them to do their work, and it's being promoted that you can increase efficiency, then your digital twin can fill in for you while you're out. It seems like the extension of that could just be a completely digital workforce.
There are also digital workers in the technology ecosystem that are being promoted as part of the "external workforce," in addition to what's called agentic AI, but even just digital workers that could fill in for people while they're out or extend your workforce.
Brian Lehrer: Great couple of questions and general topic to raise. Are you familiar with that term, digital twins, Tina?
Tina Nguyen: Yes. Basically, it's a person outsourcing whatever they're able to do as a human employee to AI, LLM. As far as I can tell, there hasn't been any recent law at the federal level that's even tried to address that, or recent bill, not actual written law. We don't have that yet. Right now, at the federal level, the only real concrete pieces of legislation that I have seen, to my knowledge, came prior to Trump, and they were both bipartisan.
One of them, I believe, was a child safety bill. The other, I think, was about impersonating political figures or public figures. That seems to be a pretty easy law to pass at the state level. Tennessee has one. Ever since Trump came into office and the AI industry began to gain a lot of political power in Washington, the only real AI legislation that's been attempted through Congress has been let's just ban AI state laws. That's it. That's the only one that's actually gained momentum.
As salient and real a problem as the agentic AI workforce is to a lot of Americans, it does not seem to be a concern that is facing this current Congress imminently. I don't know what that says about Congress, but that's what I've been seeing.
Brian Lehrer: I want to read some texts that are coming in. Interestingly, we're having this debate, or we're discussing the debate that states versus the federal government are having over who gets to regulate AI. One listener writes, "We need a global framework for regulation." A number of people are writing about the hypocrisy of when the president wants federal control versus state control. This one says, "He is dismantling federal oversight of schools but demanding federal oversight on AI. To me, hypocrisy," writes that listener. Several are writing versions of this. "My primary concern is the environmental impact of AI energy needs."
Let me go to that one next with you, Tina, because some of the big Democratic wins from the November election were by candidates who used AI regulation in their platform. Virginia's Governor-elect, Abigail Spanberger, talked about regulating data centers, as did New Jersey Governor-elect Mikie Sherrill. We covered the Sherrill thing here as kind of New Jersey versus Virginia, because why are energy prices, utility prices going up so much in New Jersey? Sherrill blames them, to some degree, on so many data centers in Virginia that use electricity from the same provider that New Jersey has.
Are there conversations in the Democratic Party, or in the Republican Party, for that matter, about trying to seize AI regulation as their issue area, where it comes to utilities and, by extension, the environment?
Tina Nguyen: It's certainly a big point of discussion within the Democratic Party. It falls directly into their stated values. Within the Republican Party, you see a pretty strong split internally there, which I have found absolutely fascinating. Ironically, it is the more hard-right, deep-red Republicans who have voiced their concerns about AI data centers coming in and taking their jobs. The first concern is not necessarily the environment, but it is like, what jobs are these data centers going to absorb? Will we soon be replaced by a machine workforce?
I heard this one person, I believe, on Steve Bannon's War Room, argue-- He referred to the Great Replacement Theory, which is sort of a white supremacist, white nationalist talking point that immigrants are being brought in to dilute white people's control over society and take their jobs. He goes, "If you're worried about the Great Replacement, AI is the greater replacement."
Brian Lehrer: Right. Steve Bannon and Marjorie Taylor Greene are among those who we might consider MAGA right people who are concerned about the proliferation of AI and its various hazards. One reason why it is breaking out as a bipartisan issue. There are also some really upsetting stories coming out about the impact of chatbots on young people. There's a pretty widely talked about story from 60 Minutes about the chatbot service Character.AI and how several adolescents who had been talking to the bots about suicidal ideation were never referred to the proper resources and did end up taking their own lives.
These are real material life and death examples of the dangers of letting this stuff run amok without any oversight. Democrats and Republicans, understandably, are both concerned about that. That brings us to the question of why does Trump seem to want less regulation than a lot of the states do? Here's a cynical take on that from a listener who texts, "Didn't big tech AI pay for Trump's new ballroom?" writes Tommy from Richmond, Virginia. What's the list of reasons that Trump is landing where he is on this, as far as you can tell from your reporting at The Verge, Tina?
Tina Nguyen: It's because he's now friends with a lot of tech billionaires who have so much sway and influence over him and are able to say, "Here is what would be beneficial to our industries, but also we're going to give you millions and millions of dollars for your ballroom."
Brian Lehrer: Oh, you mean Tommy in Richmond, Virginia, is right to be that cynical?
Tina Nguyen: Tommy in Richmond, Virginia, is absolutely correct on this issue. Going into this administration, you saw the Trump administration install a lot of officials and empowered a lot of people who were ostensibly anti-big tech, people who've been really aggressive towards trying to regulate Google, Meta, what have you. For a while, my assumption had been, these guys are going to have to bend to Trump's will in order to survive, say, an antitrust lawsuit or whatever.
However, what the tech billionaires have realized is that what they can do is go around those regulators, talk directly to Trump, billionaire to billionaire, and then just give him money, and he'll suddenly go, "Oh, I like this person, he's great. Just do whatever he says." The big person at the top of this, who has been the most influential in pushing Trump in the direction of this AI executive order, is a guy named David Sacks. He's a venture capitalist. He was one of the original people who funded PayPal, I believe, and he is now the AI and crypto czar in the White House.
He's basically a quieter Elon Musk, much, much lower profile, clearly, but he's still this special government employee whose term in the White House is murky: we don't know whether he's hit that legal limit, maybe he hasn't. He's been able to talk to Trump fairly regularly, very persistently, and is telling him, "Look, what you can do, Trump, is be part of this industrial revolution. You will be able to build out this infrastructure. You'll be able to beat China if you do this." He's hitting all of these talking points that really deeply speak to Trump's sense of self and ego and legacy, and then also you throw in a couple million dollars for the ballroom, and he'll do anything.
Brian Lehrer: On he'll do anything, we end it with Tina Nguyen, senior reporter at The Verge and author of the Regulator newsletter that they publish, our latest conversation about AI, which obviously, we're going to continue to cover and discuss. Tina, thank you so much for joining us today.
Tina Nguyen: It's been great. Thanks for having me.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
