Deep Fakes, Data Centers, and AI Slop — Are We Cooked?
Micah Loewinger: Trump's latest executive order has cleared regulation from the path of artificial intelligence. Except.
Maria Curi: Well, here's the thing. There was no regulation to begin with. When you say that it was clearing the path, there was no path.
Micah Loewinger: From WNYC in New York, this is On the Media. I'm Micah Loewinger.
Brooke Gladstone: I'm Brooke Gladstone. Meanwhile, more and more data centers are needed to feed this ravenous AI beast.
Stephen Witt: I mean, we've never seen anything need this much electricity, need this much computing power. And no matter how much you give them, they always, always ask for more.
Micah Loewinger: Plus, 2025 was the year of AI slop. Sometimes it's even convincing.
Craig Silverman: It is not an intelligence thing, it is not a class thing, it is not an education thing. It's really important to operate with humility because we can get taken in.
Brooke Gladstone: It's all coming up after this.
Micah Loewinger: From WNYC in New York, this is On the Media. I'm Micah Loewinger.
Brooke Gladstone: I'm Brooke Gladstone. In the weeks since President Trump announced an executive order aimed at blocking states from regulating AI, some members of his MAGA base have groused loudly. Florida Governor Ron DeSantis.
Governor Ron DeSantis: First of all, an executive order can't block the states. Clearly, we have a right to do this.
News clip: Governor Ron DeSantis outlined new proposed legislation aimed at putting stronger safeguards on artificial intelligence.
News clip: By pushing a so-called AI Bill of Rights.
Brooke Gladstone: Utah Governor Spencer Cox on NPR.
Governor Spencer Cox: I'm very worried about any type of federal incursion into states' abilities to regulate AI again at the personal level.
Brooke Gladstone: Steve Bannon, host of the podcast War Room, posted that tech bros are doing the utmost to turn POTUS' MAGA base away from him while they line their pockets. One of Trump's closest tech bros, his AI czar, is venture capitalist David Sacks.
David Sacks: We have over 1,000 bills going through state legislatures right now to regulate AI. You've got 50 states running in 50 different directions. It just doesn't make sense. We're creating a confusing patchwork of regulation. What we need is a single federal standard.
Brooke Gladstone: A New York Times investigation revealed that he still has hundreds of investments in AI, the performance of many of which could clearly be affected by decisions he makes in his official position. I asked Maria Curi, tech policy reporter for Axios, exactly how Trump and Sacks have cleared AI's path of regulation that would bar this kind of conflict of interest.
Maria Curi: Well, here's the thing. There was no regulation to begin with. When you say that it was clearing the path, there was no path. Really, the biggest effort that the Trump administration is making right now to make sure that regulation does not get in the way of AI is making sure that the states don't regulate.
Brooke Gladstone: That's really the substance of this AI order.
Maria Curi: Yes. This executive order is the brainchild of David Sacks. It essentially says the Department of Justice is going to create an AI litigation task force to examine which state-level laws right now are overly burdensome for AI development.
Brooke Gladstone: Lots of states have already passed 100-plus AI laws around safety, deepfakes, and mental health. Industry leaders hate having to go state by state to comply with varying regulations. They've struggled mightily to kill them. Have they done that with this executive order?
Maria Curi: I would say no. We have a long way to go here. This executive order is going to apply a lot of pressure on lawmakers over on the Hill to pass a federal standard. We are fully expecting those efforts to ramp up next year, but the president does not have the authority under the Constitution to stop states from regulating. That is just going to be very easily challenged in court. We expect lawsuits to start from state attorneys general, from advocacy organizations, and that's state attorneys general on both sides of the aisle.
Brooke Gladstone: This is where the political problem comes in, though, right? Because guardrails around AI are popular. In a Gallup poll from September, 80% of US adults said that governments should prioritize rules for AI safety and data security, even if it means developing AI more slowly. The 10th Amendment doesn't allow presidents to just preempt state laws. Then why make this executive order?
Maria Curi: The Trump administration is arguing that this might be constitutional because states are not allowed to regulate interstate commerce in a way that is overly burdensome. It's a long shot, but they're going to try that out. We have seen throughout this administration that they're very willing to test legal theory and the bounds of these laws. Secondly, this might have a chilling effect. States are still full steam ahead on wanting to regulate artificial intelligence, but they might not want their federal grants to be caught up in all of this. That's what the executive order says. If we find that your laws are overly burdensome, we might take back Internet grants, and we're going to examine other types of federal grants that we might be able to claw back. That, by the way, is also potentially unconstitutional because it's Congress that has the power to spend money and allocate it.
Brooke Gladstone: Really? First time I've heard that. Now, you mentioned that there's a notable lack of GOP unanimity here. Florida Governor Ron DeSantis, Utah Governor Spencer Cox, other Republican lawmakers. They openly disagree with the order. In fact, DeSantis is crafting, I guess, a proposal for an AI Bill of Rights that would implement parental controls, data privacy, consumer protection, restrictions on non-consensual use of a person's name or likeness. Steve Bannon, the MAGA guru who helped get Trump into office the first time, is calling the new tech alliance crony capitalism, warning about the technocratic elite taking away the jobs of the MAGA base.
I mean, how critical a sign is this of internal strife? Is it worse than courting war with Venezuela, but not as important as the Epstein files?
Maria Curi: This executive order may be implemented in blue states and not red states. That might be enough to appease.
Brooke Gladstone: You can't do that.
Maria Curi: Well, a lot of this you can't really do, but it's happening. TBD on what ultimately breaks the Republican Party. What I do know is that this is an opportunity for Democrats in the upcoming midterm elections and in 2028 to have a clear and united message against the Republicans who are in power now.
Brooke Gladstone: What will the signs be if it starts to fracture the Republican Party?
Maria Curi: When Steve Bannon directs his criticism at the President himself instead of his AI and crypto czar, David Sacks. Steve Bannon has said that it's David Sacks misleading the president. I'm looking out for the elections. If this is a winning issue for Democrats, I think that might wake certain people up within the Trump administration.
Brooke Gladstone: Now let's talk about national security and start with Jensen Huang, the CEO of Nvidia, the richest company on the globe. He's been making frequent trips to the White House, and this month, Trump said he'd allow the sale of H200 advanced AI chips to China, which I guess is advanced, but not the most advanced, and the US would take a 25% cut of the profits from that deal. What kind of bet is the president making here?
Maria Curi: In President Trump's view, this deal is just a matter of making America rich. That 25% cut of Nvidia sales to China is coming back to the United States. That's a good thing, the President would argue. He's a deal maker. We don't know the details of how exactly such an agreement would work. You'll recall there was a separate agreement that was made for the H20 chip, and that is kind of on hold right now.
Brooke Gladstone: Fill me in.
Maria Curi: Before the H200 chip, there was another deal for H20 chip sales to go to China. It's just another type of AI chip that is less advanced. Again, the US would take a 25% cut.
Brooke Gladstone: China didn't like it, right? They thought that was an insult to get the lesser chip.
Maria Curi: Yes. They don't want to necessarily be reliant on US chips and not their own chips. They have Huawei, and they have--
Brooke Gladstone: But it's not as good as the H200 currently on offer, right?
Maria Curi: That's right.
Brooke Gladstone: The argument on the Trump side is it's going to make America rich. While some people in the Trump administration embrace that argument, those involved in national security over decades see this as a big risk. It isn't just coming from the folks at the bipartisan US-China Commission, which is made up of former national security officials and advises Congress. There are also concerns from Republicans on the Hill that this is a risk. We don't usually do that. We don't sell advanced technologies to potential foes like this.
Maria Curi: The way that Jensen Huang was able to convince folks in the Trump administration that this is not going to pose a national security risk is by basically making the argument that China is already moving ahead on AI, that they're still far behind the US, and so if China's doing this anyway, it would be better for them to be doing it on US technology than on Chinese technology. That argument won. President Trump is saying that they'll be able to do this in a way that protects national security. They're going to figure out some sort of mechanism that is going to make this secure. Nobody really knows what that means.
Brooke Gladstone: Yes. This sounds like his plan on healthcare. That'll be out any day now.
Maria Curi: Yes. There are no details.
Brooke Gladstone: In the New York Times this week, David Sanger asked: if the chips that power the most advanced technology can be sold to the United States' chief technological, military, and financial competitor, where's the new line drawn? By the same logic, that it is better to have China using American technology, should we sell it F-35s? Advanced missiles?
Maria Curi: I would be shocked if they allowed these chips to come in for their own national security and government purposes.
Brooke Gladstone: Really? You think that they won't take the offer?
Maria Curi: No, I think that relying on US chips for their own national security and government purposes would be risky for them, but certainly for commercial use within China, it might be appealing.
Brooke Gladstone: Do you feel that we're missing something in the coverage?
Maria Curi: The media sometimes gets pushback because we are not highlighting the ways that artificial intelligence might actually be benefiting society and humanity, for example, through drug discovery and genomics or quicker and more accurate weather forecasting. I think the reason those wins have not been highlighted as much is that this technology is being released so quickly, and there is zero appetite to slow down. All of the other unintended consequences, chatbots leading to people dying by suicide, are just too big to ignore, and that is ultimately what people are going to focus their attention on.
Brooke Gladstone: Maria, thanks so much.
Maria Curi: Thank you, Brooke. This was a pleasure. Thanks for having me.
Brooke Gladstone: Maria Curi is a tech policy reporter at Axios.
Micah Loewinger: Coming up, why your 401k needs AI to succeed.
Brooke Gladstone: This is On the Media. I'm Brooke Gladstone.
Micah Loewinger: I'm Micah Loewinger. This week, an unexpected merger.
News clip: Trump Media & Technology Group breaking news this morning, announcing a merger with TAE Technologies, a $6 billion deal. Now, what is TAE Technologies? It's a closely held fusion, that is, nuclear fusion, developer founded in 1998. This is a type of technology that creates hardly any radioactive waste, and it is a technology of the future.
Micah Loewinger: A technology of the future, in that there are not currently any power plants that actually run on nuclear fusion. Now, why, you may ask, is Trump's media company merging with a nuclear fusion company?
Sam Altman: I do guess that a lot of the world gets covered in data centers over time.
Micah Loewinger: Sam Altman, CEO of OpenAI, the creators of ChatGPT, has personally invested hundreds of millions of dollars into a nuclear fusion startup.
Sam Altman: Maybe we build a big Dyson sphere around the solar system and say, "Hey, it actually makes no sense to put these on Earth."
Micah Loewinger: He expanded on that idea four months ago on comedian Theo Von's podcast.
Sam Altman: I can say with conviction the world needs a lot more processing power. I don't know, it sounds cool to try to build them in space.
Micah Loewinger: Cool is one word for it. Meanwhile, some of the people living near these proposed data center projects here on Earth are pushing back.
News clip: In Michigan, residents today gathered outside the state capitol to protest a string of recent data center proposals.
News clip: We are concerned for the health of our local environment, our natural resources, sustainable energy future. We are concerned that there are individuals in our community that will not have the ability to keep their lights on.
Micah Loewinger: Then, earlier this week.
News clip: Three Democratic senators, Elizabeth Warren, Chris Van Hollen and Richard Blumenthal, are investigating Big Tech's role in rising power costs.
News clip: They point to data centers that are already consuming more than 4% of US electricity, with government estimates showing that that could triple to 12% by 2028.
Micah Loewinger: To understand a bit more about where this massive infrastructure project is at, we called up Stephen Witt, author of the book The Thinking Machine, who's been visiting the centers and writing about the companies behind them. Stephen, welcome to the show.
Stephen Witt: Thank you so much, Micah. It's great to be here.
Micah Loewinger: You wrote a book about Nvidia. You have visited some of the data centers being built to train AI, and you've researched their impact. How much power does a person use when they ask, say, ChatGPT a question?
Stephen Witt: For regular garden variety queries? It's really very little power. It's when you move into more industrial uses, like make me a three-minute TikTok video using AI, that you really start to consume a lot of power. Right now, actually, most of the power used is in training and developing these things. That's where you really see these huge power draws. I was in the data center, and I was shown the standard Nvidia rack of computing equipment. It's about the size of a refrigerator, but this thing will, over the course of one year, use the equivalent of 70 single-family homes' worth of electricity. There's 10,000 racks in the data center as far as the eye can see.
One big data center can use as much power as, like, Kansas City. It's really, really power-intensive to train these things. It's power-intensive to use them, too.
Micah Loewinger: Former Google CEO Eric Schmidt has said that.
Eric Schmidt: The current expected need for the AI revolution in the United States is 92 gigawatts of more power.
Micah Loewinger: Give me some perspective. How much electricity is 92 gigawatts?
Stephen Witt: 1 gigawatt is like Philadelphia. It's like adding 92 Philadelphias to make this stuff work. We really almost have to double the capacity of electricity production and distribution to meet the forecasted needs for AI.
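To put those comparisons on one footing, here is a rough back-of-envelope check in Python. The 70-homes-per-rack, 10,000-racks, and one-gigawatt-per-Philadelphia figures come from the interview; the roughly 10,500 kWh per year for an average US household is our assumption (a commonly cited EIA figure), so treat the outputs as order-of-magnitude only.

```python
# Rough back-of-envelope check of the power figures in this segment.
# Assumption (not from the interview): an average US home uses about
# 10,500 kWh of electricity per year, a commonly cited EIA figure.
HOURS_PER_YEAR = 8760
KWH_PER_HOME_PER_YEAR = 10_500

# "One rack ... will, over the course of one year, use the equivalent
# of 70 single-family homes' worth of electricity."
rack_avg_kw = 70 * KWH_PER_HOME_PER_YEAR / HOURS_PER_YEAR
print(f"average draw per rack: ~{rack_avg_kw:.0f} kW")       # ~84 kW

# "There's 10,000 racks in the data center."
datacenter_gw = 10_000 * rack_avg_kw / 1e6
print(f"one big data center: ~{datacenter_gw:.2f} GW")        # ~0.84 GW

# "1 gigawatt is like Philadelphia," so Schmidt's 92 GW of new demand
# works out to roughly 92 Philadelphias, or in household terms:
homes = 92e6 * HOURS_PER_YEAR / KWH_PER_HOME_PER_YEAR         # 92 GW in kW
print(f"92 GW is roughly {homes/1e6:.0f} million homes' worth")
```

Under those assumptions, one data center lands near one gigawatt, which squares with Witt's "as much power as Kansas City," a useful sanity check on the rack figure.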
Micah Loewinger: There's no debate that data centers use a ton of energy and that many of these tech tycoons would like to see significantly more energy usage to accomplish their goals. There have been some conflicting reports about how much water they use. A bestselling book called Empire of AI indicated that a proposed data center in Chile was going to have a huge water footprint. The journalist who wrote the book, Karen Hao, has since come out saying that there was an error in the calculation she published and that the impact in that case was overstated. That example has kind of taken up a lot of space in the discourse. Can you help me understand, like, how water-intensive these centers are in general?
Stephen Witt: AI in general does not use more water than any other industrial process, and it uses a lot less than, say, agriculture.
Micah Loewinger: Of course, we need food to survive. We don't need chatbots to survive. I guess it's a question of what you see as valuable resources.
Stephen Witt: There's been a couple of examples of an AI data center moving in, tapping into the municipal water supply and draining a portion of it, but that is always true of any heavy industry. If you look at the kind of planning they have to do for electricity, talking about building dozens of new nuclear power plants, of kind of upgrading the entire electrical grid across the country to meet this demand, I don't see anything like that happening on the water utility side. Which suggests to me that the people operating the data centers are not too concerned about the water problem.
Micah Loewinger: Estimates vary, but by 2030, it's predicted that data centers will increase energy consumption by somewhere between 60 and 150%. Forget the future. How is all of this impacting electricity costs for us now, even those of us who don't live near a data center?
Stephen Witt: Yes, it doesn't matter if you live near it. The grid is a giant system. Electricity rates have gone up a lot lately. Part of that is inflation, but a huge part of that is just the industrial demand for training and inference for AI. We all are bearing the costs of that right now.
Micah Loewinger: The other cost to all of us is, of course, the damage that is being done and will be done to our environment.
Stephen Witt: It really is a big factor in contributing to carbon emissions and climate change. The already catastrophic climate change curve has been shifted upwards by this basically second industrial revolution for AI. I don't know how you fix that. I mean, the thing you could do is just stop building these.
Micah Loewinger: You could just stop building them.
Stephen Witt: You could stop building them in the United States, but you could not stop building them in the Middle East, China and around the world. That is already happening.
Micah Loewinger: I want to talk about what's at stake for our economy in all this. According to Harvard economist Jason Furman, investment in information processing equipment and software was responsible for 92% of the growth of the US economy this year. At this point, the US economy has basically just hitched its wagon to AI and to data centers in particular. Are these data center projects actually profitable?
Stephen Witt: The data center projects are profitable. What people in the data center industry told me is like when we build a data center, we sign a 10-year supply contract with our customer, who is usually either Microsoft or Amazon. That's how they finance it. The true risk here is that AI just doesn't pan out. All of this money being dumped into these data centers does not result in useful AI products. So far, what has happened is stuffing more microchips into the data center has produced better AI, basically a direct relationship.
It is not an immutable fact of the universe that that will always happen. We may hit a brick wall, we may plateau, where suddenly putting more microchips in the shed does not result in better AI, or even if the pace of growth were simply to slow down, that would have catastrophic effects for the data center industry, and it would probably have cascading effects for the American economy as a whole.
Micah Loewinger: Yes. Nvidia is worth something like 8% of the S&P 500.
Stephen Witt: I think you can make a strong case that this is, in fact, the single most valuable company of all time, even once you adjust for inflation. Their concentration in the stock market is unprecedented. It's probably not that safe to have all of this money concentrated in one player.
Micah Loewinger: People like Jeff Bezos, Sundar Pichai, Bill Gates, Sam Altman of OpenAI, they've all started to admit that we're in a bubble.
Stephen Witt: Yes. This is not like NFTs. AI is a real thing that's going to transform our economy, but that doesn't mean it's not a bubble. Railroads transformed the American economy, and they also produced several very damaging speculative bubbles. The Internet transformed the American economy, but we all remember the dot-com bubble, too. So much of it comes down to not what the technology can do, but have we arranged the timing of these cash flows in such a way that we haven't overinflated our assets? I think to many people it now looks like we're reaching kind of bubble-like territory.
Micah Loewinger: Yes. One of our past guests, Ed Zitron, has written extensively about how he thinks the media have done too much to hype this AI revolution, to hype companies like OpenAI and its ChatGPT service, which he believes just isn't that great of a product and is on very, very shaky financial footing.
Stephen Witt: Empirically, ChatGPT is a great product. It has 800 million weekly active users. People are using this thing constantly. I mean, constantly. ChatGPT is the single fastest-growing product in human history. Its adoption curve is faster than Facebook's. It's faster than Google's. It is just one of the most successful products of all time.
Micah Loewinger: As I understand it, these AI companies are betting much more on wooing large businesses as customers than, like, regular users like you and me. Recent data from the Census Bureau found that the adoption of AI at large businesses is falling. An analysis of this data in The Economist concluded that the demand for generative AI seems "surprisingly flimsy."
Stephen Witt: So far, enterprise AI has been a disappointment, and that could cast a shadow on this data center boom. If the CFO of the company is saying, God, I commissioned a $300 million enterprise AI training run and then nobody at my company used that product at all, you can imagine demand starts to evaporate very quickly.
Micah Loewinger: Yes. This report in The Economist found that, "from today until 2030, big tech firms will spend $5 trillion on infrastructure to supply AI services." To make those investments worthwhile, they need on the order of $650 billion a year in AI revenues, according to JP Morgan Chase, the bank, up from about $50 billion a year today.
Stephen Witt: Yes, and it doesn't look like it's materializing. Now, when you talk to people in biotech, when you talk to people in drug discovery, they are ecstatic about these products. They're using them all the time, and they're going to bring more online, but that is just one sector. For the rest of the economy, it has been tricky to get people to use AI at their job. They just don't trust it. Honestly, sometimes it's just not good enough.
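As a scale check on those numbers, here is a minimal arithmetic sketch. The $5 trillion, $650 billion, and $50 billion figures are the ones quoted above; the per-user framing at the end reuses the 800 million weekly ChatGPT users mentioned earlier and is purely our illustration, since most of the hoped-for revenue is supposed to come from enterprises, not consumers.

```python
# Scale check on the AI revenue math quoted in this segment.
capex_through_2030 = 5_000e9    # big-tech AI infrastructure spend, per The Economist
required_rev_per_year = 650e9   # annual AI revenue needed, per JP Morgan Chase
current_rev_per_year = 50e9     # rough annual AI revenue today, per the segment

# Revenue would have to grow roughly thirteenfold from today's level.
print(f"required growth: ~{required_rev_per_year / current_rev_per_year:.0f}x")

# Illustration only (our framing, not from the segment): if ChatGPT's
# 800 million weekly users alone had to cover the target, each would
# need to generate roughly $800 per year.
print(f"~${required_rev_per_year / 800e6:,.0f} per user per year")
```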
Micah Loewinger: Shouldn't that inspire some hand-wringing, anxiety, fear? I mean, to get to this point, many of these companies have had to lie and exaggerate about their products, steal massive amounts of data. Why should we trust that using all these resources to build this AI infrastructure, tying up our economy in a knot, is going to pan out and not fall down like a house of cards?
Stephen Witt: I remember in the '90s people would always talk about Amazon, and they would say, well, for Amazon to justify its current market valuation, it basically has to absorb all brick and mortar retail, and we have to buy everything on the Internet. They were very skeptical about that. In fact, that is exactly what occurred. I think a lot of investors have come to believe that over the long term, the promises of these tech companies actually will come true one way or the other. It's not going to be a smooth path, but I think we're going to be having this conversation in 10 years, and everyone is going to be using AI all the time for everything. That's the bet.
Micah Loewinger: That's Wall Street's bet, but it sounds like you believe it as well.
Stephen Witt: I lived through it. All of the Internet skeptics were dead wrong. Even the biggest hype merchants for the Internet, basically every promise they made did eventually come true. I think now FOMO, fear of missing out on a tech wave, influences money managers just as much, if not more than, fear of a crash.
Micah Loewinger: In order to achieve prosperity, avoid crashing our economy, and beat China in the never-ending AI race, these companies claim that we need to allow them to expand as widely and as quickly as possible, doing whatever damage to our environment they desire. Is that worth it?
Stephen Witt: I don't know. I don't know. I mean, look, they ripped me off. I'm part of the class action lawsuit against Anthropic for basically mass copyright infringement, a mass tort lawsuit, and I'm collecting from it. They have been completely reckless in the development of this stuff. They have their eyes on the prize, which is an AI company that has the kind of platform dominance in software that Nvidia has in hardware. If one company were to achieve that, it would almost certainly be the most valuable company on earth, and it would probably disempower average individuals to some extent.
I just don't know how you stop it. I guess the way to stop it is very aggressive antitrust action against one of these companies or all of them, breaking them into smaller companies and making them subject to some sort of consumer regulatory board. The status quo today is very far away from that. Of course, anything can happen, but you need the political will from average citizens to organize to make that real. I think we're far away from that right now.
Micah Loewinger: Stephen, thank you very much.
Stephen Witt: Thank you so much, Micah. It was great talking to you.
Micah Loewinger: Stephen Witt is the author of a book about the history of Nvidia titled The Thinking Machine.
Brooke Gladstone: Coming up, 2025 was a banner year for AI fakery, and I fell for it.
Micah Loewinger: This is On the Media. I'm Micah Loewinger.
Brooke Gladstone: I'm Brooke Gladstone. This week, we wanted to cozy up by the fire, grab a mug of hot cocoa, and review an absolutely insane year for artificial intelligence. Silicon Valley's heavy hitters, Google, Meta, Apple, and OpenAI, to name a few, have poured billions of dollars into the industry, creating products for people, businesses, and even the government, but there's also been a growing tug at the AI Superman's cape. Reports of unchecked chatbots giving dangerous advice, scammers armed with better tools, and political deep fakes increasing in quality and quantity. To figure out the status of AI now, to try and trace what's happened in the last 12 months, we went all the way back to January 7th, 2025.
Mark Zuckerberg: We're going to get back to our roots and focus on reducing mistakes, simplifying our policies and responsibilities, restoring free expression on our platforms.
Brooke Gladstone: Meta CEO Mark Zuckerberg.
Mark Zuckerberg: First, we're going to get rid of fact-checkers and replace them with community notes similar to X, starting in the US. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US.
Craig Silverman: The trigger for a lot of this rollback, and for Mark Zuckerberg doing what some people have referred to as a hostage video, him sort of staring at the camera, was, of course, Donald Trump getting re-elected.
Brooke Gladstone: Craig Silverman is co-founder of Indicator, a publication dedicated to understanding and investigating digital deception. He argues that this video marked the beginning of a swift but forceful transition to a less moderated and less regulated AI world.
Craig Silverman: It becomes very clear to these platforms that they need to make sure they get out of the way so that whatever content he wants to have out there can be out there. Same from his supporters and from his like-minded politicians leading countries around the world, because I think Trump made it clear he will come after them. He mused about putting Zuckerberg in jail. I think that's the big trigger, and because most of these platforms are based in the US, what happens in the US cascades around the world.
Brooke Gladstone: It wasn't just Meta pulling back on fact-checking. In January, Google told the European Commission it wouldn't integrate fact-checking into its search bar or YouTube. In February, researchers found that Google had given up its data void notices, those small warning signs saying, "Hey, there really aren't credible results for your search." They don't exist anymore. Why not?
Craig Silverman: The Republicans labeled content moderation and fact-checking as censorship, and it's been pretty effective. The platforms, in a world where Trump is the President and the Republicans control the House and the Senate, realize: we have to move away from this stuff, whether we think it's censorship or not.
Brooke Gladstone: I mean, it was, oh, you with your fact-checking, it's like you're a liar.
Craig Silverman: It's a strange thing, because fact-checking inherently is not removing anyone else's speech, it is critiquing it, and so the reframing of fact-checking and speech of a journalistic nature as censorship was a pretty good judo trick, and it was effective in the US. I'm Canadian, so I'm going to use a hockey reference here, which is that a lot of people say the NHL is a copycat league. The team that wins the Stanley Cup one year, everybody tries to sort of copycat their roster, and it probably happens in other sports leagues. I think it's true with tech platforms. Look at how much they have all invested in running and building the same types of AI products.
Brooke Gladstone: "Fact-checking is censorship" is one half; the other half is about business. The fact that these companies are actually pushing technologies that create fakes and that are actively used to deceive, not just entertain.
Craig Silverman: The platforms like Meta, like TikTok, they are building their own AI tools. One of the things you can do with a lot of these AI tools is impersonation. One of the things you can do is generate massive amounts of slop false claims about celebrities. They are allowing this to happen at a scale using their own tools, in many cases, directly paying people cash every month based on the AI content they have created.
Brooke Gladstone: Well, give me an example of that.
Craig Silverman: There is a guy, a dad who lives in Maine, he goes by the online moniker Busta Troll, and for really about a decade or more he has run what he calls satire pages and associated websites where he just shares 100% false made-up quotes attributed to celebrities, political figures, where he'll share an image and it's got a photo of Gavin Newsom with some crazy quote he never said that makes him look like an idiot. Then there'll be a link to an article he wrote with the same hoax claim that's written like a news story, and it's red meat for the right-leaning audience on Facebook.
Used to be he would get people to go to the website, and he'd earn money from ads on the website, but this has changed because this year he got accepted into Meta's content monetization program, where they will pay you based on the engagement your content gets. He is getting paid by Meta now every single month for how viral his hoaxes are.
Brooke Gladstone: OpenAI used to ban photorealistic images of real people. That ban is over.
Craig Silverman: Impersonation is in. OpenAI made the change in March: photorealistic depictions of real people, public figures, they're now okay with. They said we can't maintain a database of all the public figures in the world. It's just not possible. We'll just try to prevent people from doing harmful representations. Then they launched Sora, which is a dedicated app that does amazing deep fakes of anyone. Google also rolled back some of its policies on impersonation because it started building some really powerful image and video generation models.
Something that in the early days these companies said, "Listen, we're going to make sure that impersonation and deep fakes are really reined in." At a certain point, they just said, "You know what, let's let it go." The most common use that is really dangerous around this type of technology is for scams, where someone can impersonate another person. They can impersonate Elon Musk, they can impersonate a famous movie star or politician, and they can convince people that there's an amazing investment offer or something like that.
There are people who are literally losing their entire life savings and having to sell their homes because they get sucked in by these things. People are cloning the voices of executives and calling up other people in the company and getting them to transfer money.
Brooke Gladstone: This stuff is legal?
Craig Silverman: As long as you are not infringing on someone's personal rights, putting defamatory things in their mouth. Typically, we don't see people suing around this. There is a case right now in California. Andrew Forrest, nicknamed Twiggy, he made his money in mining in Australia. He's suing Meta because his face and his voice have been used in hundreds of thousands of scam ads over the years. It's not an accident, I think, that it's a billionaire suing them and actually seeing this through, because that's the kind of money you need to actually get past Meta's formidable legal teams, which file a lot of motions for dismissal and other things.
Brooke Gladstone: As you went through the year, you say the next phase saw "agents of deception baked AI into their workflows." Who are these agents? What is this workflow that is making use of AI?
Craig Silverman: Let me put them into a couple of buckets here. Let's put a hustler bucket over here, and then let's do sort of state-backed or propaganda, politically oriented stuff on the other side. On the politics side, or on the state operations side, Russia has been doing this for a long time. They've had operations masquerading as European or American news organizations, spreading false articles to push a particular propaganda narrative. Attack Zelenskyy in Ukraine, or what have you. They've looked at AI, and they said, "Oh, now we don't need to use Photoshop and take a lot of time to create a whole bunch of fake headlines or fake front pages or fake videos."
They can put out more crap much faster than ever before and overwhelm any defenses that are out there on these platforms. It might be a video claiming that there's a villa that Zelenskyy owns that's worth $20 million.
Clip: Ukrainian President Volodymyr Zelenskyy bought a $20 million mansion on the Florida coast in 2020. Boasting 200ft of ocean frontage, the 13,000 square foot mansion features 6 bedrooms and 10 bathrooms, each with stunning ocean views.
Craig Silverman: A lot of the state-backed operations are now churning out, in some cases, thousands and thousands and thousands of articles or pieces of content a day. That stuff could end up getting hoovered up into AI models, where the AI models start spitting out the exact same disinformation. The snake is like eating its tail and regurgitating it.
Brooke Gladstone: And the hustlers?
Craig Silverman: I have a soft spot for the hustlers, I have to say. They're clever, they're creative, they're very innovative. For example, I did a story this year about one kind of AI-infused study app, and their strategy for marketing was to recruit young women, in some cases at least one high schooler, and have them create brand new TikTok accounts. Post after post after post talking about this amazing study hack or this trick they found that in the end leads to this one app, which I'm not going to name and give free advertising. These people's bios did not say they were paid creators for this company. On top of all of these young creators churning this stuff out, they also had a channel where they had filmed a bunch of confrontations between students and professors to generate engagement. All of them, again, included a mention of the product without saying that these were ads.
Brooke Gladstone: Were these fake arguments?
Craig Silverman: Fake arguments.
Clip: I sent this to you a week ago, and you're still using ChatGPT. All your work looks like [censored] it's wrong, and you're all getting the same [censored] answers. I email this to you, you upload your lectures, your readings, whatever the hell you want, and then you actually get insights from the AI tutor.
Craig Silverman: One of the most frequent people featured as a TA or professor is actually the head of growth marketing, and they're very proud of this strategy. They have boasted online about how many views they got and the fact that they're paying absolutely zero in advertising. I brought these thousands and thousands of undisclosed ads to TikTok. The FTC, by the way, has rules around this, and they're clearly outside the rules the FTC has set, and TikTok didn't remove any of them.
Brooke Gladstone: Tell me, for which of this hustler-generated slop do you have a soft spot?
Craig Silverman: Well, my weak spot is sometimes they come up with stuff that's genuinely clever, where I'm like, "Oh, you're terrible, but wow, you really figured that out." One of the things that stood out to me this year was that Andreessen Horowitz, one of the most reputable, most important venture capital firms in the world, invested $15 million with other investors in Cluely, a company whose first product was to help programmers cheat on the job interview coding tasks that they get assigned when they're applying for a job.
If you think about it like, Andreessen Horowitz is potentially funding an app that people used to deceive some other Andreessen Horowitz-funded companies into hiring programmers who weren't as good as they actually claimed to be. I haven't seen that before.
Brooke Gladstone: There were 26 channels on YouTube dedicated to AI-generated videos about the P Diddy trial. The videos would have an uninvolved celebrity in the thumbnail and then some salacious headline about some fake testimony or a fight. These videos racked up 70 million views, and none of it was real.
Craig Silverman: There was a golden age of Diddy slop this year, which again, a phrase I never thought I would say. People realized that there was a lot of attention around the Diddy trial that was going on. They also realized that the average person didn't really know who was testifying, and so you had folks who would generate thumbnails and, in some cases, entire long videos claiming that someone had just showed up and testified. P Diddy's mom testified against him, or the Rock showed up and testified against him. Then there would often be a bait and switch, where the video would actually be a lot of AI-generated stuff and clips of news reports.
Brooke Gladstone: Wait a minute. I saw something that I totally believed, and now I wonder whether it's completely wrong, which was that they found a recording of Prince condemning P Diddy, saying that it was disgusting what was going on, and he was actually afraid to say anything about it. Now I think maybe it was totally made up.
Craig Silverman: It might be. I don't know the Prince one specifically, but Prince has been dead for a while, unfortunately.
Brooke Gladstone: Yes, this was sort of found in his house or something.
Craig Silverman: Yes, I feel like--
Brooke Gladstone: Oh, my God, I'm such an idiot.
Craig Silverman: I feel like we're in slop town right now. Yes, but look, any of us are susceptible if it's the right message, piece of content delivered in the right moment. It is not an intelligence thing. It is not a class thing, it is not an education thing. Any of us at any time can be persuaded of something. It's really important to operate in this insane information environment with that element of awareness and humility, because with these amazing tools and with the lack of the kind of oversight by the platform, someone will create something that works great for me and works great for you, and we're going to take it in and there's less of a chance that something is going to intervene and say, hold on a second, you may want to know this about this thing that just loaded in front of you.
Brooke Gladstone: Here's the real problem. Ultimately, if you develop the skepticism that you think is going to protect you, you're going to start not just disbelieving the stuff that's false, but also the stuff that's true.
Craig Silverman: A lot of people who were taken in by the QAnon conspiracy theory believed they were engaged in really deep, serious Internet research, that they were actually the ones doing the fact-checking and the media literacy. Simply saying to somebody, "Hey, you should check that," or "Hey, don't believe everything you read," that's not actionable advice. We actually do need to be equipping people with, if you see something, here are the three steps you might take: look at that thing in front of you, but then go and search online for a wide variety of different sources, and make sure those sources aren't all from the same kind of point of view or the same location.
You know, you mentioned earlier that Google stopped putting up data void warnings, saying, like, "Hey, you're searching for something where we're not seeing a lot of good quality information." That's exactly where the people who are trying to misinform, or to jump on and get traffic to monetize, jump in.
Brooke Gladstone: You say the whole thing is the quintessence of easy money, but obviously, the social cost can be alarmingly high. Case in point is this moment on Joe Rogan's podcast. Can you describe it?
Craig Silverman: This fall, there was a video pretty popular among right-wing social media users that showed former vice presidential nominee Tim Walz. He's on an escalator, he's singing, he's dancing.
[music-The Pussycat Dolls-Don't Cha]
Don't you wish your girlfriend was hot like me?
Don't you wish your girlfriend was a freak like me?
Don't you?
Craig Silverman: He's got a white T-shirt on with black letters that say [censored]. They basically took his face and superimposed it on the body of the creator who had actually originally shared this video, which, again, is a very easy thing to do. Joe Rogan is talking about it as if it's real, and he ends up getting corrected by his producer, but his reaction isn't like, "Oh, sorry, whoops, I fell for that." He did not feel the guilt and the shame that you did. What Rogan did was basically explain it away, and he said, do you know why I fell for it? It's because I believe he's capable of doing something that dumb.
Brooke Gladstone: Truthiness lives. Just as Colbert discussed it back during the Bush administration. If it feels true, it is true, because there's no such thing as facts anymore.
Craig Silverman: Shame is no longer a factor in all this stuff. It's almost like tech and the platforms caught up this year and were basically like, yes, why should we restrain ourselves around things the President is not restraining himself around?
Brooke Gladstone: 404 Media's Jason Koebler wrote last month that America's polarization has become the world's side hustle.
Craig Silverman: 100% true. A perfect example from this year was I found a whole network of foreign-run pages that only spread 100% false hoaxes and AI-generated images about global celebrities. It was in English and other languages, but they hit upon a lot of culture war topics like trans stuff and political stuff and what have you. The first time that I brought those pages to Meta, they removed almost all of them. Then I brought a second group of more than 100 pages to Meta, and this was after they had gotten rid of the fact checkers and after they had sort of rolled back some of their moderation policies. Meta actually removed almost none of them and basically said, this doesn't violate our policies anymore.
Brooke Gladstone: Let me get this straight. Scammers can use AI to generate tons of ads and videos quickly to either sell or to pretend to sell users something. Tech companies like TikTok and OpenAI are creating tools to integrate AI into videos because that allows people to make content faster, more speed, more videos, more engagement. VC firms are investing either in the AI tools themselves or bot armies that high bidders can deploy to manufacture online content and maybe manufacture consent. It's all juiced by Meta and the like who embrace these ads, reward people for using these kinds of tools, because that's even more money in their pocket. What am I missing?
Craig Silverman: Well, I think that's a pretty good, heartbreaking, disappointing summary. I think at the end of the day, if people are wondering about the why piece of it, it's that the business imperative now is to show that you are a leader in the creation and advancement of AI tools. You need to show that your users are using them and that there is value in what you are creating. Having policies that put you, from their perspective, at a disadvantage to competitors who maybe are allowing impersonation and allowing this and allowing that doesn't fit that imperative. The race for AI supremacy has led to some of these rollbacks, and the pressure from the new administration has led to the other piece of it.
That is the recipe that led to the laundry list of things that you have just mentioned. This is where we are going into 2026.
Brooke Gladstone: Looking ahead to next year.
Craig Silverman: Oh, God, no.
Brooke Gladstone: All this is dizzying. What in God's name are we supposed to do about it?
Craig Silverman: There isn't an easy answer. I think in spite of all this, which seems very overwhelming, you can feel disempowered and small. The truth is that we individually are the atomic units that these big tech platforms need. They need our attention and they need us to spend time on them, and so I really encourage people to think very consciously about where you are giving your attention and where you are spending your time. I'm not going to tell you to get rid of all the social media on your phone. I have it on my phone too. Think about the fact that anytime you slow down on a piece of content and watch it, or like it or share it, that sends a signal to the system, and the same thing might get shown to more people. Being conscious of all of this stuff, all of these threats, all these risks, and thinking about what you reward with your attention, it's valuable. Take your power, put your attention where you feel good about it. Control where your eyes go and what you listen to, and patronize the stuff that you think is worth it.
Brooke Gladstone: I think I'm going to run some patriotic music underneath that, Craig.
Craig Silverman: Please let it be O Canada or hockey highlights.
Brooke Gladstone: Perhaps O Canada. Thanks again so much. It was great to have you back.
Craig Silverman: Thank you so much for having me, Brooke.
Brooke Gladstone: Craig Silverman is co-founder of Indicator, a publication dedicated to understanding and investigating digital deception.
[music- O Canada-Canadian national anthem]
O Canada! Our home and native land!
True patriot love--
Micah Loewinger: That's it for this week's show. On the Media is produced by Molly Rosen, Rebecca Clark-Callender and Candice Wang. Travis Mannon is our video producer.
Brooke Gladstone: Our technical director is Jennifer Munson, with engineering from Jared Paul. Eloise Blondiau is our senior producer, and our executive producer is Katya Rogers. On the Media is a production of WNYC Studios. I'm Brooke Gladstone.
Micah Loewinger: I'm Micah Loewinger.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
