Avoiding Fake News in the AI Era
[music]
Amina Srna: It's The Brian Lehrer Show on WNYC. Good morning again, everyone. I'm Amina Srna, sitting in for Brian today. Now we're going to turn to something that's getting harder and harder to avoid online, AI-generated content. In a lot of ways, 2025 felt like the year AI really hit a fever pitch. Suddenly, it wasn't just a novelty; it was everywhere, in our feeds, our messages, our search results, even our news. At the same time, many of the guardrails that were supposed to keep things in check started coming off. Facebook ended its fact-checking program in the US, with Mark Zuckerberg saying fact-checking had become too politicized, and replaced it with Community Notes instead.
OpenAI and Google both dropped bans on photorealistic images of real people, opening the door to far more convincing deepfakes. The industry as a whole remains largely unregulated. The result is that we're seeing much more AI-generated content than ever before, and it's getting harder to tell what's real, what's fake, and what's deliberately misleading. As we kick off the new year here on The Brian Lehrer Show, we want to start making sense of this digital confusion and give you some practical tools to navigate it more carefully. Joining us now is Craig Silverman, co-founder of Indicator, a publication dedicated to investigating digital deception. Craig, welcome to WNYC.
Craig Silverman: Hello. Thanks for having me.
Amina Srna: You wrote an article last month that described 2025 as a turning point in the tech world, the year a lot of the guardrails around AI and misinformation came off. What actually changed, and why did it happen so quickly?
Craig Silverman: Yes, I think there are a few key things that changed, and you mentioned some of them in the introduction here. An overall comment would be that platforms like Meta and others, which had been investing for almost a decade now in different programs, whether it's working with third-party fact-checkers or building systems to identify and detect false information or overly sexualized or overly violent content, started to roll those back. The why of that is very much related to the election of Donald Trump.
In the years preceding all of these decisions, Trump and a lot of key figures in the Republican Party and in the MAGA movement had really been labeling fact-checkers and people involved in trust and safety work as censors. When he got reelected, I think a lot of these platforms realized, "Hey, he will come after us if we don't adhere to the line that he wants." Mark Zuckerberg, on January 7th of last year, did a video basically saying, "Hey, we're getting rid of fact-checkers in the US. They were too partisan. We're restoring free speech."
The result has been an avalanche, on Meta and other platforms, of a wide variety of deceptive, misleading, and what people often refer to as AI slop content: really low-quality content that is not providing real information, or in some cases is actively misleading people. That really ended up being the theme of the year, set by Zuckerberg on January 7th. We saw other platforms and other technologies fall in line with that.
Amina Srna: There's a lot of AI junk online, but I want to focus on the political and news content. You wrote about a fake video claiming Ukrainian President Volodymyr Zelenskyy bought a luxury mansion in Florida. There are tons of other examples just like that one. How much of this kind of content is being driven by political agendas, and how much of it is just people or bots chasing engagement and ad money?
Craig Silverman: I mean, those are the two really big categories that you laid out. I think it's important for all of us to understand, as we're navigating all of this information that we have access to, which of course is a good thing, more access, more information. As we're navigating these feeds and these apps and websites that are coming to us, we have to realize that there is a battle, a war for our attention that's going on out there. There are different people and different entities with different motivations. The two big buckets are definitely the politically oriented, ideologically oriented actors who are trying to convince, who are trying to win over votes, who are trying to change people's minds in some cases.
Those might be individual people operating on their own, or they could be working in more coordinated ways, either domestically or as foreign-based state actors. Then the other category is the financially oriented ones. Honestly, it's actually very difficult at times, on the face of it, to tell which is which, because a genuine partisan sharing what they think, sharing what they feel, might actually say the same kind of thing as a fake account run by a foreign state that is trying to push a particular point of view for its own interest.
Somebody who is purely doing this to earn money might also push that same view, that same narrative, because they see it getting really big engagement. It's hard to know the motivation behind things right away because it's an environment that is so easy to manipulate. It's so easy to manipulate because it is much more open. People can join and create accounts on social networks.
While there are some protections and oversight in place, the reality is there are pretty sophisticated operations that are able to run large numbers of fake accounts, that are hiring people to pretend to be Americans, or people who are buying and selling accounts, pages, and channels, these kinds of things. It is a convoluted and complicated system that we're all trying to navigate. On the face of it, it's really hard to know how much of one or the other is happening. Just to add quickly on the Zelenskyy example you gave:
That was a false claim spread by a long-running Russian-affiliated disinformation operation that is consistently pushing out narratives in many languages, across many countries, across many platforms, to undermine Ukraine and push Russian-favored narratives. It just keeps running. As much as it's being documented by fact-checkers and investigators, there is no real way, for the platforms or anyone else, to stop it. It just keeps going and going and going. That persistence is the new reality of the information environment.
Amina Srna: Listeners, we want to hear from you. Have you ever fallen for something online? A video, a screenshot, a quote, only to later realize it wasn't real? Or have you found yourself staring at a post thinking, "I honestly can't tell if this is fake"? Maybe you've been sent a video or post by family or friends that turned out to be AI-generated. How could you tell? What sort of conversations did that spark with your friends and family? Or maybe you have questions about how to spot the signs? You can call or text with your questions or your stories at 212-433-WNYC. That's 212-433-9692.
Craig, before we get into more of your reporting on this, I want to ask about the video that's been spreading all over the internet today and yesterday evening of a woman getting shot in her car by a US immigration officer in Minnesota. It's been shared by most major news outlets and by the president himself. It's clearly filmed on a cell phone. There's no real question about whether it's real. We're living in a moment when there's just an incredible amount of really convincing fake content circulating online. What makes a video like that one so clearly not AI? Another interesting layer of this: depending on your political leanings, could the video come into your algorithm or feed in a different way?
Craig Silverman: Yes, it's a good question, and it's timely. This kind of pattern is going to play out all the time now, because so many of us have these powerful cameras in our pockets, so many more things are being filmed now, which is really, really great, to have that evidence. The challenge is that these AI models for generating images and video are advancing at a really, really rapid rate, and they're becoming more and more convincing.
The first time you encounter a video like that, it is entirely reasonable, and probably a good practice, to sit there and ask yourself, "Okay, I haven't seen this before. This is claiming to be something from a really important, big, newsworthy event. How can I know that this is what it claims to be?" I think that approach is really fundamental, something we all have to do. In this case, the truth is that on my first look at the video, it wasn't a matter of instantly deciding, "Oh, yes, this is clearly not AI-generated," because there are ways to make AI videos that are very convincing-looking.
For me, what made it more convincing in the early stages was that it was being shared by a wide variety of people who were, in some cases, on the ground: local reporters, people who had access to sources, people with direct access. There was not a lot of dispute, even among people who saw it in very different ways. The Trump administration was not denying that this was the video at all. That kind of concurrence, the fact that it was coming from credible people at the location with access to witnesses, those kinds of things.
That, more than the actual content of the video itself, led me in the early minutes I was seeing it to say, "Okay, this is probably an actual video of the event." The content of the video itself obviously does look convincing. It doesn't have obvious glitches in it. There aren't people who suddenly appear and disappear. Everything in terms of motion conforms to the physics of the world we live in. Those are some of the other checks. I think it's also the provenance of the video. Where is this coming from? Who is sharing it? Also, are people with very divergent views of what happened all acknowledging that this is actually an accurate video? Those were all good clues.
Amina Srna: Let's talk about the deepfakes that you mentioned, because there seems to be a real boom in deepfake content right now. Until last year, OpenAI and Google banned photorealistic images of real people. Those bans are gone now. Where does that leave us? Can we expect to see a ton more fake videos that look very real?
Craig Silverman: Yes. The short answer is yes. One reason is that the companies building some of the best and most convincing models for generating this stuff have thrown in the towel, saying, "You know what? We can't stop this. We can't stop photorealistic representations of real people." That's one piece of it: they're not actively trying to restrain it, they're not actively saying, "We won't allow this." They were never going to be able to stop it 100%.
But at least having that as a policy, something they wanted to do, meant there would have been some restraints. Then the second part of it is just how easy it is to create really convincing deepfakes now. There was a time when you needed to have a decent amount of technical knowledge and processing power, know how to set some things up. Even then, the stuff that came out wasn't super, super realistic and convincing. It was good enough in a lot of cases, but today you don't need any expertise. You don't even need to be paying for a service in a lot of cases. You also don't need a lot of the raw training material.
What I mean by that is, in order to produce a deepfake of me, you would need some photos and some video footage of me so that you can train the system on it. Again, there was a time when you needed a decent amount of material to feed into the system. Today, you don't. It's trivial, it's easy. There are apps. There's the Sora app from OpenAI, which makes it really easy to make these of people. We're seeing so much more of it because the companies have basically allowed a certain element of this. They're being very permissive. Then the second is that the technology is really easy, really cheap, widely available, and pretty darn convincing.
That's a pretty bad recipe, I guess, if you think about it: we're just going to see a lot more of this. I think for the average person, it means you have to understand that and let it guide the way you navigate this information environment. Understanding that it is easy to impersonate anyone, not just public figures, but also people for whom there might not be a lot of publicly available photos and videos. You just need a little bit, and you can actually do it not only with video, not only with images, but even with audio, to mimic their voice.
Amina Srna: A listener texts, "Please emphasize that women and girls have been attacked by deepfake 'revenge porn' and how horrific and sexist this is." Craig, I also wanted to ask you about the "nudifier" apps that you've written about. Those are apps that allow you to upload a photo of someone and remove their clothing or place them in a sexually explicit video. Do you have any advice on how people can protect themselves against these tools, or is there any legal recourse if this happens to you or someone you know?
Craig Silverman: Yes. I thank the listener for making what is a very true and very important point. If you look at when deepfake technology first became somewhat available, the first victims of it, the first targets of it, were women and girls whose faces were put into pornographic videos. If you think about the people who had access to this earliest on, one of the biggest things they started doing was abusing women. It has only continued. Now, following the trend we've already talked about, it's become even easier to do.
You mentioned these nudifier or undresser tools and apps. There are apps that you can download from the Apple App Store or the Google Play Store.
There are websites you can go to where you upload just one image of a real person, and they will produce tons of images of them undressed or even in a pornographic video. This has been a real scourge and a real problem. It's one that, at Indicator, we've been looking at and investigating for some time now. We actually did an estimate of what the market size is for these nudifier apps and how much money they've been earning. We believe they've earned somewhere around $36 million in revenue.
Most of that revenue is earned because they have been able to run tens of thousands of ads on Facebook, on Instagram, and through Google to reach people and sell these apps and services that are fundamentally abusive. You ask about legal recourse. There are laws against the sharing of non-consensual intimate imagery; that's often the term used in this case. Those laws apply to real content. If a romantic partner and I exchanged intimate images and one of us shared them somewhere without consent, that would be an offense, and there could be charges related to that. There's also new legislation.
There's the TAKE IT DOWN Act in the United States, under which platforms are now required to remove non-consensual intimate imagery within 48 hours of receiving a request. There are penalties and punishments they can face, and the consequences for platforms are increasing. There are also cases where individuals are being charged with the creation or spread of this material. There's actually a case right now where a teenager, a young woman in the United States, had a high school classmate create and share nude images of her using AI, and she has decided to file a lawsuit against the company that provided the tool.
We're seeing some attempts to rein it in. Honestly, I wish I had better advice on how people can protect themselves, because all someone needs is literally one photo of you, and they could go to any of these tools and create this. Even more concerning, what's happening literally as we're speaking: Grok, the AI tool built into X, the platform formerly known as Twitter, owned by Elon Musk, is being used to create tons of these non-consensual intimate images, or in some cases undressed images, sometimes of children, on this public platform.
The company, while saying anyone creating illegal content on the platform will face consequences, hasn't actually said it will stop people from being able to simply take a photo a woman shared on X, go into the comments, and say, "Hey Grok, put her in Saran Wrap," or, "Hey Grok, put her in a dental floss bikini," and the AI will do it. We're actually seeing this happen on a massive scale right now on a platform owned by the richest man in the world. It's really quite astounding and horrific.
Amina Srna: Let's go to a caller. AJ in La Mesa, California, has a procedure of how they go about seeing if things they see online are factual. Hi, AJ, you're on WNYC.
AJ: Hi, good morning. Let me just start with a quote from the late Senator Daniel Patrick Moynihan, who famously said, "Everyone is entitled to his own opinion, but not to his own facts." When presented with something from a friend, or anyone for that matter, who sends me an email with something that appears to be possibly true but is questionable, [inaudible 00:18:38] There's factcheck.org, the Associated Press, Reuters, The Guardian, The New York Times, period, full stop.
Amina Srna: Got it. AJ, thank you so much for your call. Craig, do you want to weigh in? Maybe you want to tell us how you personally go about fact-checking the things that you see online as the expert?
Craig Silverman: Yes, absolutely. Thank you, AJ, for calling in. Look, I think the sort of method he laid out is, generally, the one to think about. There's actually a name for it. It's called lateral reading. The first thing that I would say to folks is, when you are encountering information and you're seeing something, number one, don't feel like you have to rush to judgment, and don't rush to retweet or reshare it. A lot of the time, people who create false, misleading content are trying to appeal to our emotions and to our views, to either make us happy or sad or feel great that our views are being confirmed.
The more we have that emotional reaction, the less we pause and do the kind of thinking he was talking about. Number one, just have that awareness. You don't need to act. You don't need to make up your mind right away. Have some patience about it. Then the second thing is, if you are seeing something in front of you and they're claiming someone said something, or claiming this event happened, or what have you, this is where lateral reading is really helpful. Let's say you see it and it's posted on Facebook. Well, there's a whole world available to you out there outside of Facebook.
For AJ, it's going to some trusted sources and seeing, "Well, is there anything in these places that I know do good work about this, that I could compare and contrast?" Another option is Google News, a free place you can search that has content from lots of news organizations around the world. Now, look, they're not all perfect. No news organization is. You're going to have some good-quality stuff in there and some lower quality. If you were to search that quote or that claim in there, you could get a sense of the broad, different types of coverage that are out there about that particular topic or event.
If there's nothing in there, well, then it tells you, like, "Maybe this didn't happen, or maybe it just happened, and there's not a lot about it yet." Again, patience and waiting is really a virtue in this. Thinking about the claim and information in front of you, not rushing to any kind of judgment or action on it. Then potentially moving and going lateral and searching to see what else is out there from sources that you're familiar with and you trust. Then, also paying attention to where the information is coming from. Are these places you've ever heard of? Is there information in it that seems credible, or are people just repeating the same thing over and over again?
Those skills of pausing and being patient and then moving laterally to look at what else is out there, and again, not making up your mind right away, those are really good, simple practices. We don't really need a lot of technical tools or things like that. What we need is the ability to think and to engage our critical thinking in a moment when people are really trying to influence us, trying to get us to have an automatic response, which reduces our ability to be skeptical and to wait and look around.
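[Editor's note: For readers who want to automate a first pass at the lateral reading Silverman describes, here is a toy Python sketch. It searches Google News's public RSS endpoint for a claim and prints headlines from across outlets. The URL format is an informal interface and an assumption that may change, and the sample query is hypothetical.]

```python
# Toy illustration of "lateral reading": take a claim you saw on one
# platform and check what coverage exists elsewhere before sharing it.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def lateral_search(claim: str, max_results: int = 5) -> list[str]:
    """Return news headlines mentioning the claim, from many outlets."""
    # Google News RSS search: an informal, unguaranteed endpoint.
    url = "https://news.google.com/rss/search?q=" + urllib.parse.quote(claim)
    with urllib.request.urlopen(url, timeout=10) as resp:
        feed = ET.parse(resp)
    # Each RSS <item> carries a <title>; collect the first few.
    titles = [item.findtext("title") or "" for item in feed.getroot().iter("item")]
    return titles[:max_results]

if __name__ == "__main__":
    for headline in lateral_search("Zelenskyy Florida mansion"):  # hypothetical query
        print("-", headline)
    # Zero results is itself a signal: maybe it didn't happen, or maybe
    # it's too new. Either way, patience before resharing.
```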
Amina Srna: Unlike Facebook and Google, TikTok said last year it wants to give users tools to filter out AI content. That seems like a step in the right direction, probably. I'm also curious, if it's hard for trained journalists and regular users to spot this stuff, how realistic is it to expect platforms to reliably detect it?
Craig Silverman: There's definitely a role for platforms here in helping people navigate all this information in their feeds. If we look at what these platforms have said they're going to do, one of the fundamental things, and this is something not only the platforms are doing but also other technology companies and news organizations, is that they have come together to create technical standards so that if I were to go to Google Gemini or ChatGPT and generate an image, there would be data embedded in that image itself, metadata that would travel with it.
If I were to go and upload that image to Facebook, Facebook should be able to recognize, "Oh, hey, this was generated with Google Gemini." What a lot of these platforms have committed to is labeling, in some way, content that has been AI-generated. You have the choice to look at it and see, "Oh, this was AI-generated," and factor that into your decision-making. As you mentioned, some are going a step further: they're also saying, "Hey, we're going to give people a toggle, a switch in their settings, where if they want to see less AI-generated content, we will put less in their feeds." TikTok says it's doing that. Pinterest has already integrated it.
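[Editor's note: The technical standards Silverman refers to here are the C2PA "Content Credentials" specifications, which embed a signed provenance manifest in the image file itself. As a rough sketch of the idea only, the Python below scans a file's raw bytes for two markers that commonly accompany such metadata. The filename is hypothetical, and a real verifier would parse the manifest and check its cryptographic signatures rather than pattern-match.]

```python
# Heuristic sketch, NOT a real C2PA verifier: it only checks whether
# byte patterns associated with provenance metadata appear in the file.
from pathlib import Path

PROVENANCE_MARKERS = {
    # Label of the JUMBF box that holds a C2PA / Content Credentials manifest.
    b"c2pa": "possible C2PA (Content Credentials) manifest",
    # IPTC DigitalSourceType value used to flag AI-generated media.
    b"trainedAlgorithmicMedia": "IPTC metadata marks this as AI-generated",
}

def provenance_hints(path: str) -> list[str]:
    """Return human-readable hints about embedded provenance metadata."""
    data = Path(path).read_bytes()
    return [desc for marker, desc in PROVENANCE_MARKERS.items() if marker in data]

if __name__ == "__main__":
    hints = provenance_hints("downloaded_image.jpg")  # hypothetical file
    if hints:
        print("Provenance metadata found:", "; ".join(hints))
    else:
        # Absence proves nothing: these markers are stripped by
        # re-encoding, screenshots, or a simple crop-and-save.
        print("No provenance markers found (which does not mean it's real).")
```

As the comments note, these markers are easily stripped when a file is re-encoded or screenshotted, which is one reason platform labeling misses so much of this content, as the audit discussed next found.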
This gets down to the question of, "Okay, so that's what they say they're going to do. Now, what are they actually doing?" We did an audit earlier this year where we generated a couple hundred AI images and videos using different tools and then uploaded them to different platforms. We looked to see how often they were being labeled. I can tell you that pretty much everyone did an awful job. The platform that performed best at labeling AI-generated content was Pinterest, which did it only about 55% of the time.
The thing that was kind of crazy and disappointing to me was that in some cases, we used Meta's tools to generate AI images and then uploaded them to Instagram, which is owned by Meta. In those cases, Meta most of the time couldn't properly label AI content generated with its own tools. I absolutely think they need to be thinking about how to help people navigate, how to put in these cues, these labels, and how to give people options to customize their feeds.
As it stands today, they're doing a very poor job of identifying AI-generated content. In fact, companies like Google and Meta, which are building the tools to generate this stuff, seem to be putting a lot more effort into getting people to use their AI tools to generate things than into identifying it and potentially reducing people's exposure to this stuff, if that's what they want.
Amina Srna: I want to sneak in a really quick call before we run out of time. Rachel in Manhattan, can you share your story with us for about 15 seconds if you can?
Rachel: Okay. Yes. On my phone popped up Anderson Cooper, who I listen to all the time on CNN. He said he had some breaking news about a product that Sanjay Gupta had formulated. It was a product to help stabilize Alzheimer's or even reverse it. Sanjay Gupta got on, and it was his voice; it was Anderson Cooper's voice. He gave a perfectly rational explanation of how he did the research and formulated this medicine. I, like a fool, sent for six bottles of it. I woke up in the middle of the night, and I said, "I have been scammed, because if this had been true, it would have been all over the news and the newspapers that they had invented this."
I felt like a fool, and I realized this is AI. It was unbelievable because it was their voice, and it didn't look like it was dubbed or anything. Anyway, I did call. I eventually got hold of them, and I threatened them that I was going to write an article about this scam. They actually did send my money back. I said, "I'm going to return those bottles." They said, "Don't bother." I realized then, of course, it was just junk.
Craig Silverman: Oh, Rachel, thank you so much.
Rachel: It scared me, and I will never believe anything I see again.
Amina Srna: Rachel, thank you so much for your call, and I'm sorry to hear about your experience. Craig, as we wrap up, you've reported extensively on misleading and undisclosed ads like this. Are there any regulatory bodies making efforts to catch this kind of stuff, or is it kind of a free-for-all at the moment?
Craig Silverman: It's kind of both. The Federal Trade Commission often does go after unscrupulous marketers who do things like what was just described. I really appreciate her being willing to come on and share this story, because too many people think, "I'm not going to get scammed, it won't happen to me." You have to understand that the people on the other side are extremely crafty, extremely intelligent. They have a lot of money that they are putting into this, and they are testing these ads like crazy to figure out what works. You are up against, in many cases, very sophisticated, organized crime operations or just very sophisticated deceptive marketers.
It's not necessarily a case of nobody is safe, but you need to really guard your attention and be aware about the offers and the ads that come your way. This is a huge problem. The Department of Justice, the Federal Trade Commission try to go after this stuff, but they are fighting a battle against extremely well-organized and well-financed operators. Unfortunately, a lot of it is up to us to fend for ourselves and to talk about this amongst our friends and family members, to really be cautious before you hand over your credit card numbers.
One other scam to mention quickly: if you get a phone call that sounds like a relative, maybe exactly like them or similar to them; if you see a pop-up telling you you owe money to the government or your antivirus has expired; or if someone calls saying they're from the government and tells you to go get money or gift cards, those are the hallmarks of scams. You don't need to act right away. You can always wait, you can always talk to people around you and run things by them. Don't have the urgency to act, because that's what they're creating in you.
Yes, there are some efforts on this, but they're never going to get rid of it completely. The truth, unfortunately, is that a lot of these scams come through ads, and companies like Meta and Google earn money from ads. There's recent reporting that Meta is earning billions of dollars a year from scam ads; they say they don't do that intentionally, but it is happening. It's a really, really difficult and dangerous scenario that we're all in, in that respect.
Amina Srna: It seems like the takeaway from this conversation, whether it's deepfakes or online scams that use them, is that maybe the best thing is to take a quick beat, pause, and think about it for a sec. Craig Silverman is the co-founder of Indicator, a publication dedicated to investigating digital deception. Craig, thank you so much for coming on the show today and for all this good information.
Craig Silverman: Thank you for having me.
Amina Srna: Coming up next, the disappearing Southern accent. Stay with us.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
