Big Tech Embraced Fakeness in 2025
[music]
Brooke Gladstone: Hey, you're listening to the On the Media midweek podcast. I'm Brooke Gladstone. This week, we wanted to cozy up by the fire, grab a mug of hot cocoa, and review an absolutely insane year for artificial intelligence. Silicon Valley's heavy hitters, Google, Meta, Apple, and OpenAI, to name a few, have poured billions of dollars into the industry, creating products for people, businesses, and even the government. But there's also been a growing tug at the AI Superman's cape: reports of unchecked chatbots giving dangerous advice, scammers armed with better tools, and political deepfakes increasing in quality and quantity. To figure out the status of AI now, and to trace what's happened in the last 12 months, we went all the way back to January 7th, 2025.
Mark Zuckerberg: We're going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.
Brooke Gladstone: Meta CEO Mark Zuckerberg.
Mark Zuckerberg: First, we're going to get rid of fact-checkers and replace them with community notes similar to X, starting in the US. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US.
Craig Silverman: The trigger for a lot of this rollback, and for Mark Zuckerberg doing what some people have referred to as a hostage video of him on camera, was, of course, Donald Trump getting reelected.
Brooke Gladstone: Craig Silverman is co-founder of Indicator, a publication dedicated to understanding and investigating digital deception. He argues that this video marked the beginning of a swift but forceful transition to a less-moderated and less-regulated AI world.
Craig Silverman: It becomes very clear to these platforms that they need to get out of the way so that whatever content he wants to have out there can be out there. The same goes for his supporters and for like-minded politicians leading countries around the world, because Trump made it clear he will come after them. He mused about putting Zuckerberg in jail. I think that's the big trigger. Because most of these platforms are based in the US, what happens in the US cascades around the world.
Brooke Gladstone: It wasn't just Meta pulling back on fact-checking. In January, Google told the European Commission it wouldn't integrate fact-checking into its search results or YouTube. In February, researchers found that Google had given up its data void warnings, those small signals, gray banners, sometimes red bars, that say, "Hey, there really aren't good, credible results for your search." They don't exist anymore. Why not?
Craig Silverman: Before Trump was reelected, the Republicans labeled content moderation and fact-checking as censorship, and they'd been pretty effective at it once they took over the House. In a world where Trump is the president and the Republicans control the House and the Senate, the platforms realize, "We have to move away from this stuff whether we think it's censorship or not."
Brooke Gladstone: It was, "Oh, you with your fact-checking." It's like you're a liar.
Craig Silverman: It's a strange thing, the idea that fact-checking doesn't count as free speech, because fact-checking, inherently, is not removing anyone else's speech. It is critiquing it. It is fact-checking it. Meta was the one who chose to take a fact-check from someone they paid to produce it and to put a label on the content that had been fact-checked. That was Meta's decision.
The fact-checkers actually had no control over what Meta did or didn't do with the fact-checks. Content moderation, trying to enforce the rules that the platforms themselves or the laws of a country have created, can in its worst scenarios be censorship, but on a day-to-day basis, this is what they are supposed to be doing, legally. The reframing of fact-checking and speech of a journalistic nature as censorship was a pretty good judo trick. It was effective in the US.
I'm Canadian, and so I'm going to use a hockey reference here, which is that a lot of people say the NHL is a copycat league. The team that wins the Stanley Cup one year, everybody tries to copycat their roster. It probably happens in other sports leagues. I think it's true with tech platforms. Look at how much they have all invested in running and building the same types of AI products. It's the same thing around the so-called censorship. They're all copying each other because they think that is the safe thing to do, but also, they think maybe that's where the next big boom is coming from.
Brooke Gladstone: Fact-checking as censorship is one half of the story. The other half is about business: the fact that these companies are actively pushing technologies that create fakes, technologies that are used to deceive, not just entertain.
Craig Silverman: Yes, this is no longer just a scenario where you have people coming onto Facebook or TikTok or what have you and sharing and spreading false content. The platforms like Meta, like TikTok, they are building their own AI tools. They have a business imperative and a share-price imperative to get people adopting these AI tools and to show that they are actually getting traction.
As part of that, one of the things you can do with a lot of these AI tools is impersonation. One of the things you can do is generate massive amounts of slop, false claims about celebrities, and they are allowing this to happen at scale using their own tools. They are, in many cases, directly paying people cash every month based on the AI content they have created.
Brooke Gladstone: Well, give me an example of that.
Craig Silverman: The craziest example for me: there's a guy, a dad who lives in Maine, who goes by the online moniker Busta Troll. For many, many years, really a decade or more, he has run what he calls satire pages and associated websites, where he just shares 100% false, made-up quotes attributed to celebrities and political figures. It's memes, where he'll share an image with a photo of Gavin Newsom and some crazy quote he never said that makes him look like an idiot. It's red meat for the right-leaning audience on Facebook.
Then there'll be a link to an article he wrote with the same hoax claim that's written like a news story. His model used to be that he would get people to go to the website, and he'd earn money from ads on the website. Just like the teens and the young men in North Macedonia that I wrote about doing pro-Trump stuff back in 2016. This has changed because this year, he got accepted into Meta's content monetization program, where they will pay you based on the engagement your content gets. He is getting paid by Meta now every single month for how viral his hoaxes are.
Brooke Gladstone: OpenAI used to ban photorealistic images of real people. In March, that ban ended?
Craig Silverman: Impersonation is in. OpenAI made the change in March: photorealistic images representing real people, public figures, they're now okay with. They said, "We can't maintain a database of all the public figures in the world. It's just not possible, so we'll just try to prevent people from doing harmful representations." Then months later, they launched Sora, a dedicated app that does amazing deepfakes of anyone.
Google also rolled back some of its policies on impersonation once it started building some really powerful image and video generation models. In the early days of this technology, these companies said, "Listen, we're going to make sure that impersonation and deepfakes are really reined in." At a certain point, they just said, "You know what? Let's let it go." This has really, really serious consequences. The most common dangerous use of this type of technology is scams, where someone can impersonate another person.
They can impersonate Elon Musk. They can impersonate a famous movie star or politician. They can convince people that there's an amazing investment offer or something like that. There are people who are literally losing their entire life savings and having to sell their homes because they get sucked in by these things. Impersonation is also stealing money from corporations. People are cloning the voices of executives or cloning the faces of executives and calling up other people in the company and getting them to transfer money.
Brooke Gladstone: This stuff is legal?
Craig Silverman: As long as you are not infringing on someone's personal rights or putting defamatory things in their mouth, yes. Typically, we don't see people suing around this. There is a case right now in California. Andrew Forrest, who's often nicknamed Twiggy, made his money in mining in Australia. He's suing Meta because his face and his voice have been used in hundreds of thousands of scam ads over the years. It's not an accident, I think, that it's a billionaire suing them and actually seeing this through, because that's the kind of money you need to get past Meta's formidable legal teams, which file a lot of motions for dismissal and other things.
Brooke Gladstone: I'm not going to say there were some really positive signs this year, but maybe there were less negative ones. TikTok didn't make any grand proclamations in the style of Zuck about amending its misinformation guidelines. Bluesky actually put extra money into verifying the people behind accounts. Meta and TikTok both beefed up their community notes-style features.
Craig Silverman: TikTok was notable because I don't think they did as public of a bending of the knee. It's strange because, obviously, they're tied up in this process where the government has been trying to force TikTok to be sold for some time to get rid of its China-oriented ownership. Yet, they managed to navigate this in a way without doing a big video, saying, "Hey, we're rolling back this, we're rolling back that." They still actually work with fact-checkers around the world. I should note, Meta, outside of the US, continues to work with fact-checkers. That's continued, and it's positive in that sense. I think Community Notes is really important. I don't think, in and of itself, doing Community Notes is a bad thing.
Brooke Gladstone: Let's explain how that works.
Craig Silverman: Basically, you can sign up to be part of the Community Notes program. If you see something online that you think is inaccurate or requires a bit of context around it, you can submit a note. If enough people from different political points of view who are registered in the program actually think it's a helpful note and vote that it's helpful, it will show up as a label on that piece of content, just like the fact-checks used to on Meta.
It is collaborative, it is open, it's participatory, and it's supposed to have some things built in to combat bias a little bit. All of that's positive. The problem is, when we at Indicator looked into the state of these Community Notes programs, the big one on X and the big new one on Meta, we found some real problems. Meta hasn't invested much in it. There are very few people in the program and very few notes being applied. As a replacement for fact-checkers in the US, it seems really far away from filling that role.
Then there's X, which is the template. X similarly doesn't seem to be investing a lot in it. There are fewer and fewer notes being rated helpful. People who are participating are actually not seeing the end result. They've opened it up to bots now so that it won't be very long before most of the notes that are getting appended are actually coming from automated AI-oriented systems. I think as a model, it's turning into more of a fig leaf than an actual real good-faith effort.
Brooke Gladstone: Now, as you went through the year, you say that in the next phase, "agents of deception baked AI into their workflows." Who are these agents? What is this workflow that is making use of AI? Here, I want you to pause for a moment on a word that's come up a lot. I'll be surprised if it isn't the Oxford Dictionary's Word of the Year: "slop."
Craig Silverman: Yes, so let me put them into a couple of buckets here. Let's put a hustler bucket over here, and then let's do state-backed or propaganda, politically oriented stuff on the other side. Sometimes the hustlers do the politics because it makes them money, but let's divide them. On the politics side or on the state operations side, it's a cliché to mention it, but Russia has been doing this for a long time. They've had operations masquerading as European or American news organizations spreading false articles to push a particular propaganda narrative, "Attack Zelensky in Ukraine," or what have you.
They've looked at AI, and they said, "Oh, now, we don't need to use Photoshop and take a lot of time to create a whole bunch of fake headlines or fake front pages or fake videos. We can actually generate these at scale." They can put out more crap much faster than ever before and really overwhelm any defenses that are out there on these platforms. The state-backed operations, it might be a video claiming that there's a villa that Zelensky owns that's worth $20 million, right?
Reporter: Ukrainian President Volodymyr Zelensky bought a $20 million mansion on the Florida coast in 2020. Boasting 200 feet of ocean frontage, the 13,000-square-foot mansion features 6 bedrooms and 10 bathrooms, each with stunning ocean views.
Craig Silverman: A lot of these state-backed operations are now churning out, in some cases, thousands and thousands and thousands of articles or pieces of content a day. That stuff could end up getting hoovered up into AI models, where the AI models start spitting out the exact same propaganda narrative or disinformation. The snake is eating its tail and regurgitating it all for bad things.
Brooke Gladstone: The hustlers?
Craig Silverman: I have a soft spot for the hustlers, I have to say. They're clever. They're creative. They're very innovative. They actually embody a lot of what Silicon Valley talks about, which is rapid iteration: just put your head down and build. For example, I did a story this year about one kind of AI-infused study app. Their marketing strategy was that they don't really pay for ads. What they do is recruit young women, in at least one case a high schooler.
They have them create brand-new TikTok accounts. All they do is post after post after post, talking about this amazing study hack or this trick they found that, in the end, leads to this one app, which I'm not going to mention, to avoid giving them free advertising. Almost none of these posts, and there were thousands of them, said they were an ad. The creators' bios did not say they were paid creators for this company.
On top of all of these young creators churning this stuff out, they also had a channel where they had filmed a bunch of confrontations between students and professors. A student yelling at a professor, or the professor yelling at the student. They uploaded these to generate engagement. All of them, again, included a mention of the product without saying that these were ads.
Brooke Gladstone: Were these fake arguments?
Craig Silverman: Fake arguments.
Speaker: I sent this to you a week ago, and you're still using ChatGPT. All your work looks like shit. It's wrong, and you're all getting the same goddamn answers. I email this to you. You upload your lectures, your readings, whatever the hell you want. Then you actually get insights from the AI tutor.
Craig Silverman: One of the people most frequently featured as a TA or professor is actually the head of growth marketing, who recruits all the influencers. They're very proud of this strategy. They have boasted online about how many views they got and the fact that they're paying absolutely zero in advertising. I brought these thousands and thousands of undisclosed ads to TikTok. The FTC, by the way, has rules around this; they're clearly outside the rules the FTC has set. TikTok didn't remove any of them.
Brooke Gladstone: Tell me, which of this hustler-generated slop do you have a soft spot for?
Craig Silverman: Well, my weak spot is that sometimes they come up with stuff that's genuinely clever, where I'm like, "Oh, you're terrible, but, wow, you really figured that out." One of the things that stood out to me this year was that Andreessen Horowitz, one of the most reputable, most important venture capital firms in the world, let alone Silicon Valley, invested $1 million in a company that is building bot farms for you to run ads on TikTok with.
It invested $15 million, alongside other investors, in Cluely, a company whose first product was meant to help programmers cheat on the coding tasks they get assigned when applying for a job. If you think about it, Andreessen Horowitz is potentially funding an app that people use to deceive other Andreessen Horowitz-funded companies into hiring programmers who aren't as good as they claim to be. I haven't seen that before.
Brooke Gladstone: You reported in June for The Guardian on another example of what this can look like. There were 26 channels on YouTube dedicated to AI-generated videos about the P. Diddy trial. The videos would have an uninvolved celebrity in the thumbnail and some salacious headline about fake testimony or a fight. These videos racked up 70 million views, and none of it was real.
Craig Silverman: There was a golden age of Diddy slop this year, which, again, is a phrase I never thought I would say. People realized that there was a lot of attention on the Diddy trial while it was going on. They also realized that the average person didn't really know who was testifying, so you had folks who would generate thumbnails and, in some cases, entire long videos claiming that someone had just shown up and testified. P. Diddy's mom testified against him, or The Rock showed up and testified against him. It would be in the thumbnail of the video. It would be in the title of the video. Then there would often be a bait-and-switch, where the video would actually be a lot of AI-generated stuff and clips of news reports.
Brooke Gladstone: Wait a minute. I saw something that I totally believed, which was that they had found a recording of Prince condemning P. Diddy and saying that what was going on was disgusting, but that he was actually afraid to say anything about it. Now, I wonder whether it was completely made up.
Craig Silverman: It might be. I don't know the Prince one specifically, but Prince has been dead for a while, unfortunately.
Brooke Gladstone: Yes, this was found in his house or something.
Craig Silverman: Yes, I feel like--
Brooke Gladstone: Oh, my God. I'm such an idiot. [laughs]
Craig Silverman: I feel like we're in slop town right now, yes. Look, any of us is susceptible at any moment if it's the right message or piece of content delivered at the right moment. It is not an intelligence thing. It is not a class thing. It is not an education thing. Any of us, at any time, can be persuaded of something. It's really important to operate in this insane information environment with that element of awareness and humility, because we are not too smart to be fooled.
As much as we can joke about some of this crazy slop stuff, with these amazing tools and the lack of oversight by the platforms, someone will create something that works great on me and works great on you. We're going to watch it, and we're going to take it in. There's less of a chance that something is going to intervene and say, "Hold on a second. You may want to know this about the thing that just loaded in front of you."
Brooke Gladstone: Here's the real problem. This goes back to the earliest days of Photoshopification. Ultimately, if you develop the skepticism that you think is going to protect you, you're going to start not just disbelieving the stuff that's false, but also the stuff that's true.
Craig Silverman: A lot of people who were taken in by the QAnon conspiracy theory believed they were engaged in really deep, serious internet research, and that they were actually the ones doing the fact-checking and the media literacy. Simply saying to somebody, "Hey, you should check that," or, "Hey, don't believe everything you read," that's not actionable advice. We actually do need to be equipping people with, "If you see something, here might be the three steps you take."
You might go and look at that thing in front of you, but then you might go and search online and look for a wide variety of different sources to see if there is some alignment. Make sure those sources aren't all from the same point of view or the same location, that they're not all just in one place. You mentioned earlier that Google stopped putting up data void warnings saying, "Hey, you're searching for something where we're not seeing a lot of good-quality information." Those voids are exactly where the people who are trying to misinform, or trying to jump on and get traffic to monetize, jump in.
Brooke Gladstone: You say the whole thing is the quintessence of easy money. Obviously, the social cost can be alarmingly high. Case in point is this moment on Joe Rogan's podcast. Can you describe it?
Craig Silverman: This fall, there was a video that started spreading, pretty popular among right-wing social media users, where it showed former vice-presidential nominee Tim Walz. He's on an escalator. He's singing. He's dancing.
[MUSIC - The Pussycat Dolls: Don't Cha]
Don't you wish your girlfriend was hot like me?
Don't you wish your girlfriend was a freak like me?
Don't you?
Craig Silverman: He's got a white T-shirt on with black letters that say "Fuck Trump." What happened is that they basically took his face and superimposed it on the body of the creator who had actually originally shared this video, which, again, is a very easy thing to do. Joe Rogan is talking about it as if it's real. He ends up getting corrected by his producer. His reaction isn't, "Oh, sorry, whoops, I fell for that." He did not feel the guilt and the shame that you did. What Rogan did was basically explain it away. He said, "Do you know why I fell for it? It's because I believe he's capable of doing something that dumb."
Brooke Gladstone: Truthiness lives. Just as Colbert discussed it back during the Bush administration. If it feels true, it is true, because there's no such thing as facts anymore.
Craig Silverman: Shame is no longer a factor in all this stuff. I think the tech platforms, in some ways, caught up to that this year. They're basically like, "Why should we restrain ourselves around things the President is not restraining himself around?"
Brooke Gladstone: 404 Media's Jason Koebler wrote last month that America's polarization has become the world's side hustle.
Craig Silverman: It's 100% true. A perfect example from this year was I found a whole network of foreign-run pages that only spread 100% false hoaxes and AI-generated images about global celebrities. It was in English and other languages, but they hit upon a lot of culture war topics like trans stuff and political stuff and what have you. The first time that I brought those pages to Meta, they removed almost all of them. Then after I brought a second group of more than 100 pages to Meta, and this was after they had gotten rid of the fact-checkers and after they had rolled back some of their moderation policies, Meta actually removed almost none of them and basically said, "This doesn't violate our policies anymore."
Brooke Gladstone: Let me get this straight. Scammers can use AI to generate tons of ads and videos quickly to either sell or to pretend to sell users something. Tech companies like TikTok and OpenAI are creating tools to integrate AI into videos because that allows people to make content faster, more speed, more videos, more engagement. VC firms are investing either in the AI tools themselves or bot armies that high bidders can deploy to manufacture online content and maybe manufacture consent. It's all juiced by Meta and the like who embrace these ads, reward people for using these kinds of tools, because that's even more money in their pockets. What am I missing?
Craig Silverman: Well, I think that's a pretty good, heartbreaking, disappointing summary. I think at the end of the day, if people are wondering the why piece of it, it's that the business imperative now for tech platforms is to show that you are a leader in the creation of AI tools and the advancement of AI tools. You need to show that your users are using them and that there is value from what you are creating.
Nobody wants policies that put them, from their perspective, at a disadvantage to competitors who maybe are allowing impersonation and allowing this and allowing that. The race for AI supremacy has led to some of these rollbacks. The pressure from the new administration has led to the other piece of it. That is the recipe that led to the laundry list of things you have just mentioned. This is where we are, going into 2026.
Brooke Gladstone: Then what do you make of the report that you cite from The Economist saying, "Despite all that, adoption of AI amongst businesses is down"? Here's a quote. "The employment-weighted share of Americans using AI at work has fallen by a percentage point and now sits at 11%. Adoption has fallen sharply at the largest businesses, those employing over 250 people."
What's more, The Economist predicts that "from today until 2030, Big Tech firms will spend $5 trillion on infrastructure to supply AI services." To make those investments worthwhile, they will need, according to JPMorgan Chase, on the order of $650 billion a year in AI revenues, up from about $50 billion a year today. People paying for AI in their personal lives will probably buy only a fraction of what is ultimately required. Businesses must do the rest. Looks like they're not going to.
Craig Silverman: [laughs] This goes back to our conversation about the hustlers. They are early adopters. The scammers, the hustlers, the bleeding-edge marketers, the tech entrepreneurs, these are categories of people who are absolutely adopting this kind of stuff and pressure testing it and using it. It's not to say that average people aren't. My wife uses ChatGPT to ask questions far more than I am comfortable with.
There is absolutely a gap here between the amount of investment, the valuations of the companies in the AI space, how hard the platforms are pushing on this, and where things stand today. If they don't close that gap by getting actual businesses, and people in their daily lives, to integrate this stuff, then a lot of this will have just supercharged scams and fraud and impersonation and slop, not actually translated into the business value that would lead to a sustainable AI boom. This is part of the argument some people make that there is inevitably going to be some kind of bust around this.
Brooke Gladstone: Looking ahead to next year. [laughs]
Craig Silverman: Oh, God no.
Brooke Gladstone: [chuckles] There's been a ton of press about the potential popping of the AI bubble, that it's reminiscent of the dot-com sparkle and fizz of the early 2000s. Now, consider that AI right now is propelling our economy; if it weren't in the equation, the stock market would be completely flat. The New York Times reporter David Streitfeld, comparing the dot-com bubble to the AI bubble, wrote this week, "For all the similarities, there are many differences that could lead to a distinctly different outcome. The main one is that AI is being financed and controlled by multi-trillion-dollar companies like Microsoft, Google, and Meta that are in no danger of going kaput." You've been watching this for a year. Any thoughts on the financial stability of this industry?
Craig Silverman: It's absolutely true that Meta isn't going to go bankrupt if AI and its superintelligence team don't pay off the way they want. For Meta, it could end up like their big bet on the Metaverse. Do you remember that?
Brooke Gladstone: Yes.
Craig Silverman: He spent around $70 billion saying the future is the Metaverse, and that has amounted to almost nothing. I do think companies like Meta and Google are really so big, and have such strong, foundational advertising businesses, that they can make a bunch of bad bets. There will be some carnage. A lot of AI startups have been funded. There are people who left senior positions at OpenAI and started new companies with multi-billion-dollar valuations right away.
Those are the things that may go away. At the end of the day, from my specific perch, and my specific bias of wanting to see some better guardrails around what is, honestly, technology that I use and enjoy at times: if the business hype comes down a little bit, maybe these companies can actually say, "Okay, so what do we have here, and what are actually some reasonable rules?" Right now, I feel like a lot of these companies feel they can't put too many constraints on their models and on the use of them, because that means they might lose.
Brooke Gladstone: Craig, all this is dizzying. What in God's name are we supposed to do about it?
Craig Silverman: There isn't an easy answer. All of this seems very overwhelming, and you can feel disempowered and small. The truth is that we, individually, are the atomic units that these Big Tech platforms need. They need our attention, and they need us to spend time on them. I really encourage people to think very consciously about where you are giving your attention and where you are spending your time. I'm not going to tell you to get rid of all social media on your phone. I have it on my phone, too.
Think about the fact that anytime you slow down on a piece of content and watch it or like it or share it, that sends a signal to the system, and the same thing might get shown to more people. Being conscious of all of this, all of these threats, all of these risks, and thinking about what you reward with your attention, that's valuable. Take your power. Put your attention where you feel good about it. Control where your eyes go and what you listen to, and patronize the stuff that you think is worth it.
Brooke Gladstone: I think I'm going to run some patriotic music underneath that, Craig.
[laughter]
Craig Silverman: Please let it be O Canada or hockey highlights.
Brooke Gladstone: [laughs] Perhaps O Canada will do. Thanks again so much. It was great to have you back.
Craig Silverman: Thank you so much for having me, Brooke.
Brooke Gladstone: Craig Silverman is co-founder of Indicator, a publication dedicated to understanding and investigating digital deception.
[music]
Brooke Gladstone: Thanks for listening to the On the Media midweek podcast. Tune in to the big show on Friday for the lowdown on how AI shaped 2025. I'm Brooke Gladstone.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
