Meta Prepares for Trump 2.0

[MUSIC]
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning, everyone. Coming up on today's show, a New York City Council member tells people how to use high-tech vandalism to disable congestion pricing cameras, and Mayor Adams dismisses it as a joke. Also, 100 years of modernism in our 100 Years of 100 Things series, this will be about everything from abstract art to the parallels between the Gilded Age robber barons and today's mega-billionaires. Can't wait for that. In our second hour today, 100 years of modernism.
We begin here. One of the ways we're covering the second Trump administration era is to center the question, is this what democracy looks like? Questions of democracy versus authoritarianism, or oligarchy, or other anti-democratic trends aren't just about the government, but also about what happens in the private sector. Today we ask, is this what democracy looks like, with respect to the announcement by Mark Zuckerberg yesterday that Facebook, Instagram, and anything else produced by his company Meta will scale back the fact-checking that's been used to remove dangerous disinformation and expressions of hate.
It's the kind of change that President-Elect Trump has wanted. Trump praised it after Zuckerberg's announcement. For a little historical context, though, Facebook took a lot of criticism after the 2016 election for allowing so much Russian and other election disinformation to spread the way it did. Zuckerberg appeared before Congress in 2018 and apologized.
Mark Zuckerberg: It's clear now that we didn't do enough to prevent these tools from being used for harm as well. That goes for fake news, for foreign interference in elections and hate speech, as well as developers and data privacy. We didn't take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I'm sorry.
Brian Lehrer: Mark Zuckerberg in 2018, quite the opposite of what he said yesterday. Did Facebook overcorrect after 2018, or is Zuckerberg just toadying up to the president-elect for financial gain or to avoid persecution? Before we discuss this with two guests and invite your calls and texts, I'm going to play the full five-minute video that Zuckerberg posted yesterday morning so you can hear it for yourself and start to come to your own conclusions. Here it is. To be precise, this runs 5 minutes and 17 seconds.
Mark Zuckerberg: Hey everyone, I want to talk about something important today because it's time to get back to our roots around free expression on Facebook and Instagram. I started building social media to give people a voice. I gave a speech at Georgetown five years ago about the importance of protecting free expression. I still believe this today. A lot has happened over the last several years. There's been widespread debate about potential harms from online content. Governments and legacy media have pushed to censor more and more.
A lot of this is clearly political, but there's also a lot of legitimately bad stuff out there. Drugs, terrorism, child exploitation. These are things that we take very seriously, and I want to make sure that we handle responsibly, so we built a lot of complex systems to moderate content. The problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that's millions of people. We've reached a point where it's just too many mistakes and too much censorship. The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech.
We're going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms. More specifically, here's what we're going to do. First, we're going to get rid of fact-checkers and replace them with community notes similar to X, starting in the US. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the US.
Over the next couple of months, we're going to phase in a more comprehensive community notes system. Second, we're going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it's gone too far. I want to make sure that people can share their beliefs and experiences on our platforms. Third, we're changing how we enforce our policies to reduce the mistakes that account for the vast majority of censorship on our platforms.
We used to have filters that scanned for any policy violation. Now we're going to focus those filters on tackling illegal and high-severity violations. For lower severity violations, we're going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn't. By dialing them back, we're going to dramatically reduce the amount of censorship on our platforms. We're also going to tune our content filters to require much higher confidence before taking down content.
The reality is that this is a trade-off. It means we're going to catch less bad stuff, but we'll also reduce the number of innocent people's posts and accounts that we accidentally take down. Fourth, we're bringing back civic content. For a while, the community asked to see less politics because it was making people stressed, so we stopped recommending these posts. It feels like we're in a new era now, and we're starting to get feedback that people want to see this content again, so we're going to start phasing this back into Facebook, Instagram, and Threads, while working to keep the communities friendly and positive.
Fifth, we're going to move our trust and safety and content moderation teams out of California, and our US-based content review is going to be based in Texas. As we work to promote free expression, I think that it will help us build trust to do this work in places where there is less concern about the bias of our teams. Finally, we're going to work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world.
Europe has an ever-increasing number of laws institutionalizing censorship and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US Government. That's why it's been so difficult over the past four years, when even the US Government has pushed for censorship. By going after US and other American companies, it has emboldened other governments to go even further.
Now we have the opportunity to restore free expression, and I am excited to take it. It'll take time to get this right, and these are complex systems. They're never going to be perfect. There's also a lot of illegal stuff that we still need to work very hard to remove. The bottom line is that after years of having our content moderation work focused primarily on removing content, it is time to focus on reducing mistakes, simplifying our systems, and getting back to our roots about giving people voice. I'm looking forward to this next chapter. Stay good out there, and more to come soon.
Brian Lehrer: Mark Zuckerberg's video posted yesterday. With us now, Mike Isaac, who covers tech companies for The New York Times. He's also author of the book Super Pumped: The Battle for Uber. His latest article in The Times is called Mark Zuckerberg's Political Evolution, From Apologies to No More Apologies. Mike, thanks for joining us. Welcome back to WNYC.
Mike Isaac: Thank you so much for having me. Great to be here.
Brian Lehrer: Listeners, you're invited to weigh in, too, of course. This may break down politically, but let's see. Did Facebook help initiate the era of mass disinformation in the first place? Did it overcorrect to the point of too much censorship after Trump's first election and the January 6th attempted insurrection? What do you think about his announcement from yesterday now that you've heard it? 212-433-WNYC. Call or text, or with any questions, of course, 212-433-9692. Mike, before we get to Zuckerberg being sorry, not sorry, or judgments of Zuckerberg one way or another, and the context of various companies seeming to adjust their policies now to align more with Trump, help us report the facts of this story. What was Zuckerberg apologizing to Congress for, specifically, in the clip we played first from 2018?
Mike Isaac: Yes, it was really a trip to go back down memory lane on some of these explanations of his. It seems like a long time ago now, but it wasn't that long ago that Zuckerberg was being essentially hauled into Congress to explain how Facebook works, why certain types of information or misinformation from both folks in the United States as well as foreign governments were traveling far and wide, and why the company didn't really detect it for a long period of time because they weren't particularly looking for it and didn't have the resources to root that out.
Only six or seven years ago was he explaining, "We fell down on our duties. I take responsibility for this, and this is something that I take seriously. We're spending literal billions of dollars and making a bunch of hires to remedy this problem." That was the line for three or four years up until, I want to say, 2021, 2022-ish, when Zuckerberg, I think, started, not externally as much, but internally, and especially in conversations with friends and colleagues, really getting resentful at the lack of credit they were getting, as well as at some of the social changes that he didn't completely agree with.
Brian Lehrer: The biggest headline from yesterday's announcement was that Meta is moving away from professional fact-checkers to something called community notes. Who are the fact-checkers, or who have they been, and what is the short history of their role on Meta platforms?
Mike Isaac: Back a few years ago, after all of this broke with the 2016 election disinformation, Meta's very quick response was to lean on a few different organizations. One was this international fact-checking coalition that really engaged with a bunch of news organizations across the world, as well as news orgs directly: The Associated Press, ABC News, Snopes.com, which has those articles that basically tell you pretty quickly whether something that's traveling online is real or fake. The idea was, when things went broad on Facebook or Instagram, or what have you, they would refer some of these posts to these fact-checking companies or fact-checking organizations. They would come back with, is this real or not? That could result in anything from removal to downranking to adding a disclaimer on the post, basically saying this has been fact-checked and found not to be correct.
That was the standard for a while. I think it's reasonable to ask whether that actually worked, especially given Facebook's scale of operations. Literally, nearly half the world uses one or more of the company's apps. That was something they at least tried for years.
Brian Lehrer: Zuckerberg said explicitly in yesterday's video that the fact-checking system led to too much censorship. He used those words, too much censorship. Can you give us some detail on that? How many posts and what kinds of posts have been removed or rejected over the several years that they used the fact-checking contractors?
Mike Isaac: I think he's being a little slippery and doing some sleight of hand on what was the reason for some of what he calls censorship on the platforms. I think there's a real conversation to be had on what types of content were being caught in Facebook's systems, whether those are human or automated. They rely on artificial intelligence a lot to catch a lot of these things. I think there's a very good point from social groups that are saying, "Hey, things that shouldn't be taken down, whether that's activist posts or things that fall on both sides of the political spectrum or posts from marginalized groups, those get hit too."
Often, they are erroneously hit by filters even though they don't violate Facebook's policies. That's fair. That said, I think that Zuckerberg has also changed his mind on what should or shouldn't be allowed on his platforms over the past few years. They specifically call out social issues around gender identity, or, he was particularly upset at the Biden administration pressuring them to take down COVID-related content during the lockdowns. They relied on some other fact-checking networks to do that. This is something where I think Zuckerberg now, with the ultimate benefit of hindsight in his eyes, thinks that he was wronged by the Biden administration. I don't feel like that's fair. We've all learned a lot during the past few years after the immediacy of the lockdowns, but look, he regrets what he sees as going back on his values of erring on the side of more speech, not less. I guess he's free to do that.
Brian Lehrer: Yes. Have there been official standards at Facebook, Instagram, and WhatsApp, for what would be taken down? Was it there in writing?
Mike Isaac: Yes, sure. A lot of this stuff has been codified over the years. The policies that they have are long and winding, things that most people do not read, except for very smart experts. Those also tend to change a lot, and really change at what I think are the whims of the very top, especially Zuckerberg. This is Zuckerberg's company, he is the bottom line, and he dictates what he thinks should be on there. Then also, I think, politically, on things that Facebook is under fire for at any given moment, sometimes they'll find ways to do close readings that allow them a little more leeway on things that they're worried about getting hammered for by different parties.
I think that's a real reality. In the past, they would have denied that. Now I think they probably have less room to deny that, particularly because of the reel that you played in the beginning, with Mark Zuckerberg basically being very open about who he's catering to here, at least the administration he is catering to. Yes, this stuff has been in writing, but it is very fluid. The evidence from yesterday is that on some very contentious social issues of our time, they're saying, "We're open to a complete change in how we deal with that on the platform now."
Brian Lehrer: Let's take a phone call first from a listener who I think supports these changes. Isaac in Lakewood, you're on WNYC. Hi, Isaac, thank you for calling in.
Isaac: Hi, how are you? I find it very interesting that the Democrats, who ran a very pro-democracy campaign, would censor free speech and be against a platform that just gives everybody a voice, especially when they're doing fact-checking and things like that, which is coming from a very left-wing narrative. The conservatives are the ones that, till now, have been complaining that they aren't able to exercise their right of speech. It's just interesting that Democrats right now are not giving the Republicans a place to say these things, and they're censoring them.
Brian Lehrer: Isaac, thank you very much. That's typical of the critique we've heard from the right. Right, Mike? My guest is Mike Isaac, who covers tech companies for The New York Times. He's now covering Facebook's reversal of its policy to employ fact-checkers. The cries of censorship have come overwhelmingly from the political right, as well as cries of hypocrisy, as we just heard from that caller. Democrats run on this pro-democracy platform, but then they use their power to pressure companies in the private sector, like Facebook, to censor conservative points of view. That's the allegation. Based on your reporting, did they have a point, to any degree, about things that were a matter of conservative opinion, rather than actual disinformation or hate speech getting caught in that net?
Mike Isaac: Yes, you're totally right. This is a hallmark of the critique aimed at Democrats and liberals by people who are upset with these policies. I think the example that folks often go back to, of, "Hey, you acted perhaps irrationally or too harshly," was the Hunter Biden laptop story, when both Twitter, as it was called at the time, and Facebook acted: Twitter basically suppressed it entirely, and Facebook downranked it in the news feed, which meant it was very hard to find, over concerns that it was either fake or stolen, with some hand-waving around the policies on why they did that.
You can argue the legitimacy of the story just in terms of whether you think it's a national issue or not. Both sides have very different viewpoints there, but it was a real incident, and folks, often on the right, argue it was unfairly suppressed. Even Zuckerberg himself has openly said, on the Joe Rogan show, that he regretted making those moves and that it should have played out differently. The things that they often point to are these very high-profile flashpoints around highly politicized figures in the discourse. What I've reported on a lot is something that I think conservatives are really upset about, which is getting these--
What Facebook used to do is attach these little notes or flags on posts, saying, "This is about COVID," or, "This is about some other social issue, and you should look into it a little bit more." Even if it wasn't saying, "This is flat-out false," it bothered people. It bothered them to see things that they were talking about being flagged, or whatever, by this company. To them it was not fair, and leftists or liberals were not getting those types of flags. I think that Facebook really listened to that, really internalized that. That was part of Mark Zuckerberg's message yesterday: "Hey, we're not going to do that to you anymore. We want to give you all a voice, and that was a mistake."
Brian Lehrer: Let's take a phone call. Somebody who I think has a very different point of view from our first caller, also in New Jersey, though. Is Gene ready to go? Can I take Gene on Line 1 here? Here is Gene, also in Jersey. You're on WNYC. Hi, Gene.
Gene: Hi. Good morning, Brian. Listening to Zuckerberg, I see a certain amount of brilliance there. What he's saying is that, "I produce a vehicle to let people speak." A parallel is to the manufacturers of cars. They produce cars, but they don't control the speeds people take. We have traffic cops that regulate that. Basically, what I think is that the parallel, Brian, that Zuckerberg is saying, "Let the government create the traffic cop to determine what's lies and what isn't. It's not my job. I provide the vehicle for people to speak, but I don't provide the service where I'm a traffic cop."
Brian Lehrer: Right, so you're, in that case, supporting the idea of this pullback, I think, or am I misunderstanding you?
Gene: Yes. I think what Zuckerberg is doing is producing an exaggeration of what's going to happen. We're going to have a disaster with this type of speech where there's no traffic cop to regulate the lies. The first lie this morning I saw on Facebook was, Mark Zuckerberg dies at 36, and he was a pedophile. That was the first comment on Facebook this morning, opening it up. Mark Zuckerberg knows that it's a disaster. He's trying to say to the government, "You guys better get your act together and straighten out this whole free speech madness."
Brian Lehrer: Gene, thank you very much. That's an interesting example that Gene just gave there, Mike. I don't know if it was a real post on Facebook, but if it was, it, I guess, was intended to make a point. They were posting something so obviously false. Mark Zuckerberg dies at 36, and he was a pedophile. Is Facebook supposed to do nothing about that now?
Mike Isaac: It's funny. I've seen a lot of satire posts, and satire is something they've struggled with over the past few years. There are sites like The Onion, sites like The Hard Times, and The Babylon Bee on the right, that take this tongue-in-cheek sort of approach online.
For one, Facebook's artificial intelligence is terrible at detecting humor and satire. Where it is right now, those systems can't really deal with it. Even humans, look, people miss things. The joke goes over people's heads a lot of the time. You might imagine that this post this gentleman was talking about would be more obvious as a fake, but there are a lot of people in the world, and they skim their phones, and they don't know what's real at any given time. Things that might be made to prove a point could be taken as serious. I do want to say, on the regulation point that he brought up, Facebook likes to say, "Yes, regulate us, please, Congress. This is something we've asked for, and you neglect to do it," which is half true.
I would say Congress has neglected to make very strong regulations around any of these social companies. I think Facebook wants to be regulated in the way they would prefer. They have an army of lobbyists to do that. They have suggestions for how they think it should be done. They don't want to be broken up by antitrust action. I think they're being a bit disingenuous when they invite government to come in and help them, because they want government to help them in a way that continues to help themselves, if that makes sense.
Brian Lehrer: When we continue after a break, we're going to replay a short 19-second clip from the Zuckerberg video where he singles out two topic areas, immigration and gender, where he says Facebook's policies have been out of touch with mainstream discourse. Mainstream discourse as a benchmark there. We'll continue with Mike Isaac, New York Times reporter who covers tech companies. More of your calls and texts. We're going to bring in a former Facebook executive who is very unhappy with what Zuckerberg did yesterday. Stay with us.
[MUSIC]
Brian Lehrer: Brian Lehrer on WNYC as we continue to discuss Mark Zuckerberg's announcement yesterday, that Facebook is ending its fact-checking program and will rely instead on what it calls community notes to suggest which things should get taken down because they are misinformation or disinformation or hate speech beyond a certain point. We're talking about it with Mike Isaac, who's covering the story as he covers tech companies for The New York Times. We're going to bring in, in just a second, a former Facebook executive who's very unhappy with what Zuckerberg did yesterday.
I think she's going to say he's just toadying up to Trump, for whatever reason. We'll see what she thinks the reasons are. First, I want to replay a 19-second clip from the 5-minute video of Zuckerberg. We played the whole thing at the top of the segment, and here, he singles out two areas of content, specifically, as ones where he says Facebook's policies have been just out of touch with mainstream discourse. Listen.
Mark Zuckerberg: Second, we're going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender, that are just out of touch with mainstream discourse. What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas. It's gone too far.
Brian Lehrer: Immigration and gender, the only two topics singled out by Zuckerberg in that video. Also with us now for a few minutes to react to that, and to this whole story, is Yael Eisenstat, who describes herself as a tech and democracy advocate.
She is currently a Senior Fellow at the group Cybersecurity for Democracy. Very relevant to this conversation. She spent six months in 2018 as, I think the title was, Facebook's Global Head of Election Integrity Operations for Political Advertising. A 2021 NPR story about her was called Ex-Facebook Employee Says Company Has Known About Disinformation Problems for Years. Yael, thanks for a few minutes. Welcome to WNYC.
Yael Eisenstat: Happy to be here. Thank you.
Brian Lehrer: Can you tell us about those days of working for Facebook, first? What did you do?
Yael Eisenstat: Sure. I was hired in 2018, right after the Cambridge Analytica scandal, to come in under this title, which was quite large. I don't think it's one person's role, but the global head of elections integrity ops working on the political advertising side. My job was supposed to be to ensure, in the wake of Cambridge Analytica, that we would figure out how to make sure advertising wasn't being manipulated to sow disinformation, or voter suppression, or other election integrity issues, around the world. Both for the US midterm elections that were coming up and for every election in the world in which Facebook operates.
That's what I was supposed to do there. It didn't turn out so much that way, unfortunately. Everything we tried to do was shelved. It was very clear that they didn't really want us to fix some of the fundamental problems in advertising, including what links directly to what Mark Zuckerberg actually said yesterday: they absolutely refused to allow us to fact-check political candidates at the time. They wouldn't let us actually build in any scanning of ads for potential voter suppression tactics. It was all because Mark Zuckerberg had made it very clear that he would not fact-check Donald Trump, in particular.
Brian Lehrer: Why would it have been Facebook's role? Why should it be Facebook's role, in your opinion, to police the platform, which has, what, a billion users, more than a billion users, to look for every post that you or whoever is hired as a fact-checker would judge to be true or false?
Yael Eisenstat: The interesting thing is, I actually don't think that it's Facebook's job to decide what is fact and what is false on every single post. That's the irony here. My goal was not to fact-check everything everybody said. My goal was strictly, at the time, to build a program to ensure that voter suppression tactics were not happening. Meaning, they were not lying about where to vote, how to vote, when to vote, things like that, which is a fundamental part of being able to engage in a free democracy in elections. What is interesting about what we're talking about right now is it actually benefits Mark Zuckerberg for us to focus this entire conversation over the last few days, on fact-checking, as opposed to on what the platform itself actually does with that speech.
On the advertising side, where I worked, it wasn't just whether or not they were allowing lies about elections and other harmful content to be posted. They were profiting from it, because they were taking money from the ads. They were selling targeting tools so that those ads could be targeted in very specific ways. Different messages to different people, in ways that made it very hard for all of us to understand what political messaging looked like. There are basic, basic rules for political messaging on TV and in newspapers. None of those rules were applying in the digital world.
That's part of what that danger was. I am less concerned with the idea that Mark Zuckerberg should fact-check every piece of speech on the platform. I am more concerned with the company's own tools for what they do with that speech, how they amplify that speech, how they recommend it. We all know lies spread faster than facts, and particularly because of how the algorithms of their own company spread those lies.
Brian Lehrer: Mike Isaac, were you trying to get in on that? I think I heard you start to react.
Mike Isaac: I gave a "Mm" of agreement. No, I totally agree. I think the fact-checking part of this is getting a lot of attention. While that's interesting and important, Yael knows better than anyone, the scale of Facebook is so huge that, even if you wanted to, you can't fact-check every post on the platform, particularly because there's just a torrent of them every day. The real thing that I think folks are going to dig into over the next couple of days is the fact that they're doing a reversal and putting political content back into their apps in a big way, when they and all their executives had spent the past year defending taking it out by saying people don't want it, and now suddenly, people do want it, according to Mark Zuckerberg.
Also, they're making significant policy changes, which, Brian, you were talking about earlier, around specific social issues that are very contentious. I agree with Yael that the focus on fact-checking does wag the dog a bit on these other very important points. I guess it's our job to hammer home the full scope of those changes in the coming weeks, basically.
Brian Lehrer: Yes. Yael, do you want to comment specifically on the clip from the Zuckerberg video that we just replayed, where he singles out gender and immigration as topics where he says the Facebook algorithms or policies have been too restrictive and out of line with what he called mainstream discourse?
Yael Eisenstat: Sure. That is particularly interesting because he is really singling out the things that we saw a lot of during this election cycle, things that, whether it's the far right or more MAGA-based people, they used in their campaigns: anti-trans rhetoric, anti-LGBTQ-plus rhetoric. He did very much single out certain things, including terrible things you can now say about women. One of the things that went under the radar, that happened at the same time yesterday, is they also rolled back their hate speech policies. He didn't talk about that in his video.
That happened under the radar. Again, I want to be very clear, it's not that I want Mark Zuckerberg to be the arbiter of truth on every piece of content on the platform, but they even snuck a sentence in there somewhere in the new hate speech policies, something about "even if it leads to offline action." What does that mean? Does that mean even incitement to violence is going to be allowed on the platform now? Again, more importantly, when people harass, attack, or target certain communities, it's actually about limiting those communities' speech, not helping their speech flourish.
It's about harassing them to the point that they go offline. These things actually lead to real-world consequences. If you look at January 6th, or at some of the attacks that happened over recent years, like Pittsburgh or Poway, you can trace a lot of those perpetrators of violence back to how they were actually recommended extreme content on Facebook, and on X, then Twitter, how the platforms' own algorithms led a lot of these people into conspiracy theory groups, down certain paths. That's why I really want to focus on the systemic issues of how these platforms operate, as opposed to this conversation, which benefits Mark Zuckerberg, benefits Elon Musk, and benefits this whole ecosystem right now that wants to pretend this is about being champions of free speech, rather than about what their own tools are doing with that speech.
What is their own responsibility here? Yes, I think it's very interesting that the categories he's pulling back on are all the categories that, frankly, Donald Trump and his cohort have been using in the most harassing way in recent years. Immigration, targeting trans people, targeting women. Those are the things he's rolling back. As for saying it's not political, I also have lots of thoughts, if you would like, on how this is a political decision, but that just seems crystal clear.
Brian Lehrer: Let's take another phone call. Here's Paul in Rye Brook. You're on WNYC. Hello, Paul.
Paul: Hi. I guess number five in the video, he's moving the content moderators or that department from California to Texas, to apparently avoid any appearance of bias, which I'm not sure. First of all, I don't know if it's even related. Musk moved Twitter. Was it Twitter, or-
Brian Lehrer: I think he did move something to Texas.
Paul: -sorry, Tesla to Texas.
Brian Lehrer: Yes.
Yael Eisenstat: Yes, he did.
Paul: Right. Anyway, so why move it at all? Because apparently, people in Texas have, I think it was less concern or something.
Brian Lehrer: Less bias than people in California. Let me replay that clip. This is another 19-second replay from the Zuckerberg video.
Mark Zuckerberg: Fifth, we're going to move our trust and safety and content moderation teams out of California, and our US-based content review is going to be based in Texas. As we work to promote free expression, I think that will help us build trust to do this work in places where there is less concern about the bias of our teams.
Brian Lehrer: Yael, give a reaction to that. Is there less concern about bias in Texas? The caller obviously doesn't understand it. I'll admit I don't understand it. Is this geographical political theater?
Yael Eisenstat: That's a perfect way of putting it. It is absolutely geographic political theater. There is no reason to believe that there is less bias in Texas than there is in California, especially since the other thing we're not talking about is what a fact-checker is. A fact-checker actually has principles that they have to follow, to base things on fact as opposed to personal opinion. That said, moving it to Texas is absolutely a political play. Musk also moved a lot of his operations to Texas. It's obviously a district where there are court cases going up relating to these companies. That is 100% a political play targeted at the top audience, Donald Trump. I think it's very important to remember Donald Trump had threatened Mark Zuckerberg multiple times.
Brian Lehrer: With lawsuits, even with imprisonment. Right?
Yael Eisenstat: Correct. The irony here is, in the past, you would have people like Zuckerberg and other tech leaders talk about jawboning and how the government is trying to pressure them to do certain things. Here, when you have Donald Trump threatening to imprison Mark Zuckerberg, or back in 2020 when he threatened to sue Twitter for taking down his posts, Mark Zuckerberg went on Fox News and said, "I don't think we should be fact-checking politicians anymore." I would argue that that's jawboning. You have Mark Zuckerberg very clearly reacting to making sure that he is on the right side.
This is incredibly important to understand, because Zuckerberg is doing what Trump wants him to do. In return, Zuckerberg is asking for things. He doesn't want regulation to go forward against him. He doesn't like some of these cases being brought by Lina Khan and others. Also, the last point, which went really under the radar in that video: he specifically said he is going to have Donald Trump work with him to pressure European and other international regulators not to enforce their own laws in their own countries against the company.
Brian Lehrer: Right.
Yael Eisenstat: There is a quid pro quo or tit-for-tat going on here that should be really concerning for democracy worldwide, not just in the US.
Brian Lehrer: To end this, the last question, which I'll give you both a chance to answer in the context of our framing question, is this what democracy looks like? Can you put it in the context of what other media companies are doing? Since the election, there's been ABC News settling the Trump defamation suit against George Stephanopoulos for millions of dollars, a suit that a lot of commentators say ABC News could have won if they had fought it. The Washington Post and LA Times, just before the election, pulling their Harris endorsements. The Washington Post now rejecting a cartoon portraying Bezos and other tycoons, I think. Was there a Zuckerberg representation in that one? I'm not sure.
Yael Eisenstat: Yes.
Brian Lehrer: Yes, on bended knee before Trump. The cartoonist resigned in protest of the cartoon being rejected. Mike Isaac, what's the large context here?
Mike Isaac: Yes, no, I think it's really smart to look at the media part of this, too. It's not all Meta, even though they are a giant media apparatus in and of themselves these days. I think there's a common thread in at least some of those: the ownership of these companies, the stewardship now, with Bezos coming in and installing people who cater to how he thinks about it. It has been really striking to me to just see a capitulation, so quickly, from a lot of folks who just eight years ago were making what they presented as moral stances, often at the urging of a number of their employees.
I was talking to people inside of Facebook yesterday, a lot of employees, some of whom are happy about this, some of whom are not. I think the thing that really bothered them the most was the California-to-Texas example that Zuckerberg led with, which was this really striking example of realpolitik. The language is pure cynical pragmatism, because Facebook knows this is not true. They've had employees in Texas for a long period of time. It's real theater, just appealing to the top. I think that we're going to see that across a bunch of different organizations. I'm curious to see what that looks like, or if there's going to be a lot more just glad-handing with the new administration to keep hawkish entities off their backs, whether it's the EU or, domestically, the FTC and the antitrust suits.
Brian Lehrer: Yael, a last comment from you in about our last 30 seconds. You describe yourself as a tech and democracy advocate. The group you work with is called Cybersecurity for Democracy. What's the big picture democracy overlay here, as opposed to just critique of corporate capitalism?
Yael Eisenstat: Sure, absolutely. I think one of the things the listeners might find interesting is I actually spent the first 14 years of my career in the national security world. I was a diplomat. I was a national security advisor at the White House. This was even before going into tech. There are multiple things happening. I fully agree with Mike. It has been really surprising to see how major media outlets have bent the knee in advance, before Trump even came into the White House. That is an incredible threat to democracy. If the media is going to capitulate, where do we go from there?
With Meta and with Zuckerberg, he has always made choices that would ensure his power over and domination of the information landscape. That's why I say this isn't necessarily new. It's been a long time coming, and he is exactly where he wants to be. He is getting into the good graces of a leader who he believes will help him maintain dominance globally. He is throwing a lot of people under the bus in doing that, including through what I think are decisions that will fundamentally be harmful for democracy, because, last comment, as long as people like Mark Zuckerberg and Elon Musk continue to have this complete, unchecked power to tune their algorithms to decide whose voices are amplified, whose voices are silenced, which content is recommended, how ads are targeted to us, that means that they have fundamental power over how we even engage with and understand and trust information. That should concern anyone who believes in free, open, democratic discourse.
Brian Lehrer: Yael Eisenstat from Cybersecurity for Democracy, Mike Isaac, who covers tech companies for The New York Times, thank you both very much.
Mike Isaac: Thank you.
Yael Eisenstat: Thank you.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.