Ever since Russian agents and other opportunists abused its platform in an attempt to manipulate the 2016 U.S. presidential election, Facebook has insisted, repeatedly, that it’s learned its lesson.
Tanzina Vega: This is The Takeaway, I'm Tanzina Vega. On Wednesday, Facebook CEO Mark Zuckerberg testified in front of the Senate Commerce Committee, where he addressed the spread of misinformation on his platform.
Mark Zuckerberg: This is an extraordinary election and we've updated our policies to reflect that. We're showing people reliable information about voting and results, and we've strengthened our ads and misinformation policies.
Tanzina: Both Facebook and Twitter say they've taken steps in 2020 to slow the spread of false information, including articles and ads meant to deceive voters. This week, Twitter rolled out a policy that involves adding messages about voting and election results to the top of users' feeds, as opposed to just debunking information that's already been spread, but misinformation is still rampant on social media.
Zignal Labs, a company that tracks misinformation online, found that, in 2020, one out of every five vote-by-mail-related claims shared on sites including Facebook, Twitter, and Reddit was misinformation. A recent study from researchers at Harvard found that President Trump was one of the main drivers of misinformation related to voting by mail this year. Here on The Takeaway, we continue to explore the truth gap in the United States, and today, it's all about how much social media companies or government officials can actually do to combat misleading information. Joining me now is Davey Alba, a reporter covering disinformation for The New York Times. Davey, thanks for joining us.
Davey Alba: Hi, thank you for having me.
Tanzina: Let's start with a definition. Misinformation and disinformation, what's the difference?
Davey: Disinformation is misleading information that can be directly attributed to nefarious actors whose intent we know. That would be foreign influence campaigns like we saw with Russia in 2016 and other actors like that. Misinformation, on the other hand, often refers to circulating rumors that may be false, but that the people spreading them don't know are false. They're taking those rumors in good faith, not knowing that they're passing along inaccurate or wrong information.
Tanzina: What are some of the big topics that you're seeing are most at risk?
Davey: There are so many different flavors of rumors. Some of the claims that we're seeing include that Black Lives Matter protests are inciting violence at polling places and early voting sites, that mail-in ballots are being dumped and shredded, and that ballot boxes and voting machines are themselves compromised. We've also seen a lot of claims of ballot harvesting, for instance, which is a politically loaded term for ballots being collected and dropped off in bulk by unauthorized people.
Tanzina: Who's behind a lot of this misinformation and disinformation? You mentioned briefly how the Russians took advantage of Facebook back in 2016. What they did was essentially create content that would stoke racial tensions, which many people say was a success. Who's behind it this time?
Davey: It's hard to pin down specific actors beyond Facebook and Twitter and these social media platforms themselves telling us who is behind a lot of this misinformation. What we can say is that the way it looks to most of us, who are just on the receiving end of it, is that it's a lot of outrage-inducing content and content that inflames our emotions. It's stuff that includes social issues that Americans are already quite divided on and it preys on those vulnerabilities and seeks to exploit that, essentially.
Tanzina: We're not clear on who's doing it, is that right? A lot of political advertising and messaging can come directly from the campaigns themselves, or from things like Super PACs or outside organizations that want to push a specific message. So we're not clear on whether it's those organizations, or whether it could be foreign actors.
Davey: What we do know is that the levels of misinformation this year are almost unprecedented. It's on the rise and has grown since 2016-
Tanzina: Let me stop you there, Davey, because if the levels of misinformation are unprecedented, which we say a lot about this year 2020, Facebook and Twitter have been aware of their role in creating this environment for misinformation. We just heard Mark Zuckerberg testifying yesterday in Congress. What have they not done to prevent this spread? Weren't they supposed to be taking care of this long before now, just days ahead of the 2020 election?
Davey: Yes, absolutely. They have ramped up their policies to curb the spread of misinformation and have been acting more aggressively as the election nears. The problem is there are so many factors that come into play here, including that they don't want to be perceived as being biased or putting their thumb on the scale too aggressively. In the hearing yesterday, we actually heard from a lot of Republican senators saying that Facebook and Twitter have a liberal bias and allow liberal actors' speech and posts to stay up while moderating conservatives' posts.
What we do know is that the evidence they cite is largely anecdotal, and there have been several studies showing that there isn't really a systematic bias against conservatives. In fact, when we've tracked the top engaged posts on Facebook, which we do on a daily basis, you can start to see a trend where it's actually conservative speech that often rises to the top in terms of engagement.
Tanzina: At the end of the day, Twitter and Facebook are not in the business of removing these posts. They, in fact, rarely remove posts, so how effective is it? I feel like these companies are couching their own language to suggest that they don't want to be controllers of speech, but at the same time, they're not really doing that, right? If what we're talking about isn't a political affiliation, but whether or not a post is actually factual, shouldn't they be responsible for this?
Davey: Yes, absolutely. They've, again, increasingly shown a willingness to moderate posts that are inaccurate.
Tanzina: Moderate how, though?
Davey: Facebook applies a fact-checking label to untruthful posts. They still largely rely on their third-party fact-checking network to first produce a fact-check, and then they can label the post.
Tanzina: Davey, you know this, Facebook also said that they would limit political advertising in the lead-up to Election Day. Now, we know the election season has already started, and Facebook makes a lot of money from political ads. How has that been going so far?
Davey: The record on that is mixed, both with the misinformation policies and with this silent period, which is just the week before the election, when they're not accepting new political ads. Facebook has been uneven in applying its policies. They've never really done this before, so they are dealing with a new system. We've seen some ads that were wrongly taken down or wrongly let through, and that's been the story of the year: they do have these policies, they're increasingly refining them, they try to adjust as needed, but they still really operate, a lot of the time, on a case-by-case basis. When you zoom out and look at the enforcement of these policies, it is quite uneven.
Tanzina: Let's talk a little bit about other types of information that have come out in the lead-up to the final Election Day. What about things like the Hunter Biden story that was published in The New York Post? Where does that fall? Is that considered misinformation, disinformation, or just smearing, for example? We can look at it on the flip side. What about the Lincoln Project, which put up a billboard in Times Square here in New York that upset Jared and Ivanka Trump to the point where they wanted to sue the Lincoln Project? Is there a line between mis- and disinformation and what could essentially be construed as a mean ad or a smear campaign?
Davey: Yes, and as much as possible, we should be trying to see who is putting up these various ads and posts. That is what we can fall back on in terms of how to think about the actual content itself. In the case of the New York Post story, it's a little too early to definitively say whether it's mis- or disinformation, but the signs are all there that the circumstances of that story getting out were very suspicious.
We've been very wary of hack-and-leak operations ever since 2016, when Hillary Clinton's campaign manager's emails were leaked on WikiLeaks and extensively covered by the media, and we've been trying to not repeat our mistakes from four years ago. That kind of alertness should be incumbent on the media and on all of us. Whether it's the New York Post story or this billboard, we should be trying to seek out the sources of this information, and if they're not clear, we should be careful about resharing or taking them at face value.
There are a lot of really good, authoritative official sources. You can get election information from your local officials, your county clerk, and your government websites. Those are the kinds of sources that we should be relying on, the ones that are transparent about where they're coming from and where they're getting their information. With others, we should be more careful.
Tanzina: Davey Alba is a reporter covering disinformation for The New York Times. Davey, thanks so much.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.