YouTube’s crackdown on misinformation about the 2020 U.S. presidential election had positive ripple effects on Twitter and Facebook, according to new research.
In November 2020, after early election results indicated Joe Biden would beat then-incumbent Donald Trump, accusations of vote tampering swarmed social media. Conspiracists spread claims that Democratic poll workers and mail-in voting handlers had dumped or changed Republican votes, or simply added fake Biden ballots, particularly in key states.
When Biden’s win was more firmly predicted by major news organizations, the accusations redoubled, and proponents resorted to claiming that Trump had won the election no matter what the vote counts said.
All of these claims at first spread largely unchecked on YouTube. The platform had touted its election misinformation policies, holding them up as efforts to “better support” democracy. But in the days following the election, it said its policies focused on suppressing content that misleads people about the mechanics of voting: things like polling dates and locations, and voter registration info. Lies about vote tampering weren’t covered, and neither were claims that one candidate had won when they really hadn’t.
Less than a month later, YouTube backpedaled, deciding that, hey, maybe those things were serious election misinformation after all and deserved to be banned too.
That’s when things started to change on other platforms.
The amount of misinformation spread on Twitter and Facebook dwindled with each new YouTube policy
A study by the Center for Social Media and Politics at New York University found that on Dec. 8, the day YouTube finally banned election conspiracies, the amount of YouTube content used to spread conspiracies on Twitter “dropped sharply,” as The New York Times reports.
Before that date, around 30% of all election-related videos shared on Twitter were clips about election fraud that came straight from YouTube. Among the top-shared channels were Project Veritas, Right Side Broadcasting Network, and One America News Network.
By Dec. 21, less than 20% of election fraud claims on Twitter involved YouTube videos. And by Jan. 20, after YouTube’s Jan. 7 pronouncement that it would give a Community Guidelines strike to any channel that endorsed election misinformation, less than 5% came from YouTube.
That decrease didn’t only happen on Twitter. NYU’s study also showed that before YouTube cracked down on misinformation, 18% of all videos shared on Facebook contained election fraud claims. By Jan. 7, that figure was down to 4%.
As the Times lays out, NYU conducted this study by collecting a random sample of 10% of all tweets and Facebook posts each day. From that sample, researchers isolated tweets and posts that contained YouTube links, then checked whether the shared videos contained election misinformation.
Megan Brown, an NYU research scientist who worked on the study, told the Times that it’s possible YouTube’s crackdown simply eliminated many videos that would’ve been used to spread conspiracies on other platforms. It’s also worth noting, however, that interest in election misinformation might have dropped in December and January as Biden’s win became more and more concrete.
Overall, the study indicates “these platforms are deeply interconnected,” Brown said, and YouTube is “a huge part of the information ecosystem, so when YouTube’s platform becomes healthier, others do as well.”