YouTube doesn’t let its algorithm recommend conspiracy videos. But it can’t stop people from linking to them…right?

02/18/2022

On YouTube, there’s something the platform calls “borderline content.”

It introduced the term a few years ago to describe videos that don’t violate its Community Guidelines, but still “misinform users in harmful ways.” Basically, they’re not so egregious that YouTube can justify taking them down, but it doesn’t want people watching them, either.

YouTube deals with borderline videos by preventing its recommendation algorithm from serving them to users. That means when a creator posts a borderline video, it’s (ideally) only viewed by people who already subscribe to them, not by randoms who might be influenced to believe the B.S. it’s hawking.

According to YouTube chief product officer Neal Mohan, this system is working: right now, “significantly below 1%” of views on borderline content come from recommendations, he says in a Feb. 17 blog post about YouTube’s ongoing efforts to combat misinformation.

But the problem is, the system only works on YouTube.

If misinformation starts on YouTube, it’s probably not going to stay on YouTube

“[E]ven if we aren’t recommending a certain borderline video, it may still get views through other websites that link to or embed a YouTube video,” Mohan says in the post.

There’s hard data showing that misinformation originally posted to YouTube can fuel the misinfo engine on other platforms.

In October 2021, New York University’s Center for Social Media and Politics published a study showing that after Dec. 8, 2020, the day YouTube cracked down on misinformation about the 2020 presidential election, the number of election-misinformation videos shared on Facebook and Twitter dropped significantly.

In that particular case, cracking down meant terminating videos. Borderline content, which doesn’t violate guidelines and therefore doesn’t get removed, is a whole different animal.

Mohan says YouTube has considered cutting borderline content’s off-platform spread by doing things like disabling the share button, so users can’t embed borderline videos, and breaking URLs, so users can’t directly link to them.

“But we grapple with whether preventing shares may go too far in restricting a viewer’s freedoms,” he says. “Our systems reduce borderline content in recommendations, but sharing a link is an active choice a person can make, distinct from a more passive action like watching a recommended video.”

YouTube has also considered designing interstitials that would specifically appear whenever a borderline video is embedded or linked to, and would warn viewers that the video contains misinformation. Mohan doesn’t express the same reservations about interstitials as he does about disabling embeds and links, so it’s possible that’s the route YouTube is leaning toward taking.

Nothing’s for sure, though. Mohan says YouTube is continuing to “carefully explore different options to make sure we limit the spread of harmful misinformation across the internet.”

New conspiracies can grow fast. YouTube wants to rein them in before they go viral.

In the same post, Mohan says YouTube is also working on containing brand-new conspiracy theories before they get big.

He points out that YouTube has plenty of data about ‘mainstream’ (for lack of a better word) conspiracy theories like flat earth, and because of that can catch and suppress fresh videos about those conspiracies soon after they’re posted.

“But, increasingly, a completely new narrative can quickly crop up and gain views,” Mohan says.

For example, he explains, the whole ‘5G towers give you COVID’ conspiracy grew extremely quickly in the early days of the pandemic, and before YouTube figured out how to combat content endorsing the theory, people were already torching cell towers.

“Due to the clear risk of real-world harm, we responded by updating our guidelines and making this type of content violative,” Mohan says. “In this case we could move quickly because we already had policies in place for COVID-19 misinformation based on local and global health authority guidance. But not every fast-moving narrative in the future will have expert guidance that can inform our policies.”

Mohan says YouTube is upping its ability to preemptively target new conspiracist content by training its systems with “an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts to identify narratives our main classifier doesn’t catch.”
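To make that layered approach concrete, here’s a minimal, purely illustrative Python sketch of the general idea Mohan describes: a main classifier backed up by language-specific keyword lists and analyst-supplied flags. Every name, score, and threshold here is an assumption for illustration, not a description of YouTube’s actual systems.

```python
# Hypothetical sketch of layered misinfo detection: a main classifier,
# a language-specific keyword fallback, and regional analyst flags.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

# Assumed keyword lists per language, e.g. supplied by regional analysts.
REGIONAL_KEYWORDS = {
    "en": {"5g covid", "microchip vaccine"},
    "de": {"5g corona"},
}

@dataclass
class Video:
    title: str
    description: str
    language: str
    analyst_flagged: bool = False  # narrative flagged by a regional analyst

def main_classifier_score(video: Video) -> float:
    """Stand-in for a trained model; returns a misinfo probability."""
    return 0.2  # placeholder score

def is_borderline(video: Video, threshold: float = 0.8) -> bool:
    # 1) Main classifier: catches known, well-modeled narratives.
    if main_classifier_score(video) >= threshold:
        return True

    # 2) Keyword fallback: catches newer narratives the model hasn't
    #    learned yet, using language-specific terms.
    text = f"{video.title} {video.description}".lower()
    if any(kw in text for kw in REGIONAL_KEYWORDS.get(video.language, set())):
        return True

    # 3) Analyst input: narratives identified by regional teams.
    return video.analyst_flagged

# Example: a fresh upload the model alone would miss.
video = Video(title="The truth about 5G COVID towers", description="", language="en")
print(is_borderline(video))  # True, via the keyword fallback
```

The point of the layering is that signals with different latencies back each other up: a model retrains slowly, keyword lists can be updated within hours, and human analysts can flag a narrative the moment it appears.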

Mohan goes on to say that YouTube is also building out regional teams to handle misinformation spreading outside English-speaking countries.

“Also, similar to our approach with new viral topics, we’re working on ways to update models more often in order to catch hyperlocal misinformation, with capability to support local languages,” he says.
