YouTube says it’s finally stopping its algorithm from recommending conspiracy theories.
In a blog post this morning, the platform explained that most conspiracy content doesn't violate YouTube's Community Guidelines and thus can't be removed from the site altogether. As a result, a number of theories have circulated freely on the platform in recent months, including one claiming that recent mass shootings were staged by the government using trained 'crisis actors,' and another, 'QAnon,' which holds that elite celebrities run child sex rings and that Donald Trump is secretly a mastermind out to thwart them.
Some videos may flirt with violating the guidelines, but because YouTube can't remove them altogether, they are free to float within its ecosystem, ripe for recommendation, which has become a sore spot for the company. The reason conspiracy content gets recommended, according to a former Google engineer who now campaigns against the algorithm he helped build, is that those videos tend to draw lengthy engagement from a significant number of viewers.
So, to combat this, YouTube is releasing a new algorithm: not one to replace its current recommender (which it says has received hundreds of tweaks in the last year alone), but one trained to hunt down what YouTube calls "borderline content," i.e., content that doesn't cross the guidelines but could still "misinform users in harmful ways."
The new algorithm will target three specific types of videos: those “promoting a phony miracle cure for a serious illness,” videos “claiming the earth is flat,” and videos that make “blatantly false claims about historic events like 9/11.” (That last category presumably also covers several other popular theories, including that the moon landing was faked and the Holocaust never happened.) Videos the algorithm identifies as borderline content won’t be removed from YouTube. Instead, YouTube’s recommendation algorithm will be blocked from spreading them.
YouTube says the new algorithm will affect less than 1% of the videos on the platform. Given that its library holds billions of videos, though, that 1% still adds up to millions.
The platform also says this isn't an overnight solution. Training the new algorithm is a lengthy process, one that involves human content evaluators and subject-matter experts from around the U.S. Evaluators will be shown various YouTube videos and provide feedback that helps the algorithm learn what counts as good content from a culturally informed human perspective.
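At its core, the process described above is ordinary supervised learning: humans label examples, and a model generalizes from those labels so the recommender can skip anything it flags. YouTube's actual features, model, and labeling criteria are not public, so the sketch below is purely illustrative, using hypothetical video titles and a generic text classifier as stand-ins.

```python
# Toy illustration of the evaluator-labeling loop the article describes.
# All titles, labels, and the model choice are hypothetical; YouTube's
# real system is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical evaluator-labeled video titles: 1 = borderline, 0 = fine.
titles = [
    "miracle cure heals serious illness overnight",
    "proof the earth is flat hidden from you",
    "the truth they hide about 9/11 staged hoax",
    "flat earth miracle evidence revealed",
    "how to bake sourdough bread at home",
    "beginner guitar lesson chords tutorial",
    "daily vlog morning routine coffee",
    "bread baking tips for beginners",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# TF-IDF text features plus logistic regression as a stand-in classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

def eligible_for_recommendation(title):
    # Downstream, the recommender would simply skip flagged videos
    # rather than delete them, matching the policy in the article.
    return model.predict([title])[0] == 0
```

Note the design point this mirrors: the classifier doesn't remove anything, it only gates what the separate recommendation system is allowed to surface.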
At first, the algorithm will only be set loose on a “very small set of videos” in the U.S. As it becomes more reliable, it’ll be taken to other countries. Its implementation is “just another step in an ongoing process,” YouTube says, “but it reflects our commitment and sense of responsibility to improve the recommendations experience.”