Neal Mohan has had a busy week. On Wednesday, YouTube's Chief Product Officer was one of the tech execs called to testify before Congress. A day later, after attending a White House summit, he took to Twitter to announce an updated content moderation policy that aims to curb violent extremist content — even when that content is not affiliated with a known terrorist organization.
YouTube has made repeated efforts to demonetize, demote, delist, and remove videos that violate its community guidelines. Many of those efforts have focused on extremist content that originates from groups like ISIS, but recent events have made it clear that individuals with violent and hateful viewpoints are a problem, too. The perpetrators of mass shootings in Buffalo, New York and Uvalde, Texas had histories of extreme online rhetoric, and a study cited by Reuters found that 85 pro-militia videos have been uploaded to YouTube since the January 6 attack on the U.S. Capitol.
Mohan responded to that rising threat by announcing a new YouTube policy:
We’re committed to combating violent extremism. As we shared w/ the @WhiteHouse, we’ll update our policies this year to prohibit content glorifying violence/recruiting & fundraising for extremist groups, even when the content isn’t affiliated with designated terrorist orgs. (1/3) pic.twitter.com/heOiJhseXu
— Neal Mohan (@nealmohan) September 15, 2022
A presidential request
Mohan arrived in D.C. with a clear message. “Our commitment to openness works hand in hand with our responsibility to protect our community from harmful content. Responsibility is central to every product and policy decision we make and is our #1 priority,” Mohan said during his congressional testimony. “To that end, I want to make clear that there is no place on YouTube for violent extremist content. Our policies prohibit content that promotes terrorism, violence, extremism, and hate speech.”
After his trip to the Capitol, Mohan and other big tech leaders headed over to the White House. That meeting included some stern words from the President and Vice President, who said the social media industry “must bear responsibility” for its role in the spread of violent, extreme, and hateful viewpoints.
Perhaps surprisingly, America’s oldest-ever sitting President has made the regulation of big tech companies a focal point for his administration. In his most recent State of the Union address, he stressed that he would “hold social media accountable” for a number of offenses, such as serving targeted ads to minors.
Biden’s campaign to curb digital hate speech and misinformation won’t end with the White House summit. He also wants to revise Section 230, the law that shields social media platforms from liability for violative content posted by individual users.
Wasn’t YouTube moderating violent content already?
Yes and no. Long-time online video observers will undoubtedly remember the Adpocalypse, when YouTube’s attempts to moderate extremist content created more problems for the platform. YouTube was trying to stop ads from appearing on ISIS videos, but it inadvertently halted ads on many innocuous videos, too. After creators filed lawsuits and took their videos elsewhere, YouTube adopted a new policy in hopes of cutting down on those “false positives.”
Don’t worry: YouTube’s latest policy shift is unlikely to unleash a new Adpocalypse. Over the past five years, the world’s biggest online video platform has developed several strategies and tools that make its moderation of violative content more precise. Internal discussions have helped it better identify those videos, and adjustments to its recommendation engine have curbed the spread of malicious clips.
Research has also played a huge role in YouTube’s refinement of its moderation policies. In hopes of making its viewers more vigilant, the platform has tested “prebunking,” i.e. calling out misinformation via pre-roll ads. As Mohan announced YouTube’s latest policy shifts, he noted that these experimental efforts will continue. “We’re also launching a media literacy campaign on YouTube to help viewers – particularly younger generations – better identify different manipulation tactics used to spread misinfo,” Mohan tweeted.
The third part of YouTube’s anti-extremism initiative is also educational in nature. The company is teaming up with the McCain Institute’s Invent2Prevent program in order to encourage students to “develop tools preventing targeted violence and terrorism,” Mohan said.