After a terrorist livestreamed his attack on mosques in Christchurch, New Zealand, during which he killed 49 people, YouTube saw an “unprecedented” number of reuploads of the horrific footage.
At times, as many as one video per second poured onto YouTube, with people uploading either the shooter’s entire 17-minute livestream or parts of it. Neal Mohan, YouTube’s chief product officer, told The Washington Post that, following the shooting last Friday, YouTube had to pull together its “war room” of people called “incident commanders,” who are specifically trained to handle crises on the platform.
Those commanders worked overnight Friday into Saturday, ultimately removing tens of thousands of videos and terminating hundreds of accounts that promoted or glorified the shooter’s actions against the Muslim community, YouTube confirmed in a tweet today.
“The volume of related videos uploaded to YouTube in the first 24 hours was unprecedented both in scale and speed,” the platform added.
On top of speed and scale, many of the users who uploaded the videos were purposefully altering them to get around YouTube’s automatic detection systems, Mohan told The Washington Post. That system works like this: once a copy of the Christchurch footage is uploaded, YouTube pulls it down and converts it into a reference file. It then hands that file to its content-sweeping system, which finds and flags any video containing the same footage. But if the footage has been altered, YouTube’s system no longer reads it as a match to the reference file, and therefore doesn’t catch it.
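YouTube has not published the details of this system, but the matching step it describes can be illustrated with a simple per-frame fingerprint. Everything below (the frame representation, the hash scheme, exact-match comparison) is an illustrative assumption for this sketch, not YouTube’s actual implementation.

```python
# Illustrative sketch of reference-file matching (NOT YouTube's real system).
# A "video" here is a list of frames; each frame is a list of grayscale pixels.

def frame_hash(frame):
    """Average hash: one bit per pixel, set if the pixel is above the frame's mean."""
    mean = sum(frame) / len(frame)
    return tuple(1 if p > mean else 0 for p in frame)

def fingerprint(video):
    """The per-frame hashes together form the video's reference fingerprint."""
    return [frame_hash(f) for f in video]

def matches(reference, upload):
    """Flag the upload if its fingerprint matches the reference exactly."""
    return fingerprint(reference) == fingerprint(upload)

reference = [[10, 200, 30, 180], [90, 90, 200, 10]]   # footage pulled down earlier
reupload = [list(f) for f in reference]               # an unmodified copy
print(matches(reference, reupload))                   # True: the sweep catches it
```

An unmodified re-upload produces the same fingerprint and is flagged; the next passage shows why this breaks down as soon as the footage is changed.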
What constitutes altered? Surprisingly little. Users were getting around YouTube’s system by simply adding watermarks or logos or changing the sizes of the clips. Some went as far as to turn victims in the footage into animated people.
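The fragility is easy to demonstrate with an exact hash: change even a few bytes of a file, as a watermark or logo would, and the hash of the altered copy no longer matches the reference. (The byte strings below are stand-ins; real matching systems compare video content rather than raw file bytes, but the failure mode is the same.)

```python
import hashlib

# Illustrative: an exact hash treats any altered copy as entirely new content.
original = b"frame-bytes-of-the-livestream"       # stand-in for the reference file
watermarked = original + b"LOGO"                  # trivial edit: an appended watermark

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(watermarked).hexdigest()
print(h1 == h2)   # False: the altered upload no longer matches the reference
```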
As YouTube grasped just how widespread the problem was and realized people were intentionally ducking its system, it took a number of steps to handle the spread. First, it internally classified New Zealand, Christchurch, and related terms as newsworthy, so its search function would primarily pull up videos from trustworthy sources. That’s become something of a standard practice at this point.
But, in this case, it wasn’t enough. So YouTube took more unusual steps. It temporarily disabled search functions that allowed users to sort videos by upload time, since many new uploads were footage of the massacre. That gave it more time to find and pull down footage before viewers could see it. More radically, it also tweaked the automatic detection system described above, removing the need for one of YouTube’s thousands of human moderators to approve a video’s takedown after the system flagged it for containing footage of the shooting. So, instead of being put into a queue for YouTube employees to confirm they had been correctly flagged, videos were simply pulled down without double-checking.
Despite these changes, YouTube said it’s still handling the spread of videos, three days after the massacre.
Our teams are continuing to work around the clock to prevent violent and graphic content from spreading, and know there is much more work to do.
— YouTubeInsider (@YouTubeInsider) March 18, 2019
“This is a tragedy that was almost designed for the purpose of going viral,” Mohan told The Washington Post. “We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us. This incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done.”
The shooter (we have chosen not to share his name) livestreamed his attack on Facebook and, prior to the shooting, posted a white supremacist, anti-Muslim manifesto on Twitter. Both Facebook and Twitter also struggled to contain the spread of footage and supportive messages for the shooter, with Facebook saying it removed 1.5 million videos containing footage of the shooting in the 24 hours after it happened.
Over the weekend, Facebook, Twitter, and YouTube received a wave of backlash and calls to action from social media users who criticized them for not more stringently controlling the spread of hate content on their platforms.