Facebook Implements One-Strike Policy Barring Users From Livestreaming If They Post Hateful Content

05/15/2019

Facebook is implementing a stringent one-strike policy that will bar people from using the platform’s livestreaming tools after their first violation of its “most serious policies” — specifically, the site’s guidelines regarding Dangerous Individuals and Organizations.

In a blog post about the new rule, Facebook vice president of integrity Guy Rosen explained that Facebook bars people who are involved in terrorism, organized hate activity, mass or serial murder, and human trafficking from having Facebook accounts at all. Facebook’s guidelines also allow the platform to remove users’ content if it “expresses support or praise for groups, leaders, or individuals involved in these activities.” Users who post support or praise for hate groups aren’t necessarily banned from Facebook wholesale; until now, that generally happened only after repeated violations.

But now, thanks to the new policy, they will be banned from livestreaming after just one violation.

Other policy violations Rosen cites as triggering a livestreaming ban include using profile pictures or cover images related to terror propaganda and sharing images of child exploitation. Users who run afoul of the new one-strike policy will have their ability to livestream restricted for set periods of time, Rosen explains, with the shortest ban appearing to be 30 days. Over the next few weeks, Facebook will also extend the one-strike policy to bar violators from creating ads on the platform.

Why is Facebook targeting livestreaming in particular? Rosen spells it out. “Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” he writes.

The massacre in Christchurch, New Zealand, where a white supremacist killed 51 worshippers at two mosques while livestreaming his actions on Facebook, resulted in immediate calls for Facebook, YouTube, and Twitter to be more proactive in removing hateful content, and the people posting it, from their platforms.

This one-strike policy is the latest in a series of changes Facebook has made in the two months since the massacre. In late March, it expanded its policies on white supremacist content to also bar content associated with white nationalism and white separatism. And earlier this month, it removed a number of high-profile alt-right, anti-Muslim, and anti-Semitic figures from its core site and its subsidiary Instagram, again citing the aforementioned Dangerous Individuals and Organizations policy.

In addition to the new one-strike policy, Facebook also revealed today that it’s investing $7.5 million in research to help it better remove objectionable videos. Following the Christchurch shooting, thousands of users re-posted the livestreamed footage, sometimes editing it to evade Facebook’s auto-detection and removal systems. The new research, conducted in partnership with the University of Maryland, Cornell University, and the University of California, Berkeley, aims to improve Facebook’s detection technology so it can more effectively analyze images and videos.

“This work will be critical for our broader efforts against manipulated media,” Rosen writes. “We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack.”
