YouTube, Facebook, Twitter Agree To Adopt Shared Harmful Content, Brand Safety Policies

09/23/2020

YouTube, Facebook, and Twitter have all reached an agreement with the World Federation of Advertisers (WFA) that will establish independent oversight of what they are doing to keep marketers’ ads from running alongside harmful content.

The deal took more than a year of negotiation between the social media giants, the WFA and its cross-industry initiative, the Global Alliance for Responsible Media (GARM), brands, and ad agencies, per CNN. It stipulates that the platforms will adopt shared definitions of what constitutes “harmful content,” and commit to developing new tools that give marketers more control over the kinds of videos and posts that appear alongside their ads.

“Today, advertising definitions of harmful content vary by platform and that makes it hard for brand-owners to make informed decisions on where their ads are placed, and to promote transparency and accountability industry-wide,” the WFA said in a statement.


YouTube, Facebook, and Twitter’s new harmful content definitions (which have not yet been released) will cover 11 topics, including hate speech, sexual content, crime, drug use, piracy, gun-related content, and “sensitive social issues,” per CNN. For each topic, the platforms will agree on a brand safety “floor” that determines when content crosses the line and becomes unsuitable for advertising. Exceptions can be made for some violative content that serves strictly news or educational purposes.

Additionally, before the end of this year, the platforms will each undergo brand safety audits to assess their current handling of things like hate speech and sexually explicit material.

All three companies have faced advertiser boycotts over allegations that they allowed ads to run against questionable material. YouTube, for example, lost major advertisers like Disney, AT&T, Procter & Gamble, McDonald’s, Hasbro, and Nestlé in both 2017 and 2019 after discoveries that it was hosting thousands of disturbing videos about (or aimed at) kids. Most recently, Facebook was targeted by the NAACP and Anti-Defamation League’s #StopHateForProfit campaign, which saw brands like Coca-Cola, Starbucks, and Unilever pull ads over the social network’s decision not to remove posts from Donald Trump that appeared to call for violence against people protesting the police killing of George Floyd.

“Responsibility is our #1 priority, and we are committed to working with the industry to build a more sustainable and healthy digital ecosystem for everyone,” Debbie Weinstein, YouTube’s VP of global solutions, said in a statement. “We have been actively engaged with GARM since its inception to help develop industry-wide standards for how to commonly address content that is not suitable for advertising. We’re excited to have reached this milestone.”

Tubefilter has reached out to YouTube for information about potential policy changes, and will update this story with any new information.
