YouTube Wants To Use Creators’ Data To See If Its Algorithms Target Marginalized People

12/03/2020

For years, YouTube has denied accusations that its algorithms unfairly target marginalized creators for content suppression, demonetization, and removal. Today, it announced a plan to “more proactively identify potential gaps in our systems that might impact a creator’s opportunity to reach their full potential.”

Starting next year, YouTube creators will be able to voluntarily share their gender, sexual orientation, race, and ethnicity with YouTube. Data from those who choose to share will be examined to see “how content from different communities is treated in our search and discovery and monetization systems,” Johanna Wright, YouTube’s VP of product management, wrote in an official update.

Wright added that YouTube will “also be looking for possible patterns of hate, harassment, and discrimination that may affect some communities more than others.”

She said the surveys will provide a pathway for YouTubers to participate in things like #YouTubeBlack and creator gatherings, and explained that volunteers’ data won’t be used for advertising purposes. Creators will have the option to delete their responses and pull themselves from the survey pool at any time after answering, she said.

The survey, which is being developed in consultation with creators, per YouTube, will launch in the U.S. "in early 2021."

YouTube is additionally rolling out a new feature that will “warn users when their comment may be offensive to others, giving them the option to reflect before posting,” Wright said. It’s not clear when that feature is coming.

Wright also shared new data about YouTube’s efforts to remove hateful content and comments. In the last three months, YouTube terminated 1.8 million channels for policy violations; 54,000 of those terminations were for hate speech. That is the most channel terminations it has ever made in one quarter, Wright said, “and 3x more than the previous high from Q2 2019 when we updated our hate speech policy.”

As for comments, YouTube now removes 46 times more hateful comments per day than it did in early 2019, she said.