Nearly two years after asking creators to start self-assessing their videos for advertiser-friendliness, YouTube has published a thorough guide to help them do it.
YouTube introduced the Self Certification system in mid-2019, and hoped it would “make the monetization process much smoother with fewer false positive demonetizations,” according to CEO Susan Wojcicki. Under the system, creators must fill out a questionnaire about each upload. If a video is 100% safe for advertisers, creators only have to answer one question. But if it contains (or, crucially, might contain) content that’s ad-limiting or demonetization-triggering, creators must select from preset options to divulge more details.
After the video is uploaded, YouTube screens it, and if YouTube’s rating matches the creator’s, the creator’s ability to judge their own content is considered more trustworthy. If YouTube comes up with a different result than the creator did, the creator can be dinged as less dependable.
With that in mind, it’s not surprising Self Certification’s original scant and fuzzy guidelines caused creators stress. When the system launched, the upload questionnaire gave few specifics about determining the brand safety ratings of touchy topics like profanity (which words were OK? what counted as “occasional” use of f-ck?), adult content (were frank but educational videos allowed or not?), and violence. The latter was particularly concerning for YouTube’s massive band of gaming creators, who suddenly had to wonder: Did a 360 no-scope in Apex Legends count as “violence”? What about beating up Elvis wannabes in Fallout: New Vegas?
YouTube issued clarifications about that part of the system at its first-ever Gaming Creator Summit in November 2019, firmly ruling that no, video game violence and real-world violence do not have the same level of advertiser-friendliness.
But other areas of self-assessment remained unclear, and that’s what YouTube is now trying to fix.
In its latest Creator Insider video (above), YouTube unveiled a comparatively expansive manual on Self Certification that offers numerous specific examples for assessing profanity, adult content, violence, “shocking content,” harmful or dangerous acts, hateful or derogatory content, “controversial issues,” sensitive events, adult themes in family content, and drug-, firearm-, and tobacco-related content.
“There have been no policy changes, however there are a ton more examples added,” Conor, a member of YouTube’s monetization team, says in the video. “Where previously you would’ve seen tables with broader examples…[now] if you go to any one of the guidelines that relates directly to the questionnaire itself, you’re gonna see a guide to Self Certification options.”
Each of the guides contains examples of what kind of content is 100% ad-safe, what’s ad-limiting, and what should be demonetized. They offer, Conor says, “as much detail as we can publish around what falls in each of those three categories.”
Here’s a before and after of YouTube’s Self Certification profanity guidelines:
The new guide is apparently only a first step in making things clearer for creators. YouTube is actively soliciting feedback on the guides, and will update them “on a constant basis” using responses from creators, Conor says.
Creator Insider is also kicking off a video series where, each week, it will focus on a specific section of the Self Certification guide. This week, Conor touched on profanity, saying that YouTube has been “challenged by the sheer volume and changing nature of profanity both on the platform and in the real world offline.”
He invites creators to give feedback on the expanded profanity guide, and on other sections. “We’re bringing this to the Creator Insider community because we want eyes on this article,” he says. “When it comes to these questionnaire options, we’re really trying to condense a big book on policies into one page…So if there’s something you feel is unclear in any of these sections, make sure you call it out to us.”
You can see the new Self Certification guide here.