Nearly a year and a half after YouTube CEO Susan Wojcicki revealed the platform was developing a self-rating monetization system, that system has rolled out to a number of creators. During the upload process, it asks them to self-certify whether their videos are safe for all ads, safe for limited ads, or not suitable for ads at all.
“Now, you can rate your own uploads against our ad-friendly content guidelines,” a backend notice from YouTube tells creators when they’re adding information for a new video. “The more accurate your ratings are, the more we can rely on your input when deciding which ads to run on your videos.”
When Wojcicki introduced the feature — called Self Certification — in a letter to creators last year, she said its overall purpose was to “make the monetization process much smoother with fewer false positive demonetizations” by taking creators’ knowledge of their videos’ content and combining it with YouTube’s regular algorithm-driven video reviews and human reviewers.
Creators have long complained about issues with demonetization. But, judging by the notice above, part of that smoothing process involves letting creators build a kind of honor-system credit: the more honest they are about what their videos contain, the more YouTube's systems trust them to self-report accurately, giving them more control over at least one aspect of monetization.
In her letter, Wojcicki also noted that YouTube’s pilot test of the feature showed that creators’ self-reports were consistent with how the platform’s own reviewers would have rated their videos’ ad-appropriateness.
Thanks to a screengrab of the feature’s backend from Twitter user Saberspark, here’s the nitty-gritty of how and what YouTube expects creators to self-certify. When a creator is in the process of uploading a video, they’re asked to choose one of two options. The first option states that their video “is appropriate for all audiences and does not contain inappropriate language, adult content, violence, harmful or dangerous acts, drug-related content, hateful content, firearms-related content, or sensitive issues.”
YouTube just rolled out a new thing. Before you can monetize your video, you have to answer questions and say whether or not the content is appropriate. Not sure how I feel about this tbh pic.twitter.com/BzywhcQ9GC
— Saberspark (@Saberspark) July 25, 2019
The second option is for creators whose videos do contain some amount of inappropriate language or adult content. Those creators must answer two questions — one about how much and what kind of language their video contains, and one about what kind of adult content their video contains — to self-rate their content’s ad-appropriateness according to various examples given by YouTube.
On the language side, mild words like “hell” or “damn,” censored swearing, and occasional use of strong profanity all count as “light profanity” and are suitable for all ads. Creators hit limited ads when they frequently use strong profanity (in the video, its title, or its thumbnail image). YouTube does specify that strong profanity is suitable for limited ads only when it’s used for the purposes of comedy, documentary, news, or education. Creators hit demonetization if they admit to using strong profanity “in a hateful or derogatory way.”
On the adult content side, content that’s appropriate for all advertisers includes discussions of romantic relationships or sexuality that don’t directly reference sex; “moderately sexually suggestive” content, like limited clothing or “sensual dancing”; and non-graphic sex education. Music videos with sexual content that doesn’t involve nudity are also fine.
Limited ads come into play with videos that have “blurred or censored nudity, even if used for education, news, or in other contexts,” any focus on “sexual body parts,” even if those body parts are covered, frank discussions of experiences with sex, content about sex toys, or graphic sex education.
Ads are entirely barred on videos with “exposed breasts or full nudity, sexual acts, animal mating, discussions of fetishes, or a video thumbnail with sexual content.”
At the end of the questionnaire, YouTube informs creators that their responses will be considered in its automated review of their videos.
It’s not yet clear how many creators have the feature enabled. Tubefilter has reached out to YouTube to ask if the feature has only gone out to some members of the YouTube Partner Program, or if all YPP channels now have it enabled. A YouTube spokesperson said the platform is not commenting on creator eligibility at this time.