Workers Enlisted By YouTube For AI-Training Efforts Make 10 Cents A Task To Analyze Videos

03/22/2018

How does the AI that spots inappropriate content on YouTube learn which videos to flag and which ones to leave alone? Creators have long sought the answer to that question so they can avoid producing content that runs afoul of YouTube’s guidelines.

If a new report is to be believed, one part of that process is not as scientific as creators would hope. Wired has published an inside look at the machine learning tasks YouTube outsources to workers who frequent Mechanical Turk, an Amazon-owned marketplace commonly referred to as MTurk. Many tech companies enlist MTurk workers to complete simple tasks. In YouTube’s case, those workers are paid small sums to analyze videos; they are sometimes asked to make subjective decisions and have little incentive to think critically about their choices.
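For readers unfamiliar with how MTurk works on the requester side, here is a minimal sketch of how a company might post a video-review task (a “HIT,” or Human Intelligence Task) using boto3, Amazon’s official Python SDK. The title, reward amount, and form URL below are illustrative assumptions, not YouTube’s actual task definition:

```python
# Hypothetical sketch: posting a 10-cent video-review HIT to Mechanical Turk.
# All task details here are invented for illustration.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# An ExternalQuestion points workers at a web form hosted by the requester.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/video-review-form</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Watch a short video and answer questions about its content",
    Description="Flag violence, nudity, or drug use in a video clip.",
    Reward="0.10",                     # 10 cents per task, as in LaPlante's case
    MaxAssignments=3,                  # how many workers see the same video
    AssignmentDurationInSeconds=600,   # time allotted per worker
    LifetimeInSeconds=86400,           # how long the HIT stays available
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```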

The central figure in Wired‘s story is Rochelle LaPlante, who shared the text of a YouTube-related task she completed through MTurk. After watching and analyzing one video, she made 10 cents. “YouTube and Google have been posting tasks on Mechanical Turk for years,” LaPlante told Wired. “It’s been all different kinds of stuff—tagging content types, looking for adult content, flagging content that is conspiracy theory-type stuff, marking if titles are appropriate, marking if titles match the video, identifying if a video is from a VEVO account.”


The task, which Wired embedded in its post, asked LaPlante to watch a video and answer a series of questions about types of content that may or may not have appeared on screen. For YouTube, there are several potential applications. Most likely, the video site used data from tasks like LaPlante’s as part of its efforts to teach AI to recognize videos that contain things like violence, nudity, or hard drug use, all of which are disallowed by YouTube’s community guidelines.
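As a rough illustration of how answers from tasks like this could become supervised training data, here is a sketch; the field names and label categories are assumptions modeled on the questions described in the article, not YouTube’s real schema:

```python
# Hypothetical sketch: turning one worker's task answers into a
# (video_id, multi-hot label) pair for a multi-label video classifier.
from dataclasses import dataclass

@dataclass
class TaskAnswer:
    video_id: str
    worker_id: str
    contains_violence: bool
    contains_nudity: bool
    contains_drug_use: bool

def to_training_example(answer: TaskAnswer) -> tuple[str, list[int]]:
    """Encode the worker's yes/no answers as a multi-hot label vector."""
    labels = [
        int(answer.contains_violence),
        int(answer.contains_nudity),
        int(answer.contains_drug_use),
    ]
    return answer.video_id, labels

example = TaskAnswer("abc123", "worker42", False, False, True)
print(to_training_example(example))  # ('abc123', [0, 0, 1])
```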

The text of LaPlante’s task leaves much to be desired. MTurk workers are incentivized to complete a lot of tasks as fast as possible so that they can make a decent sum out of nickels and dimes, and YouTube encouraged that sort of expedient behavior. “Consider watching it at 1.5x speed,” suggested the task, referring to the video LaPlante had to watch.

After watching the clip, LaPlante had to answer questions about it, some of which were quite ambiguous. For example, she was asked whether the video contained “sexual content,” though the prompt noted that “mild sexual activity,” including “implied sex acts, light or comedic fetish references or behavior, or mild sexual situations or discussion,” was “OK.” It was up to LaPlante to determine whether that “mild” line had been crossed.
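The article doesn’t describe how YouTube reconciles these subjective judgment calls, but a common MTurk pattern is to assign the same video to several workers and aggregate their answers, for instance by majority vote. A hypothetical sketch of that aggregation step:

```python
# Hypothetical sketch: reconciling subjective yes/no answers from several
# workers on the same video by majority vote. Nothing in the article
# confirms YouTube does this; it is a standard crowdsourcing technique.
from collections import Counter

def majority_label(worker_answers: list[bool]) -> bool:
    """Return the majority answer; ties default to the cautious choice (flag)."""
    counts = Counter(worker_answers)
    return counts[True] >= counts[False]

print(majority_label([True, False, True]))   # True: two of three workers flagged it
print(majority_label([False, False, True]))  # False: majority said the line wasn't crossed
```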

YouTube isn’t using MTurk workers like LaPlante to actually moderate videos (it has a separate workforce for that), and it’s unclear how much the data YouTube collects from MTurk informs its machine learning efforts. What the Wired article shows, however, is that some behind-the-scenes training work related to YouTube’s all-important content moderation AI, which has the power to strip creators of nearly all their ad revenue should it designate their content inappropriate, is being done by workers who are paid little, pressed for time, and asked to make subjective decisions. Let’s hope, for the sake of content creators everywhere, that the data gleaned from tasks like LaPlante’s makes up only a small portion of YouTube’s machine learning process.

More details are available over at Wired.
