In January, AT&T brought its ad spend back to YouTube after a nearly two-year absence. At the time, its chief brand officer, Fiona Carter, said the company returned post-Adpocalypse because it was “100% confident” there was a “near-zero” chance of its ads running next to objectionable videos. But now, weeks after AT&T and a number of other major marketers pulled their ads again amid a potential Adpocalypse 2.0, YouTube has said it may never be 100% brand-safe.
Speaking at the Incorporated Society of British Advertisers’ annual conference on March 5, YouTube’s U.K. marketing director, Nishma Robb, said she doesn’t think achieving 100% brand safety is “the reality of our platform,” per The Drum. “The reality is, the internet has dark pockets and our job is to continue to work hard to ensure we have the machines and processes to remove harmful content,” she added.
Those “dark pockets” most recently manifested as swarms of pedophilic comments left on ostensibly innocent videos of young children uploaded to YouTube. After creator Matt Watson posted his investigation into the comments, bringing the issue to light, YouTube launched increasingly aggressive strategies to clamp down on inappropriate comments and videos. It began by deleting offending uploaders’ and commenters’ channels, and most recently elected to disable comments entirely on most videos featuring minors.
Robb said another of YouTube’s strategies involves tweaking settings for advertisers so they’ll have more control over the content alongside which they don’t want ads to appear. (It’s worth noting that YouTube has said only about $8,000 worth of ads ran against the videos Watson investigated, and that the majority of those videos were demonetized.)
Robb added that when it comes to handling inappropriate videos, YouTube has artificial intelligence (AI) safeguards in place that, without human intervention, catch 98% of “extremist or violent” videos upon upload. This is an improvement upon YouTube’s previous AI measures, which caught 75% of problematic videos, per The Drum.
Robb also said YouTube “will continue to use humans and human verification” in an effort to make sure “the platform is safe for users, particularly, and advertisers.”