
YouTube May Never Be 100% Safe For Advertisers, Says Company’s U.K. Marketing Director

In January, AT&T brought its ad spend back to YouTube after a nearly two-year-long absence. At the time, its chief brand officer, Fiona Carter, said the company returned post-Adpocalypse because it was “100% confident” there was a “near-zero” chance of its ads running next to objectionable videos. But now, weeks after AT&T and a number of other major marketers pulled their ads again amid a potential Adpocalypse 2.0, YouTube has said it may never be 100% brand-safe.

Speaking at the Incorporated Society of British Advertisers’ annual conference on March 5, U.K. marketing director Nishma Robb explained that she doesn’t think achieving 100% brand safety is “the reality of our platform,” per The Drum. “The reality is, the internet has dark pockets and our job is to continue to work hard to ensure we have the machines and processes to remove harmful content,” she added.

Those “dark pockets” most recently manifested as swarms of pedophilic comments being left on ostensibly innocent videos of young children uploaded to YouTube. After creator Matt Watson posted his investigation into the comments, bringing the issue to light, YouTube launched increasingly aggressive strategies to clamp down on inappropriate comments and videos. It began by deleting uploaders’ and commenters’ channels, and most recently elected to disable comments entirely on most videos featuring minors.


Robb said another of YouTube’s strategies involves tweaking settings for advertisers so they have more control over the content where they don’t want ads to appear. (It’s worth noting here that YouTube has said only about $8,000 worth of ads were run against the videos Watson investigated, and that the majority of those videos were demonetized.)

Robb added that when it comes to handling inappropriate videos, YouTube has artificial intelligence (AI) safeguards in place that, without human intervention, catch 98% of “extremist or violent” videos upon upload. This is an improvement upon YouTube’s previous AI measures, which caught 75% of problematic videos, per The Drum.

Robb also said YouTube “will continue to use humans and human verification” in an effort to make sure “the platform is safe for users, particularly, and advertisers.”

Published by James Hale
