
YouTube May Never Be 100% Safe For Advertisers, Says Company’s U.K. Marketing Director

In January, AT&T brought its ad spend back to YouTube after a nearly two-year-long absence. At the time, its chief brand officer, Fiona Carter, said the company’s return post-Adpocalypse was due to it being “100% confident” that there was a “near-zero” chance of ads running next to objectionable videos. But now, weeks after AT&T and a number of other major marketers pulled their ads again amid a potential Adpocalypse 2.0, YouTube has said it may never be 100% brand-safe.

U.K. marketing director Nishma Robb explained that she doesn’t think achieving 100% brand safety is “the reality of our platform,” while speaking at the Incorporated Society of British Advertisers’ annual conference on March 5, per The Drum. “The reality is, the internet has dark pockets and our job is to continue to work hard to ensure we have the machines and processes to remove harmful content,” she added.

Those “dark pockets” most recently manifested as swarms of pedophilic comments being left on ostensibly innocent videos of young children uploaded to YouTube. After creator Matt Watson posted his investigation into the comments, bringing the issue to light, YouTube launched increasingly aggressive strategies to clamp down on inappropriate comments and videos. It began with deleting uploaders’ and commenters’ channels, and most recently elected to disable comments entirely on most videos featuring minors.


Robb said another of YouTube’s strategies involves tweaking settings so advertisers have more control over which content their ads appear against. (It’s worth noting that YouTube has said only about $8,000 worth of ads ran against the videos Watson investigated, and that the majority of those videos were demonetized.)

Robb added that when it comes to handling inappropriate videos, YouTube has artificial intelligence (AI) safeguards in place that, without human intervention, catch 98% of “extremist or violent” videos upon upload. This is an improvement upon YouTube’s previous AI measures, which caught 75% of problematic videos, per The Drum.

Robb also said YouTube “will continue to use humans and human verification” in an effort to make sure “the platform is safe for users, particularly, and advertisers.”

Published by James Hale
