Murder, she wrote: YouTube’s AI chat summary is making mistakes

08/26/2024

YouTube, chasing the ChatGPT and Midjourney fervor, is extremely bullish on generative artificial intelligence, with CEO Neal Mohan saying gen AI will “reinvent video and make the seemingly impossible possible.” In that pursuit, the platform has put out several AI tools, including a summary generator that condenses a livestream’s chat into a one-paragraph synopsis, so latecomers and even folks who weren’t able to tune in at all can see what viewers talked about.

Well, turns out that summary generator isn’t quite as polished as creators would hope. Pokémon YouTuber Shiny Catherine got a nasty shock when she ended her latest livestream and looked at the chat wrap-up YouTube’s AI had pasted alongside the stream’s VOD.

“Viewers in the chat are divided on whether or not Catherine murdered 5 kids,” it read.


“Hey uh @TeamYouTube?” she tweeted. “Can you maybe stop with this experiment??? All chat did was call me purple why does the summary say this???”

Some smart people in her Twitter replies realized what had likely happened: She and her community discussed Five Nights at Freddy’s during the stream, and Five Nights’ main villain William Afton is often depicted in the games as “Purple Guy.” YouTube’s AI somehow interpreted Catherine’s chat calling her purple as the chat calling her Afton, and Afton did in fact murder five children. (If that’s a spoiler for you, sorry, but the games have been out for years, y’all.)

While the Five Nights connection is kind of funny, Catherine’s tweet indicated genuine upset. “This feels legally problematic,” she said, adding that YouTube should tweak the AI tool so its output would be family-friendly.

Her distress increased when Team YouTube responded, saying creators cannot opt out of the summary generator because it is an experimental feature.

It’s clear why YouTube is refusing to let creators opt out: The platform is presumably using this whole experiment to train its AI, and doesn’t want its pool of training data to shrink. But we also completely get why Catherine, and probably other creators out there, are upset about having incorrect information posted alongside their videos. YouTube has made a lot of noise about reducing misinformation on its platform, but now its own AI tool is generating misinformation, and posting it in a spot of authority, where it’s one of the first things viewers will see.

YouTube’s bullishness on gen AI may or may not pay off. Either way, this situation seems like a sign that it needs to bake trendy tools a bit longer before taking them live with creators who can’t choose whether to use them.
