TikTok now requires uploaders to disclose when they used AI to generate “realistic scenes” for their videos.
And if they don’t disclose, they run the risk of having those videos removed.
According to in-app screenshots found by social media expert Matt Navarra, TikTok is urging users to “label AI-generated content to follow our guidelines and help prevent content removal.” This new toggle is right below the ad disclosure section in the upload-a-video process.
TikTok now lets you add AI-generated content labels to your videos 🤖
And warns you must disclose AI-generated content
Or risk having your content taken down pic.twitter.com/SgCqPnDjkS
— Matt Navarra (@MattNavarra) August 8, 2023
TikTok isn’t exactly clear about what it means by “realistic scenes.” It already explicitly banned deepfakes of people in 2020, so this new requirement presumably covers any video or audio content that doesn’t violate the deepfake ban but could still be deceptive because it is indistinguishable (or nearly indistinguishable) from actual footage.
Basically, if you used Midjourney to generate an image of a river that doesn’t exist? Label it. If you used a voiceover bot? Label it.
TikTok isn’t the only platform labeling AI content. Meta appears to be working on very similar labels that tell viewers when AI was used to create “text, images, and video” in posts uploaded to Instagram.