Instagram Launches AI Tool That Warns Users Before They Post Potentially Offensive Captions

12/16/2019

Instagram is today rolling out a new tool as part of its anti-bullying efforts — this time in a bid to crack down on potentially offensive photo and video captions.

The feature will notify users when their captions may be considered offensive, as determined by artificial-intelligence tech developed by Instagram that compares them against other captions that have been reported for bullying. The notification is intended to make users pause and reconsider, with the opportunity to edit their captions, though it does not actually prevent them from posting. The feature is rolling out today in select countries, Instagram says, and will become available globally in the coming months.
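To make the flow concrete, here is a minimal sketch of how such a pre-post nudge could work in principle. Instagram has not published its model or thresholds, so everything here is assumed: the list of previously reported captions, the similarity check (a toy string comparison standing in for the real AI classifier), and the warning threshold are all hypothetical illustrations of the behavior described above.

```python
# Hypothetical sketch of the caption "nudge" flow. A toy similarity check
# against example flagged phrases stands in for Instagram's actual classifier.
from difflib import SequenceMatcher

# Stand-in for captions previously reported for bullying (illustrative only).
REPORTED_CAPTIONS = [
    "you are so ugly",
    "nobody likes you",
    "what an idiot",
]

WARNING_THRESHOLD = 0.6  # assumed cutoff for showing the warning


def looks_offensive(caption: str) -> bool:
    """Return True if the caption closely resembles a previously reported one."""
    caption = caption.lower()
    return any(
        SequenceMatcher(None, caption, reported).ratio() >= WARNING_THRESHOLD
        for reported in REPORTED_CAPTIONS
    )


def submit_caption(caption: str, edit_prompt=input) -> str:
    """Nudge flow: warn and offer an edit, but never block the post."""
    if looks_offensive(caption):
        print("This caption looks similar to others that have been reported.")
        revised = edit_prompt("Edit your caption (or press Enter to post as is): ")
        if revised.strip():
            caption = revised
    return caption  # the post always goes through, edited or not


if __name__ == "__main__":
    print("Posted:", submit_caption("what an idiot"))
```

The key design point, matching the description above, is that the check only interrupts with a warning and an optional edit; it never blocks the post.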

“Results have been promising, and we’ve found that these types of nudges can encourage people to reconsider their words when given a chance,” Instagram wrote in a blog post. “In addition to limiting the reach of bullying, this warning helps educate people on what we don’t allow on Instagram, and when an account may be at risk of breaking our rules.”


The tech is an iteration of a similar feature launched by Instagram in 2017, which provides essentially the same notification process in the comments section. As for other anti-bullying efforts, Instagram launched an IGTV docuseries with Jonah Hill titled Un-filtered in September, in which teens and young adults discussed their experiences with bullying. The company also launched a feature called Restrict in October, which enables users to peek at comments and messages they've filtered in order to keep an eye on their bullies.
