While it may seem like an impossible task to automatically police the language of social media users, Instagram wants to give it a shot.
The Facebook-owned company has announced an update to its approach to comment moderation, and its goal is to use advances in AI to root out abusive speech without flagging false positives.
Instagram’s new AI-based solution, as Wired tells it, is DeepText, an engine developed by its parent company, Facebook. DeepText’s original goal was to stamp out spam, but Instagram claims the program — which uses advances in machine learning to better understand the context surrounding certain messages — has also shown an ability to weed out hate speech. While the social media platform wouldn’t say exactly how effective DeepText is, it has put the program into use, with the goal of cleaning up its comment sections.
In adopting an AI to help combat hate speech, Instagram is following the example set by its parent company. Facebook, along with several other tech giants, signed an EU code of conduct last year, through which it pledged to crack down on abusive behavior among its user base.
DeepText is obviously not perfect. Wired shares several examples of the engine’s false positives, including one tweet containing the phrase “chink in the armor,” a term that has caused problems in the past.
Instagram is taking these errors in stride and is hoping for the best. “The whole idea of machine learning is that it’s far better about understanding those nuances than any algorithm has in the past, or than any single human being could,” Instagram CEO Kevin Systrom told Wired. “And I think what we have to do is figure out how to get into those gray areas and judge the performance of this algorithm over time to see if it actually improves things. Because, by the way, if it causes trouble and it doesn’t work, we’ll scrap it and start over with something new.”