As harassment proliferates across the social web — the actress Leslie Jones, for instance, temporarily departed Twitter last month following a slew of racist attacks — Instagram is slowly rolling out new features that will let users moderate their own comments sections.
Users will now be able to filter certain words out of their comment streams, reports The Washington Post, and will also have the option to turn off comments completely on a post-by-post basis. These new protective capabilities arrive on top of Instagram’s site-wide Community Guidelines, which already state that “We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages.”
Previously, users could only delete individual comments they found offensive; the new feature lets them block specific words from ever appearing in their comment threads. Instagram said it is testing the filtering tools with celebrity accounts first and plans to roll them out to everyone in the coming months.
“Our goal is to make Instagram a friendly, fun, and, most importantly, safe place for self expression,” Instagram’s head of public policy, Nicky Jackson Colaco, told The Post in a statement. “We have slowly begun to offer accounts with high volume comment threads the option to moderate their comment experience. As we learn, we look forward to improving the comment experience for our broader community.”
Instagram, which has 500 million monthly active users and has called video its most important focus this year, also made headlines earlier this week when it launched a feature that lets users create photo and video montages that disappear after 24 hours. The feature, called Stories, mirrors Snapchat's offering of the same name.