Instagram to 'shame' abusive commenters in AI bullying crackdown
Instagram is rolling out an AI-enabled feature designed to encourage users to think twice before posting abusive comments.
The feature will use artificial intelligence to identify comments that may be offensive, based on content previously flagged by users, and will check with the poster before the comment goes live.
Before the comment is published, the user will see a popup asking them to reconsider their post.
The popup system is being rolled out at the same time as a second new feature that Instagram hopes will curb bullying on the platform.
In a blogpost, Instagram head Adam Mosseri said users will soon be given more control over what comments appear under their content.
A new 'Restrict' mode will allow users to hide comments from a specific account so that no one but that account can see them.
Mosseri said the feature was developed to help posters control what appears under their content without having to block, unfollow, or report an account they are having problems with.
Mosseri said: "While identifying and removing bullying on Instagram is important, we also need to empower our community to stand up to this kind of behavior.
"We’ve heard from young people in our community that they’re reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life."