Instagram will warn users when their captions on a photo or video could be considered offensive.
The Facebook-owned company says it has trained an artificial intelligence (AI) system to detect offensive captions.
According to the BBC, the idea is to give users “a chance to pause and reconsider their words”. Instagram announced the feature in a blog post on Monday, saying it would be rolled out immediately in select countries.
The tool is designed to help combat online bullying, which has become a major problem for platforms such as Instagram, YouTube, and Facebook.
If a user with access to the tool types an offensive caption on Instagram, they will receive a prompt informing them that it is similar to captions that have been reported for bullying.
Users will then be given the option to edit their caption before it is published. “In addition to limiting the reach of bullying, this warning helps educate people on what we don’t allow on Instagram and when an account may be at risk of breaking our rules,” Instagram wrote in the post.
Earlier this year, Instagram launched a similar feature that notified people when their comments on other people’s Instagram posts could be considered offensive.
Instagram was ranked as the worst online platform in a cyber-bullying study in July 2017.