Instagram has announced the launch of a tool to enable users to automatically filter out abusive messages from those they do not follow on the platform.
This comes after a number of footballers spoke out about experiencing racist, sexist and other abuse on Instagram.
According to the BBC, Liverpool Football Club criticised the platform after some of its players were sent racist monkey emojis.
Direct messages (DMs) containing words or emojis deemed offensive will be removed from view.
Instagram consulted with anti-discrimination and anti-bullying groups to curate a list of terms, phrases and emojis deemed offensive.
Users can also add their own definitions to this list, through the Hidden Words section of the app’s privacy settings.
“Because DMs are private conversations, we don’t proactively look for hate speech or bullying the same way we do elsewhere,” Instagram blogged.
The tool focuses on message requests from people users do not already follow, “because this is where people usually receive abusive messages”, it added.
The tool will be available in the UK, France, Ireland, Germany, Australia, New Zealand and Canada within weeks. More countries will then receive the relevant update in the coming months.
A similar filter already exists for abusive comments on Instagram posts.