Chandan Khanna / AFP / Getty Images
Instagram has introduced a feature that prompts users to think twice before posting hateful comments, an effort to curb cyberbullying on the massive social media platform.
The new feature uses artificial intelligence to review comments and notify users when a post may be harmful or offensive. Flagged users see the prompt "Do you really want to publish this message?" and then have the option to delete or edit the comment before anyone else can see it.
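The flow described above can be sketched in a few lines. This is a hypothetical illustration only: Instagram's real system uses a trained AI classifier, so the keyword list and the function names (`looks_offensive`, `submit_comment`) below are stand-ins invented for this sketch.

```python
# Hypothetical sketch of a "think twice" comment flow.
# A blocklist stands in for the real AI classifier, purely for illustration.
OFFENSIVE_TERMS = {"idiot", "loser", "stupid"}

def looks_offensive(comment: str) -> bool:
    """Toy classifier: flag a comment if it contains a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & OFFENSIVE_TERMS)

def submit_comment(comment: str, confirm) -> str:
    """Warn-before-post flow: if the comment looks harmful, ask the user
    to confirm before publishing; otherwise publish immediately.
    `confirm` is a callback returning True if the user insists on posting."""
    if looks_offensive(comment) and not confirm():
        return "withdrawn"
    return "published"
```

The key design point, and what distinguishes this from outright filtering, is that the classifier never blocks the comment itself; it only inserts a confirmation step, leaving the final decision with the user.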
Early testing revealed that some users were less likely to publish hurtful comments once they had the opportunity to reflect on them, Instagram head Adam Mosseri wrote in a blog post.
Gmail has a similar feature that gives users up to 30 seconds to cancel an email after pressing "Send."
Other social media platforms have also attempted to police the content allowed on their services. Twitter has begun labeling hateful or offensive tweets from politicians, and Facebook has banned some white supremacist and other accounts over hateful or offensive posts. However, there is no binding rule governing what these platforms must restrict.
Monitoring malicious content on social media is a challenge. Justin Patchin, co-director of the Cyberbullying Research Center, says he works with multiple platforms to find solutions to the problem.
With huge amounts of content being created every second, Instagram's effort is just one of many attempts by companies to use AI to monitor posts. Both Facebook and Twitter have tried the technology before. However, AI moderation poses its own challenges: algorithms often struggle to interpret slang and nuance across different languages.
Instagram's latest feature differs from previous attempts by major social platforms to prevent cyberbullying in that the AI only warns users; ultimately, they still decide what to post.
"The transparency here is helpful for those who have wondered why these big social media companies aren't doing more technologically against bullying," Patchin said.
Instagram is the first major platform to try this method of keeping hateful content from spreading in its app. The concept, however, resembles an app created by Trisha Prabhu in 2013. The then 13-year-old built an app called ReThink that likewise alerts users when their message could be offensive. ReThink was praised for its innovation, but Patchin says such solutions need to be integrated into widely used platforms to be most effective.
According to Patchin, these big social media companies are moving in the right direction and are getting closer to finding effective ways to monitor malicious content and cyberbullying.
"Companies have spent a lot of energy improving these systems, and they are getting better every year," he said. "They have a responsibility and an obligation to lead the way and at least experiment with such technologies."
Instagram plans to further improve its safety features and will soon introduce a "Restrict" feature that lets users limit what they see from specific accounts without blocking them. Mosseri wrote in the blog post that the company decided to add this feature after users expressed fear that blocking accounts posting offensive comments on their profiles could lead to retaliation.