Twitter announced an update to its usage policy on Wednesday the 10th, introducing new measures against hateful messages targeting religious groups. The service will now remove posts containing this type of content, a significant change in the platform's moderation practices. If the approach proves successful, it may be extended to other protected groups in the future.
Last year, Twitter launched a public appeal for help in rewriting its anti-dehumanization policies; the initial proposal referred to speech targeting “identifiable groups.” The company received 8,000 replies from more than 30 countries, and most respondents asked for a more precise definition, since the proposed category was too broad. As a result, Twitter is starting its test with religious groups only.
The social network has published specific examples of inappropriate content that will be removed if reported. Tweets that dehumanize people on the basis of their religion – for example, referring to them as “cancer”, “rats”, or “disgusting animals” – are now banned by the platform.
“We’ve created our rules to keep people safe on Twitter, and they evolve continually to reflect the realities of the world we’re operating in,” the Twitter security team wrote on its blog. “Our primary focus is to address the risks of offline offenses, and research shows that dehumanizing language increases that risk.”
Twitter has long struggled to detect and police abuse at scale, prompting significant changes to the platform's moderation policies. Late last month, the company announced that it would notify users when tweets posted by political figures violated its rules of conduct. If a world leader posts something harmful, the company will now place a gray box over the tweet informing users about the infringing content; users must click through the box to view the message.