In an effort to combat “hateful” content, Twitter has relaunched a test of a prompt that asks users to revise a potentially hateful reply to a tweet. The relaunch of the experiment furthers the platform’s anti-free-speech policies.
“Say something in the moment you might regret? We’ve relaunched this experiment on iOS that asks you to review a reply that’s potentially harmful or offensive,” the company ominously wrote in a tweet announcing the experiment.
So, when Twitter thinks you are about to tweet something offensive, it will prompt you to rethink your words. Users who receive the prompt can choose to “tweet,” “edit,” or “delete” the reply. The prompt also includes a link to send feedback if the user believes they should not have gotten it.
This is not the first time the platform has run such a test. Twitter first ran it in May 2020, and again in August 2020. In those previous tests, there was no option to send feedback if the prompt was wrong.
Additionally, the previous tests ran on the iOS, Android, and web versions of Twitter. The current test, however, is running only on iOS, and only for users who tweet in English.
Apparently, Twitter paused the previous experiments so that it could improve the feature.
“We paused the experiment once we realized the prompt was inconsistent and that we needed to be more mindful of how we prompted potentially harmful Tweets. This led to more work being done around our health model while making it easier for people to share feedback if we get it wrong,” a Twitter spokesperson said, speaking to TechCrunch.
“We also made some changes to this reply prompt to improve how we evaluate potentially offensive language – like insults, strong language, or hateful remarks – and are offering more context for why someone may have been prompted. As we were testing last summer, we also began looking at relationships between Tweet authors and people who reply to avoid prompting replies that were jokes or banter between friends,” the spokesperson added.