
Twitter tests prompting users to “rethink” what they’re about to say with new dystopian feature


If you have ever suspected that policymakers at Twitter were using George Orwell’s 1984 as a guideline and being purposely dystopian in their approach to technology and speech, today may confirm your suspicions.

Twitter, under pressure from mainstream media outlets demanding tighter control of offensive or harmful content on the platform, has announced an experiment in which it will warn users when what they’re about to say is deemed problematic, giving them a chance to correct their wrongthink before posting.

Twitter has always relied on strong moderation practices, many of them controversial. To moderate people’s speech, the social network implemented a reporting system to flag inappropriate tweets, and more recently, automatic detection technology for certain content.

Although these measures appear to have been effective in curbing speech and opinions Twitter doesn’t like, the company evidently thinks they aren’t enough and is now going to test warning you before you’ve even posted.

In its latest transparency report, Twitter revealed that between January and June 2019 it took action against more than 584,000 accounts for hateful messages and sanctioned another 396,000 accounts for abusive messages.

However, despite all these technological efforts, the most “effective” way to control inappropriate content remains in the hands of users. That has led Twitter to its next experiment: trying to ensure that negative messages are never published at all, stepping into your conversations and checking and correcting you in real time as you compose your tweet.

Starting today, the social network will begin a new experiment on iPhones that applies to all tweets written in English. When users tap the “send” button on a new tweet, they will automatically be informed of anything Twitter deems problematic in their message and asked whether they want to edit it before posting.
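As described, the mechanics are straightforward: intercept the tweet at the moment of sending, score it with a classifier, and show an edit prompt if the score crosses a threshold. Here is a minimal sketch of that flow; the scoring function, word list, and threshold are invented stand-ins rather than anything Twitter has disclosed.

```python
# Hypothetical sketch of a "rethink before posting" check.
# The scoring function, word list, and threshold are illustrative
# assumptions, not Twitter's actual implementation.

def score_offensiveness(text: str) -> float:
    """Stand-in for a trained classifier; returns a score in [0, 1]."""
    flagged_terms = {"idiot", "moron", "loser"}  # placeholder word list
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in flagged_terms)
    return min(1.0, 5.0 * hits / len(words))

def handle_send(tweet: str, threshold: float = 0.5) -> str:
    """Decide whether to post immediately or prompt the user to edit."""
    if score_offensiveness(tweet) >= threshold:
        return "prompt_edit"  # i.e. show the "want to revise this?" dialog
    return "post"             # publish the tweet as written

print(handle_send("you are such a loser"))  # -> prompt_edit
print(handle_send("have a nice day"))       # -> post
```

A production system would presumably use a trained model rather than a word list, but the send-time gate itself can be this simple.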

Twitter’s global head of site policy for trust and safety, Sunita Saligram, said the measure is meant to get users to think more carefully about what they are about to publish since, according to her, many people simply fire off replies to discussions without thinking about what they write.

“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Saligram said.

The system also learns continuously from other users’ reports, comparing the wording of reported tweets with that of new messages, so it should keep detecting problematic comments even as the language people use changes over time.
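Twitter has not published how this comparison works, but one plausible reading of “comparing the words of reported tweets with new messages” is a lexical similarity search against a corpus of reported tweets. The sketch below uses TF-IDF vectors and cosine similarity purely for illustration; the example corpus and threshold are made up.

```python
# Illustrative only: flag a new tweet when it closely resembles tweets
# that other users have reported. The corpus and threshold are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reported_tweets = [
    "you are a complete idiot",
    "nobody wants you here, get lost",
]

vectorizer = TfidfVectorizer()
reported_matrix = vectorizer.fit_transform(reported_tweets)

def resembles_reported(new_tweet: str, threshold: float = 0.4) -> bool:
    """True if the new tweet is lexically close to any reported tweet."""
    new_vector = vectorizer.transform([new_tweet])
    similarities = cosine_similarity(new_vector, reported_matrix)
    return bool(similarities.max() >= threshold)

print(resembles_reported("you are such an idiot"))  # True for this corpus
print(resembles_reported("great weather today"))    # False
```

Because the comparison set grows with every new report, a detector built this way keeps learning without anyone hand-maintaining a word list.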

The system is similar to a controversial feature that Instagram has also been testing.
