
Twitter wants to know when it’s right to take action against people who share “manipulated media”


The internet meme game is stronger than ever, which has led major social networks to consider “taking action” on the matter.

Social networks have been asked by US senators to develop means to combat deepfakes on their respective platforms.

Twitter is one of those platforms, and last month it announced that it would be taking matters into its own hands.

Twitter drafts new policy and is asking for feedback

A couple of weeks later, there is finally a draft of the new policy. In summary, Twitter will display a warning next to tweets containing deepfakes or other manipulated media, and will show the same warning when users are about to like or share those tweets.

Twitter says it may:

  • place a notice next to Tweets that share synthetic or manipulated media;
  • warn people before they share or like Tweets with synthetic or manipulated media; or
  • add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.

That linked source – which can be a tweet or a news article from an “authoritative source” – would explain why the content in question is believed to be synthetic or manipulated.

Perhaps they can call upon the help of The New York Times, which assigned two reporters to explore the authenticity of the fun meme that President Trump posted – the one that appeared to show him giving a dog a medal.

However, Twitter is only considering outright removal of content when it threatens the physical wellbeing of an individual. The rest of the details are still being debated.

Twitter is at least attempting to let users participate in the discussion through an online survey, which asks questions such as in which scenarios it would be acceptable to remove deepfake content.

The survey also contemplates other scenarios for content removal, such as instances where the mental health, privacy or dignity of an individual is threatened.

Additionally, Twitter asks whether it should take action against the accounts that share manipulated media, and whether such content should be made harder to find.

On a side note, in a somewhat odd stance that could take the fun out of parody, Twitter will also consider a tag that indicates if the manipulated content has been made for entertainment purposes (such as spoofs or special effects from movies). In that case, users will know that the content is not meant to be taken seriously.
