
When it comes to content moderation a little transparency goes a long way, study finds

The study was conducted by researchers at the Georgia Institute of Technology and the University of Michigan.


A new study by Georgia Institute of Technology and University of Michigan researchers, which looked into how Reddit users behave after receiving content removal explanations, has found that social media platforms could make their moderation job easier by being more transparent about their decisions.

The researchers, who note that current moderation practices are so opaque that they are difficult to examine properly, focus on the role that explanations for content removal play in how users interact with a platform afterwards.

The study used a sample of 32 million Reddit posts, and sums up its findings by suggesting that social media platforms should invest more in, and make better use of, content removal explanations as an important moderation tool.

Referring to the “black-box nature of content moderation” deployed by a majority of platforms, the study says this makes the process itself difficult to understand, and makes it difficult for users to understand its end product: the decision to remove their posts.

One consequence of this is a loss of trust in social media sites. But what effect would greater transparency have? The study finds that providing explanations for removal decisions decreases both the likelihood of future submissions and the likelihood of future removals.

The first outcome could have to do with users being put off once they learn their content was deleted, something many would not otherwise notice given “the frequent silent removals on Reddit.” But those who keep participating are less likely to have their future submissions removed, meaning the quality of content improves.

A more direct approach also proves more effective: explanations work best when provided as a reply to the removed post rather than by tagging the post with a removal reason. Messages from human moderators, however, did not appear to work any better than those crafted by bots, the researchers noted.

And while the massive volume of content pushes companies toward the cheapest ways of moderating it, the current system is flawed and is unlikely to improve as that volume keeps growing.

That is another reason why the researchers suggest social media platforms should invest in building moderation systems that incorporate removal explanations.



