
Instagram says it will cut the reach of posts that are “likely” to contain “hate speech”

More opaque algorithmic censorship.


Instagram is introducing more vaguely defined restrictions on its users, this time acting “proactively” to push down Feed posts and Stories that “may” contain bullying or hate speech, or that “may” encourage violence – as well as content that is “potentially upsetting.”

In a blog post, the Facebook-owned platform said this expands its existing policy of reducing the reach of posts that third-party “fact-checkers” determine contain misinformation – and of all posts from accounts said to have shared misinformation “repeatedly.”

Instagram’s “systems” will be tasked with deciding what “may,” is “likely” to, or “potentially” contains hate speech or amounts to bullying. The blog post explains that these algorithms will make the call by comparing captions: if a caption is similar to one already found to violate the platform’s rules, the post will be pushed down in Feed and Stories.
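Instagram has not published how this caption comparison works, but purely as an illustration, a downranking check of that general shape might look like the sketch below. Everything in it – the toy word-count “embedding,” the 0.8 similarity threshold, the 50% ranking penalty – is an assumption made for the example, not Instagram’s actual system.

```python
# Illustrative sketch only: downrank a post whose caption resembles one
# already judged to violate the rules. All names and numbers are hypothetical.
from collections import Counter
from math import sqrt


def embed(caption: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a learned text model."""
    return Counter(caption.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Captions previously judged to break the rules (hypothetical examples).
violating_captions = ["example of a caption already removed for bullying"]
violating_embeddings = [embed(c) for c in violating_captions]


def rank_score(caption: str, base_score: float, threshold: float = 0.8) -> float:
    """Push the post lower in Feed/Stories if its caption looks like a violating one."""
    similarity = max(
        (cosine_similarity(embed(caption), v) for v in violating_embeddings),
        default=0.0,
    )
    if similarity >= threshold:
        return base_score * 0.5  # hypothetical downranking penalty
    return base_score
```

A production system would presumably rely on learned embeddings and tuned thresholds rather than word counts, and it is precisely in that similarity judgment that misclassification can creep in.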

Instagram also said that the new policy, which smacks of shadow-banning, affects individual posts rather than accounts themselves, and that posts Instagram determines actually break its rules, rather than merely suspects of doing so, will still be removed, as before.

The blog post spells out that Instagram wants the last word on what its users see: you may decide to follow an account, but Instagram decides how likely you are to see that account’s posts, and where in your feed they appear. That’s because Instagram thinks it knows best what might “upset” its users or “make them feel unsafe.”

What users see, and where they see it, will be further constrained by their own behavior as interpreted by Instagram. “If our systems predict you’re likely to report a post based on your history of reporting content, we will show the post lower in your Feed,” the company said.
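Again purely as a hypothetical sketch of what such per-user downranking could look like in code – the report-rate estimate, the topic_reports nudge, and the scoring formula below are all assumptions, not Instagram’s implementation:

```python
# Illustrative sketch only: lower a post's ranking score for users predicted
# likely to report it, based on their reporting history. Hypothetical model.
from dataclasses import dataclass


@dataclass
class UserHistory:
    posts_seen: int
    posts_reported: int


def predicted_report_probability(history: UserHistory, topic_reports: int = 0) -> float:
    """Crude estimate: the user's overall report rate, nudged up if they have
    reported similar content before. A real system would use a trained model."""
    if history.posts_seen == 0:
        return 0.0
    base_rate = history.posts_reported / history.posts_seen
    return min(1.0, base_rate + 0.1 * topic_reports)


def personalized_score(base_score: float, history: UserHistory, topic_reports: int = 0) -> float:
    """Show the post lower in Feed the more likely the user is to report it."""
    p_report = predicted_report_probability(history, topic_reports)
    return base_score * (1.0 - p_report)


# Example: a user who reported 5 of the last 100 posts they saw,
# including 2 similar to the current one.
score = personalized_score(1.0, UserHistory(posts_seen=100, posts_reported=5), topic_reports=2)
```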

Previously, Instagram relied on third-party fact-checkers, mistakes and all, when downranking posts for misinformation; now, for hate speech and bullying, “systems,” i.e., automation, will do the job. All of this could amount to a large-scale exercise in training machine learning censorship algorithms, with Instagram unlikely to be upset if its method of “comparing captions” and the like produces errors.

Instagram said that making this new policy public is part of its effort to be more transparent. What it reveals are some of the ways the platform manipulates content visibility.
