Amazon’s Twitch has unveiled new anti-harassment rules that critics worry are so vague and broadly worded that they leave far too much room for misinterpretation and abuse.
In addition, the phrase “protected groups” pops up an inordinate number of times, while the rules fail to explain how, and by whom, it is defined. But one thing Twitch does spell out immediately is that the company clearly puts its vague version of protecting users from harm above those users’ right to free speech.
“Twitch prioritizes minimizing harm to our users over freedom of expression, and we will limit some expression with the goal of preventing, protecting users from, and moderating hateful behavior and harassment,” the updated rules say.
Apparently, the foundational principles of the country in which Amazon grew into the behemoth it is today no longer suit the company – namely, that both things can, and even must, be true: protection from harm, and freedom of expression.
“Hateful and harassing” are also terms used repeatedly, often to negate an affirmative statement made immediately prior – such as saying that unpopular and “diverse” points of view would be allowed, unless of course they were found to be “hateful and harmful” in what appears to be an almost fully opaque underlying decision-making process.
Then there’s the issue of “context.” You may not be satirical or ironic, or express yourself in exaggerated terms, unless it’s to “expose and critique abusive behaviors” – only then is the use of these comedic expressions allowed. Otherwise, you’re out of luck on Twitch if somebody decides your “context” was not up to spec.
Context is mentioned a lot, further broadening definitions and leaving enough space for whatever Twitch’s desired interpretation may be, from case to case.
References to “established hate groups” – established by whom, the rules don’t say – are cited as forbidden, but so are “speech, imagery, or emote combinations that dehumanize or perpetuate negative stereotypes and/or memes.”
The entire set of rules seems geared towards some serious speech – and even intent – policing on the platform, and it will certainly be interesting to see how, and if, it will be enforced.
It’s certainly vague, and if past experience with Big Tech teaches us anything, it’s that the vagueness is deliberate – making sure that, if the need arises, it allows for comprehensive, opaque, and unaccountable censorship.