Researchers from Binghamton University have developed algorithms to identify two specific types of what they deem to be “offensive online behavior” on Twitter: “cyberbullying” and “cyberaggression.”
According to the researchers, intending to insult another Twitter user once is enough to get an account classed as “aggressive,” while doing so twice or more is a sign of “bullying.” However, they don’t explain how they ascribe intent to a user. The researchers also claim that Gamergate and “gender pay inequality at the BBC” are topics that are “more likely to be hate-related.”
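As described, the labeling rule amounts to a simple count threshold on flagged tweets. Here is a minimal sketch of that logic, assuming a hypothetical upstream step that counts a user’s insulting tweets; the researchers’ actual feature extraction, and how they infer intent, are not detailed in the article:

```python
# Hypothetical sketch of the count-threshold rule described above.
# `insult_count` would come from some upstream classifier that flags
# individual tweets as insulting; how "intent" is inferred is not
# explained, so this is purely an assumption for illustration.

def label_account(insult_count: int) -> str:
    """Map a count of insulting tweets to a behavior label."""
    if insult_count >= 2:
        return "bullying"    # two or more insults
    if insult_count == 1:
        return "aggressive"  # a single insult
    return "normal"          # no flagged tweets


if __name__ == "__main__":
    for n in (0, 1, 3):
        print(n, "->", label_account(n))
```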
One of the researchers, computer scientist Jeremy Blackburn, adds that even if a Twitter user does nothing the team deems “aggressive” or “bullying,” the user may still be labeled a bully or aggressor based on the accounts they follow and the accounts that follow them.
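In machine-learning terms, that suggests the classifier weighs network features alongside a user’s own tweets. The following toy illustration uses made-up weights and hypothetical follow-graph features; the article does not describe the actual model:

```python
# Hypothetical illustration: score an account partly from the labels of
# its neighbors in the follow graph. The real features and model used
# by the researchers are not described in the article.

from dataclasses import dataclass


@dataclass
class Account:
    own_insult_count: int          # insults in the user's own tweets
    frac_flagged_following: float  # share of followed accounts already flagged
    frac_flagged_followers: float  # share of followers already flagged


def score(account: Account) -> float:
    """Toy linear score: own behavior plus network signals (made-up weights)."""
    return (1.0 * account.own_insult_count
            + 2.0 * account.frac_flagged_following
            + 2.0 * account.frac_flagged_followers)


def is_flagged(account: Account, threshold: float = 1.0) -> bool:
    return score(account) >= threshold


# Even with zero insulting tweets, network ties alone can trip the threshold:
quiet_user = Account(own_insult_count=0,
                     frac_flagged_following=0.4,
                     frac_flagged_followers=0.3)
print(is_flagged(quiet_user))  # True
```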
The researchers go on to say that the algorithm can identify Twitter accounts engaging in what they deem to be bullying with “over 90% accuracy.” Additionally, they suggest that the algorithms could be used to find and delete what they define as “abusive accounts” on Twitter.
Twitter didn’t say whether it plans to use these algorithms but reaffirmed its commitment to keeping the service “free of abuse”:
“Our priority is ensuring our service is healthy, and free of abuse or other types of content that can make others afraid to speak up, or put themselves in vulnerable situations.”
Whether or not it adopts these specific algorithms, Twitter has already made several other recent changes to crack down on what it deems to be “abusive” and “hateful” conduct.
In June, the company updated its “hateful speech” policies and banned “dehumanizing” terms aimed at “protected” religious groups. It is also currently testing a filter that hides “offensive” direct messages. And in July, the company even began sending notifications telling users to “support a culture of respect on Twitter.”
This proliferation of rule changes and algorithms that rest on subjective terms such as “hateful” and “offensive,” or even on a user’s supposed intent, is fueling an increase in censorship by big tech platforms. This month alone, a fishmonger had his Instagram photos of fresh produce censored after the company told him they were “offensive or disturbing,” and a family cafe had one of its Google ads promoting the British dish faggots and peas banned for containing “inappropriate and offensive content.”