Researchers develop algorithm to scan for “misogynistic” tweets

Apparently, the world needs a little more speech control.

With the Orwellian race to police online speech gathering pace, private companies and researchers are keen to develop ways to speed the process along.

In Australia, scientists at the Queensland University of Technology have created a system that they say can identify “misogyny” in tweets. The team hopes social media platforms will adopt the technology.

After mining one million tweets, they began building the system by searching for three abusive terms that could indicate misogyny – rape, whore, and slut. That filter reduced the one million tweets to around 5,000.
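In rough terms, that first pass is a keyword filter. The sketch below only illustrates the idea and is not the researchers' code; the function names and data handling are assumptions.

```python
# Illustrative keyword filter. The three terms come from the article;
# everything else here (names, data layout) is hypothetical.
ABUSIVE_TERMS = {"rape", "whore", "slut"}

def contains_abusive_term(tweet: str) -> bool:
    """Return True if the tweet mentions any of the candidate terms."""
    words = tweet.lower().split()
    return any(term in words for term in ABUSIVE_TERMS)

def filter_candidates(tweets: list[str]) -> list[str]:
    """Keep only tweets worth manual labelling (about 5,000 of a million in the study)."""
    return [t for t in tweets if contains_abusive_term(t)]
```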

They then used intent and context to classify the remaining tweets as either misogynistic or not. The tweets deemed misogynistic were fed to a machine learning classifier, which used them to develop a classification model, The Next Web reported.
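The article does not say which model the team used, so the following sketch stands in with a generic scikit-learn text-classification pipeline to show how labelled tweets could train such a classification model; every name and parameter here is an assumption, not the study's method.

```python
# Hypothetical training sketch using scikit-learn as a stand-in;
# the study's actual model and features are not described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_classifier(tweets: list[str], labels: list[int]):
    """Fit a simple misogynistic / not-misogynistic classifier on labelled tweets."""
    X_train, X_test, y_train, y_test = train_test_split(
        tweets, labels, test_size=0.2, random_state=0
    )
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
    return model
```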

“Take the phrase ‘get back to the kitchen’ as an example — devoid of context of structural inequality, a machine’s literal interpretation could miss the misogynistic meaning,” explained Professor Richi Nayak, one of the researchers.

“But seen with the understanding of what constitutes abusive or misogynistic language, it can be identified as a misogynistic tweet.”

By getting the algorithm to identify ‘get back to the kitchen’ as a misogynistic statement, the researchers demonstrated that applying context learning in AI is possible and that it can work.

According to the team, their system can identify misogyny in tweets with up to 75% accuracy. They also said the system could be tweaked to identify homophobia, racism, and discrimination against people with disabilities. With the evidence presented in their study, published by Springer, they hope social media companies will adopt the tool.

“At the moment, the onus is on the user to report abuse they receive. We hope our machine-learning solution can be adopted by social media platforms to automatically identify and report this content to protect women and other user groups online,” said Professor Nayak.
