The dystopian world is teeming with twisted concepts; there are things like "pre-crime," and now, thanks to Reddit, there is also "post-guidance."
And even though the two phrases use prefixes that are opposite in meaning, in yet another twist, they are meant to serve a fairly similar purpose.
Reddit’s upcoming “post-guidance” feature, now being prototyped, uses a form of AI to censor content by flagging it for violating guidelines before it ever gets published.
We learn this, and more about plans to (mis)use AI, from Reddit CEO Steve Huffman, who also revealed in an interview that whatever the platform decides to consider bullying and hate speech will be policed using the same technology.
According to Fast Company, all this is happening as Reddit is reportedly preparing for an IPO, and thus looking for ways to make itself more palatable to investors. It’s also indirectly indicative of a market that appreciates, if not requires, ever more censorship on big social platforms.
Currently, Reddit has 70 million daily active users and 50,000 moderators policing their posts. But going forward, that model will be "reinforced" with AI, specifically large language models (LLMs).
Huffman found an interesting way of putting a positive spin on this: he essentially criticizes the droves of (overwhelmingly unpaid) moderators currently helping rein in Reddit users by suggesting they make mistakes, including by being "strict" and "esoteric" in their rules and in how they enforce them, and claims AI can help with this.
"Post-guidance" is what Huffman is talking about, and the feature essentially warns a user about "accidentally" breaking the rules before their post is published and a moderator ever sees it.
“The new user gets feedback, and the mod doesn’t have to deal with it,” he is quoted as saying. In reality, instead of supposedly helping those users “join the conversation,” a feature like this creates two levels of censorship.
The first is automated; whatever's left is then handled by moderators, who would be expected to work more efficiently with less content to review. But the end result would very likely just be more censorship.
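Reddit has not published how post-guidance works under the hood, but the mechanism described, an automated pre-publication check that bounces a draft back to its author with feedback so that moderators never see it, can be sketched roughly as below. Everything here is an assumption made for illustration: the rule format, the `violates_rule` stand-in for an actual LLM call, and the feedback wording are all invented.

```python
# Hypothetical sketch of a "post-guidance"-style pre-publication check.
# This does NOT reflect Reddit's actual implementation; names and logic
# are invented purely to illustrate the two-level moderation flow.

from dataclasses import dataclass

@dataclass
class GuidanceResult:
    allowed: bool        # whether the draft can be submitted as-is
    feedback: list[str]  # warnings shown to the author before posting

def violates_rule(draft: str, rule: str) -> bool:
    # Trivial keyword check standing in for an LLM classifier.
    # A real system would send the draft plus the rule text to a
    # language model and parse its verdict instead.
    keyword = rule.split(":")[0].lower()
    return keyword in draft.lower()

def check_draft(draft: str, subreddit_rules: list[str]) -> GuidanceResult:
    feedback = [
        f"Your post may break a community rule: {rule}"
        for rule in subreddit_rules
        if violates_rule(draft, rule)
    ]
    # Flagged drafts go back to the author instead of the mod queue --
    # the "mod doesn't have to deal with it" part of Huffman's pitch.
    return GuidanceResult(allowed=not feedback, feedback=feedback)

if __name__ == "__main__":
    rules = [
        "spoilers: tag all plot details",
        "self-promotion: no links to your own store",
    ]
    result = check_draft("Huge spoilers ahead for the finale!", rules)
    print(result.allowed)   # False
    print(result.feedback)  # one warning about the spoilers rule
```

In a sketch like this, the author either rewrites the post to clear the automated check or abandons it; only drafts that pass ever reach the human moderation layer, which is exactly the two-tier filtering the article describes.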
Another thing Reddit is working on is incorporating AI in hunting down those breaking the rules “willfully” with “bullying” and “hate speech.”
Regarding this, Huffman "expects progress" in 2024, the report says.