Losing Their Grip: Why Anti-“Misinformation” Crusaders Are Mourning the End of Control



In the brave new world of the University of Washington’s Center for an Informed Public (CIP), “informed” seems to be synonymous with “watched.” Born to combat the wildfires of online “misinformation,” CIP and its partners – including the now-defunct Election Integrity Partnership (EIP) and the short-lived Virality Project – expected to be celebrated as defenders of truth. Instead, they became poster children for what happens when watchdogs get a little too cozy with power, pursuing an experiment that teetered between public good and Orwellian oversight.

Election Integrity Partnership: A Marriage of “Good Intentions” and Government Influence

The Election Integrity Partnership, a coalition that included CIP as a key player, kicked off its operations with a noble-sounding mission: to shield our fragile electoral systems from the scourge of fake news. For the discerning reader, the term “integrity” in their name may raise eyebrows; it’s reminiscent of government programs cloaked in the language of virtue, their real work a little murkier. Partnering with government entities and social media giants like Facebook and then-Twitter, EIP set out to identify and “mitigate” misleading content related to elections. In other words, they assumed the job of selectively filtering out the lies, or as critics would say, the truths that didn’t toe the right political line.

For a while, EIP was in its element, functioning as a digital triage unit, purging the internet of what it deemed harmful content. But what started as “informational integrity” quickly turned into federal hall monitoring, policing citizens’ Facebook posts and Twitter threads with all the subtlety of a sledgehammer. Conservatives, in particular, saw this as more of a censorship scheme than a public service. Their view? EIP wasn’t there to inform – it was there to enforce.

The Consequences of Playing Speech Police

Predictably, the backlash came hard and fast. Between accusations of censorship, lawsuits, and subpoenas, the EIP got hit with more legal troubles than a tech startup in a copyright infringement scandal. And when all was said and done, EIP disbanded, its ambitions buckling under the weight of public scrutiny and political pressure. The New York Times, ever the mournful observer of lost social crusades, called it a tragedy for public discourse. They framed the dissolution as a loss for those who believe in “responsible” information regulation, i.e., those who think someone should be appointed arbiter of truth, as long as it’s the “right” someone.

The lawsuit-laden disbandment sent a message: Americans are more than a little skeptical about government agencies and their academic friends lurking behind the scenes, flagging speech like a hall monitor on a power trip. The public isn’t too keen on playing along with institutional gatekeepers telling them which “facts” are allowed to stand.

CIP’s Retreat: Education Over Eradication?

With EIP gone, CIP has had to pivot. It’s retreated from the frontlines of digital speech enforcement, now favoring a softer approach – “educating” the public on misinformation rather than erasing it outright. Translation: CIP now hosts workshops and seminars where it teaches researchers and civilians alike about the nature of disinformation, sidestepping its prior role as a social media referee. This rebranding effort is essentially CIP’s way of saying, “We’re not here to censor, promise.”

Yet, the academic world’s “shift to education” sounds suspiciously like the fox retreating from the henhouse after getting caught. CIP’s pivot reflects the current climate, one in which watchdogs like it have to tread carefully or risk losing all influence. Now, they’re not shutting people up; they’re merely explaining why certain ideas are wrong, a move that feels less aggressive but still keeps CIP’s finger on the scale of public opinion.

The Larger Implications: Free Speech in the Crosshairs

CIP’s saga shines a harsh light on the deepening tensions between free speech advocates and so-called “disinformation” experts. On one side, you have entities like the New York Times wringing their hands, lamenting the “tragedy” of these anti-misinformation efforts falling apart. The Times warns of a future in which misinformation spreads unchecked, as though without EIP, social media will devolve into an apocalyptic pit of lies. On the other side, you have critics of censorship, those who see CIP’s previous activities as a government-endorsed grab at control, cloaked in the language of public safety.

Now, we find ourselves in a new chapter, with CIP toeing the line carefully, offering lessons in “awareness” rather than flagging posts. This so-called “nuanced understanding” might sound respectable, but it still hinges on a central belief: certain ideas are dangerous enough to warrant intervention, even if the means have shifted from banning to benign “educating.” In short, CIP may be keeping a lower profile, but its ambitions haven’t changed – they’ve merely gone underground.

So what do you get when you hand the keys to social discourse over to government-aligned bodies like the EIP? For starters, the inevitable slide toward an overzealous surveillance state. Free speech advocates have been beating this drum for a while, and they aren’t wrong: schemes like EIP carry the perfect storm of potential for overreach and abuse. It’s the classic “trust us” move from government and corporate giants who assure the public that they’re only flagging content for “our own good.” But when a government body is allowed to sift through online conversations, the notion of “our good” quickly morphs into “their control.”

The result? People start censoring themselves, fearing that one wrong post might put them on a watchlist or see them “fact-checked” into silence. These watchdog groups claim to target misinformation, but they often mistake dissenting views for danger and critique for conspiracy. The very act of monitoring speech creates a chilling effect, where the public might think twice before posting on sensitive subjects. After all, who wants to risk getting flagged by an algorithm wielding moral zeal with all the precision of a hammer trying to nail jelly to a wall?

Transparency and Accountability – Or the Lack Thereof

And then there’s the lack of transparency – a time-honored tradition in institutions that insist they know best. When EIP was in full swing, it wasn’t as if users got an email detailing who decided their post was a threat to democracy or what precise reasoning went into labeling it “misinformation.” Instead, decisions were made in rooms far from public view, with opaque policies and an ever-shifting definition of what “misinformation” even means. Political or corporate interests could easily influence this moderation, and, surprise, surprise – with little oversight, the system quickly looks more biased than benevolent.

The arbitrary and often political nature of these decisions only stokes public distrust, especially when it’s the very voices challenging authority that find themselves most frequently muzzled. It’s the internet equivalent of a teacher who can’t explain why certain kids always get detention – people quickly learn not to ask questions and go along with the rules, but that doesn’t mean they believe in the fairness of the process.

Democracy’s Achilles’ Heel: Stifling Discourse in the Name of Truth

In democratic societies, open discourse is a cornerstone. The ability to voice different viewpoints, even those that shake the system, is essential for a healthy public sphere. When bodies like EIP take it upon themselves to deem what’s acceptable for public consumption, we’re left with a sanitized marketplace of ideas – one in which only the ideas that align with sanctioned narratives get a seat at the table. If only certain perspectives survive the cut, we end up with voters fed a curated set of “truths,” unable to challenge, investigate, or even consider alternatives.

And it’s not just a hypothetical fear. History has repeatedly shown that the silencing of controversial or dissenting voices only deepens public division. Ironically, the very thing these “integrity” initiatives aim to prevent – public polarization – often worsens when people feel their speech is being filtered. With an overpowered referee deciding which facts to keep on the field, the game of democracy itself suffers.

The Slippery Slope: Setting the Stage for Future Censorship

The question becomes, once government-linked entities start moderating our conversations, where does it end? Today, it’s about “election integrity.” Tomorrow, it could be “economic stability” or “public health.” Every crisis invites a new round of justifications for more speech control. After all, if misinformation on elections is a threat to democracy, couldn’t misinformation on any number of other issues pose a similar threat? Accepting censorship in any form opens a Pandora’s box of future government interference, each intervention creating new precedents that make the next round of censorship feel more routine.

The free speech argument here is simple: even if an opinion is wrong, unpopular, or offensive, it deserves protection. The minute we concede that it’s acceptable to police ideas – especially by bodies connected to government interests – we make it all the easier for future, more dangerous limitations to slip into place.

The Real Effectiveness Question: Censoring Ideas or Fanning the Flames?

Then there’s the effectiveness issue. Does suppressing “misinformation” really work, or does it just make it more insidious? Efforts like EIP may well reduce the volume of “dangerous” content on mainstream platforms, but that content doesn’t just vanish. Ideas banned in one place tend to bubble up elsewhere – often in online echo chambers where censorship only serves to validate radical viewpoints, feeding a cycle of resentment and extremism.

The disinformation crusade might actually be doing more harm than good, driving misinformation underground where it becomes even harder to address. The government’s digital eraser may scrub certain ideas from view, but it often intensifies belief among those already suspicious of authority. For them, censorship itself becomes “proof” that something is being hidden, amplifying distrust and cementing conspiratorial thinking. In trying to stamp out the “lies,” EIP and its ilk may have simply fueled the fire.

In the end, the dissolution of the Election Integrity Partnership is perhaps less a blow to public discourse than a win for the democratic spirit. As the Center for an Informed Public pivots from censoring to educating, we’re reminded that the battle against misinformation doesn’t require speech suppression. It requires a trust in the public’s ability to sift truth from nonsense – a trust that, in a healthy democracy, should never be in short supply.
