Section 230 of the Communications Decency Act (CDA), an online liability shield that prevents apps, websites, and services from being held civilly liable for content posted by their users, provided they act in “good faith” when moderating content, gave most of today’s popular platforms the foundation to grow without being sued out of existence. But as these platforms have grown, Section 230 has become a political football that lawmakers use to try to influence how platforms editorialize and moderate content: pro-censorship factions threaten reforms that would force platforms to censor more aggressively, while pro-free speech factions push reforms that would reduce Big Tech’s power to censor lawful speech.
During a Communications and Technology Subcommittee hearing yesterday, lawmakers discussed a radical new Section 230 proposal that would sunset the law and replace it with a solution that “ensures safety and accountability for past and future harm.”
We obtained a copy of the draft bill to sunset Section 230 for you here.
In a memo for the hearing, lawmakers acknowledged that their true intention is “not to have Section 230 actually sunset” but to “encourage” technology companies to work with Congress on Section 230 reform. The memo also notes that they intend to focus on the role Section 230 plays in shaping how Big Tech addresses “harmful content, misinformation, and hate speech,” three broad, subjective categories of lawful speech that are often invoked to justify censoring disfavored opinions.
And during the hearing, several lawmakers signaled an intent to use this latest piece of Section 230 legislation to force social media platforms to censor a wider range of content, including material they deem harmful or misinformation.
Rep. Doris Matsui (D-CA) acknowledged that Section 230 “allowed the internet to flourish in its early days” but complained that it serves as “a haven for harmful content, disinformation, and online harassment.”
She added: “The role of Section 230 needs immediate scrutiny, because as it exists today, it is just not working.”
Rep. John Joyce (R-PA) suggested Section 230 reforms are necessary to protect children — a talking point that’s often used to erode free speech and privacy for everyone.
“We need to make sure that they [children] are not interacting with harmful or inappropriate content,” Joyce said. “And Section 230 is only exacerbating this problem. We here in Congress need to find a solution to this problem that Section 230 poses.”
Rep. Tony Cárdenas (D-CA) complained that platforms aren’t doing enough to combat “outrageous and harmful content” and “harmful mis-and-dis-information”:
“While I wish we could better depend on American companies to help combat these issues, the reality is that outrageous and harmful content helps drive their profit margins. That’s the online platforms.
I’ll also highlight, as I have in previous hearings, that the problem of harmful mis-and-dis-information online is even worse for users who speak Spanish and other languages outside of English as a result of platforms not making adequate investments to protect them.”
Rep. Debbie Dingell (D-MI) also signaled an intent to use Section 230 reform to target “false information” and claimed that Section 230 has allowed platforms to “evade accountability for what occurs on their platforms.”
While pushing for reform, Rep. Buddy Carter (R-GA) framed Section 230 as “part of the problem” because “it’s kind of set a free-for-all on the Internet.”
While several lawmakers favored Section 230 reforms that would pressure platforms to moderate more aggressively, one of the witnesses, Kate Tummarello, Executive Director of the advocacy organization Engine, warned that these efforts could lead to censorship.
“It’s not that the platforms would be held liable for the speech,” Tummarello said. “It’s that the platforms could very easily be pressured into removing speech people don’t like.”
You can watch the full hearing here.