The Communications Decency Act (CDA) and the protections its Section 230 affords to Big Tech continue to be a hotly debated topic.
Ruling on a set of cases concerning social media giants’ liability over a string of terror attacks, the US Court of Appeals for the Ninth Circuit on Tuesday confirmed what is by now well understood: as it stands, the CDA, a piece of decades-old legislation, fully protects them.
But at the same time, some of the judges raised the question of whether such sweeping protection is what Congress had in mind when it passed the law, and whether the use of algorithms is something Congress should now consider regulating.
We obtained a copy of the ruling for you here.
The court of appeals’ judgment concerned four cases involving acts of terrorism committed by Islamic terrorists that resulted in the deaths of US citizens: the Reina nightclub attack in Turkey, which left 39 people dead; the San Bernardino, California, attack in which 14 people were killed; the Paris massacre, which claimed 130 victims; and a 2015 attack in Jordan.
The victims’ families wanted the courts to hold Google, Facebook, and Twitter liable in these cases over content posted on their platforms by the third parties responsible for the violence. But as in a number of other similar lawsuits, the claims were eventually dismissed thanks to Section 230 protections.
However, these proceedings also addressed the issue of algorithmic recommendation systems, with the plaintiffs arguing that such recommendations are effectively content in their own right, created by the social networks in question rather than by their users.
And while the 79-page judgment by and large rejected the plaintiffs’ claims, the court also concluded that there is no question Section 230 “shelters more activity than Congress envisioned it would.”
The ruling said it is a pressing question for Congress to decide whether service providers should continue to be shielded from liability for the third-party content they publish, and whether the way they use algorithms, a practice that has emerged since the CDA was passed in 1996, should now be regulated.
“I urge the Court to take this case en banc to reconsider our case law and hold that websites’ use of machine-generated algorithms to recommend content and contacts are not within the publishing role immunized under section 230,” Judge Marsha Berzon wrote in the judgment, adding: “These cases demonstrate the dangers posed by extending section 230 immunity to such algorithmic recommendations, an extension, in my view, compelled by neither the text nor history of the statute.”