AI Safety Institute Debuts with Big-Name Backers and a Censorship Agenda

Every major speaker at the Copenhagen summit has a resume built on telling platforms what to take down.

Common Sense Media’s Youth AI Safety Institute arrived at the Danish Parliament this week, and the guest list is stacked with people who think you can’t be trusted to speak freely online.

Hillary Clinton, Ursula von der Leyen, former Biden Surgeon General Vivek Murthy, Ofcom chief Melanie Dawes, and the head of an organization that wants to break end-to-end encryption are all gathering at Christiansborg Palace in Copenhagen to announce what they’d like to do next about AI and children.

The “next” part is where it gets concerning. The Youth AI Safety Institute, launched by Common Sense Media on May 5, says it will “complement efforts by regulators and policymakers to translate frameworks such as the EU AI Act, the Digital Services Act, and the UK Online Safety Act into practical protections for child-safe AI.”

Those three censorship laws represent the most aggressive government-directed speech suppression regimes currently operating in the Western world. The Institute isn’t questioning them. In fact, it wants to help implement them and push them further.

The summit, titled “Keeping Our Children and Families Safe in the AI Era,” is co-hosted by Common Sense Media, Save the Children Denmark, and Margrethe Vestager, who spent years as the European Commission’s executive vice president building the regulatory architecture that now lets EU officials order platforms to delete content.

More than 200 policymakers, tech executives, and civil society figures are expected. King Frederik X of Denmark is giving the opening address. The Duchess of Edinburgh will attend. Danish Prime Minister Mette Frederiksen is on the bill.

And so is Pinterest CEO Bill Ready, whose company helped pay for the Institute’s creation.

Who’s Funding This?

The Youth AI Safety Institute is bankrolled by a mix of philanthropic donors and deep industry money.

The industry funders are Anthropic, the OpenAI Foundation, and Pinterest. All three make AI products that the Institute will evaluate and rate. The Institute says it “maintains complete editorial independence over published results.” But the structural incentive is obvious enough to name. Companies are funding an organization that will publish safety ratings of their competitors, define what “safe” means, and push governments to enforce those definitions through law.

John Giannandrea, a former senior AI executive at both Apple and Google, sits on the Institute’s Board of Advisors. So does Murthy, who has publicly advocated for digital ID systems to combat online “misinformation” and worked directly with Big Tech companies to target speech the government classified as false during the Biden administration.

Common Sense Media CEO James P. Steyer framed the project by citing the Institute’s own polling. “Eight percent—that is the share of parents across the four countries we surveyed who are confident AI companies are prioritising teen safety,” Steyer said.

“For more than two decades, Common Sense Media has built the standards, ratings and research that families trust through every major technological transformation young people have lived through, from streaming to social media. Our Youth AI Safety Institute applies that work to AI: independent standards, real testing and clear accountability for the products young people use. Copenhagen is where that mission begins in Europe.”

The polling, conducted across Spain, Denmark, the Netherlands, and Poland by Common Sense Media, SocialSphere, and YouGov, found that 77% of parents want strong laws governing AI. The press materials use that number to argue for “stronger laws and child-centred AI governance,” which in the context of this particular coalition means more age verification, more content restrictions, and more government involvement in deciding what AI systems are allowed to say.

The Speaker List Tells You Everything

Every major speaker at the Copenhagen summit has a track record of pushing for expanded government control over online speech.

Clinton has backed digital ID proposals and repeatedly called for tighter restrictions on what people can say and share online. She told the summit, “Social media was a societal experiment unleashed on young people without oversight, accountability, or consequence for those who profited from it. We are still reckoning with what that cost us. AI will be more complex, more pervasive, and more consequential. That demands urgent investment, dedicated institutions, and leaders willing to be both vocal and unrelenting. Common Sense Media’s Youth AI Safety Institute is driving the kind of accountability this moment requires — and I’m looking forward to joining that global conversation in Copenhagen.”

Von der Leyen, who presided over the EU’s Digital Services Act and has defended expanded speech controls alongside Macron and Merz, said, “Our children are growing up in a digital world shaped by addictive algorithms. But it should be parents, not platforms, that raise them. Together, Europe must forge a harmonised approach and set new standards. Not by rejecting technology, but by protecting our children.”

Dawes runs Ofcom, the UK regulator that enforces the Online Safety Act and has already opened investigations into platforms like Telegram under its authority.

Chris Sherwood heads the NSPCC, which has openly supported weakening end-to-end encryption so that platforms can scan private messages before they’re sent. That is mass surveillance of everyone’s private communications, justified by the existence of children.

Murthy, who served as Surgeon General under Biden, has pushed for digital ID as a tool to fight “misinformation” and worked directly with tech companies to identify and suppress speech the government wanted gone. He told the press, “We are at great risk of making the same mistakes with AI that we made with social media: subjecting children to new technologies without adequate safety guardrails and thereby causing harm to countless lives.”

Vestager called the summit “where we must act now” and described the Institute as “a key part of the global AI safety ecosystem.”

Every person on this stage has supported giving governments or unaccountable regulatory bodies the power to decide what speech is acceptable. They are not even debating whether AI should be censored. They are coordinating how.
