
White House Pushes for Collaboration With Big Tech on “Safe” AI

A White House meeting raises questions about whether new AI safety protocols will serve as a tool for political control.


On Thursday, a pivotal meeting at the White House brought together leaders from major tech corporations, including OpenAI, Anthropic, Nvidia, Microsoft, Google, and Amazon, alongside top executives from key American power and utility companies. Although the discussions officially centered on the energy demands of the rapidly advancing artificial intelligence sector, much of the underlying focus was on the drive to develop “safe” and “responsible” AI, a theme that has sparked considerable debate about who gets to define what is safe and responsible.

Among the notable attendees were Nvidia CEO Jensen Huang, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Microsoft president Brad Smith, Google president Ruth Porat, and AWS CEO Matt Garman. The discussions aimed to fortify public-private collaboration, ostensibly to support the energy needs of continued AI development, yet the gathering also delved into the ramifications of tightened safety and ethical standards.


The White House announced the establishment of a new task force to streamline policy coordination across governmental departments, signaling a strategic push to integrate safety protocols more deeply into AI development. The move reflects a growing governmental interest in tightening the leash on AI innovation under the banner of public safety and ethical considerations.

Leaving the meeting, Nvidia CEO Jensen Huang remarked, “We’re at the beginning of a new industrial revolution. This industry is going to be producing intelligence, and what it takes is energy… So we’ve got to make sure that everybody understands the needs coming, the opportunities of it, the challenges of it, and doing it in the most efficient and scalable way we can.” Yet beneath these comments lies a critical dialogue about the potential stifling of speech through increased safety regulations.

An OpenAI representative emphasized the significance of expanding US infrastructure not just for economic growth but as a strategic move to anchor the nation’s leadership in ethical AI development. However, this framing raises concerns that such safety measures could morph into mechanisms of control, shaped by political and commercial interests rather than genuine ethical considerations.

The implications of these discussions are profound, especially given the administration’s October 2023 executive order, which mandated new safety assessments and research into AI’s impact on labor. The order reflects an intensified push to create a controlled environment for AI development, one that privileges governmental and large corporate influence over a more decentralized innovation ecosystem. This approach could lead to a form of gatekeeping in which only certain entities have a say in what constitutes “safe” AI, marginalizing smaller developers and researchers who may hold differing views on AI ethics and development.

The recent decision by OpenAI and Anthropic to permit the US AI Safety Institute to review their new AI models before public release is stirring concerns about government influence over technological speech. By enabling a governmental body, housed within the National Institute of Standards and Technology at the Department of Commerce, to serve as a gatekeeper, the move introduces a layer of control that extends beyond safety and into the realm of regulating speech through technology.
