During a Tuesday Senate Judiciary Committee hearing titled “Oversight of A.I.: Rules for Artificial Intelligence,” several senators suggested that generative AI tools should be restricted and that only companies with a government-approved license should be able to provide the software.
Sam Altman, the CEO of OpenAI, testified during the hearing and agreed with the push to restrict access to AI via licensing.
During his opening statement, Senator Richard Blumenthal (D-CT) lamented that Congress failed to regulate social media and said: “Now we have the obligation to do it on AI before the threats and risks become real.”
Blumenthal went on to propose “limitations on use” where “the risk of AI is so extreme,” including outright bans where AI makes “decisions that affect people’s livelihoods.” He also called for AI companies to be held liable when they “cause harm.”
In his opening statement, Altman noted that OpenAI makes “significant efforts to ensure that safety is built into our systems at all levels.”
He added that GPT-4, the latest version of the large language model that powers OpenAI’s generative AI tool, ChatGPT, is more likely to refuse “harmful requests” than any other widely deployed model of similar capability.
Altman then called for a US government licensing regime that applies to the development and release of AI models above a certain capability threshold. Additionally, he welcomed AI companies partnering with governments and said that as part of these partnerships, companies and governments should examine “opportunities for global coordination.”
Altman’s written testimony, which he pointed to during his opening statement, also contained a strong call for a federal government licensing scheme:
“It is vital that AI companies—especially those working on the most powerful models—adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.”
This written testimony also recommended that AI licensing regulations be deployed on a global scale.
Senator Lindsey Graham (R-SC) agreed with Altman’s suggestion that AI companies should have to obtain a license from the government and proposed that a new agency should be responsible for governing AI and licensing. He also envisioned empowering this agency to put AI companies out of business if they don’t adhere to the standards imposed by the government.
“We need to empower an agency that issues a license and can take it away,” Graham said. “Wouldn’t that be some incentive to do it right if you could actually be taken out of business?”
Altman responded: “Clearly that should be part of what an agency can do.”
The idea of restricting AI via licensing came up again when Senator John Kennedy (R-LA) asked Altman for his AI regulation recommendations. Altman proposed forming a new agency that can hand out licenses and take them away based on “safety standards.”
Senator Mazie Hirono (D-HI) brought up licensing when questioning the witnesses and asked NYU Professor Emeritus Gary Marcus what type of regulation scheme he would contemplate. Marcus recommended a similar licensing scheme to the one proposed by Altman where companies “have to make a safety case” to get a license.
Altman continued to push for licensing when asked about the scope of AI regulations by Senator Jon Ossoff (D-GA). The OpenAI CEO suggested that companies should have to obtain a license when they surpass a “threshold of compute” or when they surpass “capability thresholds.”
When Senator Cory Booker (D-NJ) raised concerns about the corporate concentration of the generative AI space, which is currently dominated by the Microsoft-backed OpenAI, Google-backed Anthropic, and Google’s Bard AI chatbot, Altman revealed the endgame of the licensing regime that he’s pushing for. He acknowledged that while there will likely be many models, “there will be a relatively small number of providers that can make models at the true edge.”
Marcus followed up by warning about the risk of “technocracy combined with oligarchy where a small number of companies influence people’s beliefs through the nature of these systems.”
Senator Peter Welch (D-VT) expressed support for creating an agency that oversees AI.
“You don’t build a nuclear reactor without getting a license,” he said. “You don’t build an AI system without getting a license. It gets tested independently.”
Altman agreed with Welch, describing it as “a great analogy.” Marcus told Welch that there would need to be both “pre-deployment and post-deployment” licenses.
You can watch the full hearing and access written testimony here.
While several senators and Altman favor a government licensing scheme that dictates which companies are allowed to build the AI systems used by the public, such a scheme would likely entrench the dominant companies.
Big Tech and large corporations with more resources and better connections would find it easy to obtain licenses. Meanwhile, small businesses, startups, or individual developers with limited resources would find it harder to get a license.
Influential AI companies such as OpenAI also already have the ear of the lawmakers who would implement an AI licensing scheme, giving the dominant players an avenue to shape the rules to their own benefit while excluding competitors.
Licensing could also restrict free speech. Much of the justification for licensing during the hearing centered on preventing “harm,” a far-reaching, subjective term that has repeatedly been used to justify both Big Tech platform censorship and government censorship. If such a scheme were implemented, governments could decide that truthful but inconvenient content is harmful and make censoring it a condition of holding a license.