The Biden administration is looking to regulate artificial intelligence systems to ensure that AI produces approved outputs, including assessing whether AI is promoting “misinformation.”
In a speech at the University of Pittsburgh, Alan Davidson, the assistant secretary of communications and information at the National Telecommunications and Information Administration (NTIA), said that auditing AI systems is one way of building trust in the developing technology.
“Much as financial audits create trust in the accuracy of financial statements, accountability mechanisms for AI can help assure that an AI system is trustworthy,” Davidson said. “Policy was necessary to make that happen in the finance sector, and it may be necessary for AI.”
“Our initiative will help build an ecosystem of AI audits, assessments, and other mechanisms to help assure businesses and the public that AI systems can be trusted,” he added.
The stated aim is to ensure AI systems do what their developers say they do, respect privacy, and that they do not lead to “discriminatory outcomes or reflect unacceptable levels of bias.” The audits will also determine if AI systems “promote misinformation, disinformation, or other misleading content.”
The NTIA said that it was accepting public comment on the best ways to approach the regulation of AI.
Speaking at a meeting of the President's Council of Advisors on Science and Technology (PCAST), President Joe Biden addressed the topic of artificial intelligence, saying that there is not yet proof that it is dangerous but that tech companies have a responsibility to ensure their products are safe.
“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” he said.
Asked if the technology is dangerous, the president said, “It remains to be seen. It could be.”
According to Biden, AI has the potential to help address issues like climate change and disease, but it is also crucial to assess its potential risks to national security, the economy, and society.
He used social media as an example of how a lack of safeguards can result in technologies being harmful.
“Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people,” Biden said.