The notion that whoever controls and shapes AI could wield significant influence over large swathes of society may become one of the most alarming and prominent concerns of the next few years.
In a recent episode of “Unconfuse Me with Bill Gates,” OpenAI CEO Sam Altman and tech billionaire Bill Gates delved into the controversial potential of artificial intelligence (AI) as a tool for maintaining democracy and promoting world peace.
The episode aired on January 11, 2024.
Read the transcript for the episode here.
The conversation explored the idea of using artificial intelligence as an instrument to foster unity in society, enhance global amity, and help overcome geopolitical polarization.
Microsoft, which Gates founded, and OpenAI, which Altman leads and which partners closely with Microsoft, both promote the use of AI to solve global issues.
Gates spoke excitedly on the topic: “I do think AI, in the best case, can help us with some hard problems…Including ‘polarization’ because potentially that breaks democracy and that would be a super bad thing.”
In addition to resolving polarization, the two heavyweights also discussed the notion of AI potentially acting as a peacemaking tool.
Gates said, “Whether AI can help us go to war less, be less polarized; you’d think as you drive intelligence…I’d love to have people working on the hardest human problems, like whether we get along with each other. I think that would be extremely positive if we thought the AI could contribute to humans getting along with each other.”
Sam Altman responded positively to Gates’s vision, stating: “We’ve got to find out and see, but I’m very optimistic. I agree with you, what a contribution that would be.”
“If the key is to stop the entire world from doing something dangerous, you’d almost want global government, which today for many issues, like climate, terrorism, we see that it’s hard for us to cooperate,” Gates said when the conversation turned to global threats such as nuclear weapons.
On the topic of polarization, Altman and Gates lamented what they characterized as the government’s failure to act on social media “polarization,” but expressed hope that there would be potential with AI.
“I don’t understand why the government was not able to be more effective around social media, but it seems worth trying to understand as a case study for what they’re going to go through now with AI,” Altman added.
The discussion notably omits a critical aspect: the influence of the programmers’ own beliefs and principles on the AI’s functioning. The designers and developers of AI systems inherently embed their own ideas about democracy, free speech, and governance into the AI’s algorithms. This raises significant concerns about the impact of these personal biases on the AI’s neutrality and its ability to make fair, unbiased decisions.
The prospect of AI systems being programmed with particular ideologies could have profound implications for free speech. If an AI is designed to favor certain political or social viewpoints (excluding those it decides are “polarizing”), it could potentially suppress opposing perspectives, leading to a form of digital censorship.
This becomes especially concerning in the context of AI platforms that manage large-scale public discourse, such as social media algorithms or news aggregation tools. The power to subtly shape public opinion and control the narrative on critical issues could be an unintended consequence of AI programmed with specific democratic ideals. Or perhaps that’s the intention all along.