The Gates Foundation and OpenAI have announced a $50mn initiative to introduce artificial intelligence tools into primary healthcare networks across Rwanda and other African nations by 2028.
The project, named Horizon1000, is meant to relieve overwhelmed medical workers and improve access to care, but its approach is renewing questions about how data-driven systems are being tested on vulnerable populations.
At the World Economic Forum in Davos, Bill Gates described the plan as a breakthrough for under-resourced countries. “We aim to accelerate the adoption of AI tools across primary care clinics, within communities and in people’s homes,” he said, calling the technology a possible “game-changer in expanding access to quality care.”
The foundation and OpenAI say the tools will help with patient records and clinical evaluations, giving health workers more time and better guidance. Gates emphasized that the project will “support health workers, not replace them.”
He noted that sub-Saharan Africa faces an estimated shortfall of nearly six million health professionals, leaving many in what he called an “impossible situation” where they must “triage too many patients with too little administrative support, modern technology, and up-to-date clinical guidance.”
Hospitals around the world are already experimenting with artificial intelligence to automate medical notes, summarize consultations, and flag potentially serious symptoms. Systems like ChatGPT and Gemini are now used to generate documentation that once required hours of manual effort.
Yet this growing dependence on algorithmic systems in healthcare introduces a layer of risk that goes beyond efficiency.
To function, these models rely on immense datasets, often containing personal or identifiable medical information. In regions without strong privacy legislation, the line between helpful automation and invasive data collection can easily blur.
OpenAI’s chief executive, Sam Altman, highlighted the social potential of the technology, saying: “AI is going to be a scientific marvel no matter what, but for it to be a societal marvel, we’ve got to figure out ways that we use this incredible technology to improve people’s lives.” His statement reflects the optimism surrounding AI in medicine, but the implementation context matters.
Africa has become a frequent starting point for large-scale technology pilots funded by global foundations and corporations. From digital identity programs to vaccine logistics, the continent is often chosen for early trials that later influence global health strategies.
Gates argues this accelerates innovation where resources are scarce. However, such experiments can also occur in environments where informed consent, data governance, and regulatory oversight are still developing or even non-existent.
The Gates Foundation has said it will monitor and audit the AI models for safety, bias, and accuracy, rolling out the technology gradually and tailoring it for local needs. Rwanda, for example, has established a national health intelligence centre to use AI in analyzing data at the community level.
Language remains a persistent challenge. Many leading AI systems are trained primarily on English-language data, which limits their ability to interpret medical terms and symptoms described in local dialects. A 2023 study from the Massachusetts Institute of Technology found that medical questions containing typos or informal phrasing were 7 to 9 per cent more likely to produce an incorrect recommendation against seeking care, even when the clinical meaning was identical.
Such findings illustrate how easily a model’s training data can reproduce inequality. Patients who are not fluent in English or who communicate in non-standard ways risk being misunderstood by the very systems designed to assist them.