The Gates Foundation’s new AI initiative: attempting to leapfrog global health inequalities?
Jonathan Shaffer1, Arsenii Alenichev2, Marlyn C Faure3
  1. Sociology, University of Vermont, Burlington, Vermont, USA
  2. Wellcome Centre for Ethics and Humanities, Ethox Centre, Oxford University, Oxford, UK
  3. The Ethics Lab, The Department of Medicine and the Neuroscience Institute, University of Cape Town, Rondebosch, Western Cape, South Africa
  Correspondence to Dr Jonathan Shaffer; jonathan.shaffer{at}uvm.edu

The Bill & Melinda Gates Foundation has long been criticised for championing the trend of socially reductive, ‘magic bullet’ technical ‘solutions’ to the complex, historically shaped, politically conflicted problems at root of global health inequities.1–5 Their August 9th announcement of the launch of a new US$5 million, 48 project funding push6 to launch new ‘artificial intelligence (AI) large language models (LLM) in low-income and middle-income countries to improve the livelihood and well-being of communities globally’ is set to continue this hegemonic global health trend. And, as much as ‘magic bullets’ can solve issues, they, as bullets, are also capable of wounding and causing harm.

There are at least three reasons to believe that the unfettered imposition of these tools into already fragile and fragmented healthcare delivery systems risks doing far more harm than good.

We are not Luddites. New tools of technology, biomedicine, scientific knowledge and population care have often made life better and safer for those with access to and control over their use.7 LLMs and AI, however, will not be so equity-advancing, despite the Gates Foundation’s overheated rhetoric of ‘fostering innovation to solve pressing global health and development problems’, rhetoric that will inevitably collide with the complex reality of local contexts and power asymmetries in which such interventions are set to unfold.

The first reason is pragmatic. A common mantra in computer science and machine learning is ‘Garbage In, Garbage Out’8; that is, if you feed biased or low-quality data into a machine that supposedly ‘learns’, what comes out is a reproduction of that bias, perhaps even worse than before.
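To make the mechanism concrete, consider a deliberately simple, hypothetical sketch (our own construction, in Python with the scikit-learn library and synthetic data; it is not drawn from any of the funded projects). A model is trained on records in which one group’s need for care has been historically under-recorded; the model then faithfully reproduces that disparity in its ‘predictions’.

```python
# A toy, hypothetical illustration of 'Garbage In, Garbage Out':
# a model trained on historically biased records simply reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with an identical underlying distribution of clinical need...
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
need = rng.normal(size=n)            # true need, same for both groups

# ...but the historical record systematically under-recorded need for group B.
recorded_need = need - 0.8 * group   # bias baked into the 'training data'
received_care = (recorded_need > 0).astype(int)

X = np.column_stack([group, recorded_need])
model = LogisticRegression().fit(X, received_care)

# The trained model 'learns' the disparity: biased data in, biased triage out.
predicted = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted care rate = {predicted[group == g].mean():.2f}")
```

The point is not the particular model: any ‘learner’ fit to such records will report the inequity encoded in them as if it were ground truth.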

What data, then, are the AI tools deployed by the Gates-funded projects being trained on? Practically speaking, the data are structurally racially biased because, as numerous scholars and activists have argued, the world and its governing political economy are structurally racist.9 The LLM or AI tools being implanted around the world by the Gates Foundation and other corporations are built on ‘training data’—the inputs from which AI ‘learns’ in order to produce an output—constructed from a hugely unequal social world marked by racist structural violence, the legacies of colonial rule, ongoing labour extraction and precarity as a new norm.

Some may argue that the rapid and unceasing expansion of training data fed into LLMs—especially the expansion and inclusion of data collected from marginalised groups—will ‘iron out’ the biases or systematic errors that current iterations of LLMs and AI tools seem to produce.

While we are sympathetic to the cause, we disagree that simply providing better ‘training’ data will solve chronic socioeconomic issues. Data—no matter how voluminous—are neither neutral nor objective. Decades of research in the philosophy of science, science and technology studies, and the sociology of science have pointed to the moral, locally pragmatic and historically contingent nature of the social epistemology, evidentiary standards and institutional struggles that shape how datasets are built, commensurated with morally or theoretically laden codes or cases, and turned into socially meaningful evidence.10–13 All of the above is vividly ‘social’ and ‘contextual’; none of it is merely technical, nor can it be thought of as a neutral, automatic process in a vacuum. The great mystification and misdirection of LLMs and AI has been their ability to black box, and thus obfuscate, these social facts.14

For years, black feminist social scientists15 have warned us, pointing to the ways that AI and LLMs reproduce and reify existing dimensions of difference, exclusion and inequality, especially along racist lines.16 17 Recent work also shows visually how AI tools trained on structurally biased datasets reproduce, double down on and ‘hallucinate’ outputs that reinscribe symbolically violent, racialised and abusive imagery in global health.18 Instances of more mundane harms caused by the biases and inaccuracies of LLM outputs are mounting.19–21 The exploitative labour conditions experienced by people in the Global South tasked with manually observing and coding instances of ‘violence, hate speech and sexual abuse’ for less than US$2 per hour ought to burst the bubble of the neutral, objective and frictionless datafication fantasy.22 This scholarship, emerging AI activism and mounting critical evidence cannot simply be brushed aside.23

The second reason to oppose the careless deployment of AI in global health is the near-complete absence of real, democratic regulation and control—an issue that applies to global health more broadly. Specifically, there is a tension between access and control in largely neoliberal global health, with promises of the former almost always coming at the expense of the latter, as generations of scholars in global health and international development have highlighted since the 1980s. The textbook example is HIV/AIDS: the advent of powerful medications spurred a social movement for access, but control over intellectual property and pharmaceutical manufacturing capacity largely remained in the hands of corporations and governments in the Global North. Global North-led AI initiatives are structurally set up to reproduce this scenario. The tension between access and control appears even starker when one examines the global access to and production of effective vaccines for COVID-19 throughout the pandemic.24

The data and algorithms at the base of AI’s LLMs are owned and controlled exclusively by powerful corporate entities in the Global North—propped up by billions upon billions of dollars of speculative financial capital—with the aim of extracting value and profit from their widespread implementation.

Gates and others often gesture towards their commitment to the ‘responsible and safe use’ of AI tools, ducking for cover behind the authority of mainstream bioethicists and technocratic futurists. But at the end of the day, the hard, sharp edges of capital, command and control rest in the hands of a very few entities and individuals, notably including the Microsoft corporation itself, with its evident conflict of interest, having invested more than US$10 billion in OpenAI.25 And unlike antiretrovirals, these tools do not prolong life; rather, they seem poised to gut key caregiving institutions of the human labour morally invested in their flourishing.

Other violations of justice likely to be coproduced with widespread AI imposition include unwanted and unconstrained surveillance, especially in health and medicine.26 Ubiquitous data inputs, devices and wearables that record an expanding array of biometrics, together with global positioning system (GPS) geospatial tracking, enable the development of ‘highly personalised and targeted marketing and information campaigns as well as greatly expanded systems of surveillance’.27 Beyond intrusive, manipulative and extractive targeted advertisements, this surveillance apparatus will be wielded by political actors and states to control discourse on social media, monitor citizen activity and manipulate political opinion and voter behaviour. The spectre of authoritarian regimes using AI for social control seems an increasingly likely scenario.27 The absence of democratic governance of these tools is a grave and growing threat.

Additionally, for all the Gates Foundation’s discourse of ‘equity’ and ‘inclusion’ of health practitioners in the Global South, there is not yet any systematic mechanism to incorporate patients, citizens or the impoverished into meaningful political control or governance over the purposeful uses of these AI tools.28 29 AI is presented by those who own the technology and control the discourse as a universally smooth and infinitely compatible solution to deeply entrenched social problems everywhere.

This brings us to the third reason for opposition: programmatic obliteration. Health systems’ programmatic delivery of quality clinical caregiving is a complex, fraught, labour-intensive process anywhere, and especially within and among postcolonial geographies and peoples. On reading the ‘project descriptions’30 of the efforts funded in this first tranche of grant disbursements, the overwhelming impetus runs along the lines of: ‘we don’t have enough resources to do the care work in front of us; we therefore hope that implementation of AI tools will help us do more with less’. The descriptions evoke dreams of easily jumping over the material inequalities at the heart of these ongoing challenges.

This fantasy achieves two intertwined goals. First, it elides material inequalities as a key explanatory driver of global health challenges.31 Second, through this elision, the policy or programmatic prescriptions advanced avoid calls for costly material investments and instead are steered towards efficient, inexpensive, technologically focused quick fixes.

The ‘do more with less’ mindset is always mobilised to undermine proposals for expanded social care for the impoverished. Slouching towards hallucinations of efficiency, the neoliberal ethos perpetually moves towards hollowing out the material content of service provision for the poor. Experiencing a debilitating and dangerous psychotic break? You can now type with an algorithmic ‘therapist’. Well-trained caregivers with effective medications to manage such mental illness are, it seems, an efficiency bridge too far. Are you poor and facing the terrifying experience of gender-based violence? The provision of a lawyer capable of struggling for legal justice is just not going to be ‘cost-effective’ for you. But how about a robot-generated text app to ‘guide (young women) through the complex judicial system using everyday language’ on their own?

Programmatically, the imposition of AI tools for all things ‘global health’ will only further accelerate the erosion of the social care institutions available to the impoverished, all in the name of efficiency, and it will, of course, accrue enormous profits for those who own and control them.

There should be serious public discussion about the uses, and likely abuses, of AI in already teetering and impoverished ‘global health’ and domestic healthcare systems. Pragmatically, these tools were built from data encoding a deeply unequal social world and, like a cancer, AI risks metastasising and spreading malignant racist ideologies and priorities in new and terrifying ways. Politically, Gates’ active dissemination of these tools is little more than an instance of crass philanthrocapitalist profit extraction.32 The tools are controlled by a few corporations in North America and Western Europe; no meaningfully democratic means of control and constraint are available to most, and the rhetoric of ethics and equity is obfuscatory. Finally, programmatically, AI in global health will likely accelerate the neoliberal push to eviscerate institutions of social care.33 34

But none of this is inevitable. We can and must fight back.

Data availability statement

Data sharing not applicable as no datasets were generated and/or analysed for this study.

Ethics statements

Patient consent for publication

References

Footnotes

  • Handling editor Seye Abimbola

  • Twitter @jonshaffer

  • Contributors All authors contributed equally to the conceptualisation and writing of the article.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.