The Perilous Push of AI into Healthcare for Marginalised Communities
As generative artificial intelligence is increasingly integrated into medical settings across the United States, a troubling pattern is emerging. This technological shift, while marketed as an efficiency tool, carries significant risks of deepening existing health inequities, particularly for unhoused individuals and those with low incomes. The drive to implement AI solutions often overlooks the fundamental need for patient-centred care, instead prioritising corporate efficiency over human connection and diagnostic accuracy.
When AI Replaces the Doctor in the Examination Room
A concerning example is unfolding in southern California, where homelessness rates are among the nation's highest. Here, a private company called Akido Labs operates clinics serving unhoused patients and others with limited financial means. Their operational model involves medical assistants conducting consultations while AI systems listen to conversations, generating potential diagnoses and treatment plans for subsequent doctor review. The company's stated ambition, as revealed to the MIT Technology Review, is to effectively "pull the doctor out of the visit." This approach represents a dangerous precedent in healthcare delivery.
This is not an isolated case. A 2025 survey by the American Medical Association indicated that approximately two-thirds of physicians now utilise AI to assist with daily tasks, including patient diagnosis. Meanwhile, US lawmakers are contemplating legislation that would formally recognise AI's capability to prescribe medication. While this technological trend affects nearly all patients, its impact is far more severe for economically disadvantaged communities, who already confront substantial barriers to accessing quality healthcare and experience higher rates of mistreatment within medical systems.
Systemic Bias Embedded in Algorithmic Diagnostics
The fundamental problem lies in AI's inherent limitations. These systems do not possess independent thought; they operate through probabilities and pattern recognition, which can systematically reinforce and amplify existing societal biases. Research substantiates these concerns. A pivotal 2021 study published in Nature Medicine examined AI algorithms trained on extensive chest X-ray datasets. It found these algorithms consistently under-diagnosed Black and Latinx patients, individuals recorded as female, and those insured through Medicaid.
Further evidence emerged in a 2024 study revealing that an AI tool misclassified breast cancer screening results for Black patients, producing significantly higher odds of false positives than for white patients. This algorithmic bias means some clinical AI tools perform notably worse for Black patients and other people of colour, thereby risking the exacerbation of health disparities for already marginalised groups. The question "Isn't something better than nothing?" becomes irrelevant when that "something" is fundamentally flawed and potentially harmful.
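To make the statistical claim concrete: disparities of the kind reported in these studies are typically measured as per-group error rates, meaning the share of genuine cases an algorithm misses (under-diagnosis) and the share of healthy cases it wrongly flags (false positives), broken out by demographic group. The short Python sketch below shows how such an audit is computed. It uses entirely synthetic records and hypothetical group labels, and is an illustration of the metric rather than code or data from either study.

```python
# Illustrative sketch (not from the cited studies): computing per-group
# error rates to surface the kind of disparity described above.
# All records below are synthetic; the group labels are hypothetical.

from collections import defaultdict

# Each record: (demographic_group, true_label, model_prediction)
# 1 = condition present / flagged, 0 = condition absent / not flagged.
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

def per_group_error_rates(rows):
    """Return under-diagnosis (false-negative) and false-positive rates per group."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for group, truth, pred in rows:
        c = counts[group]
        if truth == 1:
            c["pos"] += 1
            if pred == 0:
                c["fn"] += 1  # missed diagnosis (under-diagnosis)
        else:
            c["neg"] += 1
            if pred == 1:
                c["fp"] += 1  # false alarm (false positive)
    return {
        group: {
            "under_diagnosis_rate": c["fn"] / c["pos"] if c["pos"] else None,
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
        }
        for group, c in counts.items()
    }

if __name__ == "__main__":
    for group, rates in per_group_error_rates(records).items():
        print(group, rates)
```

On real screening data, a markedly higher under-diagnosis rate for one group means the model is failing that group more often even when its overall accuracy looks acceptable, which is precisely why aggregate performance figures can conceal the harms described here.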
The Erosion of Informed Consent and Patient Agency
Compounding these diagnostic risks is the frequent lack of transparency. In some instances, patients are not adequately informed that AI is being used in their care. One medical assistant admitted that while patients know an AI system is listening, they are not told it generates diagnostic recommendations. This practice disturbingly echoes historical eras of exploitative medical racism, where procedures were conducted without informed consent. The removal of patient agency in determining what technologies are implemented in their healthcare is a significant ethical breach.
AI's Expanding Role in Deciding Access to Care
The potential consequences extend far beyond diagnostic inaccuracies. Advocacy group TechTonic Justice estimates that approximately 92 million Americans with low incomes have fundamental aspects of their lives, including healthcare access, decided by AI systems. These decisions range from Medicaid benefit amounts to eligibility for Social Security disability insurance.
This is not theoretical. Current federal court cases highlight the real-world impact. In Minnesota, Medicare Advantage customers have sued UnitedHealthcare, alleging coverage denials based on erroneous determinations by the company's nH Predict AI system. Some plaintiffs represent estates of patients who allegedly died due to denied medically necessary care. A similar case against Humana is proceeding in Kentucky, where plaintiffs argue the AI system makes generic recommendations based on incomplete medical records. While these cases remain ongoing, they signal a dangerous trend of AI gatekeeping essential healthcare for vulnerable populations.
Resisting Medical Classism in the Age of Automation
We must firmly reject the notion that unhoused and low-income patients should serve as testing grounds for AI healthcare rollouts. The documented harms—including systematic bias, eroded consent, and restricted access—far outweigh the unproven benefits touted by technology ventures. Given the substantial barriers these communities already face, it is imperative they receive genuine patient-centred care from human healthcare providers who listen to their needs and priorities.
Creating a healthcare standard where practitioners take a back seat to privately operated AI systems disempowers patients and institutionalises medical classism. The development and implementation of any healthcare AI must involve rigorous evaluation and leadership from the communities it is intended to serve. We cannot allow technology to widen the chasm of health inequity; the human element in medicine must remain paramount, especially for those most vulnerable.