AI in UK Healthcare: The Hidden Cost of Algorithmic Medicine

The quiet hum of computers is increasingly replacing human conversation in British doctors' surgeries, as artificial intelligence transforms medical practice at an unprecedented pace. What begins as a promise of efficiency may ultimately cost us the essence of care itself.

The Algorithmic Consultation

During a recent medical appointment, a woman in her seventies named Pamela found herself speaking to both her doctor and his computer simultaneously. As she described her increasing breathlessness when climbing stairs, the clinic's new AI scribe was already transcribing, summarising, and highlighting keywords from their conversation.

The doctor, apparently satisfied the algorithm had captured Pamela's symptoms adequately, turned away from her to review the screen while she continued speaking. The AI note was fluid and accurate in its clinical summary, but it completely missed the catch in her voice when she mentioned the stairs, the flicker of fear about her increasing isolation, and the unspoken connection to her mother's traumatic death that the doctor never explored.

The Rapid Rise of Medical AI

Scenes like Pamela's are becoming commonplace across the UK health system. While physicians have traditionally resisted technologies that threatened their authority, artificial intelligence is breaking that pattern by sweeping into clinical practice faster than almost any tool before it.

Recent data show that in 2024 two-thirds of American physicians – a 78% jump on the previous year – and 86% of health systems used artificial intelligence in their practice. Dr Robert Pearl, former CEO of The Permanente Medical Group, predicts that "AI will be as common in healthcare as the stethoscope".

Policymakers and business interests promise that AI will solve physician burnout, lower healthcare costs, and expand access. Prominent physicians such as Dr Eric Topol have hailed AI as the means to finally restore humanity to clinical practice by liberating doctors from documentation drudgery.

The Human Cost of Algorithmic Care

However, when installed in a health sector that prizes efficiency and profit extraction, AI becomes not a tool for care but simply another instrument for commodifying human life. The technology's fundamental limitation lies in its inability to capture what makes us human.

One young woman's experience illustrates the problem. Before her psychiatric appointment, she had rehearsed her story with ChatGPT at least ten times, refining her narrative according to which words elicited the most clinically appropriate responses. When she finally spoke to her psychiatrist, she used the precise, clinical language the AI had taught her – language largely stripped of emotion, personal history, and the idiosyncratic way she actually experienced her suffering.

Her deep fears were encased in borrowed phrases, translated into a format she thought would be recognised as legitimate medical concerns. While this made bureaucratic documentation easier, much else was lost in the process.

The Political Dimensions of Medical AI

The faith in AI reflects a dangerous misunderstanding of care itself, decades in the making through the unquestioned adoption of evidence-based medicine (EBM). While EBM brought real gains by challenging practices based on habit, it also narrowed the scope of the clinical encounter.

The messy, relational dimensions of care – the ways physicians listen, intuit, and elicit what patients may not initially say – became secondary to standardised protocols. Doctors began treating not singular people but data points, and the complexity of patients' lives was crowded out by metrics and algorithms.

This transformation has profound political implications. A system that alienates people in their most vulnerable moments – bankrupting them, gaslighting them, leaving them unheard – breeds despair and rage. It creates conditions where authoritarians gain traction.

Reclaiming Genuine Care

If the danger of AI medicine is forgetting what genuine care entails, we must collectively recall the foundations of caregiving that have been obscured under health capitalism. Care is not merely a matter of diagnoses and prescriptions but of providing support alongside the cultivation of genuine concern for others.

This kind of care is inseparable from politics and the possibility of community. As philosophers and feminist theorists have long argued, care is not only a clinical task but an ethical and political practice. To be truly listened to – to be recognised not as a case but as a person – can change not just how one experiences illness, but how one experiences oneself and the world.

The rush to automate care is not politically neutral. To hollow out medicine's capacity for presence and recognition is to hollow out one of the last civic institutions through which people might feel themselves to matter to another human being.

Technology, including AI, need not be inherently dehumanising. In a health system oriented toward genuine care, AI could help track medication safety, identify vulnerable individuals for support, or remedy inequities. But such uses depend on a political economy premised on care, not extraction and commodification of human life.

True progress requires refusing the illusion that faster and more standardised is the same as better. Genuine care is not a transaction to be optimised but a practice and relationship to be protected – the fragile work of listening, presence, and trust that forms the bedrock of both healing and democracy itself.