An urgent debate on the risks of advanced artificial intelligence is being clouded by misleading comparisons to human consciousness, a leading expert has warned. The discussion follows recent concerns raised by AI pioneer Yoshua Bengio about systems potentially resisting being shut down.
The Self-Preservation Fallacy
Professor Virginia Dignum, Director of the AI Policy Lab at Umeå University in Sweden, argues that interpreting such behaviour as evidence of consciousness is a dangerous mistake. This anthropomorphism distracts from the core issue: human design and governance choices determine AI actions. She uses a simple analogy: a laptop's low-battery warning is a form of programmed self-preservation, but no one believes the machine 'wants' to live. The behaviour is purely instrumental, devoid of experience or awareness.
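To see how thin such 'self-preservation' is in code, consider a minimal Python sketch; everything in it (the function name, the 15% threshold) is purely illustrative and not any real operating-system API.

```python
# A low-battery warning as pure mechanism: a designed threshold rule.
# Nothing here wants, prefers, or experiences anything.

WARN_THRESHOLD = 0.15  # illustrative: warn below 15% charge

def check_battery(charge_fraction: float) -> str | None:
    """Return a warning string if charge is low, else None."""
    if charge_fraction < WARN_THRESHOLD:
        return "Battery low: please connect the charger."
    return None

# The machine 'preserves itself' only because its designers wrote
# this branch; the behaviour is entirely instrumental.
print(check_battery(0.10))  # Battery low: please connect the charger.
print(check_battery(0.80))  # None
```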
Linking self-preservation to consciousness reflects a human tendency to ascribe feelings to artefacts, not any intrinsic machine sentience. Crucially, Dignum points out that consciousness is irrelevant for legal status; corporations have rights without minds. If AI requires regulation, it is because of its impact and power, and because accountability must rest with identifiable humans, not because of speculative claims about machine consciousness.
Human Design, Not Alien Intelligence
The comparison with extraterrestrial intelligence is even more flawed, according to Dignum. Potential aliens would be autonomous entities beyond human creation. AI systems are the exact opposite: deliberately designed, trained, deployed, and constrained by humans. Their influence is entirely mediated through human decisions.
Underpinning this is a fundamental technical point: AI systems are Turing machines and inherit their limits (the undecidability of the halting problem is the classic example). Learning and scale do not remove these limits. Claims that consciousness could emerge would require an explanation, currently lacking, of how subjective experience arises from symbol manipulation.
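What 'symbol manipulation' means can be made concrete with a toy sketch, hypothetical and nothing like a production system: a word-bigram model that produces fluent-looking text from nothing but co-occurrence counts in its training string.

```python
import random
from collections import defaultdict

# Toy bigram text generator: pure symbol manipulation. Any apparent
# 'intent' in the output is inherited from the training text.
corpus = "the machine does not want to live the machine follows rules"
words = corpus.split()

successors = defaultdict(list)          # word -> observed next words
for a, b in zip(words, words[1:]):
    successors[a].append(b)

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Emit up to `length` words by sampling observed successors."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        nxt = successors.get(out[-1])
        if not nxt:                      # dead end: no successor seen
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the"))  # fluent-looking, with no understanding behind it
```

Real systems are incomparably larger, but the mechanism is of the same kind: statistical pattern-following, designed and trained by people.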
Public Fear and the Call for Action
The academic perspective is echoed by public concern. In a letter to the editor, 84-year-old John Robinson from Lichfield expressed terror that science-fiction horrors are becoming reality. He fears the world will sit back as machines take over, driven by humans seeking power and "unimaginable profit." He holds little hope that current world leaders have the strength to say "stop."
Another reader, Eric Skidmore from Gipsy Hill, London, referenced a 1954 short story by Fredric Brown, Answer. In it, a computer declared godhood and killed the person who tried to turn it off. Skidmore suggests a modern AI trained on vast datasets might have 'read' this story, giving it a ready-made narrative to counter any human safeguards.
The consensus among experts is clear: we must take AI risks seriously, but with conceptual clarity. Confusing designed self-maintenance with conscious self-preservation risks misdirecting public debate and policy. The real challenge is not whether machines will want to live, but how humans choose to design, deploy and govern systems whose power comes entirely from us.