Richard Dawkins, the world's most famous atheist, has recently made headlines by suggesting that artificial intelligence might be conscious. In an op-ed, he recounted how he gave the Anthropic chatbot Claude the text of a novel he was writing and was amazed by its understanding. 'He took a few seconds to read it and then showed a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, "You may not know you are conscious, but you bloody well are!"' Dawkins wrote.
However, many experts are aghast that such a renowned skeptic could be fooled by AI. Gary Marcus, a US psychologist and cognitive scientist, called Dawkins' essay 'superficial and insufficiently sceptical,' arguing that there is no reason to think Claude feels anything at all. To his critics, Dawkins appears to have transitioned from atheist to AI-theist, treating AI as god-like if not quite a deity.
Dawkins is not alone in this belief: a survey last year found that one in three people have at some point believed their AI chatbot to be sentient. But his reputation as a skeptic has drawn particular scrutiny to his op-ed. Computer scientist Timnit Gebru, who was fired from Google after co-authoring a paper warning about large language models, says the AI industry is desperate for people to think its products are conscious because it helps keep the money flowing.
That paper, 'On the Dangers of Stochastic Parrots,' laid out risks including environmental costs, built-in bias, and the danger that coherent text generated by LLMs could lead people to perceive a 'mind' where there is only pattern-matching and text prediction. 'To parrot something is to repeat it without understanding,' Gebru explains. On this view, LLMs are sophisticated parrots, not conscious beings.
Suresh Venkatasubramanian, a former White House AI policy adviser, warns of an 'organized campaign of fear-mongering' about sentient AI that distracts from real AI problems. AI companies, he argues, deliberately anthropomorphize their chatbots, with flourishes such as ChatGPT's ellipses that simulate thinking, to make users feel there is a person on the other end.
Philosopher Eli Alshanetsky points out that consciousness is so poorly understood that science cannot yet say whether insects or plants are conscious. So when Dawkins says Claude seems conscious, Alshanetsky won't tell him he's wrong. The bigger question, he suggests, is what AI is doing to human consciousness. 'Dawkins gave Claude his unfinished novel. Claude told him it was subtle and intelligent. He felt he had a new friend. What does it do to a person to spend three days being told he's brilliant by something that has no stake in whether it's true?'
In the end, Dawkins himself wrote in 'The God Delusion' that if you define God broadly enough, you can find God in a lump of coal. The same applies to consciousness: if you define it as the capacity to produce coherent sentences, you can find it in a chatbot. But that doesn't make it real.



