In a digital world increasingly populated by artificial people, the ability to distinguish a real human face from an AI-generated fake has become a critical skill. A groundbreaking new study, however, reveals that most of us are failing this visual test, with synthetic faces often appearing more 'real' than genuine ones.
The Illusion of Reality
Research published in a journal of the Royal Society has delivered a startling conclusion: people can correctly identify an AI-generated face only about a third of the time. The study, led by Dr Katie Gray of the University of Reading in collaboration with academics from the universities of Greenwich, Leeds, and Lincoln, involved 664 participants.
In a telling detail, participants would have fared better by closing their eyes and simply guessing: with an even mix of real and synthetic faces, random guessing yields 50% accuracy, well above what participants actually achieved. This highlights the profound challenge posed by advanced generative AI tools, which create images by analysing patterns in vast datasets of real human photographs.
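To see why 50% is the benchmark, a quick back-of-the-envelope simulation (a hypothetical illustration, not part of the study) shows that coin-flip guessing on an evenly split set of real and AI-generated faces converges on half right:

```python
import random

# Simulate a participant guessing blindly on a 50/50 mix of
# real and AI-generated faces. Label True = AI-generated.
random.seed(0)
trials = 100_000
faces = [random.random() < 0.5 for _ in range(trials)]    # ground truth
guesses = [random.random() < 0.5 for _ in range(trials)]  # coin-flip guesses

accuracy = sum(g == f for g, f in zip(guesses, faces)) / trials
print(f"Blind-guess accuracy: {accuracy:.1%}")  # ~50%, vs roughly a third in the study
```

Scoring around a third, as participants did, therefore means doing systematically worse than chance – a sign the fakes actively look more 'human' than the real thing.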
Dr Gray explained a phenomenon known as hyper-realism. 'AI-generated faces tend to look more average than real faces, but participants were more likely to judge faces that look average as real,' she told Metro. 'It's likely that several different factors are working together to make AI-generated faces appear more realistic than real faces.'
Training the Human Eye
The study also offered a glimmer of hope. Researchers discovered that even a brief, five-minute training session, which highlighted common anomalies found in AI creations, could significantly improve detection rates. After this minimal guidance, the average participant's accuracy in spotting a fake face rose to 51%.
An even more dramatic improvement was seen among a group dubbed 'super-recognisers' – individuals with a natural talent for remembering faces. Before training, this group scored 41%. Afterwards, their accuracy jumped to an impressive 64%.
Dr Eilidh Noyes, a psychologist and the study’s co-author from the University of Leeds, pinpointed five key anomalies that often betray an AI-generated face:
- Misaligned teeth
- A wonky nose
- Blurred hairlines
- Mismatched or misaligned ears
- Asymmetric eyes
'There are many factors that contribute to these anomalies,' said Dr Noyes. 'But some of them are linked to the data which the algorithms that make the faces are trained with.' The images used in the study were created with StyleGAN3, a generative adversarial network developed by NVIDIA, trained on a public repository of photographs.
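For readers curious about the mechanics, the snippet below is a minimal sketch of how a face is typically sampled from a pretrained StyleGAN3 generator, following the pattern in NVIDIA's public stylegan3 codebase. It assumes that repository is on the Python path, a CUDA GPU is available, and a checkpoint has been downloaded (the filename here is illustrative):

```python
import pickle
import torch

# Load a pretrained generator checkpoint (filename is illustrative; NVIDIA
# publishes face models trained on public photo datasets such as FFHQ).
# Unpickling requires the stylegan3 repository's modules to be importable.
with open('stylegan3-t-ffhq-1024x1024.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # exponential-moving-average generator

z = torch.randn([1, G.z_dim]).cuda()  # random latent code -> a brand-new "person"
c = None                              # class labels (unused for face models)
img = G(z, c)                         # NCHW float tensor, values in [-1, +1]

# Rescale to 8-bit RGB for saving or display.
img = ((img.clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
img = img[0].permute(1, 2, 0).cpu().numpy()  # HWC uint8 array
```

Every face produced this way is a statistical blend of the training photographs, which is one reason the outputs skew towards the 'average' look Dr Gray describes.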
A Growing Cybersecurity Threat
While the research might seem like a curious puzzle, it points to a serious and growing threat in the realm of cybersecurity. These hyper-realistic AI faces are the building blocks of deepfakes – eerily lifelike images and videos used for scams, misinformation campaigns, and blackmail.
Marijus Briedis, Chief Technology Officer at NordVPN, emphasised the gravity of the issue. 'Deepfakes aren’t just a technical issue, they’re a trust issue,' Briedis said. 'A little scepticism goes a long way, and research shows that even a few minutes of awareness can make you far better at spotting when something isn’t quite what it seems.'
This study builds on previous work, including research involving other AI models such as ChatGPT and DALL-E, which also found that people struggle to tell real faces from fakes. Professor Jeremy Tree of Swansea University, who led that earlier research, called the new findings 'a little surprising'.
He noted the positive implications of the training, stating, 'Anything that can help stem the somewhat dystopian possibility that AI images will never be identified has got to be good news for humanity.' He added that the challenge extends even to familiar faces, making public awareness and education more crucial than ever.