AI-Generated Faces Fool Most People, Study Finds

Could you distinguish a photograph of a real person from a face conjured by artificial intelligence? For the majority of us, the answer is a resounding 'no', according to startling new research from the University of Greenwich.

The study found that the average person can correctly identify an AI-generated face only about a third of the time. That means your chances of success are worse than a coin flip: simply guessing at random would get you to 50%.

The Five Tell-Tale Signs of an AI Fake

Despite the sophistication of modern AI, these digital creations often contain subtle flaws, or 'artefacts', that can give them away. Co-author Professor Josh Davis, in conversation with the Daily Mail, highlighted five specific 'strange anomalies' to watch for.

Professor Davis advises scrutinising images for: a strangely warped or patterned nose; misaligned or mismatched ears; wonky, asymmetrical eyes; missing or misaligned teeth; and an unusual or blurred hairline.

'It is the things that aren't aligned in the image, things that don't look right, things that are out of place,' Professor Davis explains. 'You might see a slight discontinuity, things like ears in the wrong place. The nose wouldn't be quite right; there would be some strange patterns around it that look sort of artificial.'

Why We Struggle to Spot the Fakes

The research, published in the journal Royal Society Open Science, involved 664 participants. The results were stark: on average, participants correctly identified the AI fakes a mere 31% of the time.

Even a select group known as 'super recognisers'—individuals with a natural talent for remembering faces—performed poorly, identifying fakes only 41% of the time. This phenomenon, termed the 'AI hyper-realism effect', suggests people are more likely to believe an AI-generated face is real than a photograph of an actual person.

Alarmingly, previous studies have indicated that people often find these AI-generated faces to be more trustworthy than images of real humans.

Can Training Improve Our Detection Skills?

There is a glimmer of hope. After a brief, five-minute training session explaining what digital artefacts to look for, participants' abilities improved significantly.

Following the training, average detection rates jumped to 51%, while the super recognisers achieved a 64% success rate. However, Professor Davis cautions that this level of accuracy is still perilously close to random chance.

'You could toss a coin and be just as accurate,' he states. 'A person is unlikely to be able to make an accurate decision, even after training. I think that is the real risk.'

As AI image generation technology continues to advance at a rapid pace, the ability to distinguish fact from digital fiction is becoming both more difficult and more critical for everyone.