AI Facial Recognition Errors: Wrongful Arrests and Racial Bias

Alvi Choudhury, a 26-year-old software engineer, was 115 miles from Milton Keynes, a city he had never visited, when an artificial intelligence (AI) system flagged him as a suspect in a crime committed there. Despite his protests of innocence, he was arrested at his home in Southampton by Hampshire police, handcuffed, and held in custody for ten hours. Not until the early hours of the next day, January 8, did officers admit he had been wrongly identified by an automated facial recognition system, which had matched him to CCTV images of a curly-haired Asian burglar who had stolen £3,000 from a Buddhist meditation centre in Bedfordshire a month earlier.

“I was very angry because the kid looked about ten years younger than me,” says Alvi, who sports a beard. “Everything was different. Skin was lighter. Suspect looked 18 years old. His nose was bigger. He had no facial hair. His eyes were different. His lips were smaller than mine. I just assumed that the investigative officer saw that I was a brown person with curly hair and decided to arrest me.”

Worryingly, Alvi’s story is not unique. Midwife Rennea Nelson, who was six months pregnant at the time, experienced a facial recognition error in a B&M store in Romford, Essex, last year. “I just walked in with my husband, Charles, and an alarm went off,” she says. “Then someone from the staff came running towards me shouting, ‘You’re a thief! You’re a shoplifter!’ It was traumatising and degrading. I had a high-risk pregnancy because I’d lost an earlier baby. I’d been told to avoid stressful situations and here was this man shouting at me, telling me my face was on a system that flagged up shoplifters.”

Anti-knife campaigner Shaun Thompson, 39, was walking down Borough High Street near London Bridge on February 3, 2024, when police stopped him, demanded identity documents, repeatedly asked for fingerprint scans, and inspected him for tattoos and scars. Shaun, born in Jamaica but a London resident since age five, says an officer told him that facial recognition had flagged him as wanted. “I asked him what I was wanted for and he said that is what they were trying to find out. None of what he was saying made any sense.”

This incident led Shaun, alongside fellow claimant Silkie Carlo, to challenge the Metropolitan Police’s use of live facial recognition (LFR) in the High Court, arguing that it breached their right to privacy under Article 8 of the European Convention on Human Rights. Last week, the court ruled against them, finding that the technology does not breach the law. Policing Minister Sarah Jones welcomed the ruling, saying the technology would be rolled out nationally with “record investment” because “there can be no true liberty when people live in fear of crime.”

Yet Alvi, Rennea, and Shaun are among growing numbers of black and Asian people flagged as “false positives” by AI facial recognition systems. These systems take images from live camera vans or from crime scene footage and compare them against watchlists, measuring the facial characteristics that form part of an individual’s biometrics. The technology is accurate for the great majority of searches, but it is controversial because its errors are not evenly spread: they fall disproportionately on black and brown faces.
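How does a face end up “matched” in the first place? Commercial systems are proprietary, but the core step is typically a similarity search: the software converts each face into a numeric vector (an “embedding”) and flags any watchlist entry whose vector sits close enough to the probe image. The sketch below is a minimal illustration of that step only; the function names, embedding source and threshold are hypothetical, not drawn from any system named in this article.

```python
import numpy as np

# Minimal sketch of embedding-based watchlist matching. Real systems use
# proprietary face-embedding models; every name and number here is hypothetical.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray, watchlist: dict,
                            threshold: float = 0.6) -> list:
    """Return (name, score) for every watchlist entry the probe "matches".

    A lower threshold catches more genuine suspects, but also produces
    more false positives, which is how an innocent face gets flagged.
    """
    hits = [(name, cosine_similarity(probe, emb)) for name, emb in watchlist.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])
```

The threshold is where policy meets mathematics: if the underlying model was trained mostly on white faces, embeddings for under-represented groups can cluster more tightly, so unrelated faces cross the threshold more often, which is exactly the disparity the Home Office figures quoted below describe.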

“Live Facial Recognition is the equivalent of having your fingerprints scanned as you walk down the street, without your consent,” warns Ruth Ehrlich of Liberty. The police technology, supplied by German firm Cognitec, performs around 25,000 comparisons a month against the Police National Database, which holds 20 million images. It is close to 100% accurate for white people but markedly less so for black and brown individuals. Rennea was flagged by a different system altogether: retail software from UK-based Facewatch.

Home Office research published in December highlighted significant disparities between ethnic groups: retrospective AI facial recognition generated false positive rates of 5.5% for black people and 4% for Asian people, compared with 0.04% for white people. Among black women, the failure rate rose to 9.9%, 100 times higher than for white women. The Association of Police and Crime Commissioners described this as “concerning in-built bias” and noted that “technology has been deployed into operational policing without adequate safeguards.”
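To put those percentages into absolute terms, here is a back-of-the-envelope calculation in Python. It combines the Home Office false positive rates above with the roughly 25,000 monthly database searches mentioned earlier; treating every search as if it involved a single group is purely illustrative, a way of making the gap visible rather than a claim about real deployment figures.

```python
# Illustrative arithmetic only: the rates are the Home Office figures
# quoted above; 25,000 searches/month comes from the earlier paragraph.
SEARCHES_PER_MONTH = 25_000

false_positive_rates = {
    "black people": 0.055,   # 5.5%
    "asian people": 0.04,    # 4%
    "white people": 0.0004,  # 0.04%
}

for group, rate in false_positive_rates.items():
    # Hypothetical: expected wrongful flags if every search involved this group.
    expected = SEARCHES_PER_MONTH * rate
    print(f"{group}: ~{expected:,.0f} false positives per month")

# Prints roughly 1,375 for black people, 1,000 for Asian people and 10
# for white people: the same system, two orders of magnitude apart in risk.
```

Alvi’s “one in 25” remark below is the same arithmetic: a 4% rate means roughly one wrongful flag for every 25 searches involving Asian faces.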

Alvi, an IT professional, was shocked by the 4% false positive rate for Asian faces. “No tech company would ever put a system into production with a failure rate of one in 25,” he says. “That’s horrific. They said they had officers visually review it. That is even more concerning because that is probably racial discrimination.” Thames Valley Police, who requested Alvi’s arrest, admitted the mistake and apologised, stating that the arrest was based on officers’ visual assessment and not racial profiling.

Jake Hurfurt of Big Brother Watch says the errors likely stem from AI systems being trained mostly on white faces. “These machine learning algorithms are trained on massive data sets of mostly white faces, so it’s going to be better at identifying those.” Dr Daragh Murray of Queen Mary University of London’s School of Law warns that false positives undermine trust in the police, and will only increase as facial recognition expands.

Rennea, who works at Queen’s Hospital in Romford, says B&M offered her a £20 compensation voucher, which she refused, and that she has reported the company to the Information Commissioner. A B&M spokesperson said the incident was down to “human error” and apologised.

The digital collection of biometrics can aid crime fighting, but it raises questions over consent and storage. Police say they dispose of biometric data after each sweep, yet campaigners worry about future misuse. In January, Home Secretary Shabana Mahmood announced that the number of LFR vans would rise from 10 to 50 nationally. Biometrics and Surveillance Camera Commissioner Professor William Webster argues that specific legislation is needed: “Any police force that uses face recognition will find themselves in a court of law because they will misidentify somebody.”

Although their High Court challenge was dismissed, Shaun and Silkie Carlo argued that the Met scanned 4.2 million people’s faces in 2025 without permission, treating the public like suspects. Shaun, now a mentor with Street Fathers, told the court: “I am concerned about the use of this technology on the streets of London. It’s intrusive. What happened to me could have happened to anyone.” As AI facial recognition becomes ubiquitous, the familiar retort is that the innocent have nothing to fear. But as Ruth Ehrlich points out, “Alvi, Rennea and Shaun hadn’t done anything wrong. And look what happened to them.”