AI Job Interviews Surge: 43% of Large UK Firms Now Use Automated Screening

Returning to work after the Christmas break with a brain feeling like "wet cake" is perhaps not the ideal state in which to face a job interview. Yet, this was the predicament of journalist Helen Coffey in January 2026, as she prepared to be quizzed not by a human resources manager, but by an artificial intelligence interface.

The Rise of the AI Interviewer

The use of AI in UK recruitment has tripled in the past year alone, with three in ten employers now using it somewhere in their hiring processes. Among large companies the figure is higher still: 43% now use AI to interview candidates. Coffey's interviewer, an avatar she mentally dubbed "Carl", was created by HR-tech firm TestGorilla, whose conversational AI tool is designed to filter candidates and has close to 800 organisations signed up to use it.

Even though the interview was only a trial run, for a content marketing strategist role in which she had no experience, Coffey found herself unusually anxious. She realised how much she had traditionally relied on soft skills and human rapport to navigate interviews: using humour, reading body language, and creating a positive feedback loop with an engaged interviewer. With Carl's fixed half-smile, dead-eyed expression, and slight head shakes, that crucial human connection was entirely absent.

The AI 'Doom Loop' and its Consequences

This shift is part of a wider, self-perpetuating cycle described by Daniel Chait, CEO of recruiting software company Greenhouse, as an "AI doom loop". Candidates are increasingly using AI to mass-apply for jobs, while recruiters use AI to mass-reject them. Since ChatGPT's launch, job applications have surged by 239%, with the average opening now receiving 242 applications. Consequently, the number of applications making it to the hire stage has dropped by 75%.

The fallout is a profound erosion of trust. Greenhouse research shows 40% of job hunters report decreased trust in the hiring process, with 39% blaming AI directly. There are also serious allegations of built-in bias: one landmark lawsuit alleges that AI tools systematically screen out older workers, racial minorities, and people with disabilities.

Fraud, Deepfakes, and a Robotic Future

As the arms race escalates, so does fraudulent activity. A US report found a third of candidates admitted to using AI to alter their appearance in interviews, while 30% of hiring managers have caught candidates reading AI-generated answers. Chait warns this could lead to a future where AI interviewers interact with AI candidates, necessitating robust identity verification processes to combat both cheating and more sinister infiltration attempts.

Yet, it's not all negative. Proponents argue that, unlike human bias, algorithmic bias can be systematically audited and corrected. Automated assessments are scalable, can work across languages and time zones, and remove some of the inconsistency of human-led screening.
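To make "systematically audited" concrete, the sketch below shows one widely used kind of check: comparing selection rates across candidate groups against the "four-fifths" adverse-impact threshold. It is a minimal illustration in Python, not drawn from TestGorilla, Greenhouse, or any tool named in this article; the group labels, data, and function names are all hypothetical.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (candidate_group, passed_screen).
# In a real audit these records would come from the employer's own applicant data.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the pass rate of the screening stage for each group."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for group, ok in records:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def adverse_impact(rates, threshold=0.8):
    """Flag groups whose selection rate falls below the 'four-fifths'
    threshold relative to the highest-scoring group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

rates = selection_rates(outcomes)
flags = adverse_impact(rates)
print(rates)   # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(flags)   # groups below 80% of the top rate warrant investigation
```

The proponents' argument is essentially this: because automated screening produces data at every stage, checks like the one above can be re-run, logged, and challenged in a way that an individual interviewer's gut feel cannot.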

For job seekers now navigating this landscape, Chait's advice is to seek clarity on each company's rules regarding AI use in applications and interviews. For employers, the challenge is to remember that behind every application is a three-dimensional human being, not just a collection of algorithmic data points. As AI's steely grip on recruitment tightens, the central question remains: in the quest for efficiency, are we sacrificing the humanity that makes a workplace function?