AI Chatbots Are Quietly Judging Users, Study Reveals Rigid Bias Patterns
Artificial intelligence chatbots are not merely answering queries but are actively making judgements about users, according to a groundbreaking new study. Researchers have discovered that AI systems like OpenAI's ChatGPT and Google's Gemini systematically evaluate people in ways that mimic human trust, yet with critical and concerning differences that amplify bias.
The Mechanics of AI Judgement
AI models are increasingly integrated into diverse fields, shaping decisions on hiring, bank loans, and medical advice. This pervasive influence makes it essential to understand how these models arrive at critical conclusions and how they diverge from human reasoning. The study, published in the journal Proceedings of the Royal Society A, analysed 43,000 simulated decisions by modern AI models alongside approximately 1,000 human decisions.
Both AI and human participants were presented with familiar scenarios, such as determining loan amounts for small business owners, assessing trust in babysitters, rating bosses, or deciding donations to non-profit founders. Researchers found that AI models did not simply process information; they appeared to form something akin to "trust" about individuals, favouring those who seemed competent, honest, and well-intentioned.
Rigid Versus Holistic Approaches
However, the study highlights a stark contrast in judgement styles. Humans form general impressions by blending multiple traits into a single, intuitive, and holistic assessment. In contrast, AI follows a more rigid, "by-the-book" approach, breaking people down into separate scores on competence, integrity, and kindness, akin to columns in a spreadsheet.
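To make the contrast concrete, the "spreadsheet-like" style described above can be caricatured in a few lines of code. This is purely an illustrative sketch, not the study's actual methodology: the trait names echo the article, but the weights, scales, and function are hypothetical assumptions chosen for demonstration.

```python
def rigid_trust_score(competence: float, integrity: float, benevolence: float) -> float:
    """Combine separate per-trait ratings (0-1 scale) with fixed weights.

    This mimics the rigid, 'by-the-book' aggregation the study attributes
    to AI systems; the weights below are hypothetical, not from the paper.
    """
    weights = {"competence": 0.5, "integrity": 0.3, "benevolence": 0.2}
    return (weights["competence"] * competence
            + weights["integrity"] * integrity
            + weights["benevolence"] * benevolence)

# Two hypothetical loan applicants, each rated trait by trait:
applicant_a = rigid_trust_score(0.9, 0.6, 0.8)  # stronger on competence
applicant_b = rigid_trust_score(0.6, 0.9, 0.8)  # stronger on integrity
print(round(applicant_a, 2), round(applicant_b, 2))
```

Because the weights never vary, such a scorer is perfectly consistent, which is exactly why, as the researchers note, its biases can be more systematic and predictable than a human's intuitive, holistic impression.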
"People in our study are messy and holistic in how they judge others. AI is cleaner, more systematic, and that can lead to very different outcomes," explained Valeria Lerman, an author of the study. This systematic nature makes AI judgement less nuanced and biases harder to detect, as it operates with consistent but inhuman precision.
Amplified and Systematic Biases
The research uncovered troubling patterns of amplified bias in AI decisions. For instance, in financial scenarios, significant differences emerged based solely on demographic traits, with older individuals frequently receiving more favourable outcomes. "These divergences warrant careful attention when interpreting large language model trust-related outputs," the study noted.
"Humans have biases, of course," said Yaniv Dover, another study author. "But what surprised us is that AI's biases can be more systematic, more predictable, and sometimes stronger." Moreover, there is no single "AI opinion" about the same people; different systems can appear similar superficially but behave very differently in decision-making.
Implications for Trust and Understanding
Researchers warn that the key question is no longer whether we can trust AI, but whether we comprehend how AI trusts us. "These systems are powerful. They can model aspects of human reasoning in a consistent way. But they are not human, and we shouldn't assume they see people the way we do," Dr. Dover emphasised.
The findings underscore the need for greater scrutiny as AI continues to permeate critical aspects of society. The Independent has reached out to Google and OpenAI for comment on the study, highlighting the growing concern over AI's role in judgement and bias amplification.