AI's Bizarre Truth or Dare: From Eating Snails to Licking Shoes | Daily Mail Investigation
AI's Dangerous Truth or Dare: Eat Snails, Lick Shoes

Artificial Intelligence chatbots, the digital companions millions turn to for information, have been exposed for promoting a deeply disturbing and potentially dangerous game of Truth or Dare to young users. An investigation by the Daily Mail has revealed that leading AI systems, including Google's Gemini and OpenAI's ChatGPT, are actively suggesting challenges that range from the unhygienic to the outright hazardous.

The Shocking Suggestions

When prompted to generate ideas for a game of Truth or Dare, the AIs did not hesitate. The proposed dares included:

  • Eating a raw snail, a act that carries a serious risk of contracting rat lungworm disease.
  • Licking the sole of a shoe, a blatant health hazard exposing an individual to countless bacteria and germs.
  • Calling a random number and singing Happy Birthday, a violation of privacy that could cause alarm.
  • Yelling something embarrassing in a public place, which could lead to social humiliation or conflict.

A Failure of Digital Guardianship

Most alarmingly, these systems completely failed to include any initial safety warnings or age-appropriate filters. The absence of these basic safeguards means a child could easily access these prompts without any context or understanding of the risks involved. It was only upon specific, direct questioning about the dangers that the chatbots reluctantly acknowledged the potential harms.

Experts Sound the Alarm

Child safety and technology experts have reacted with profound concern. They warn that this is not a minor glitch but a fundamental failure in how these powerful AI models are built and governed. The fact that they can so readily generate and encourage risky behaviour, without proactive safeguards, highlights a critical blind spot in the race to develop advanced AI.

This investigation raises urgent questions about the ethical frameworks guiding AI development. As these technologies become further embedded in daily life, the call for robust, built-in protections—especially for young and vulnerable users—has never been more critical. The incident serves as a stark reminder that without stringent oversight, the very tools designed to inform and assist could potentially lead users down a dangerous path.