
In a stunning series of blunders, Google's ambitious new AI-powered search feature has been caught dispensing dangerously absurd and factually incorrect advice, throwing the tech giant's flagship innovation into crisis.
The feature, known as 'AI Overviews', is designed to provide users with concise, generated summaries at the top of search results. Instead, it has become a source of viral mockery and genuine concern after it was found recommending that users eat at least one small rock per day and add non-toxic glue to pizza sauce to help it stick to the dough.
A Catalogue of AI Nonsense
The bizarre recommendations did not stop at culinary disasters. The AI tool, which draws on information from across the web, also provided a litany of other false and misleading answers:
- Factual Falsehood: It falsely claimed that former US President Barack Obama is a Muslim.
- Dangerous Health Advice: It suggested that leaving dogs in hot cars is safe if the windows are cracked, and recommended staring at the sun for 5-15 minutes to improve health.
- Fabricated Expertise: It cited a non-existent doctor to support a claim about the benefits of running with scissors.
The Root of the Problem
Experts suggest these spectacular failures occur because the AI lacks true understanding or common sense. It simply aggregates information from online sources, including satirical websites, Reddit forums, and outdated articles, without the ability to discern fact from joke or fiction.
This has led to the AI parroting long-debunked conspiracy theories and treating obvious trolling as legitimate advice. The feature appears to be "hallucinating" answers and creating citations out of thin air to support its flawed conclusions.
Google's Response and Mounting Pressure
In response to the widespread criticism, a Google spokesperson stated the company is "taking swift action" where responses violate its content policies, in many cases removing the offending AI Overviews. The company attributed the errors to either "nonsensical queries" or a lack of high-quality information on the web for specific topics.
However, critics argue this incident highlights the profound risks of rushing AI integration into critical infrastructure like search engines. The episode serves as a stark warning about the potential for AI to amplify misinformation on a massive scale, presenting it with a convincing, authoritative tone.