How a Fake Disease Called Bixonimania Fooled the Internet and AI Models

In 2024, a group of scientists posted findings online about a new condition called bixonimania, which they claimed affected the eyes after prolonged computer use. The entire study, however, was a fabrication: not just the research, but also the authors' names, affiliations, locations, and funding sources, which were listed as the University of Fellowship of the Ring and the Galactic Triad.

Despite its fictional origins, large language models such as ChatGPT and Gemini treated bixonimania as a real health concern, thereby amplifying a made-up disease into a legitimate-sounding issue. This incident highlights a growing problem in our digital age: the ease with which false information can spread and be accepted as truth.

The Prevalence of Deception in Science and Beyond

Bixonimania is not an isolated case. Deception, whether targeting humans or AI models, is alarmingly common in science and other fields. From AI hallucinations and state-backed disinformation to everyday lies, humans often fall prey to falsehoods due to cognitive biases and an increasing reliance on outsourcing learning to others.


These challenges underscore the urgent need for individuals and society as a whole to better understand and overcome the risks associated with misinformation. Our shared fascination with deception is evident in the popularity of shows like The Traitors, which revolves around the tension between trust and suspicion.

A Traitors-Themed Science Event at the Cambridge Festival

To illustrate these risks, researchers recently organized a Traitors-themed science event at the Cambridge Festival. Four panellists presented their work, with the audience tasked with identifying which presenter was deceiving them. The presenters and their topics were deliberately outlandish, covering global health, climate, media, and astrophysics.

The researchers aimed to explore which signals—such as accent, gender, ethnicity, dress, and presentation style—influenced the audience's decisions. Surprisingly, the audience often relied on these cues to make incorrect judgments, rating the "traitors" as more credible than the honest researchers.

Audience Misjudgments and Credibility Perceptions

The two "faithful" researchers, Ada and Sarah, received the most votes as potential deceivers. Ada, from the non-profit Development Media Initiative, presented work on saving lives through radio broadcasts in the global south. The audience found her results implausibly impressive, however, and questioned her credibility because she was presenting work she had not personally contributed to.

Sarah, an astrophysicist specializing in galactic archaeology, struggled to convey depth in her brief presentation, leading the audience to perceive a lack of understanding. The unusual name of her field also harmed her legitimacy, with one attendee noting, "Galaxy archaeology is too cool a name to exist."

In contrast, the two traitors, Jack and Joyce, received the fewest votes. Jack was an actor posing as a climate researcher, while Joyce, a genuine researcher, falsified her results. Joyce's personal connection to her work on Nigerian communities helped convince the audience of her authenticity, demonstrating how presentation can overshadow truth.

The Broader Implications for Society and Critical Thinking

Misinformation has always existed, but today it spreads faster and more convincingly, thanks to advanced tools like AI. Our collective ability to recognize false information is at risk, partly because society often prioritizes hard sciences over critical thinking skills from the arts, humanities, and social sciences.

For instance, the UK government's 2023 push to require all students to study maths until age 18 lacks a parallel emphasis on developing critical thinking. This gap makes it easier for falsehoods like bixonimania to be accepted as truth, especially when promoted by AI models.


Tools such as AI, the internet, and media are helpful, but it is up to us to use them wisely and avoid manipulation. In The Traitors, contestants have only limited information with which to discern truth; in the real world, we have the ability to verify claims. With due caution and critical thinking, we can determine what is trustworthy, but it requires thinking for ourselves. Trust is ours to give, and we must learn to give it wisely.