The launch of OpenAI's dedicated health advice platform, ChatGPT Health, in Australia has been met with significant concern from medical and consumer experts. While acknowledging the tool's potential utility, they warn that it is unregulated and that users who accept its outputs without question risk serious harm.
A Cautionary Tale: The Dangers of Unverified AI Health Guidance
The concerns are underscored by real-world incidents. Experts cite the case of a 60-year-old man with no prior mental health history who arrived at a hospital emergency department convinced his neighbour was poisoning him. Over the following 24 hours his hallucinations worsened and he attempted to flee the hospital.
Doctors eventually discovered the cause: the man had been consuming sodium bromide daily, an inorganic salt used for industrial cleaning and water treatment. He had purchased it online after ChatGPT suggested he could use it as a substitute for table salt due to his concerns about dietary sodium. Sodium bromide accumulation can cause bromism, a condition with symptoms including hallucinations, stupor, and loss of coordination.
Lack of Safety Studies and Regulatory Oversight
Alex Ruani, a doctoral researcher in health misinformation at University College London, points to such cases as a primary reason for alarm. She notes that ChatGPT Health is being presented as an aid for interpreting health information and test results, not as a replacement for clinicians. However, the line between general information and specific medical advice can blur for users, especially when responses sound confident and personalised.
Ruani highlights a critical gap: "What worries me is that there are no published studies specifically testing the safety of ChatGPT Health." She questions which user prompts or integrated data sources might lead to harmful misinformation. Furthermore, she emphasises that the platform is not regulated as a medical device or diagnostic tool, meaning there are no mandatory safety controls, risk reporting requirements, or post-market surveillance.
While OpenAI states it developed ChatGPT Health using a tool called HealthBench, which employs doctors to test AI responses, Ruani notes the full methodology and evaluations remain "mostly undisclosed, rather than outlined in independent peer-reviewed studies." An OpenAI spokesperson told Guardian Australia that the company collaborated with over 200 physicians from 60 countries to improve the models and that the platform features strong default privacy protections, with data encrypted and not used for training.
Driving Factors and the Call for Guardrails
Dr Elizabeth Deveny, CEO of the Consumers Health Forum of Australia, identifies rising out-of-pocket medical costs and long wait times to see doctors as key drivers pushing people towards AI for health information. She acknowledges potential benefits, such as helping manage chronic conditions or providing information in multiple languages for non-English speakers.
However, her central concern mirrors that of other experts: people will take the AI's advice at face value. She warns that large technology companies are moving faster than governments, setting their own rules on privacy and data. "This is not a small not-for-profit experimenting in good faith. It's one of the largest technology companies in the world," Deveny said, adding that the risks often fall disproportionately on those with fewer resources and less health literacy.
The consensus among experts is not to halt AI innovation but to urgently establish robust frameworks around it. They call for clear guardrails, greater transparency, and comprehensive consumer education to help people make informed choices. The aim is to act before mistakes, biases, and misinformation are replicated at a scale and speed that becomes impossible to correct, so that AI's transformation of healthcare proceeds safely and equitably.