The next time you turn to an AI chatbot for quick health advice, the answer you get may sound confident—but it might not be correct.
A new study published in BMJ Open, a peer-reviewed open-access medical journal, found that artificial intelligence chatbots may give misleading medical advice nearly half the time.
The study, titled ‘Generative Artificial Intelligence-driven Chatbots and Medical Misinformation: An Accuracy, Referencing and Readability Audit’, raises concerns about how AI chatbots perform when giving everyday medical advice.
Researchers led by Dr Nicholas B Tiller, a research associate at the Lundquist Institute at Harbor-UCLA Medical Center, tested five popular AI chatbots with 50 common health-related questions. Some answers were accurate, but nearly half (49.6%) were labelled “problematic”, and 19.6% of responses were tagged “highly problematic”. The study states that nearly one in five answers could result in harmful outcomes if followed without professional guidance.
The authors warned that the issue is not just accuracy, but also how confidently chatbots present their responses.
“Approximately half of all outputs were deemed problematic, citations were frequently incomplete or fabricated, and chatbot response readability tended to be complex,” the report read.
The study also highlighted a troubling tendency for chatbots to provide answers even when they lack reliable information.
“This combination of overconfidence and a lack of verifiable sourcing has implications for medicine, law, journalism and any field that places a premium on accuracy and evidence-based reasoning,” the researchers noted.
The study also noted that chatbot responses were written at a college reading level, far above the recommended sixth-grade level for public health information.
Importantly, the authors suggested that AI systems should sometimes refrain from answering altogether. “When these conditions cannot be met, a refusal to answer would be preferable—a restraint that may improve reliability, reinforce public trust and prevent sycophancy,” they added.
As more and more people turn to these AI technologies to get their health questions answered, better regulation and wider awareness of their shortcomings become all the more critical.