WILLOW GROVE, PA — Artificial intelligence chatbots are now the single biggest health technology hazard facing patients in 2026, according to a new report from ECRI, which cautions that the fast-growing use of unregulated AI tools in medicine could lead to serious harm if left unchecked.
ECRI’s annual Top 10 Health Technology Hazards report ranks the misuse of AI chatbots at No. 1, citing widespread adoption by clinicians, patients, and healthcare staff despite the tools not being validated or regulated for medical decision-making.
Chatbots powered by large language models — including ChatGPT, Claude, Copilot, Gemini, and Grok — generate confident, human-like responses by predicting word patterns rather than understanding medical context. ECRI said that design feature creates a dangerous illusion of expertise, particularly when the information affects diagnosis, treatment, or patient safety.
“Medicine is a fundamentally human endeavor,” said Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI. While AI tools can be powerful, he said, they cannot replace professional judgment, education, and experience, and he warned that realizing AI’s potential requires strict oversight and clear limits.
ECRI researchers documented cases in which chatbots suggested incorrect diagnoses, recommended unnecessary tests, promoted inferior medical products, and even fabricated anatomical details while sounding authoritative. In one internal test, a chatbot incorrectly advised that an electrosurgical return electrode could be placed over a patient’s shoulder blade — guidance that could result in severe burns if followed.
The organization warned that reliance on chatbots could increase as rising healthcare costs and facility closures limit access to medical professionals, pushing patients to seek answers from AI systems instead of trained clinicians.
ECRI experts also flagged the risk that chatbots may worsen existing health disparities. Because AI models learn from historical data, embedded biases can influence responses and reinforce inequities in care.
“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” Schabacker said. “If healthcare stakeholders are not careful, AI could further entrench disparities many have worked for decades to eliminate.”
The misuse of AI chatbots topped ECRI’s 2026 hazard list, followed by concerns about preparedness for large-scale digital outages, substandard or falsified medical products, recall communication failures for home diabetes technologies, and risks tied to medication safety, device cleaning, cybersecurity, and sterilization processes.
ECRI said patients and clinicians can reduce risk by understanding chatbot limitations and independently verifying AI-generated information. Health systems, the group added, should establish formal AI governance committees, provide targeted training, and routinely audit AI tools used in clinical settings.
Now in its 18th year, ECRI’s hazard report draws on incident investigations, national reporting databases, and independent medical device testing to identify emerging threats to patient safety. An executive brief is available publicly, while the full report is accessible to ECRI members.
More information about the report and ECRI’s patient safety work is available at www.ECRI.org.
