- Tension: Millions of people are using ChatGPT as their first point of contact for medical symptoms, but a rigorous study found the AI correctly triaged urgency only about 56% of the time — performing worst on the emergencies where accuracy matters most.
- Noise: ChatGPT’s polished, authoritative tone triggers fluency bias, making users trust its medical responses far more than the underlying accuracy warrants — especially among underinsured and rural populations who rely on it most.
- Direct Message: The most dangerous health advice isn’t the kind that sounds wrong — it’s the kind that sounds so right you never think to question it, delivered by a system that has no way of knowing what you forgot to mention.
Last March, Denise Kowalski — a 51-year-old middle school teacher in Columbus, Ohio — woke up at 2 a.m. with a squeezing sensation in her chest. Not sharp, not stabbing. More like someone had placed a warm hand firmly on her sternum and was pressing down. She didn’t call 911. She didn’t wake her husband. She opened ChatGPT on her phone and typed: *chest pressure, female, 51, slightly nauseous, what could this be?*
The chatbot responded with a list of possibilities — acid reflux, anxiety, musculoskeletal strain, and yes, cardiac events — but framed the most dangerous option as one of many equally weighted possibilities. It suggested she monitor her symptoms. Denise went back to sleep. Three days later, after the pressure returned during a brisk walk, her doctor ran an EKG and found evidence of a minor cardiac event — one that had likely occurred that night…