AI vs Experience: In Medicine, One Misses What the Other Sees
- Ifeanyi Esimai, MD

- Sep 24, 2025
- 3 min read

We were medical students on rounds in the children’s emergency room of a teaching hospital in the 1990s. A woman paced near the corner, wailing in intervals. Her son, no older than six, slipped in and out of consciousness. There were no lab results. No vitals on screen. Just the sound of panic and the faint rattle of oxygen tanks.
The consultant—seasoned, sharp—finally barked at the nurse: “Give him a bolus of 50% dextrose.”
Within a minute, the child sat up. Alert. Alive.
We stood frozen.
He turned to us and said,
“You think it’s magic. It’s not. It’s experience. And after years of practice, I hope you’ll be able to make such calls too.”
The Promise and the Problem
A few days ago, I read Joshua Rothman’s New Yorker piece on AI in medicine. It quotes Dr. Dhruv Khullar’s observation that diagnostic AI now rivals—and often surpasses—human doctors in speed and accuracy, provided the information is clean, curated, and complete.
But here’s the tension: Medicine isn’t clean, curated, or complete.
It’s messy. It hides inside missing charts, fractured family histories, faulty memories, and unsaid fears.
Experience isn’t just pattern recognition—it’s noise filtration.
The kind that tells you when the fever isn’t just an infection. When the labs are normal but the child is dying.
Sometimes the doctor has to see the diagnosis before it exists on paper.
The Centaur Model: Not Man vs. Machine, but Both
Dr. Khullar suggests that AI won’t replace physicians—it will augment them. The centaur model: part human, part machine.
I use AI in my writing workflow. It helps organize. It prompts alternatives. But it doesn’t know what I know. It doesn’t remember the way the boy’s fingers trembled from low glucose—or the way his mother screamed like she was losing her last hope.
In that ER, AI might’ve flagged sepsis. But the consultant saw sunken eyes, wasted limbs, the signs of urban hunger. That isn’t in the algorithm.
The Risk of Atrophy
Rothman makes a chilling point: over-reliance on AI can de-skill physicians. Just as many of us no longer know how to percuss a liver. We've outsourced heart murmur interpretation to echocardiograms—and now, with AI-powered stethoscopes delivering diagnoses in 15 seconds, we may soon outsource auscultation entirely. I wrote about that here – When AI Listens.
What happens when a generation of physicians loses its diagnostic gut?
Here’s what: we become dependents, not practitioners. And dependency in medicine means danger. Because when the power goes out, when the signal drops, when the AI doesn’t understand the undocumented patient or the dialect in the note—you better hope someone in the room still knows what 50% dextrose looks like in a syringe.
Doctors Are Not Just Diagnosticians
They're interpreters of the unspoken. Sherpas through fear. They bring context, character, and in some cases—conscience.
Arthur Conan Doyle was a physician before he created Sherlock Holmes. He understood that the great detective wasn't great because he saw everything—but because he saw what mattered. I’ve written psychological thrillers and police procedurals myself, and I’ve come to believe the same thing about good doctors: they don't just observe. They interpret. They deduce. They listen between the lines.
Same with medicine.
AI may detect the pattern, but experience knows which part of the noise is music.