If A.I. Can Diagnose Patients, What Are Doctors For?
A Misdiagnosis That Changed Everything
Back in 2017, Matthew Williams, a thirty-something software engineer with an athletic build and a bald head, the kind of guy who'd bike up the steep hills of San Francisco just for fun, went on one of his usual rides. That evening, he grabbed a burger, fries, and a milkshake with friends. Midway through, he felt uncomfortably full, so much so that someone had to drive him home. Later that night, he woke up with stabbing abdominal pain. Fearing appendicitis, he rushed to a nearby clinic. The doctors there shrugged it off as constipation, handed him laxatives, and sent him away.
But things got worse. Hours later, the pain became unbearable. He vomited, felt like his stomach might burst, and ended up in the emergency room. A CT scan revealed something far more serious: cecal volvulus, a life-threatening condition in which the intestine twists in on itself. The laxatives hadn't helped; they may have made things worse. Williams was rushed into surgery, where six feet of his intestine were removed.
Recovery wasn’t easy. For years afterward, almost any meal triggered severe diarrhea. Doctors reassured him that his gut just needed more time. But “time” stretched into years. Williams cycled through eight specialists (nutritionists, gastroenterologists, you name it), and still no one could explain his symptoms. His diet became tragically simple: eggs, rice, applesauce, and sourdough bread. On dates, he’d awkwardly decline mozzarella sticks or pizza. “When your food is bland, your life becomes bland, too,” he told me.
Enter ChatGPT
Fast forward to 2023. Out of frustration, Williams typed his medical history into ChatGPT. He asked why losing most of his ileum and his ileocecal valve made certain foods unbearable. Within seconds, the A.I. spat out three likely culprits: fatty foods, fermentable fibers, and foods rich in oxalates.
That last word, oxalates, was completely new to him. None of his doctors had ever mentioned it. He dug deeper. Spinach, almonds, chocolate, soy: all high in oxalates, and all his personal tormentors. Suddenly it all made sense.
With a nutritionist’s help, he restructured his diet around oxalate levels. His symptoms eased, his meals expanded, and for the first time in years he didn’t need to mentally map out every bathroom within a mile radius. “I have my life back,” he said, almost astonished.
The Old Art of Diagnosis
In medical training, I was taught to admire those physicians who seemed to possess diagnostic wizardry. They could glance at the curve of a patient’s fingernail or recall a dusty occupational hazard and suddenly crack the case wide open. Their process felt mysterious, like a kind of internal algorithm refined by decades of practice.
But here’s the twist: diagnosis increasingly looks less like an art and more like computer science. Surveys suggest people sometimes trust A.I. diagnoses more than their doctor’s. And maybe with good reason: misdiagnosis in the U.S. alone disables hundreds of thousands of people every year and contributes to roughly one in ten deaths. Williams could easily have been one of them if he had trusted the clinic’s first call.
When Computers First Tried Medicine
The idea of teaching machines to diagnose isn’t new. Back in the early 1900s, Dr. Richard Cabot at Massachusetts General Hospital pioneered “clinicopathological conferences” (C.P.C.s), where seasoned doctors dissected old patient files and worked their way toward a diagnosis that could later be checked against autopsy results. These sessions were legendary, a gold standard of medical reasoning.
By the 1950s, researchers wondered: could computers do the same? A computer scientist and a radiologist grouped cases by symptoms and diseases, suggesting that programs might process the information more logically than tired physicians could. By the 1970s, INTERNIST-1, one of the first diagnostic programs, actually performed on par with some doctors, though only after hours of painstaking data entry. Practical, it was not.
Large Language Models Change the Game
What makes today different is scale. Large language models don’t just store data; they reason, or at least simulate reasoning, in ways earlier systems couldn’t. At Harvard, researchers recently built CaBot, a custom version of OpenAI’s reasoning model, designed specifically to tackle C.P.C.s. It doesn’t just spit out an answer; it explains itself, pulls in citations, and walks through its steps like a particularly diligent student.
The comparison almost writes itself: in 1997, Garry Kasparov sat across from IBM’s Deep Blue and watched the future of chess unfold. Recently, at Harvard’s Countway Library, physicians watched CaBot spar with a seasoned diagnostician. It wasn’t just a parlor trick. It felt like the start of medicine’s own “Deep Blue moment.”
What’s Lost When Doctors Don’t Diagnose?
Still, a worry hangs in the air. If A.I. systems can consistently outperform, or at least match, doctors in diagnosis, what happens to the doctor’s role? Do we risk hollowing out one of the most human parts of medicine: the interpretive, almost detective-like process of figuring out what’s wrong?
Medicine has always been more than just correct answers. It’s also the relationships, the reassurance, the subtle way a doctor notices you hesitate before answering a question. An algorithm might flag oxalates, but it won’t see the way your face falls when you say you’ve stopped going out to eat with friends.
So perhaps the future is less about replacing doctors and more about shifting their role. A.I. can sift through mountains of data in seconds; doctors can spend that saved time listening, contextualizing, and making care humane.
A Cautious Optimism
Williams’s story captures both the promise and the pitfall. A.I. gave him answers no human had offered. But it also raises uncomfortable questions about why eight doctors missed something an algorithm spotted instantly.
The tension is real. On the one hand, A.I. could reduce misdiagnosis, save lives, and help patients reclaim normalcy. On the other, over-reliance on it might deskill physicians, making them dependent on a system they don’t fully understand. And what happens when the A.I. is wrong?
Maybe the fairest conclusion is this: A.I. is here to stay in medicine, but it should be seen as an ally, not a replacement. Doctors who embrace it may become better listeners, better guides, and maybe even better healers, precisely because the burden of brute-force diagnosis has shifted to silicon.
Open Your Mind!
Source: New Yorker