If you want to understand how artificial intelligence will really impact the world, don’t look at coding, law, or finance. Look at healthcare. It is where AI faces its hardest test: layers of regulation, life-or-death stakes, complex biology, and a deeply human, compassionate core that most people would assume is the last thing a machine could replicate.
Nearly a decade ago, computer scientist and Nobel Prize-winner Geoffrey Hinton (known as the “Godfather of AI”) said hospitals should stop training radiologists because, within five years, AI would do the job better. Almost 10 years later, there are more radiologists than ever. Of the 950 artificial intelligence and machine learning tools that received FDA approval between 1995 and 2024, 723 were radiology devices. The machines improved. The humans didn’t leave.
When I raised this with Hinton recently, he was quick to reframe rather than retreat. What he misjudged, he said, wasn’t the technology. It was the economics.
“Healthcare is a very elastic market,” he told me. “If you allowed a healthcare worker to do ten times as much, we’d just all get ten times as much healthcare. Particularly old people, they can absorb endless amounts of it.”
The standard question—“Will AI replace doctors?”—turns out to be the wrong one. Demand for healthcare is effectively infinite. There is always another scan to read, another condition going undiagnosed because no one has time to look. AI will not shrink the medical workforce. It will expose how much unmet need was always there.
When AI outperforms doctors, and when it fails
In some settings, AI is already surpassing doctors. Cardiologist and researcher Eric Topol pointed to five studies in which AI systems working independently outperformed physicians who had access to AI as a tool. “I still think the combination is likely to win out,” Topol told me. “But I’m not as confident as I was in 2019.”
Why would AI alone sometimes outperform a human using AI assistance? One explanation is what researchers call automation neglect: physicians anchor on their initial diagnosis and fail to adjust, even when the system suggests an alternative. Another is that we simply have not learned how to collaborate effectively with these tools.
Not all the evidence favors the machine. In a randomized controlled trial published in Nature Medicine, cardiologist Jack W. O’Sullivan and colleagues tested an AI system on complex cardiology cases involving suspected genetic cardiomyopathies, a diagnosis that even experienced clinicians find difficult.
“Specialists are scarce,” he said. “Could AI help generalists think like them?”
It could. General cardiologists assisted by AI produced assessments that specialist reviewers preferred, with fewer clinically significant errors. But 6.5% of the AI’s responses contained clinically significant hallucinations.
What made the finding useful was what happened next. “When the human cardiologist questioned the AI model, ‘Are you sure the echocardiogram showed a thickened ventricle?’ the AI would correct itself.” The machine did not know it was wrong until someone asked.
And there are cautionary signs. Just last month, Topol noted, a paper in Nature Medicine evaluated medical triage using ChatGPT’s most advanced model. It triaged incorrectly more than half the time, telling patients who urgently needed the emergency room to stay home. “We have a long way to go,” he said.
The evidence is uneven. For some tasks, AI alone performs best. For others, human and machine together outperform either. In still others, the technology is dangerously unreliable. The real challenge isn’t whether AI works. It’s knowing when.
Shifting from reactive to preventive medicine
The most significant shift may not be diagnostic accuracy but timing. Modern health systems are built to treat disease after symptoms appear. Topol believes AI could help move medicine upstream.
“The three major age-related diseases, neurodegeneration, cancer, and cardiovascular disease, all take 15 to 20 years of incubation time in our bodies,” he told me. “We have this great runway to work with, but we didn’t have a way to integrate all the data. We didn’t even have all the data.”
Now we are starting to. Half a billion people are already using smartwatches and other wearables, which generate continuous streams of heart-rate variability, blood oxygen, and sleep data. Researchers at Stanford recently showed that 130 conditions could be accurately predicted from a single night of sleep sensor data. Organ clocks, derived from thousands of blood proteins, can now estimate the biological age of individual organ systems. The missing piece, according to Topol, is the immunome, a comprehensive map of a person’s immune function.
“After the brain, the immune system is the most complex system in the body,” he said. “And we have no way in the clinic to measure it. In 2026, that’s dreadful.”
He believes that a dysregulated immune system is the common thread connecting cancer, neurodegeneration, and heart disease, and that measuring it will unlock a new era of risk prediction.
The opportunity isn’t in replacing doctors with a single breakthrough product, but in building the infrastructure around a new upstream model of preventive care: sleep, wearables, blood proteins. The real promise of AI may lie in quietly monitoring the body’s earliest warning signs and intervening long before illness becomes visible.
The legal, ethical, and human limits of AI in healthcare
Adoption of AI in healthcare, however, will not be purely technical. Hinton pointed to a legal asymmetry. If a doctor fails to use an available AI tool and a patient dies, no one is sued. But if a doctor uses AI and harm follows, liability could be immediate. The system discourages early adoption.
Meanwhile, human error remains pervasive. “We know there are at least 12 million diagnostic errors a year in the U.S. that result in about 800,000 people with disability or death,” Topol told me. “And we don’t tend to talk about that. We keep talking about the mistakes the AI makes.”
And the question of empathy remains unresolved. When I asked Hinton whether he would feel comfortable being cared for by AI at the end of his life, he paused. “I might think it was faking it,” he said. Then he added: “But I think AIs can genuinely have empathy.”
Topol disagrees. “AI is really good at channeling empathy,” he told me. “But there’s no such thing as a machine knowing what empathy is. People want to look somebody in the eye and know that person cares about them. That’s the essence of medicine. No machine will ever truly replace that.”
