
Is AI Ready for Healthcare’s Prime Time?

The medical community may not have adopted artificial intelligence (AI) wholesale, but clinical researchers’ interest in its potential to advance healthcare is surging. A profusion of peer-reviewed articles and clinical trials bears this out.

That interest was apparent in a conference session with Erik Brynjolfsson, PhD, an expert in the digital economy, as audience members at TDC Group’s Executive Advisory Board Meeting, held annually in Napa, California, discussed how they are using AI in their medical practices and organizations. Among this group of clinicians and healthcare leaders, AI implementation so far appears further along in administrative applications than in patient care.

Predictive Tools Advance

In one example, leaders from a healthcare center in Pittsburgh worked with staff at Carnegie Mellon University to devise predictive turnover modeling to help their hospital retain nurses. Their datasets included salaries and commuting distances. “That's a great application,” Dr. Brynjolfsson said. “I can see that immediately generating value.”
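
To make the idea concrete, here is a minimal sketch of what such turnover modeling might look like, assuming a logistic regression over a few hypothetical features (salary, commute distance, tenure) and synthetic data. The article does not describe the Pittsburgh team’s actual model, tooling, or any features beyond salaries and commuting distances.

```python
# Minimal sketch of nurse-turnover prediction. The tenure feature, the
# synthetic data, and the choice of logistic regression are all
# illustrative assumptions, not the actual Pittsburgh model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features: salary (USD), one-way commute (miles), tenure (years).
X = np.column_stack([
    rng.normal(75_000, 12_000, n),
    rng.gamma(2.0, 8.0, n),
    rng.exponential(4.0, n),
])
# Synthetic labels, for illustration only: longer commutes and lower pay
# nudge turnover risk upward; longer tenure nudges it down.
logit = 0.04 * X[:, 1] - 3e-5 * (X[:, 0] - 75_000) - 0.1 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Rank staff by predicted turnover risk so retention outreach can start
# with the highest-risk nurses.
risk = model.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, risk):.2f}")
```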

Another clinician said his hospital is using AI for the “more mundane things of coding and billing,” but also for predictive analytics to help plan and prepare for patient care. Overall, audience responses gave a sense that AI-powered tools to support business needs were more fully integrated into healthcare systems than clinical applications—but that AI-powered diagnostic tools and other applications for patient care were catching up fast.

One respondent said that he and his colleagues are working with “a family of standby devices that measure a range of physiologic metrics” to flag patients at higher risk of adverse outcomes, so that clinical teams can intervene. The first device they are testing predicts a patient’s core temperature every five minutes, continuously, for up to two months. The inputs to these calculations include the patient’s skin temperature, the ambient temperature, and the patient’s demographics.

With the core temperature, the physician explained, “We then can monitor, as an example, cancer patients after they've been given infusions to predict who's going to be in trouble with impending sepsis.” Dr. Brynjolfsson added: “I really love the application, because it illustrates how often [AI] gets combined with instrumentation.”
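
As a rough illustration of the kind of computation described above, a core-temperature estimate might combine skin temperature, ambient temperature, and a demographic input, with a simple rule flagging a sustained fever trend. The device’s actual algorithm, calibration, and alerting criteria are not disclosed in the article; everything below is a placeholder sketch.

```python
# Illustrative sketch only: the coefficients, the 38.3 °C fever threshold,
# and the three-reading rule are invented assumptions, not the device's
# actual calibration or the team's sepsis criteria.
from dataclasses import dataclass

@dataclass
class Reading:
    skin_temp_c: float     # measured skin temperature
    ambient_temp_c: float  # room temperature
    age_years: float       # one demographic input, for illustration

def estimate_core_temp(r: Reading) -> float:
    """Estimate core temperature from surface and environmental readings."""
    # Placeholder model: skin temperature plus a correction for heat loss
    # to the environment and a small demographic adjustment.
    return (r.skin_temp_c
            + 0.08 * (r.skin_temp_c - r.ambient_temp_c)
            + 0.5
            - 0.002 * r.age_years)

def sepsis_watch(core_temps: list[float], fever_c: float = 38.3) -> bool:
    """Flag three consecutive estimates (five minutes apart) above threshold."""
    return len(core_temps) >= 3 and all(t > fever_c for t in core_temps[-3:])

# Readings taken every five minutes from a post-infusion patient.
readings = [Reading(37.2, 22.0, 57.0), Reading(37.5, 22.0, 57.0),
            Reading(37.8, 22.0, 57.0)]
temps = [estimate_core_temp(r) for r in readings]
print([f"{t:.1f}" for t in temps], "flag:", sepsis_watch(temps))
```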

The Black Box Problem Limits Clinical Applications

The interactive session touched on why the medical community hasn’t plunged headfirst into the AI pool. The reasons include liability, potential inaccuracy, and the need for any AI-powered clinical decision support system to explain how it arrived at a recommendation. “A big weakness of the current systems is they're not very explainable. They're black boxes,” Dr. Brynjolfsson said. “They may have hundreds of millions of parameters, each of which is a little weight. And they have an alien intelligence. And even if you could print out all the parameters and show them to a doctor, what use is that? That doesn't mean anything. You need to have explainable AI, not just so you can make the AI better, but that's the only way you could have real cooperation, a copilot, because when it says, ‘Cut off that person's left leg,’ you can't just blindly listen to it. You need to have the reasoning process that came up with that.”
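
For conventional (non-generative) models, one common post hoc explainability technique is permutation importance: shuffle each input and measure how much the model’s accuracy degrades. The toy sketch below uses a generic scikit-learn dataset and is not specific to any system discussed in the session; it also illustrates Dr. Brynjolfsson’s point, since ranking inputs falls far short of the reasoning trace a true copilot would need to provide.

```python
# Toy illustration of permutation importance: shuffle each feature and
# see how much the held-out score drops. Generic sklearn example only,
# not a method attributed to anyone in the session.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy most are the ones the model
# leans on; surfacing them gives a clinician something to interrogate,
# though it is nowhere near a full explanation of the model's reasoning.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]:30s} {result.importances_mean[i]:.3f}")
```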

Benefits and Risks With AI Will Persist

Another point raised during the Q&A session was the speed at which AI tools are being developed and the broader implications of those advances. Dr. Brynjolfsson said some experts in the AI field foresee human-level capabilities within a decade, and the U.S. economy will benefit from these broader capabilities. The Congressional Budget Office forecasts overall economic productivity growth over the next 10 years at 1.4 percent, but Dr. Brynjolfsson said his calculations are almost double that: “I’m closer to 3 percent.”
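
For a sense of scale, compounding the two figures over a decade shows how wide the gap becomes. This assumes both are annual growth rates, which the article’s ten-year framing implies but does not state explicitly.

```python
# Compound the two productivity-growth figures over ten years.
# Assumes both are annual rates; the article quotes them without units.
for name, g in [("CBO (1.4%)", 0.014), ("Brynjolfsson (~3%)", 0.03)]:
    cumulative = (1 + g) ** 10 - 1
    print(f"{name:20s} cumulative 10-year growth: {cumulative:.1%}")
# CBO (1.4%)           cumulative 10-year growth: 14.9%
# Brynjolfsson (~3%)   cumulative 10-year growth: 34.4%
```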

Unfortunately, this speed of AI development, which Dr. Brynjolfsson has characterized as “exponential,” will also benefit those with nefarious goals, such as phishing and the creation of deepfakes. Harm mitigation is vital, he said. A recent article in the New York Times detailed how security guardrails on open-source systems, platforms that are available to anyone, can be easily dismantled. Closed systems, like ChatGPT, proved vulnerable to the same manipulation.

The liability discussion revolved around radiology, a specialty that has pioneered partnering with machines for image interpretation. Even when AI performs most of an image’s interpretation, it is still the human radiologist who would face a lawsuit over a bad call. Such liability concerns contribute to healthcare’s cautious pace in implementing AI tools for clinical use.

Dr. Brynjolfsson said physicians needn’t worry about their job security, at least for now. He cited research he and colleagues conducted a few years ago on the tasks associated with 950 occupations. While AI performed well on certain specific tasks, humans held a comparative advantage in every occupation when everything a working role requires was considered.

“That's typical of what we're going to see for at least the next few years. And that's because machine learning is still far from AGI, artificial general intelligence, that can just do everything.”

Learn more at the Artificial Intelligence in Healthcare Resource Center, from TDC Group.

This article was developed from audience members’ comments and queries discussed during Dr. Brynjolfsson’s presentation at the 2023 Executive Advisory Board meeting, hosted by TDC Group. Erik Brynjolfsson, PhD, is the Jerry Yang and Akiko Yamazaki Professor and Senior Fellow at the Stanford Institute for Human-Centered AI, Director of the Stanford Digital Economy Lab, and a Research Associate at the National Bureau of Economic Research.


The guidelines suggested here are not rules, do not constitute legal advice, and do not ensure a successful outcome. The ultimate decision regarding the appropriateness of any treatment must be made by each healthcare provider considering the circumstances of the individual situation and in accordance with the laws of the jurisdiction in which the care is rendered.


The opinions expressed here do not necessarily reflect the views of The Doctors Company. We provide a platform for diverse perspectives and healthcare information, and the opinions expressed are solely those of the author.