Business
Even with AI, healthcare must be in ‘the humanity business’
Discussions of artificial intelligence tend to engender hope, hype and even fear.
Many healthcare leaders expect AI to offer new opportunities to improve patient care and streamline operations. But even ardent supporters of AI acknowledge that the development of new AI tools must be managed carefully, even cautiously.
Geeta Nayyar, MD, author and healthcare technology leader, and Vinitha Ramnathan, chief product officer of NRC Health, talked with Chief Healthcare Executive® about AI in healthcare. In a joint discussion, they offered their perspectives on AI’s potential, the need for ethical safeguards and building trust in AI.
Nayyar, known as “Dr. G,” says the potential for AI in healthcare seems to be “limitless.” She also notes that there were similarly sunny projections for everything from electronic health records to social media.
“We have to be mindful that the lessons we have learned from past technologies need to be applied to AI, no matter how ‘new’ it might seem to be,” Nayyar says.
In a recent 40-minute discussion with Chief Healthcare Executive, they stressed that appreciation for AI’s possibilities doesn’t clash with the need for thoughtful development and use of AI technologies.
AI is not a ‘silver bullet’
Healthcare organizations recognize some of the concerns around AI, including the fact that many consumers are leery of AI being used in areas such as patient diagnosis, says Ramnathan.
“Ethical AI is here to stay in healthcare,” Ramnathan says.
And she adds, “There’s a lot of buzz about it, but it’s definitely here, and it’s really up to us now to say, how do we ensure that it’s being used the right way, and what can we do to make that ethical?”
Nayyar agrees with that assessment, but says, “I would also add that I think unethical AI is here to stay as well, if we are not mindful as an industry.”
The author of “Dead Wrong: Diagnosing and Treating Healthcare’s Misinformation Illness,” Nayyar points to the existence of AI hallucinations as a cautionary tale about the hype versus the reality of AI.
One example of misinformation came when Google’s AI Overview tool was asked how to prevent cheese from sliding off of pizza, and the answer was to apply some glue, as Forbes reported (the AI response reflected an old comment that gained popularity on Reddit).
Nayyar says it’s important to recognize that AI is not “the silver bullet.”
“All of the bad habits that we have in healthcare, the lack of interoperability, the lack of diversity in our clinical trials, the lack of communication and understanding and data on different types of populations, particularly the underserved Black and brown, rural health communities, etc, all of our bad habits in healthcare need to still be fixed. AI is only going to amplify our good habits or our bad habits,” she says.
Picking the right use cases
When it comes to fostering trust, Ramnathan says healthcare organizations can pick the right use cases for AI to demonstrate how AI can be impactful to staff.
Rather than jumping to diagnostic uses, Ramnathan suggests using generative AI to help patients with scheduling appointments. Similarly, AI tools can provide a more holistic view of the patients who show up in hospitals and clinical practices.
“Those are areas and use cases where it can be very impactful for that patient’s experience and for a clinician’s experience,” Ramnathan says.
She notes that AI can help improve the experience for patients by offering more of the personalization that consumers enjoy in other industries. NRC Health works with health systems to gain insights from patients, and the company is using AI-powered tools to gather more data from patients.
“In an era where my barista knows exactly how I like my coffee, it’s kind of sad that that same amount of personalization is not available in healthcare,” she says.
Pointing to electronic health records and some of the administrative headaches they have created for doctors and nurses, Nayyar also notes that it’s important to involve clinical leadership in developing AI tools to help support the workforce and allow caregivers to do their best work. Electronic health records were adopted without an understanding of the workflows of care teams, she says.
“While we may have solved three problems, we created five new ones for the workforce, and we effectively burned them out,” Nayyar says. “And we also have now turned the workforce off to technology.”
She says thinking of using AI to replace clinicians isn’t realistic and “isn’t the right problem to solve.
“We’re really at an inflection point in the industry, where we have to be mindful that we are in the humanity business. Healthcare is nothing but the business of humanity,” Nayyar says.
She says it’s critical to think of AI as a way to support the work of doctors, particularly at a time when the public’s trust in the healthcare industry has diminished.
“We instead have to acknowledge that no one person can know everything,” Nayyar says. “How can we make our staff quicker, better, faster, more efficient at the bedside?”
AI and equity
At a time when many health systems and hospitals have focused on improving health equity, some healthcare leaders fear that AI could exacerbate disparities if the technology isn’t widely available in some communities.
Ramnathan says that’s a realistic concern.
“We’ve heard it numerous times from health systems that we work with,” she says. “They deal with it on a day-to-day basis, and it’s a high priority for them.”
And she says healthcare organizations need to be thinking about using AI with health equity in mind.
“Everything we do in healthcare, in terms of ensuring that we are providing the best possible care for consumers, needs to account for social determinants of health,” Ramnathan says.
Nayyar says healthcare organizations need to get better data for AI tools.
“We know that we don’t have good data,” Nayyar says. “We have to be mindful that AI can only work off of the data we provide it. And so all of the things that we need to correct around health equity are actually likely even more important now.”
Nayyar notes that misinformation and disinformation from AI “tends to victimize more of the underserved populations.
“Being very, very mindful of, again, the ethical uses and how those disparities can truly actually get worse with time, is incredibly important,” Nayyar says.
However, Ramnathan says building AI models to reflect social determinants of health and disparities can offer promise for better treatment of underserved groups.
“AI done right can actually help,” she says.
Reasons for excitement
While advocating for considered and thoughtful approaches to developing AI, Ramnathan and Nayyar both relish AI’s potential to improve the patient experience and help devise better treatments.
“If I’m a consumer, having a personalized AI assistant that helps guide me through how I need to navigate my care would be amazing,” Ramnathan says. “If I am a clinician, having an AI assistant that guides me through my day, from understanding the patient better, having a better interaction with the patient to ultimately provide better care would be super exciting.”
She also points to AI solutions that can simplify and improve billing, often a thorny area for patients.
Nayyar says she is enthusiastic about AI’s potential in the diagnostic and therapeutic space “and the ability to really do personalized medicine.”
“I think that’s what excites any doctor, any technologist, anyone in the healthcare industry,” Nayyar says. “So this idea that we’re going to be able to better, faster, produce medical devices, medical therapeutics, is incredibly thrilling.”
As a clinician, she says she is eager for a time when she can treat a 35-year-old Black man with a rare autoimmune disease with a personalized care routine, rather than relying on studies where the participants were young, white men.
“Rather than just throwing things at the wall, which is what it feels like sometimes in medicine, particularly in esoteric fields, it’ll be so terrific to be able to actually get the data personally, actually come up with a therapeutic plan that is specific to the individual in front of me at the bedside,” Nayyar says. “That’s what’s thrilling, because that’s what really drives outcomes.”
And she notes that there isn’t a one-size-fits-all approach to healthcare, even if AI is involved.
“So the more we’re able to understand about patient populations and then apply them on personalized care journeys, that’s really the holy grail of medicine across the board,” she says.