Artificial Intelligence: A Revolution for Health Care?
Q&A with Kay Burke, chief nursing informatics officer for UCSF Health
Electronic health records promised big improvements in health care but ended up making extra work for physicians and nurses. Kay Burke, MBA, RN, works on solving those challenges, helping UCSF Health improve the digital tools that nurses use every day. Could artificial intelligence make their jobs – and our health care – better?
Why is everyone talking about artificial intelligence (AI)?
We’re seeing more generative AI – ChatGPT is a great example. You can ask a question, and the tool draws on the enormous body of information it was trained on, then generates a summarized, logical response. So you as a user are not researching a bunch of different sources. You’re just asking a question.
Interest in AI in health care has exploded because there’s a nursing shortage and a lot more burnout. Clinicians need support, and more sophisticated technology – namely, AI – could help.
What’s behind the burnout?
Clinicians have experienced increased stress since the pandemic. In the U.S., over 100,000 nurses left the profession between 2020 and 2021 – the largest drop in 40 years. I surveyed about 7,500 UCSF nurses and allied health professionals, and they attributed their feelings of burnout to constantly being on the computer. Technology has proliferated.
Four years ago, we weren’t really putting work phones in the hands of every nurse – but now? The number of digital messages nurses receive from patients and other clinicians has exploded. They’re constantly bombarded by alarms and alerts, and by the end of a shift their ability to respond to all of them just isn’t the same. And nurses are getting called in to work more often because staffing is so short. It ends up being a very hectic environment.
How could AI reduce that stress?
AI could help nurses handle the additional documentation tasks. Say I have to give a patient report to the next nurse at every shift change. What if I could prompt the electronic health record with, “Look back 12 hours on Mr. Jones and provide a summary,” and have AI populate a report with food and water intake details, medication reactions, upcoming MRIs or other imaging, estimated date of discharge, and so on?
All the emerging AI trends alleviate cognitive burden in some way. Right now, nurses are typically hunting and pecking to find and input this information, and often some of it is missing or forgotten. The AI’s summary might not be fully accurate, but generative AI doesn’t purport to be. It provides a response to a prompt and then lets the expert clinicians, the humans, refine that summary.
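To make that concrete, here is a minimal sketch of how such a shift-report prompt might be assembled from chart data. Every name in it – fetch_chart_events, the event fields, the prompt wording – is a hypothetical stand-in, not a real EHR or vendor API:

```python
# A minimal sketch of assembling a shift-change summary prompt.
# All names here (fetch_chart_events, the event fields) are hypothetical
# stand-ins, not a real EHR or AI vendor API.

from datetime import datetime, timedelta

def fetch_chart_events(patient_id: str, since: datetime) -> list[dict]:
    """Hypothetical: pull recent chart events (intake, meds, orders)."""
    return [
        {"time": "07:15", "type": "intake", "note": "Breakfast 75%, 400 mL water"},
        {"time": "09:40", "type": "medication", "note": "Mild nausea after morphine dose"},
        {"time": "11:00", "type": "order", "note": "MRI scheduled for tomorrow 08:00"},
    ]

def build_shift_report_prompt(patient_id: str, hours: int = 12) -> str:
    since = datetime.now() - timedelta(hours=hours)
    events = fetch_chart_events(patient_id, since)
    lines = "\n".join(f'- {e["time"]} [{e["type"]}] {e["note"]}' for e in events)
    return (
        f"Look back {hours} hours on patient {patient_id} and provide a summary "
        "covering food and water intake, medication reactions, upcoming imaging, "
        "and estimated discharge date. Flag anything missing.\n\n"
        f"Chart events:\n{lines}"
    )

# The prompt would go to a generative model; the nurse then reviews and
# edits the draft, since the summary may not be fully accurate.
print(build_shift_report_prompt("Mr. Jones"))
```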
What about patients? Will AI help us?
When electronic health records were implemented, patients complained that their clinicians stopped looking them in the eye. They were always looking at the computer. Nurses and other clinicians went into the profession to care for patients, right? They don’t want to be entering data constantly. In the future, natural language processing could help with that by capturing the provider’s voice and documenting information about a patient in their record. AI could even suggest a medical intervention that should happen based on that conversation.
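As a rough illustration of that idea, the sketch below turns a stubbed voice transcript into a draft chart entry with a suggested action for the clinician to confirm. The transcribe_audio function and the extraction rule are invented for illustration; real ambient-documentation systems are far more sophisticated:

```python
# A minimal sketch of ambient documentation: turning a transcript into a
# draft chart entry. transcribe_audio is a hypothetical stand-in for a real
# speech-to-text service; the extraction rule is deliberately simplistic.

def transcribe_audio(recording_path: str) -> str:
    """Hypothetical: a real system would call a speech-to-text API here."""
    return "Patient reports pain at 6 out of 10. Plan to increase fluids overnight."

def draft_chart_note(recording_path: str) -> dict:
    transcript = transcribe_audio(recording_path)
    note = {"transcript": transcript, "suggested_actions": []}
    # Toy rule: flag a possible intervention for the clinician to confirm.
    if "increase fluids" in transcript.lower():
        note["suggested_actions"].append("Order IV fluids? (clinician to confirm)")
    return note

print(draft_chart_note("visit_recording.wav"))
```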
Also, companies are already developing cameras that can see something clinical – a wound, for example – and the technology can tell if the wound is becoming infected. Machine learning can then suggest that the nurse call the physician. Or let’s say a patient is deteriorating, and a nurse hasn’t observed that yet, but the monitors and cameras are capturing clinical data. The AI sends an alert to the nurse saying, “Check on this patient.” The AI is not taking away the work of the nurse or the clinician, but it’s augmenting their decisions. Your outcomes as a patient might improve.
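The alerting side of this can be illustrated with a simple early-warning rule. The thresholds and scoring below are made up for illustration and are not clinical guidance; real systems use validated scores and much richer data:

```python
# A simplified illustration of the kind of early-warning logic a monitoring
# system might apply. Thresholds here are illustrative, not clinical guidance.

def deterioration_score(vitals: dict) -> int:
    """Score recent vitals; higher means more concerning."""
    score = 0
    if vitals["heart_rate"] > 110 or vitals["heart_rate"] < 50:
        score += 2
    if vitals["resp_rate"] > 24:
        score += 2
    if vitals["spo2"] < 92:
        score += 3
    if vitals["systolic_bp"] < 90:
        score += 3
    return score

def maybe_alert_nurse(patient_id: str, vitals: dict) -> None:
    if deterioration_score(vitals) >= 4:
        # In a real system this would page the assigned nurse; here we just
        # print. The human still makes the clinical call.
        print(f"ALERT: check on patient {patient_id}")

maybe_alert_nurse("room 12", {"heart_rate": 118, "resp_rate": 26, "spo2": 95, "systolic_bp": 104})
```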
A few vendors are offering this kind of technology to UCSF Health to explore. But we have to be very careful as we decide whether to use it. We have to know what kinds of data are being considered.
Why do you need to understand the kinds of data the AI is using?
There’s a big difference between explainable AI, which is very transparent, and black-box AI. With black-box AI, you’re not sure exactly which data are going into the algorithm. There’s a saying: “Garbage in, garbage out.” If the data going in are not reliable or relevant, the output might not be useful, whether that output is a patient care summary or a suggested medical intervention. That can be very dangerous. If we cannot explain how we made a medical decision as a result of AI, that could be a regulatory or privacy violation, or worse, lead to a poor clinical outcome. With transparent AI, you can lift the hood and see everything the algorithm considered to, say, recommend a certain blood test. But we’re not there yet.
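Here is a toy contrast between the two. The explainable version returns the score plus each feature's contribution; the black-box version returns only a bare number. The features and weights are invented purely to illustrate "lifting the hood":

```python
# A toy contrast between explainable and black-box outputs. The features
# and weights are made up purely to illustrate "lifting the hood."

FEATURES = ["hemoglobin_low", "fever", "recent_surgery"]
WEIGHTS = {"hemoglobin_low": 0.8, "fever": 0.5, "recent_surgery": 0.3}  # illustrative

def explainable_risk(patient: dict) -> tuple[float, dict]:
    """Return both the score and each feature's contribution."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in FEATURES}
    return sum(contributions.values()), contributions

def black_box_risk(patient: dict) -> float:
    """Same inputs, but only a bare number comes out; no way to audit it."""
    return sum(WEIGHTS[f] * patient[f] for f in FEATURES)

patient = {"hemoglobin_low": 1, "fever": 1, "recent_surgery": 0}
score, why = explainable_risk(patient)
print(score, why)   # 1.3 {'hemoglobin_low': 0.8, 'fever': 0.5, 'recent_surgery': 0.0}
print(black_box_risk(patient))  # 1.3, but which data drove it?
```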
What is your biggest concern about AI?
As AI increasingly enables care delivery, we need to make sure that it’s not misused – for example, that it won’t exacerbate inequality. A black-box AI could examine not just clinical data but also social data that contribute to health outcomes. For instance, is the patient housed? Do they have access to transportation? These social data are in the electronic health record. And if they’re brought into the algorithms, we have to make sure the AI isn’t incorporating bias into the output – the clinical recommendations it makes.
Consider an organ transplant list. You, a patient, said three years ago that you were food-insecure. Is an algorithm going to suggest that you be bumped down the list because food insecurity could affect your transplant result? I’m making up this example, but it’s a concern. We can’t let that happen. Ethics are at the heart of the AI conversation in health care.
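One way teams probe for this kind of problem is to re-rank patients with the social feature removed and check whether the order changes. The sketch below is a hypothetical audit with invented data and a deliberately biased weight, echoing the made-up transplant example above:

```python
# A hypothetical audit: does removing a social feature change a patient's
# rank on a prioritization list? All data and weights here are invented,
# including a deliberately biased weight the audit should catch.

WEIGHTS = {"clinical_urgency": 1.0, "food_insecure": -0.4}

def priority(patient: dict, include_social: bool = True) -> float:
    score = WEIGHTS["clinical_urgency"] * patient["clinical_urgency"]
    if include_social:
        score += WEIGHTS["food_insecure"] * patient["food_insecure"]
    return score

patients = [
    {"id": "A", "clinical_urgency": 0.9, "food_insecure": 1},
    {"id": "B", "clinical_urgency": 0.8, "food_insecure": 0},
]

with_social = sorted(patients, key=priority, reverse=True)
without_social = sorted(patients, key=lambda p: priority(p, include_social=False), reverse=True)

if [p["id"] for p in with_social] != [p["id"] for p in without_social]:
    print("Flag: social data is changing the priority order; review for bias.")
```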