Is AI the Cure for Health Care’s Biggest Challenges?

UCSF’s Robert Wachter, a leading voice on the impact of technology on medicine, explores what generative AI can — and can’t yet — do for patient care.

By Brandy Ford, UCSF Magazine

Robert Wachter sits on his couch in his home, resting his arm on the back, with succulent plants in the foreground by the window.
Photo: Christopher Michel

Generative AI is developing rapidly and changing nearly everything we do, from planning vacations to taking notes during meetings or appointments. We spoke to Robert Wachter, MD, chair of the UCSF Department of Medicine and author of the upcoming book A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future, about the possibilities — and pitfalls — of AI in medicine.


What kinds of questions do you tackle in this book?

In A Giant Leap, I aim to answer big questions about artificial intelligence. What will it mean to have a technology this smart? How should patients use AI? How will it interface with the health system? How will technology, clinical medicine, ethics, policy, and finances align to benefit patients? Will there be doctors in the future? I wrote it for a professional audience — people in health care and technology — but I think anyone who cares about their health and health care would find it fascinating.

I basically spent 18 months as a journalist — interviewing patients, doctors, nurses, health system leaders, and technologists to get answers to these questions.

How do different experts view AI’s role in health care?

Several decades ago, most clinicians didn’t understand tech very well, and people in tech were clueless about how medicine works. Tech leaders transformed industry after industry and didn’t think medicine would be any more difficult to disrupt. But efforts to transform medicine largely failed, in part because the tools weren’t very good and in part because no one fully appreciated the challenges of workflow, regulation, payment, policy, and culture. Moreover, early AI leaders in health care focused on diagnosis — the hardest problem and the one with the highest stakes. It didn’t work.

Now, technologists acknowledge that solving the day-to-day problems providers face is really complicated, but it’s crucial to start with these in order to gain trust. They also know the stakes are high, and medicine’s byzantine financial system makes the challenge exceedingly complex. At the same time, clinicians recognize that the health care system is terribly broken and that technology is essential to fixing it.

Can we trust AI to exercise good judgment in medicine, particularly when it comes to high-stakes uses like diagnosis?

The tools are immature but getting better fast. I’ve seen ChatGPT provide spectacular health-related answers, but also some bizarre and potentially dangerous advice. For the foreseeable future, keeping a doctor in the loop is essential to ensuring patient safety.

While patients can and will use AI for do-it-yourself diagnosis, they should be skeptical and have a low threshold for seeing a real live clinician. The tools are not yet fully trustworthy.

AI can learn to do anything we do if we can map out how we do it and give it a million examples to learn from. But what is good judgment? You just know it when you see it. Young physicians today may know more about the latest medicines than I do, but that doesn’t mean they have better judgment. Training AI to have judgment is one of the biggest hurdles to overcome, and I don’t think we’re there yet.

You note that electronic health records are hugely burdensome to providers, requiring doctors to input massive amounts of data into the system during patient appointments and even after working hours. How might AI help?

Doctors are constantly entering details into electronic health records. In fact, many spend two additional hours each night doing documentation or answering patient queries submitted through MyChart, UCSF’s patient portal; we call it “pajama time.” This paperwork has left clinicians overwhelmed by tasks that, to a large degree, don’t benefit patients.

AI is already playing a key role in automating some of these burdensome tasks. For example, every doctor at UCSF Health has access to an AI scribe that enters patient notes into the system during appointments, allowing them to focus on the patient in front of them rather than on a computer screen. In the near future, we'll be rolling out chart summarization tools, which are crucial given that one in five patient charts is longer than Moby Dick. We'll then move to harder but even more impactful uses, like diagnostic assistance and treatment recommendations.

One risk you discuss in the book is that doctors who rely on this technology might erode their own skill set. How do we guard against that?

The risk is real. In a recent study, experienced gastroenterologists used an AI tool for three months to highlight precancerous lesions during routine colonoscopies. When the researchers turned off the AI co-pilot, the physicians’ ability to identify the lesions on their own declined significantly.

To counteract this type of de-skilling, we’ll need to be intentional about how we use AI and develop strategies that keep us vigilant. We may even need to design AI systems that periodically provide wrong answers on purpose, much like the phony phishing emails IT departments send to test employee attentiveness.

People are going to have to do something we aren’t really good at — we’re going to have to be disciplined and say to ourselves, “If I get lazy here, I’m going to start de-skilling, and then I can never get it back.” And, for our trainees, we need to worry not just about de-skilling — we need to prevent “never-skilling.”

On the other hand, how can AI aid in enhancing skills?

I’m excited about AI’s potential to up-skill providers in places where specialists are scarce. But even patients in well-resourced areas often face long wait times.

AI isn’t going to stitch up your wound or take out your appendix — not yet — but it can offer providers the firepower of someone who has a higher level of training. For example, by tapping into AI, a nurse practitioner could provide a level of care traditionally requiring an MD, or a general practitioner could have a specialist’s knowledge at their fingertips. This could lead to faster, safer, and more cost-effective care.

Did AI help you write this book?

I insisted on writing everything myself, but there were times when I felt certain paragraphs were good but could be better. I would plug them into Claude or GPT, both AI writing tools, and tell them to maintain my voice but improve the prose. More than half the time, they did. The tools are pretty remarkable.

Robert Wachter is also the Holly Smith Distinguished Professor of Science and Medicine and the Lynne and Marc Benioff Professor of Hospital Medicine.
