
Opinion: AI chatbots can improve health — but only with help from humans

By Smisha Agarwal and Rose Weeks

Chatbots like OpenAI’s ChatGPT can hold fun conversations across many topics. But when it comes to providing people with accurate health information, they need help from humans.

As tech enthusiasts who research and develop AI-driven chatbots in health care, we are optimistic about the role these agents will play in providing consumer-centered health information. But they must be developed with specific uses in mind and be built with precautions to safeguard their users.

When we asked ChatGPT in January 2023 whether children under the age of 12 should receive Covid-19 vaccines, the response was “no.” It also suggested that an older person rest up to address a Covid-19 infection, but it did not mention Paxlovid, the recommended therapy. Such guidance may have been accurate when the algorithm was first trained on then-accepted knowledge, but it had not been updated since.

When Covid-19 vaccines were first being rolled out, we asked young people in U.S. cities on the East Coast what would make them want to use a chatbot to get information about Covid-19. They told us that chatbots felt easier and faster than web searches because they gave a condensed, focused answer instantly. In contrast, searching for that information on the web might retrieve millions of results, and searches could quickly spiral into increasingly alarming topics: a persistent cough turns into cancer within a one-page scroll. Our respondents also disliked the targeted ads they received after a health-related web search.

Chatbots also offered the impression of anonymity, presenting themselves as a safe space where any question, even a scary one, could be asked without creating an obvious digital trail. Further, the bots’ frequently anodyne personas seemed nonjudgmental.

In the second year of the pandemic, we developed the Vaccine Information Resource Assistant (VIRA) chatbot to answer questions people had about Covid-19 vaccines. Like ChatGPT, VIRA uses natural language processing. But we review VIRA’s programming weekly and update it as needed, so the chatbot can respond with current health information. Its engagement with users has helped facilitate judgment-free conversations about Covid-19 vaccines.

We also continue to monitor the questions people ask VIRA (all questions are anonymous in our data, with IP addresses stripped out) as well as the chatbot’s answers, so we can improve its responses, identify and counter emerging misinformation, and spot emerging community concerns.

Several health departments have adopted VIRA to help respond to their constituents with accurate, up-to-date information about Covid-19 vaccines.

Our experience is part of a growing body of evidence supporting the use of chatbots to inform, support, diagnose, and even offer therapy. These agents are increasingly being used to address anxiety, depression, and substance use between provider visits or even in the absence of any clinical intervention.

Further, chatbots like Wysa and Woebot have shown promising early results in creating human-like bonds with users in the delivery of cognitive behavioral therapy and reducing self-reported measures of depression. Planned Parenthood has a chatbot offering vetted, confidential guidance on sexual health, and several other chatbots now offer anonymous guidance on abortion care.

Large health organizations understand the value of this type of technology, with the market for health chatbots estimated to reach $1 billion by 2032. With growing constraints on labor and unmanageably high and rising volumes of patient messages to providers, chatbots offer a pressure valve. Remarkable advances in AI capability over the past decade pave the way for deeper conversations and, in turn, greater uptake.

While ChatGPT can write a poem, plan a party, or even generate a school essay in seconds, such chatbots cannot be safely deployed to answer health questions, and conversations around health topics should be made off-limits to them to avoid doing harm. ChatGPT provides disclaimers and encourages users to consult physicians, but that verbiage is wrapped around persuasive, comprehensive responses to health queries. It is already programmed to avoid commenting on politics and profanity; health care should be no different.

The appetite for risk in health care is low, with good reason, as potential negative consequences can be grave. Without necessary checks and balances, skeptics of chatbots have plenty of reasons to be concerned.

Yet the science around how chatbots operate, and how they can be used in health care, is evolving, brick by brick, experiment by experiment. They may someday offer unparalleled opportunities to support natural human questioning. In ancient Greece, Hippocrates cautioned revered, god-like healers to do no harm. AI tech must heed this call today.

Smisha Agarwal is the director of the Center for Global Digital Health Innovations and an assistant professor of digital health in the Department of International Health at the Johns Hopkins Bloomberg School of Public Health. Rose Weeks is a research associate at the International Vaccine Access Center in the Department of International Health at the Johns Hopkins Bloomberg School of Public Health. The opinions of the authors, who lead the team that developed and launched the Vaccine Information Resource Assistant, do not necessarily reflect those of their employer.