Preparing for the World of Generative AI
ChatGPT and similar systems will increasingly be part of our lives, including in health care. We need guidelines to ensure their ethical deployment.
By John Halamka, M.D., president, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.
Generative AI systems like ChatGPT, a chatbot built on a generative pre-trained transformer, have captured the public’s attention, prompting a flurry of speculation, both positive and negative, about their potential. They have even found their way into popular comic strips. In one Dilbert strip, for instance, the boss asks Wally whether his status report was written by a commercial-grade AI. Wally says yes and thanks him for the compliment, at which point the boss recommends adding some mistakes so it looks more like a human wrote it. While the comic makes light of the situation, college professors, health care executives, and many others have genuine concerns about the potential misuse of these powerful tools.
By one definition, generative AI “refers to artificial intelligence that can generate novel content, rather than simply analyzing or acting on existing data. Generative AI models produce text and images: blog posts, program code, poetry, and artwork. The software uses complex machine learning models to predict the next word based on previous word sequences, or the next image based on words describing previous images.” On the positive side, the technology can be used to improve translations, answer patient questions, and perform sentiment analysis, which might enable health care providers to better understand patients’ positive and negative experiences when interacting with their hospital or office practice. Eventually, these tools may also improve the diagnostic process and accelerate the discovery of new drugs.
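To make the next-word prediction idea concrete, here is a minimal, self-contained Python sketch of the underlying statistical intuition: a toy bigram model that counts which word follows which in a small corpus and then samples continuations one word at a time. The corpus, function names, and sampling scheme are illustrative assumptions, not how ChatGPT actually works; production systems replace the count table with a neural network trained on internet-scale text.

```python
import random
from collections import defaultdict, Counter

# Toy illustration only: a bigram "language model" that predicts the next
# word from counts of word pairs. Real generative AI systems use deep
# neural networks, but the underlying task is the same: given the words
# so far, assign probabilities to what comes next.

corpus = (
    "the patient reports mild chest pain . "
    "the patient reports no fever . "
    "the nurse reports the patient is stable ."
).split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample a likely next word in proportion to observed frequency."""
    counts = bigram_counts[word]
    if not counts:
        return "."  # fall back when the word was never seen mid-sentence
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def generate(start: str, length: int = 8) -> str:
    """Generate text one predicted word at a time."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # e.g. "the patient reports mild chest pain . the nurse"
```

The same basic logic, scaled up enormously, is why these systems produce fluent prose while having no built-in notion of whether a statement is true, a limitation we return to below.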
But there are also questionable uses that need to be addressed. Babson College professor Thomas Davenport and Nitin Mittal, head of U.S. artificial intelligence growth at Deloitte, point out that the technology can create fake documents and images that are much harder to detect than earlier forgeries. Imagine a bad actor or unscrupulous app developer who wants to market software as a medical device (SaMD) but doesn’t want to do the expensive due diligence of collecting and analyzing tens of thousands of medical images to train its algorithm, and instead builds its “solution” on fake images produced by a generative model. If products like this gained traction in the medical community, they would put lives at risk. Or imagine a medical student using a generative AI app to complete assignments meant to assess diagnostic skills or knowledge of pathophysiology. We risk losing the ability to perform such foundational tasks as we come to rely more heavily on generative AI.
Generative AI systems to date have lacked transparency. While the documents they create may read smoothly and their logic may appear sound, they are only as current as their latest training data, which, in the case of ChatGPT, extends only to 2021. In health care, that is a significant flaw, since medical knowledge changes so rapidly. Even more serious is the fact that these digital tools often rely on unfiltered internet and social media content, which has been flooded with misinformation. Stanford University’s 2022 artificial intelligence report found that most generative models are truthful only 25% of the time. Similarly, a report in Stat points out that these tools have “the potential to deepen distrust in medicine by lowering the barrier to creating misinformation.” That fear is well founded: these bots sometimes invent facts and present them in a highly plausible way.
With these concerns in mind, we need to create policies and regulations to govern generative AI and other new AI-based systems. The Coalition for Health AI is providing guidelines for the responsible use of AI in health care. Similarly, the European Union has developed ethical guidelines for trustworthy AI, as have the Center for Democracy & Technology and several other groups. Individuals, health care providers, and vendors also need to weigh the potential consequences of using generative AI tools on a project-by-project basis. Developers should anticipate how generated content might be misused or misinterpreted and take steps to thwart such possibilities. One way to do this is to enlist fact checkers whose responsibility is to look for errors and misstated facts.
ChatGPT takes artificial intelligence into a new realm, one that can create real value and palpable harm. But we don’t believe in artificial intelligence; we believe in augmented intelligence. A statistical analysis of past data that helps humans make better decisions about the future is a wonderful thing. Turning the decision making itself over to a statistical model, however, is fraught with dangerous possibilities.