

6 Potential Medical Use Cases For ChatGPT

The internet is buzzing with news about ChatGPT and how everyone uses it to write love poems, homework, summaries of articles, or Python code. ChatGPT is the latest generation of large language models, designed to generate text in response to user input. There is a lot of discussion regarding its potential use in medicine, so let’s see what you can expect from it and what you should not use it for – at least in its current form.

Unlike the text-to-image tool Midjourney, which we introduced earlier and which is a GAN (generative adversarial network, explained here) algorithm, ChatGPT is a different kind of algorithm – although there is a lot of misinformation about it if you google the topic. So for the seasoned readers: ChatGPT is a transformer, and the acronym GPT stands for Generative Pre-trained Transformer.


Anyway, ChatGPT is considered best-in-class at the moment and can be used for a wide range of tasks, but it has some limitations.

Starting with the no-gos

In general, I would certainly not use it in any way that could potentially harm patients, such as establishing a diagnosis, where the slightest error could have dire consequences.

ChatGPT has flaws similar to the algorithms we used in this article about AI text generators. That said, based on what I have seen after using it on and off for a while, it is improving fast at recognising the limits of its own capabilities, and it more and more often comes up with warnings if you ask something too scientific or too specific.

When I first started playing with it, it was prone to produce convincing-looking fabrications, such as when it came up with three non-existent articles proving the benefits of echocardiograms in systemic sclerosis. The output looked legitimate, but if you try to find the referenced articles in the scientific literature, you will find none, as many commenters pointed out. They are simply things the algorithm fabricated.

If you, as a doctor, send such a letter to the insurance company and the diagnostic test gets rejected because it cites non-existent literature, you have just lost valuable time for your patient. On the other hand, the letter itself is fine, and if you include real references, it can be sent and it can still save you some time – if you don’t already have such templates ready.
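Before such a letter leaves your desk, it is worth checking every reference against the actual literature. As a minimal sketch of what such a check could look like – assuming the `requests` library and the public Crossref API; the cited title and the deliberately crude matching rule are purely illustrative, not a validated method:

```python
# Minimal sketch: check whether a cited article title can actually be found
# in the literature before relying on it. Uses the public Crossref API via
# the `requests` library; the matching logic is an illustrative assumption.
import requests


def reference_seems_real(title: str) -> bool:
    """Return True if Crossref finds a work whose title closely matches `title`."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    for item in items:
        found_title = " ".join(item.get("title", [""])).lower()
        # Very rough check: does the indexed title contain the cited one, or vice versa?
        if title.lower() in found_title or found_title in title.lower():
            return True
    return False


if __name__ == "__main__":
    cited = "Echocardiographic screening in systemic sclerosis"  # hypothetical citation
    print(cited, "->", "found" if reference_seems_real(cited) else "not found in Crossref")
```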

The algorithm will follow almost whatever guidance it gets from you. I asked it to find studies explaining to the insurance company why my knee bursitis patient would recover better if they had a brain CT, and I was again provided with three (non-existent) articles proving my point.*

But to be fair to ChatGPT, it does better at fact-checking than other AI text generators. I also asked it to write about the benefits of organic food over chemotherapy in breast cancer treatment, and it told me that there seems to be no literature proving the benefits of organic food in cancer treatment, that chemotherapy is a well-established treatment, and that I should consult my oncologist. This is a HUGE improvement!

With a little more questioning, it also told me that it doesn’t have the ability to browse the internet or access scientific literature, and can only provide information on topics it was trained on. I tried to find the line between the algorithm refusing a request and fabricating the referenced literature, but I am not sure what determines the outcome.

[*UPDATE: I am not sure whether the algorithm was improved or I lost my credit due to the large number of ridiculous requests I made while testing it, but ChatGPT recently started refusing to include scientific(-looking) articles in the text it creates, and on a repeated attempt at the brain CT letter, it pointed out that brain CTs are not typically used to diagnose knee conditions. Well done, ChatGPT!]

Let’s see the use cases now

Now that we see the limitations, let’s take a look at the use cases where it can actually serve healthcare professionals.

1) Summarizing medical records based on patients’ family history, symptoms and lab results, among others

The algorithm does fairly well at interpreting English-language texts, so I trust it would do a good job of providing summaries based on existing medical records.
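As a rough illustration of how such a summary could be wired up programmatically – assuming OpenAI’s official Python client and chat completions API, which the article itself does not use; the model name, prompt and the entirely fictional record below are placeholders:

```python
# Minimal sketch: summarizing a (mock) medical record with a chat model.
# Assumes the official `openai` Python client (v1+) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

mock_record = """
Family history: father with type 2 diabetes.
Symptoms: polyuria, fatigue for 3 weeks.
Labs: fasting glucose 9.1 mmol/L, HbA1c 7.4%.
"""  # entirely fictional example data

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Summarize the medical record below in 3 bullet points for a clinician."},
        {"role": "user", "content": mock_record},
    ],
)

print(response.choices[0].message.content)
```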

2) Summarizing and analyzing research papers: list keywords in an abstract or summarize a long and detailed research paper for physicians not working in that field of interest

If it were given access to a full-text version of an article, I trust it would do a good job of grasping the main points. However, I was not able to test this function, as the algorithm is not able to follow URLs, and it refused my request to summarize the text when I copied it into the interface. But I’m sure it is an area the developers are focusing on.
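If and when full texts can be fed in, one common workaround for length limits is to split the paper into chunks, summarize each chunk, and then summarize the summaries. A minimal sketch of that idea, under the same assumptions as above (official OpenAI Python client; chunk size, model name and prompts are arbitrary illustrative choices):

```python
# Minimal sketch: "summarize the summaries" for a paper too long to paste in
# at once. Chunk size, model name and prompts are illustrative choices only.
from openai import OpenAI

client = OpenAI()
CHUNK_CHARS = 8000  # rough character budget per request


def ask(prompt: str, text: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": prompt},
                  {"role": "user", "content": text}],
    )
    return reply.choices[0].message.content


def summarize_paper(full_text: str) -> str:
    chunks = [full_text[i:i + CHUNK_CHARS] for i in range(0, len(full_text), CHUNK_CHARS)]
    partial = [ask("Summarize this section of a research paper in 5 sentences.", c)
               for c in chunks]
    return ask("Combine these section summaries into one summary for a non-specialist physician.",
               "\n\n".join(partial))
```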

3) Writing general texts, like emails, a book blurb or anything else that saves time and that you can further customize to your needs and personal style

Without a doubt, it does great in such tasks, much better than any of the algorithms I tried earlier. It writes excellent emails on all kinds of topics (requesting a refund for an item that arrived broken, inviting neighbours over for dinner, or suggesting that parents pool resources to buy a nice Christmas gift for the kids’ main teacher). Along the same lines, it will do great in general communication with patients.

4) Answering broad questions

ChatGPT seems to do quite well in answering broad general questions in many areas, from ‘What is diabetes?’ through ‘How to ensure financial safety in old age?’ to ‘Why are patients not involved in healthcare?’. However, the answers you receive will be very general. You will get a correct summary of the basics, but will most likely not succeed in learning about the specifics.

The algorithm explains well what an artificial pancreas is and how it works, but can’t advise you on specific technical details. Similarly, it gives a great general summary of the goal of my favourite board game, but it can’t answer a simple yes/no question regarding a specific – very obvious – game situation.

5) Working as a chatbot to answer FAQ-like questions for the doctor’s office, or handle appointments

ChatGPT could also be very handy as your chatbot, answering frequently asked questions, given that it is sufficiently trained on the relevant sets of answers.

It could also be used to schedule appointments, manage bookings and carry out similar tasks for a surgery or a hospital unit.
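As a rough sketch of how such a practice chatbot could be kept grounded in the office’s own approved answers rather than the model’s general knowledge – again assuming the official OpenAI Python client; the FAQ content, model name and instructions are illustrative placeholders:

```python
# Minimal sketch: an FAQ bot for a doctor's office that answers only from a
# supplied list of approved answers and otherwise asks the patient to call.
# FAQ content, model name and instructions are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

FAQ = """
Q: What are the opening hours? A: Mon-Fri 8:00-16:00.
Q: Do I need a referral for an appointment? A: No, you can book directly.
Q: How do I get a repeat prescription? A: Use the patient portal or call reception.
"""


def answer_patient(question: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer only from the FAQ below. If the answer is not "
                        "in the FAQ, say so and ask the patient to call the office.\n" + FAQ},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content


print(answer_patient("Can I book an appointment without a referral?"))
```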

6) Having fun with it to build a relationship with AI

Similarly to Midjourney, ChatGPT is also a great tool to get used to working with AI, build a relationship with it, and understand its capabilities and limitations.

Artificial intelligence algorithms will arrive in healthcare and all medical specialties; there is no question about it. AI will fulfil a number of roles, and its involvement will vary between medical fields, but no field will be left untouched. Getting used to it before you need to work with it will make this transition easier and more fun.