Impact of Large Language Models in Healthcare

  • Elena Paspel, Master of Science in Engineering (Digital Health) - Tallinn University of Technology, Estonia

Introduction

AI in healthcare simplified

Artificial Intelligence (AI) is revolutionising healthcare. It uses advanced algorithms and software to analyse and interpret medical data in ways that approximate human reasoning. This shift is crucial for better medical research, patient interaction, and clinical decision-making.

Large Language Models (LLMs) are a type of generative AI, meaning AI that creates new content based on its training data. Some LLMs are specially designed for medical use. Trained on a wide range of medical information, including literature and patient records, these models can draft diagnostic reports, treatment suggestions, and educational materials for patients. They can even mimic conversations between doctors and patients. LLMs help healthcare professionals by drawing insights from large datasets, supporting better decision-making, patient care, and research.1 However, using them in healthcare means being careful about their accuracy, keeping patient information private, and considering ethical issues. Simply put, this generative AI works like an expert assistant:2 by learning from existing health information, it creates new, helpful medical content.
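As a simple, hedged illustration of this 'expert assistant' idea, the sketch below shows how a general-purpose LLM might be asked to turn a technical finding into plain-language patient material. It assumes the openai Python client (v1+), and the model name and prompt are illustrative assumptions; it is not a clinical tool.

    # Minimal sketch: asking a general-purpose LLM to rephrase a clinical
    # finding in plain language. Assumes the `openai` Python package (v1+)
    # and an API key in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    finding = "Mild degenerative changes of the lumbar spine without acute fracture."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model; name is illustrative
        messages=[
            {"role": "system",
             "content": "You explain medical findings to patients in plain, "
                        "non-alarming language and advise them to confirm "
                        "details with their clinician."},
            {"role": "user", "content": f"Explain this finding simply: {finding}"},
        ],
    )

    print(response.choices[0].message.content)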

The impact of large language models (LLMs)

LLMs are transforming healthcare. Studies have shown that they help communicate complex medical information clearly between patients and healthcare providers.2 These models, especially ChatGPT by OpenAI, have quickly become popular for their human-like text generation. They represent a major advance in natural language processing (NLP) and have rapidly attracted an enormous global user base, showing the vast potential of and interest in AI-driven health content.1

This article focuses on the role of AI and LLMs in transforming healthcare, aligning with Klarity's commitment to providing accurate, actionable, and individualised health information. It explores the benefits and challenges of using generative AI, like LLMs, their applications, and ethical considerations in healthcare settings.

The revolutionary role of LLMs in healthcare

Overview of LLMs: ChatGPT and beyond

Large Language Models (LLMs) marked a significant leap in artificial intelligence, mastering human-like generation and understanding of language. ChatGPT, released by OpenAI, quickly captured the public's attention with its ability to engage in detailed and nuanced conversations, which has proved valuable across many disciplines, including healthcare.1 Other LLMs, such as Med-PaLM 2, are designed for more precise and context-aware tasks in healthcare, striving for an expert-level grasp of medical questions, a capability crucial for clinical applications.3

Key benefits of LLMs in healthcare

Enhanced medical research and writing: LLMs are reshaping medical research and documentation. From drafting research articles to generating hypotheses, they are becoming integral to the scientific community. Some researchers have started to enlist tools like ChatGPT to contribute to their articles, with some papers even crediting ChatGPT as a co-author, reflecting its growing role in research.1 Med-PaLM 2 takes this a step further, potentially providing expert-level assistance in generating research questions and hypotheses.3 Furthermore, BioBERT and ClinicalBERT, trained on large biomedical and clinical datasets, demonstrate superior performance on biomedical NLP tasks.4
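As a small illustration of how such domain-specific models are used, the sketch below assumes the transformers and torch packages and that the publicly shared dmis-lab/biobert-v1.1 checkpoint is available on the Hugging Face Hub (an assumption about availability, not an endorsement); it simply embeds two clinical sentences with BioBERT and compares them.

    # Minimal sketch: using a biomedical BERT encoder (BioBERT) to embed
    # clinical sentences and compare their similarity. Assumes `transformers`
    # and `torch` are installed and the checkpoint name below exists on the
    # Hugging Face Hub.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
    model = AutoModel.from_pretrained("dmis-lab/biobert-v1.1")

    def embed(sentence: str) -> torch.Tensor:
        """Return the [CLS] embedding for one sentence."""
        inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            outputs = model(**inputs)
        return outputs.last_hidden_state[:, 0, :]  # [CLS] token vector

    a = embed("The patient was started on metformin for type 2 diabetes.")
    b = embed("Metformin therapy was initiated to manage hyperglycaemia.")
    print(torch.cosine_similarity(a, b).item())  # higher = more similar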

Improved patient communication and education: The capability of LLMs to convey complex medical information in understandable terms has vast implications for patient education and communication. For example, LLMs can generate patient education materials, respond to patient inquiries online, and support healthcare professionals in delivering empathetic and informed care. Models like ChatDoctor and Baize-healthcare, fine-tuned on extensive patient-physician conversations, show significant improvements in understanding patients' needs.4 This could lead to better patient understanding of their health conditions and treatment options.5

Support in medical decision-making: LLMs are becoming indispensable tools in clinical settings, where timely and informed decisions are paramount. They can generate concise summaries of patients' medical backgrounds, guide physicians in selecting appropriate radiological investigations, and even enhance the interpretability of computer-aided diagnosis (CAD) systems.4 Models like Med-PaLM 2 are advancing towards providing real-time support in healthcare settings.3 By synthesising the latest research and aiding in the diagnostic process, LLMs can support healthcare professionals in delivering precision care, a capability that is particularly valuable in high-pressure medical environments where timely, accurate information is essential.6
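To make the summarisation idea concrete, here is a minimal sketch that assumes the same openai client as above; the note snippets, prompt wording, and model name are invented for illustration, and any output would need clinician review.

    # Minimal sketch: condensing de-identified note snippets into a brief
    # summary for clinician review. Prompt wording and model name are
    # illustrative assumptions; a clinician must always check the output.
    from openai import OpenAI

    client = OpenAI()

    notes = [
        "2021: diagnosed with hypertension, started on lisinopril.",
        "2022: HbA1c 7.9%, metformin added.",
        "2023: reports intermittent chest tightness on exertion.",
    ]

    prompt = (
        "Summarise the following de-identified history in three bullet "
        "points for a physician, and list open questions:\n" + "\n".join(notes)
    )

    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(summary.choices[0].message.content)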

The multimodal future of LLMs

Introduction to multimodal LLMs (M-LLMs)

The future of Large Language Models (LLMs) lies in their evolution into Multimodal LLMs (M-LLMs). These advanced models are not limited to text; they can understand and generate content across various formats, such as images, videos, sound, and comprehensive documents. This multidimensional capability aligns perfectly with the multifaceted nature of medicine, a field where data comes in diverse forms ranging from radiological images to patient interviews.7 

For example, Med-PaLM and Med-PaLM 2 (M-LLMs), developed by Google, achieved state-of-the-art performance on USMLE-style questions, surpassing ChatGPT's performance.4 The USMLE, or United States Medical Licensing Examination, is a series of exams that medical students and graduates must pass to become licensed physicians in the United States.

The performance of the Med-PaLM models on these questions highlights their potential for understanding and synthesising complex medical information, an essential skill for any healthcare professional. M-LLMs thus represent a significant step forward in AI, potentially offering more holistic patient assessments and fostering interdisciplinary collaboration in healthcare.

Potential applications of M-LLMs in healthcare

Image and video analysis for diagnosis: M-LLMs could revolutionise diagnostic processes by analysing medical imagery such as radiology images, pathology slides, or photos of skin lesions. For example, these models could assist in identifying diseases based on tissue samples or detect retinopathy by analysing retina images, thus playing a crucial role in early and accurate diagnosis.7

Voice recognition for patient monitoring: The ability to handle sound samples opens new doors in patient monitoring. M-LLMs could be employed for tasks such as identifying vocal biomarkers, performing cough analysis, analysing heart and lung sounds for abnormalities, or even helping to diagnose sleep disorders like sleep apnea.7

Integrating diverse medical data for comprehensive care: M-LLMs can analyse and synthesise varied medical data types, enabling comprehensive patient care. By integrating text, images, audio, and video, these models can better understand patient conditions, enhancing the decision-making process and personalised treatment plans.7
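As a rough sketch of what multimodal input could look like in practice, the example below assumes a vision-capable model behind OpenAI's chat API; the image URL, prompt, and model name are placeholders, and no validated diagnostic use is implied.

    # Minimal sketch: combining text and an image in one request to a
    # vision-capable chat model. Model name and image URL are placeholder
    # assumptions; this is not a validated diagnostic workflow.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe notable features of this de-identified "
                         "skin-lesion photo for a clinician to review."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/lesion.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)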

Future scenarios and innovations in M-LLMs

Remote diagnosis support: M-LLMs could assist healthcare providers in remote areas by combining textual data with medical images for comprehensive patient assessment.7

Aid in physical therapy and rehabilitation: By monitoring video content, M-LLMs could track a patient's progress in physical therapy, offering real-time insights and guidance.7

Support in surgical procedures: M-LLMs could provide invaluable assistance during surgical procedures by offering real-time analysis of visual cues, thus enhancing the precision and safety of surgeries.7

Education and research support: M-LLMs could facilitate medical education by analysing textbooks, papers, and teaching materials and aid in research by conducting comprehensive literature reviews.7

In conclusion, the emergence of M-LLMs in healthcare opens up possibilities for advanced patient care, research, and medical education. As these technologies evolve, they promise to bring about transformative changes in the healthcare landscape.

Ethical and practical challenges

AI's role in spreading misinformation

LLMs may generate plausible-sounding but inaccurate content and perpetuate errors found in open internet sources, a problem known as the ‘hallucination effect’.4 The rise of LLMs, particularly ChatGPT, has brought about concerns of an ‘AI-driven infodemic’,1 in which the rapid generation of text by these models could contribute to the spread of misinformation on an unprecedented scale.

An ‘infodemic’ refers to the widespread dissemination of false health information, particularly during crises such as outbreaks and disasters, with effects including harm to mental health, increased vaccine hesitancy, and delayed healthcare. A recent World Health Organization (WHO) review distinguishes misinformation, false or inaccurate content spread without deliberate intent to deceive, from disinformation, false content deliberately created or shared to mislead.

The spread of false information poses a serious risk to public health. During the COVID-19 pandemic, such misinformation affected healthcare decisions and actions. Because LLMs can create realistic, human-like text, they can be used to produce fake news or misleading content, making it harder to separate accurate information from false information.1

Patient privacy and data security

When LLMs, especially multimodal ones, handle sensitive health data, protecting patient privacy and data security is crucial. There are serious risks of leaking confidential patient details with LLM use.4 Thus, using these models in healthcare means following strict data protection and privacy rules to prevent unauthorised access or misuse of patient data.8
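One practical mitigation, sketched naively below, is to mask obvious identifiers before any text leaves a secure environment. The regex patterns are illustrative assumptions and would miss many identifiers; production systems rely on dedicated, validated de-identification tools.

    # Naive sketch: masking a few obvious identifiers before text is sent to
    # an external LLM. Real de-identification needs dedicated, validated
    # tooling; these regexes are illustrative and intentionally incomplete.
    import re

    PATTERNS = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
        "PHONE": r"\+?\d[\d\s().-]{7,}\d",
        "DATE": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    }

    def redact(text: str) -> str:
        """Replace matches of each pattern with a bracketed placeholder."""
        for label, pattern in PATTERNS.items():
            text = re.sub(pattern, f"[{label}]", text)
        return text

    note = "Seen 12/03/2023, contact 020 7946 0000, j.smith@example.com"
    print(redact(note))  # -> "Seen [DATE], contact [PHONE], [EMAIL]"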

Regulatory and policy considerations

The use of LLMs for medical information extraction highlights the need for regulatory frameworks to guide their application, particularly in areas involving patient data and clinical decision-making. As LLMs become more integrated into healthcare systems, establishing guidelines for their ethical and secure use is essential.8

Tackling AI challenges in healthcare

Educating in AI for healthcare workers and patients

As AI, including tools like ChatGPT, becomes more common in healthcare, it's essential to educate medical professionals and patients about it. They need to understand AI's role in healthcare, its risks and benefits, and the need for transparent and accountable use. Better AI literacy will help healthcare workers guide patients, leading to more informed choices and greater trust in AI-supported healthcare.1

Joint efforts in policy-making

Facing AI's challenges in healthcare requires a collaborative approach. The team should include healthcare experts, data scientists, ethicists, and policymakers. Together, they should create rules and standards for AI's safe, effective, and ethical use. The goal is to strike a balance between AI's advantages and risks, ensuring it benefits public health.1

Summary

LLMs and M-LLMs, generative AI models trained on medical data, are changing healthcare. They offer significant benefits, for example enhancing research, educating patients, aiding clinical decisions, and improving communication. Yet there are concerns about misinformation and privacy, so strong safeguards and rules are needed for their use in healthcare. A well-thought-out, collaborative approach is vital; in this way, LLMs and M-LLMs can safely augment human skills and help advance medicine and patient care.

References

  1. De Angelis L, Baglivo F, Arzilli G, Privitera GP, Ferragina P, Tozzi AE, et al. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Frontiers in Public Health [Internet]. 2023 [cited 2024 Jan 5];11. Available from: https://www.frontiersin.org/articles/10.3389/fpubh.2023.1166120
  2. Liu S, McCoy AB, Wright AP, Carew B, Genkins JZ, Huang SS, et al. Leveraging large language models for generating responses to patient messages [Internet]. medRxiv; 2023 [cited 2024 Jan 5]. Available from: https://www.medrxiv.org/content/10.1101/2023.07.14.23292669v1 
  3. Singhal K, Tu T, Gottweis J, Sayres R, Wulczyn E, Hou L, et al. Towards expert-level medical question answering with large language models [Internet]. arXiv; 2023 [cited 2024 Jan 5]. Available from: http://arxiv.org/abs/2305.09617
  4. Yang R, Tan TF, Lu W, Thirunavukarasu AJ, Ting DSW, Liu N. Large language models in health care: Development, applications, and challenges. Health Care Science [Internet]. 2023 Aug [cited 2024 Jan 5];2(4):255–63. Available from: https://onlinelibrary.wiley.com/doi/10.1002/hcs2.61 
  5. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare [Internet]. 2023 Jan [cited 2024 Jan 5];11(6):887. Available from: https://www.mdpi.com/2227-9032/11/6/887
  6. Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine. 2023 Mar 14. Available from: https://doi.org/10.1016/j.ebiom.2023.104512 
  7. Meskó B. The impact of multimodal large language models on health care’s future. Journal of Medical Internet Research [Internet]. 2023 Nov 2 [cited 2024 Jan 5];25(1):e52865. Available from: https://www.jmir.org/2023/1/e52865
  8. Goel A, Gueta A, Gilon O, Liu C, Erell S, Nguyen LH, et al. LLMs accelerate annotation for medical information extraction [Internet]. Synthical; 2023 [cited 2024 Jan 5]. Available from: https://synthical.com/article/b7126c06-be00-44a9-a042-34fc226f934d

