The Integration Of AI In Primary Care: Benefits And Challenges
Published on: September 25, 2024

Cao Hantian

Bachelor of Science, BSc in Medical Biosciences, Imperial College London


Adam Young

Doctor of Medicine, MBBS, UCL

Artificial Intelligence (AI) has infiltrated many aspects of our lives. Some areas have already been dominated by AI, such as search engines and chatbots, while other fields, like healthcare, have been slower to incorporate this new technology.

Introduction

AI is a technology that involves computers tackling problems that could once only be solved by human intelligence. It ranges from completing monotonous tasks to machine learning via sophisticated algorithms. AI excels in the speed and accuracy of repetitive tasks where humans easily tire. By sifting through large amounts of data, AI can also detect patterns that are not readily observed by humans.

Should the healthcare industry increase its use of AI? 

It is undoubtedly a wise decision considering the numerous benefits this powerful tool can provide to both healthcare providers and patients. But since your health is at stake, decisions must be made with caution to minimise risks. Understanding the pros and cons of AI tools can help you see why certain aspects of primary care have already incorporated AI, and whether adopting particular AI devices in primary care systems is sensible.

Benefits of AI integration in primary care

Enhanced efficiency

Do you know what clinicians are doing during work hours? 

In one study, clinicians spent twice the amount of time completing electronic health records compared to time face-to-face with patients on office days, and even some of their personal time had to be spent on desk work.1 This problem could be alleviated by using AI-driven dictation tools.2

If electronic health record systems can integrate AI further, by allowing clinicians to input the minimum amount of information required to generate the complete record, your clinicians can spend more time taking care of your health instead of manually typing notes. 

Improved diagnostic accuracy

How do your clinicians diagnose any health issues? 

Typically they use your medical history (as well as blood results and scans) to decide on the most likely diagnosis. In some cases, electronic health records are equipped with rule-based systems that can assist with diagnostic decisions.3

Nonetheless, compared to human practitioners and rule-based systems, AI – specifically predictive machine learning models – can interpret subtle patterns in large amounts of data and update itself as new data become available. AI systems can reach an accuracy of around 95% for a wide range of diseases.4 Systems such as CoDoC can reduce the undesirable misdiagnosis rate,5 but in many cases more prospective studies are needed to validate the use of AI in clinical practice.

Although AI may be influenced by bias in its input data, AI systems are not subject to the innate cognitive biases that affect the individual human mind.6

Personalised treatment plans

As medical research evolves, diseases are often further classified into subtypes. This is especially true in the field of cancer research. Although cancers share certain common characteristics – like genetic mutations that enable tumours to spread – each cancer has a unique genetic signature. Suitable treatments therefore differ for each individual case. Using specific information about each case to devise an individualised treatment plan is known as personalised medicine.

Because personalised medicine requires processing and combining large sets of historical data, including DNA sequences and follow-up checks often spanning many years, AI systems are especially well equipped for this task. When trained on an enormous volume of such data, AI models can generate more effective treatment plans, tailored to individual patients, than those based on clinician knowledge and experience alone.7

Furthermore, many tech companies, including Apple, are working on health-tracking tools that record aspects of your health status, such as sleep duration and exercise levels. These can provide valuable information to clinicians, allowing them to make more detailed suggestions regarding lifestyle and treatment. Fed into AI systems, these data can also facilitate screening and early detection of diseases like cancer, a goal Klarity intends to achieve.

Challenges of AI integration in primary care

Ethical issues

Biases of training data

Although AI models per se are free from cognitive biases, they are still prone to biases in their human-assembled training data. A phenomenon known as AI bias occurs when low representation of minority or disadvantaged groups leads to poor accuracy of output data. For example, low-income groups have reduced access to good healthcare, so less accurate data about the diagnosis and treatment of disorders such as malnutrition is available for training AI models.8 This can further aggravate the inequalities already experienced by these groups in healthcare.

Lack of interpretability

Despite their high accuracy, many sophisticated AI models are hard to interpret. For example, deep learning simulates the human learning process using layers of neural networks. Trying to work out how the networks arrive at a prediction is like asking Michael Jordan how to make successful shots: he may teach you techniques, but he cannot simply transfer his shooting skills to your muscles. Similarly, a “black-box” model, a common hurdle with AI technology, does not explain why certain inputs lead to specific outputs. This poses problems, especially in primary care, where diseases have to be diagnosed and treated with medical evidence and moral consideration.9 People cannot easily trust a virtual doctor that gives direct answers without explanations, and clinicians may likewise be frustrated when the reasoning is ambiguous.

Legal issues

Data protection

Understandably, you do not want your personal health records to be shared with AI companies for training models without your consent. Even if software companies are responsible when using the data, there is a risk of data being obtained by other companies, who want to use it for profit.

Governments globally are implementing new policies to tackle this problem and help protect your personal data. Because governments approach this differently, however, variations between data protection laws may be exploited to move data across countries, including in healthcare settings.10

Liability in case of errors

An accuracy of around 95% is still imperfect. Who should be held responsible when AI systems make wrong decisions that involve immense costs to patients? Clinicians? But they haven’t made the decisions. AI companies? But they may have warned hospitals of potential inaccuracies. Hospitals? However, they may have informed clinicians of possible flaws. Robust liability frameworks for such cases should be developed in the future. 

Practical issues

The healthcare sector is concerned about the integration of AI, including potential job losses and a lack of empathy in patient care.2 The IT infrastructure in many hospitals is also outdated, necessitating new computers and operating systems, which can be costly to hospitals.

Strategies for overcoming challenges

Improving the model training process

AI bias can be addressed by improving practices in the training process, such as reviewing the quality of training data, choosing models that take data deficits into account, and collecting more data from underrepresented groups. Regarding interpretability, methods have been developed to explain otherwise non-interpretable models.9

Strengthening data security measures

Initiatives have been taken against international healthcare data breaches, one of the most promising being federated learning. Instead of aggregating data from around the world and using them to train a global model, models are trained locally and sent to a central manager, which aggregates them into a global model – without sensitive data ever crossing borders.11 Using this secure approach, data from more countries can contribute to the final model, increasing the accuracy of AI detectors for conditions such as cardiovascular disease and diabetes.12
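The local-training-then-aggregation loop described above can be sketched as a single federated-averaging round. The "hospital" datasets and the local training step below are illustrative placeholders; real deployments, as in reference 11, run many rounds with genuine model training and added security measures.

```python
def local_update(weights, records, lr=0.1):
    # Placeholder for local training: nudge each weight toward the
    # mean of the corresponding (synthetic) feature column.
    n = len(records)
    return [w + lr * (sum(r[i] for r in records) / n - w)
            for i, w in enumerate(weights)]

def federated_average(weight_sets, sizes):
    # FedAvg-style aggregation: weighted average of locally trained
    # models, weighting each site by its number of records.
    total = sum(sizes)
    return [sum(ws[i] * s for ws, s in zip(weight_sets, sizes)) / total
            for i in range(len(weight_sets[0]))]

# Two hypothetical hospitals; only model weights leave each site,
# never the patient-level records themselves.
hospital_a = [[0.2, 1.0], [0.4, 0.8]]
hospital_b = [[0.9, 0.1]]

global_weights = [0.0, 0.0]
local_models = [local_update(global_weights, site)
                for site in (hospital_a, hospital_b)]
global_weights = federated_average(local_models,
                                   [len(hospital_a), len(hospital_b)])
print(global_weights)
```

The key privacy property is visible in the code: `federated_average` only ever sees weights, so the central manager can improve the shared model without any record leaving its hospital.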

Providing education and training for healthcare providers

Healthcare is traditionally believed to be unrelated to computer science and AI. Changing this perception and adding introductory AI modules to medical education can improve the willingness of future practitioners to embrace AI tools in their work. Further, policies can be enacted to ensure AI companies offer sufficient information to healthcare providers regarding the safe use of their tools so that patients are minimally affected by AI mistakes.13

Summary

The power of AI should not be underestimated. This is especially the case in healthcare, where diagnostic accuracy, treatment effectiveness, and work efficiency can all be bolstered by AI. Of course, this change is impeded by a number of ethical, legal, and practical issues. To balance the risks and benefits of integrating AI, tech companies, hospitals, and governments will need to collaborate to ensure patients can safely enjoy the benefits of AI in primary care.

References

  1. Sinsky C, Colligan L, Li L, Prgomet M, Reynolds S, Goeders L, et al. Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties. Ann Intern Med [Internet]. 2016 [cited 2024 Aug 12]; 165(11):753. Available from: http://annals.org/article.aspx?doi=10.7326/M16-0961.
  2. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care [Internet]. 2019 [cited 2024 Aug 12]; 8(7):2328–31. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6691444/.
  3. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J [Internet]. 2019 [cited 2024 Aug 13]; 6(2):94–8. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/.
  4. Ghaffar Nia N, Kaplanoglu E, Nasab A. Evaluation of artificial intelligence techniques in disease diagnosis and prediction. Discover Artificial Intelligence [Internet]. 2023 [cited 2024 Aug 13]; 3(1):5. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9885935/.
  5. Dvijotham K (Dj), Winkens J, Barsbey M, Ghaisas S, Stanforth R, Pawlowski N, et al. Enhancing the reliability and accuracy of AI-enabled diagnosis via complementarity-driven deferral to clinicians. Nat Med [Internet]. 2023 [cited 2024 Aug 13]; 29(7):1814–20. Available from: https://www.nature.com/articles/s41591-023-02437-x.
  6. Webster CS, Taylor S, Weller JM. Cognitive biases in diagnosis and decision making during anaesthesia and intensive care. BJA Educ [Internet]. 2021 [cited 2024 Aug 13]; 21(11):420–5. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8520040/.
  7. Johnson KB, Wei W, Weeraratne D, Frisse ME, Misulis K, Rhee K, et al. Precision Medicine, AI, and the Future of Personalized Health Care. Clin Transl Sci [Internet]. 2021 [cited 2024 Aug 13]; 14(1):86–93. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7877825/.
  8. Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med. 2023; 6(1):113.
  9. Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy [Internet]. 2021 [cited 2024 Aug 13]; 23(1):18. Available from: https://www.mdpi.com/1099-4300/23/1/18.
  10. Yadav N, Pandey S, Gupta A, Dudani P, Gupta S, Rangarajan K. Data Privacy in Healthcare: In the Era of Artificial Intelligence. Indian Dermatol Online J [Internet]. 2023 [cited 2024 Aug 12]; 14(6):788–92. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10718098/.
  11. McMahan B, Moore E, Ramage D, Hampson S, Arcas BA y. Communication-Efficient Learning of Deep Networks from Decentralized Data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics [Internet]. PMLR; 2017 [cited 2024 Aug 13]. Available from: https://proceedings.mlr.press/v54/mcmahan17a.html.
  12. Moshawrab M, Adda M, Bouzouane A, Ibrahim H, Raad A. Reviewing Federated Machine Learning and Its Use in Diseases Prediction. Sensors (Basel) [Internet]. 2023 [cited 2024 Aug 13]; 23(4):2112. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9958993/.
  13. Policy Brief Understanding Liability Risk from Healthcare AI | Stanford HAI [Internet]. [cited 2024 Aug 13]. Available from: https://hai.stanford.edu/policy-brief-understanding-liability-risk-healthcare-ai.

Cao Hantian

Bachelor of Science, BSc in Medical Biosciences, Imperial College London

Hantian is pursuing higher education in biomedical research at its intersection with computer science. He has extensive exposure to molecular and cellular research, with an emphasis on cancer, neuroscience, and stem cells. He is also actively engaged in computational analysis of biological data, dedicated to unravelling the molecular and cellular patterns underlying human diseases. In his spare time, he has worked for several years as an English tutor for Chinese students.
