WHO Urges Oversight as Rapid AI Adoption Poses Risks of Medical Errors and Patient Harm

The World Health Organization (WHO) has issued a warning about the rapid growth of artificial intelligence (AI) tools in healthcare, emphasizing the need for caution to protect patient safety. The organization expressed concern that the excitement surrounding platforms such as ChatGPT, Bard, BERT, and others, which have the potential to improve patient health, is leading developers and others to disregard the prudence typically applied to new technologies.

The WHO cautioned that the hasty adoption of untested AI systems could lead to errors by healthcare workers, harm patients, erode trust in AI, and undermine the long-term benefits of such technologies worldwide. Many AI systems in healthcare employ large language models (LLMs) to mimic human understanding, processing, and communication. The WHO stressed the importance of closely examining the risks associated with the meteoric rise of these technologies, many of which are still experimental, because they threaten key values in healthcare and scientific research, including transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.

The risks of using LLMs in healthcare include bias in the data used to train them, which can produce misleading or inaccurate output: responses that appear authoritative or plausible but are in fact incorrect. The WHO emphasized the need for rigorous oversight to ensure that these technologies are used safely, effectively, and ethically, whether to improve access to health information, serve as decision-support tools, or enhance diagnostic capacity in under-resourced settings, with the aim of protecting people's health and reducing inequities.

The WHO proposed that these concerns be addressed, and clear evidence of benefit gathered, before LLMs are widely implemented in routine healthcare and medicine. The warnings echo statements by U.S. Food and Drug Administration Commissioner Robert Califf, who emphasized the need for nimble regulation of large language models so that the healthcare system is not overwhelmed by poorly understood technologies.

Medical Technology and AI

Sam Altman, CEO of OpenAI, likewise stressed the importance of regulation in his testimony before a Senate subcommittee, stating that the consequences could be significant if AI technology goes awry and that OpenAI is committed to working with the government to prevent such outcomes.

Noteworthy recent examples of AI in medical technology include Smith+Nephew's planning and data-visualization software for robotic surgery and BD's software for detecting methicillin-resistant Staphylococcus aureus (MRSA).
