Comment & Response
February 26, 2024

Potential of Large Language Models as Tools Against Medical Disinformation

Author Affiliations
  • 1Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
  • 2Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
JAMA Intern Med. 2024;184(4):450. doi:10.1001/jamainternmed.2024.0020

To the Editor Menz et al1 highlight a concerning aspect of large language models (LLMs), emphasizing the urgent need for attention and action to prevent the rapid, large-scale generation of convincing but false medical information. Although we agree with them, it is crucial to recognize that the spread of medical disinformation on the internet predates the advent of LLMs.2 Even if these artificial intelligence (AI) tools restrict their capacity to generate such content, malicious actors can still resort to other means of creating and spreading falsehoods. Therefore, a more pressing need may be to find constructive solutions that empower internet users to discern the reliability of online health information. Although the authors point out the potential negative effects of LLMs, we should not overlook the equal potential of well-trained LLMs themselves as powerful tools for combating health misinformation.3
