While artificial intelligence chatbots can be valuable tools, it is important to verify any information they provide. In a study published in JAMA Ophthalmology in November 2023, researchers "used the technology behind the artificial intelligence (AI) chatbot ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim" (Naddaf, 2023). The authors paired a large language model with a data-analysis model to compare the outcomes of two surgical procedures. "Data generated by artificial intelligence included some 300 fake participants and showed that one procedure was better than another — a finding inconsistent with true clinical trials, which indicate outcomes from both procedures are similar for up to two years after the operations" (Blum, 2023). To the untrained eye, such fabricated data can be very convincing, which is why it is essential to know where your information comes from.
Blum, K. (2023). Watch out for fake, AI-generated medical information. Association of Health Care Journalists.
Naddaf, M. (2023). ChatGPT generates fake data set to support hypothesis. Nature.