While ChatGPT shows promise in healthcare, it has significant limitations. Its lack of real-world clinical experience hinders its ability to handle complex, nuanced medical situations and deliver consistent diagnostic accuracy, particularly for patients with intricate medical or psychosocial backgrounds. The model's inability to process visual information limits its utility in image-reliant fields such as radiology and pathology. Additionally, ChatGPT lacks emotional intelligence and empathy, which are critical in direct patient care. Source: PMID 39949509
Artificial intelligence (AI) has tremendous potential to advance clinical practice and the delivery of patient care. The NEJM AI Grand Rounds podcast series features informal conversations with a range of experts exploring the deep issues at the intersection of artificial intelligence, machine learning, and medicine. You will learn how AI will change clinical practice and healthcare, how it will affect the patient experience, and about the people driving innovation. Related resources: NEJM AI Grand Rounds (podcast series), NEJM AI Journal, JAMA + AI.
Important Caveat: While AI tools offer exciting possibilities, approach their use with a full understanding of the inherent risks. AI responses may not always be accurate and should be validated against reliable sources.
When using AI tools, we highly recommend adhering to the Samuel Merritt University (SMU) AI Guidelines below.
Responsible Use of AI Tools (SMU)
With the growing use of artificial intelligence (AI) tools such as ChatGPT, Grammarly, or Turnitin, it is essential to use these technologies responsibly and in compliance with privacy laws and university policies.
Key considerations include: