
The AI Syllabus

This guide is designed to provide access to the concepts and resources comprising the AI Syllabus by Anne Kingsley and Emily Moss.

Healthcare and Medicine

Critical AI Literacies: 

  • Investigate the impact of AI on the medical professional and/or patient experience. 
  • Engage with the discourse on AI use in healthcare.

On this page you will find multimedia sources that reflect and explore how AI is used in healthcare and medicine, including research, diagnosis, and patient care, as well as public perception of these uses. This page also includes key concepts, activities, and assignments to build understanding of and critically engage with AI through the lens of healthcare and medicine. 

Sources


 

ABC News. (2023, August 15). AI researcher breaks down tech’s social issues in new book ‘More Than a Glitch’ [Video]. YouTube. 

  • In this 5-minute interview, Broussard (author of More Than a Glitch) speaks in more detail about her experience with AI and her breast cancer diagnosis, including the role the machine played in conjunction with her doctors, the possibilities for AI in healthcare in the future, and the concern that integrating technology that is not neutral may reinforce or introduce new biases into patient care. [With thanks to student researcher Natasha DeCoud for this contribution.] #Practical

 

Guo, L.N., Lee, M.S., Kassamali, B., Mita, C., & Nambudiri, V.E. (2021). Bias in, bias out: Underreporting and underrepresentation of diverse skin types in machine learning research for skin cancer detection—a scoping review. Journal of the American Academy of Dermatology, 87(1), 157-159.

  • This short-form journal article reviews datasets used to classify malignant or premalignant skin lesions, showing how machine learning can introduce racial bias into medical diagnostic tools. #Practical

 

McCradden, M.D., Joshi, S., Anderson, J.A., Mazwi, M., Goldenberg, A., & Zlotnick Shaul, R. (2020, June 25). Patient safety and quality improvement: Ethical principles for a regulatory approach to bias in healthcare machine learning. Journal of the American Medical Informatics Association, 27(12), 2024-2027. 

  • In this academic journal article, the authors explore how machine learning (ML) bias can negatively impact automated decision-making and, ultimately, the care of patients in already vulnerable groups, exacerbating inequities in healthcare. They advocate for an ethical principles framework specific to ML and healthcare, as well as increased regulation of the use of ML in medicine. [With thanks to student researcher Natasha DeCoud for this contribution.] #Practical #Philosophical

 

How Americans view use of AI in healthcare. (2023, February 22). Pew Research Center. 

  • This report explores public perception of the relationship between artificial intelligence and healthcare. #Practical

 

Lohr, S. (2023, June 26). AI may someday work medical miracles. For now, it helps do paperwork. The New York Times. 

  • This article makes a distinction between algorithmic AI and generative AI as applied to medical care and suggests the latter technology (including ChatGPT) could reduce administrative workload and relieve burnout for healthcare providers. #Practical

 

Majovsky, M., Cerny, M., Kasal, M., Komarc, M., & Netuka, D. (2023). Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora’s box has been opened. Journal of Medical Internet Research, 25, e46924.

  • This open-access journal article describes how the authors used ChatGPT to generate a fraudulent but convincing scientific article in the field of neurosurgery. #Practical

 

NBC News. (2023, February 4). High tech hospital uses artificial intelligence in patient care [Video]. YouTube. 

  • This 2-minute news story describes how the University of Florida Health Center is using AI to monitor patients. #Practical

 

NEJM AI Grand Rounds [Audio podcast]. (2023–present).

  • Provided by the New England Journal of Medicine, this podcast explores the intersection of artificial intelligence and healthcare. Sample episode topics include AI and cognitive therapy, cardiology, radiology, and LLMs and medical data. Updated monthly. #Practical #Philosophical

 

Building Critical AI Literacies

KNOWING: 

AI is revolutionizing healthcare by improving diagnostics, personalizing treatment plans, and enhancing drug discovery through machine learning and deep learning models. Current applications include AI-driven imaging analysis, predictive analytics for disease detection, and virtual health assistants, but concerns remain about bias in training data, which can lead to disparities in patient outcomes.
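To make the "bias in, bias out" concern above concrete, here is a short, purely illustrative Python sketch (added to this guide as an example, not drawn from any of the sources listed; it assumes the numpy and scikit-learn libraries and uses entirely made-up, simulated data rather than real patient or image data). It trains a simple diagnostic-style classifier on data that heavily underrepresents one patient group and then compares accuracy across groups.

# Illustrative sketch only: simulated, hypothetical data, not a real medical dataset.
# Shows how a classifier trained on data that underrepresents one group
# can perform worse for that group ("bias in, bias out").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_group(n, x0_weight):
    # Three made-up "image features" per case and a binary label (1 = malignant).
    # The relationship between the first feature and the outcome differs
    # between groups via x0_weight.
    X = rng.normal(size=(n, 3))
    logits = x0_weight * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training data: group A is heavily overrepresented (950 vs. 50 cases).
X_a, y_a = simulate_group(950, x0_weight=1.5)
X_b, y_b = simulate_group(50, x0_weight=-1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluation: equal-sized held-out sets for each group.
X_a_test, y_a_test = simulate_group(1000, x0_weight=1.5)
X_b_test, y_b_test = simulate_group(1000, x0_weight=-1.5)
print("Accuracy for well-represented group A:", model.score(X_a_test, y_a_test))
print("Accuracy for underrepresented group B:", model.score(X_b_test, y_b_test))

The specific numbers are meaningless; the takeaway is that a model's "overall" accuracy is dominated by the majority group in its training data, which is the same underrepresentation problem Guo et al. (2021) describe for skin cancer detection datasets.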

To critically engage with AI and healthcare, it's important to: 

  • Analyze how AI is used in decision-making, including its limitations and potential biases. 
  • Evaluate the ethical implications of AI in patient care, such as privacy, consent, and disparities in access to technology-driven treatments. 
  • Examine the role of AI in medical research and diagnosis, considering issues of accuracy, transparency, and human oversight. 

DOING: 

  • Google a particular hot topic or major question in the use of AI in health and/or medicine. What AI search results appear when you first google this topic? Then use specific generative AI tools (ChatGPT, Claude, etc.) to evaluate the topic and the major questions the field is asking. What do you notice in comparison?
  • How will AI impact work and labor in the medical field? Which work tasks (for example, medical billing) will AI affect? What is the conversation on work and AI in the medical field? What are the implications of using AI in this work?
  • Mapping the conversation: Find a specific conversation on AI use/interventions in Healthcare and Medicine. What opportunities exist? What are the limitations and concerns? What names, tools, and activities are cited in this conversation? How would you synthesize them? Stepping back from your map, what is missing from the conversation? 
  • What models exist for AI and Medical Ethics in terms of data bias, as well as reprogramming and retraining? What do you find when you research the conversation on rethinking data algorithms in medicine or medical diagnosis/intervention? What do you find when you ask AI to propose a model for ethical data training in medicine and healthcare? 
  • What is the conversation (opportunities and limitations) around AI and Medical Advocacy? What can you create, with or without AI, that can help patients advocate for themselves for better medical care? 
  • Can AI empower patients with more agency in (and understanding of) their healthcare needs? What are the opportunities and limitations? Can AI be a useful medical partner that helps patients articulate or understand their needs? Interview people with diverse perspectives (from patients to medical practitioners and researchers) to cover a wide range of input.