The AI Syllabus

This guide is designed to provide access to the concepts and resources comprising the AI Syllabus by Anne Kingsley and Emily Moss.

What is AI?

Critical AI Literacies: 

  • Know the basic principles of AI including what it is, how it works, and who is making it.
  • Understand the fundamental differences among AI types, including how they are trained. 

On this page you will find multimedia sources explaining and demonstrating foundational concepts related to artificial intelligence, including machine learning, large language models, neural networks, natural language processing, and AI outputs, as well as key concepts, activities, and assignments to build understanding of and critically engage with the fundamentals of AI. 

Sources

Fundamentals of AI 

Sources: 

 

60 Minutes. (2023, March 5). ChatGPT and large language model bias [Video]. YouTube. 

  • In this 6-minute interview, Timnit Gebru (a co-author of the “Stochastic Parrots” paper and former co-lead of the Ethical AI team at Google) discusses how generative AI tools are built on large language models and the biases that arise from training GenAI tools on massive, indiscriminate data from the internet, as “size doesn’t guarantee diversity”. #Practical

 

AssemblyAI. (2023, July 5). A complete look at large language models [Video]. YouTube. 

  • This 11-minute explainer video explores the main concepts related to building and using LLMs. “ChatGPT belongs to a class of AI systems called Large Language Models, which can perform an outstanding variety of cognitive tasks involving natural language.” #Practical

 

Bender, E.M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623.

  • In this academic paper, Bender, Gebru, McMillan-Major, and Shmitchell explore the output of large language models to convey the idea that “text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind” (p. 616). #Practical #Philosophical

 

Brown, S. (2021, April 21). Machine learning, explained. MIT Sloan School of Management. 

  • This explainer article describes what qualifies as machine learning, how it works, who is using it and why, and the ethical concerns surrounding its design and implementation. #Practical

 

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. 

  • Crawford’s foundational text uses the metaphor of an atlas to create a map of the themes and discourses relevant to our current understanding of AI including earth and the environmental impact of AI, labor, data, classification, affect, and state power and warfare to produce a well-researched and cogent critique of how AI comes into the world and the various harms it creates and perpetuates from the perspective of an AI expert. #Practical #Philosophical

 

Institute of Data. (2023, October 6). Exploring the differences between narrow AI, general AI, and superintelligent AI. 

  • This article includes a useful breakdown of the distinctions, applications, and nuances among narrow AI (also known as weak AI), a task-oriented AI that requires human intervention; general AI (AGI), an AI with human-like capacities; and superintelligent AI, an AI that surpasses human capacity. Significantly, it notes that AGI has yet to be achieved. #Practical

 

Howard, A., and Isbell, C. (2020, September 21). Diversity in AI: The invisible men and women. MIT Sloan Management Review. 

  • This article details how a lack of diversity in the AI development workforce impacts AI output and its effects on users. The piece includes cited data on the demographics of tech employees as well as recommendations for how to improve diversity at every level of the AI process: development, training, testing, revising, etc. #Practical

 

IBM Technology. (2024, August 5). AI, machine learning, deep learning and generative AI explained [Video]. YouTube. 

  • This 10-minute video featuring an IBM engineer clearly breaks down the distinctions between machine learning, deep learning, and generative AI, with examples and important use contexts, in an accessible dialogue from the POV of an AI expert. #Practical

 

LIS - The London Interdisciplinary School. (2023, August 11). How AI image generators make bias worse [Video]. YouTube. 

  • This 8-minute video looks at the results of a research study that examined thousands of Midjourney image outputs and concluded that the racial and gender biases reflected in images of different societal roles are more extreme than the biases that exist in those same roles in reality. This source includes a useful and accessible demonstration of how image generators are trained and how that process inevitably leads to representational harms. #Practical #Philosophical

 

Moveworks. (2024, January 24). Stochastic parrots explained [Video]. YouTube.

  • This 2-minute video uses the example of the stochastic parrot to convey the risks of attributing comprehension to text-based GenAI outputs, which are essentially “word salad”. Notably, the video advocates for increased training data, while the original paper cautions against the use of large(r) language models. #Practical

 

Ramos, G. (2022, August 22). Why we must act now to close the gender gap in AI. World Economic Forum.

  • This article serves as a useful and accessible summary of the gender gap in work related to AI, including the fact that those who identify as women make up only 22% of AI professionals globally. Other data includes a gender breakdown of AI and computer science PhDs, Information and Communication Technologies graduates, manager and leadership positions in tech companies, and the percentage of venture capital directed at start-ups founded by women (2%). The reporting was produced by UNESCO (the United Nations Educational, Scientific and Cultural Organization) and led to its Recommendation on the Ethics of Artificial Intelligence. #Practical

 

USC Annenberg. (2021, April 14). Kate Crawford maps a world of extraction and exploitation in her book ‘Atlas of AI’ [Video]. YouTube.

  • This 2-minute video serves as a useful introduction to and overview of the themes that Crawford explores in her text on AI, through a conversation with Crawford herself. #Practical

 

Building Critical AI Literacies

KNOWING: 

Algorithmic AI follows set rules to solve specific problems, while generative AI creates new content like text or images by learning from data. The data comes from a wide range of publicly available and licensed sources as well as from users interacting with the AI tools. Narrow AI focuses on one task while general AI aims to think and learn like a human across many tasks. Machine learning powers both types of AI by helping them improve through experience instead of just following fixed rules.
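The contrast above between fixed rules and learning from data can be sketched in a few lines of Python. This is a toy illustration only (the function and variable names are invented for this example, and no real AI system is this simple): a hand-written rule versus a word-scoring "model" that adjusts its behavior based on labeled examples.

```python
# Algorithmic approach: a fixed, hand-written rule. It never changes,
# no matter how many messages it sees.
def rule_based_spam(text: str) -> bool:
    return "free money" in text.lower()

# Toy machine-learning approach: "learn" which words signal spam by
# counting how often each word appears in spam vs. non-spam examples.
def train(examples):
    scores = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + (1 if is_spam else -1)
    return scores

def learned_spam(scores, text: str) -> bool:
    # A message is flagged if its words lean spam-ward overall.
    total = sum(scores.get(w, 0) for w in text.lower().split())
    return total > 0

training_data = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting notes attached", False),
    ("lunch at noon", False),
]
scores = train(training_data)

print(rule_based_spam("free money offer"))      # True: the rule fires
print(learned_spam(scores, "win a prize"))      # True: learned from data
print(learned_spam(scores, "meeting at noon"))  # False
```

The point of the sketch: the rule-based function behaves identically forever, while the learned one changes whenever the training data changes — which is also why biased or unrepresentative training data produces biased outputs.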

To critically engage with the fundamentals of AI, it's important to: 

  • Identify the different types of AI that we use including algorithmic AI and generative AI as well as their subtypes.
  • Understand that your input, clicks, or other contributions become additional training data when you interact with AI tools. 
  • Analyze the possibilities and limitations of AI tools to determine if, when, where, and how they are useful to you. 
  • Evaluate output to account for biases inherent to the development, training, and testing of all AI tools. 
  • Remember that GenAI especially is not magic*. 

*"This is not magic, it is statistical analysis at scale" (Kate Crawford, Atlas of AI, 215); "If you just stop with the magic, and trust that it works, you stop looking for flaws" (Meredith Broussard, More Than a Glitch, 188)

DOING: 

  • Synthesize recent research on ethical or equitable approaches to Machine Learning/Training data on AI. Who is doing what? What is their approach? What are the models for ethical approaches? Taking key concepts, ideas, or approaches from these models, what model would you recommend? 
  • Select an AI tool or service and briefly explain its purpose and user base. Summarize the company’s data policy, focusing on how information is collected, stored, and managed. Use a large language model or similar AI tool to analyze the policy’s clarity, identifying any potential issues or ambiguities. Connect to recent research/sources on ethical approaches to AI use to evaluate the policy’s strengths and weaknesses, and propose 2–3 improvements.
  • Train an AI with a data set of your choosing/creation through PlayLab or find a tool where you can train an AI model.

FEELING: 

How is the algorithm working on you? Choose a social media platform on which you have an account and look at your For You page and/or recommended videos, reels, tweets, feed, etc. These are not accounts you follow but rather content the platform presents to you because it predicts you will be interested in it. Consider these reflective questions as you explore the results. 

  • What does it feel like the algorithm knows about you?
  • What feels accurate to who you are?
  • What feels like a misunderstanding or misapprehension of who you are?
  • Does it feel in line with what you post, search, watch, etc.? 
  • How do you feel knowing that you are training the algorithm to learn more about who you are? 
  • Who do you feel benefits from your engagement on the platform? You? The company? Both? (Advertisers?)