
The AI Syllabus

This guide provides access to the concepts and resources that make up the AI Syllabus by Anne Kingsley and Emily Moss.

Biases, Ethics, Racial and Social Justice

Critical AI Literacies: 

  • Recognize that AI is never neutral. 
  • Evaluate the impact of AI on a societal level. 

On this page you will find multimedia sources curated to create an expansive and critical exploration of AI ethics. This page highlights the work of foundational scholars in racial and digital justice, critical Indigenous AI scholars, and other journalists and academics considering the myriad ways AI impacts humans. Additionally, this page includes key concepts, activities, and assignments to build understanding of and critically engage with AI ethics. 

Sources


 

Abundant Intelligences. (2023).

  • This is the current research project of Indigenous AI scholar Jason E. Lewis, which explores Indigenous approaches to artificial intelligence. The project began in 2023 and runs until 2029. Information on participants, approach, and research inquiries is available through the linked project website. #Practical #Philosophical

 

Amini, A. (2021, March 6). MIT 6.S191: AI bias and fairness [Video]. YouTube.

  • This 45-minute video lecture by Ava Soleimany, from the MIT course Introduction to Deep Learning, defines algorithmic bias in terms of object classification, shows its connections to income and geography, and explores different manifestations of these biases as well as strategies for mitigating them. A lecture outline with timestamps is linked in the description. #Practical

 

Arista, N., et al. (2021). Against reduction: Designing a human future with machines. The MIT Press.

  • This open access book is a collection of essays that consider what it means to embrace the irreducibility of the relationship between human and machine. Each essay was written in response to Joichi Ito’s manifesto, “Resisting Reduction”. This compilation includes the essay “Making Kin with Machines” authored by J.E. Lewis, S. Kite, N. Arista, and A. Pechawis which explores AI through the lens of Indigenous ontologies and ways of knowing. #Philosophical

 

Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.

  • In this foundational work, Benjamin demonstrates the necessary relationship between race and technology and suggests that race itself is a technology: it has been used as a social and political tool to separate, stratify, and sanctify, and to convey and create meaning. Technology (its design, execution, marketing, and use) is therefore always already racialized, even as tech companies, creators, and tech evangelists argue for tech’s inherent objectivity and neutrality. Benjamin’s book precedes the explosion of generative AI but includes useful discussion of algorithmic AI, algorithmic bias, and digital justice as they should be considered in AI discourse and policy. [Available in print for check out at the DVC Library.] #Philosophical #Practical

 

Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. The MIT Press. 

  • Broussard is the research director at the NYU Alliance for Public Interest Technology and an established digital scholar on artificial intelligence. In this accessible and personal work, she explains the basics of machine learning to show how bias is built into every aspect of the technologies we use every day. The text offers useful case studies on the inequitable impacts of AI, including in healthcare, which Broussard investigates through her own lived experience as a breast cancer patient. [Available in print for check out at the DVC Library.] #Practical #Philosophical

 

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

  • Eubanks is a founding member of the Our Data Bodies project and a professor of political science. In this timely work, she examines how algorithmic decision-making systems reinforce discrimination and deepen poverty in the US. She argues that these technologies, used in welfare, healthcare, and housing, rely on biased data and flawed assumptions, often targeting low-income communities with increased surveillance and reduced support. [Available in print for check out at the DVC Library.] #Practical #Philosophical

 

Haraway, D. (2016). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. University of Minnesota Press.

  • This long-form essay (66 pages) explores the blurring boundary between human and machine in order to subvert traditional categories of identity, gender, and nature. #Philosophical

 

Hsu, T. (2023, August 7). What can you do when AI lies about you? The New York Times. 

  • This article explores several instances in which artificial intelligence mischaracterized or defamed people and discusses how there is currently little legal precedent governing such cases. #Practical

 

Indigenous Protocol and Artificial Intelligence Working Group. (2019). Indigenous AI.

  • Considers how to use Indigenous epistemologies and perspectives to approach AI and includes an Indigenous AI Resources reading list. Working group organizers include Jason Edward Lewis, and the group evolved into the Abundant Intelligences research project. #Practical #Philosophical

 

Montreal AI Ethics Institute. (2021).

  • This website hosts The AI Ethics Brief and other open access resources to build knowledge of and community around AI. #Practical

 

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. 

  • This book explores data discrimination and racial bias in the data sets behind search engines such as Google and digital media platforms; demonstrates how misrepresentation can lead to oppression and marginalization by amplifying some voices and silencing others; and critiques the outcomes of monopolistic search engines and their impact on women of color. [Available in print and as an unlimited use e-book for check out at the DVC Library.] #Practical #Philosophical

 

O’Neil, L. (2023, August 12). These women tried to warn us about AI. Rolling Stone. 

 

PdF. (2016, June 15). Challenging the algorithms of oppression [Video]. YouTube.

  • This 12-minute talk by Safiya Noble explains how major search engines produce racially biased results and reinforce oppressive narratives. #Practical

 

Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A human rights-based approach to responsible AI [Conference paper]. arXiv.

  • This academic paper argues for a human rights framework in considering the impact of AI, focusing less on what the machines do and more on who is harmed by their design and output. #Practical #Philosophical

 

Radical AI Podcast. (2020-2023).

  • Hosted by two digital scholars, this podcast features interviews and discussions with those working at the intersection of technology and social and racial justice. #Practical #Philosophical

 

Sankin, A., & Mattu, S. (2023, October 2). Predictive policing software terrible at predicting crimes. The Markup.

  • This in-depth investigation, authored by data journalists, uses statistics to evidence the ineffectiveness of algorithmic predictive policing. It specifically examines the work of Geolitica, the company rebranded from PredPol in 2021 and later acquired by SoundThinking. The article also considers the unintended consequences of incorrect crime prediction and increased police presence for communities of color throughout the US. #Practical #Philosophical

 

Walker, D. (2024). Deprogramming implicit bias: The case for public interest technology. Daedalus, 153(1), 268-275.

  • In this article, Walker explains how public interest technology, as an interdisciplinary and equity-minded field of study, can combat algorithmic injustices, including the consequences of predictive policing, facial recognition software, healthcare assessments, and the risk assessments judges use to determine parole and probation, all of which disproportionately impact Black people. He also cautions against technochauvinism, the belief that problems with tech can be fixed with more or better tech. Note: Walker is the current president of the Ford Foundation, and this article promotes the work of the Ford Foundation. #Practical #Philosophical

 

WCVB Channel 5 Boston. (2024, November 10). CityLine: A conversation with Dr. Joy Buolamwini [Video]. YouTube.

  • In Part 1 of this interview, Dr. Buolamwini, founder of the Algorithmic Justice League and self-described “poet of code,” discusses the project in which she first saw the impact of algorithmic bias, the widespread impact algorithmic bias has had on society, and how and why she coined the term “coded gaze.” In Part 2, she talks about her background writing code as a 10-year-old and developing robots as an undergraduate, as well as the mission of her organization and where she sees AI going in the future. #Practical #Philosophical

 

Building Critical AI Literacies

KNOWING: 

Technology is often seen as neutral, but AI and other digital tools are shaped by racial, gender, and other biases embedded in the tech industry that creates them. The development of AI reflects existing social inequities, and its use often disproportionately impacts already marginalized people. Ethical use of AI must therefore consider both the people and systems acting upon AI and the people and systems AI acts upon in order to evaluate its impact at a societal level.

To critically engage with AI ethics, it's important to: 

  • Examine the relationship between race and technology to understand how inequities shape innovations.
  • Recognize algorithmic biases by identifying what they are, how they work, and who they disproportionately impact (one concrete way to surface this is sketched after this list). 
  • Understand that any discussion of ethics is incomplete without a racial lens.
  • Explore the work of digital scholars and activists who challenge biased technologies and advocate for ethical AI development. 
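
To make the second point concrete, here is a minimal Python sketch of one way researchers surface algorithmic bias: comparing a model's false positive rates across demographic groups. The data and group labels below are entirely made up for illustration; real audits, like Buolamwini's facial recognition studies, do this at much larger scale.

    # Minimal illustration of measuring disparate error rates.
    # All data below is made up for demonstration purposes only.
    from collections import defaultdict

    # Hypothetical records: (group, actual_outcome, model_prediction),
    # where 1 = flagged as "high risk" and 0 = not flagged.
    records = [
        ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
        ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 1, 1),
    ]

    false_positives = defaultdict(int)  # flagged despite an actual outcome of 0
    negatives = defaultdict(int)        # all records with an actual outcome of 0
    for group, actual, predicted in records:
        if actual == 0:
            negatives[group] += 1
            if predicted == 1:
                false_positives[group] += 1

    for group in sorted(negatives):
        rate = false_positives[group] / negatives[group]
        print(f"{group}: false positive rate = {rate:.0%}")
    # Prints 67% for group_a and 0% for group_b: the model's mistakes fall
    # almost entirely on one group, even if overall accuracy looks fine.

A large gap in error rates between groups is one signal that a system's harms are not evenly distributed, which is exactly the question "who is disproportionately impacted" asks.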

DOING: 

  • Who’s who in the discussion of AI and ethics? Consult with your peers to create a map of different and unique figures (or pair up and work together as a team). For your chosen figure, investigate their publications, conferences, materials, etc. What do they do? What discussions do they take part in? Who is in their web or network of influence? Who do they collaborate with (people, organizations, institutions, companies, etc.)? Compare this figure to other figures chosen by your peers. What connections can be made? Reflect on the broad consensus points around AI ethics as well as the unique insights. Finally, step back and highlight gaps in the AI ethics conversation.
  • Review different generative AI tools’ responses to a chosen ethical question centered on AI, tracking answers over time and by tool used (one simple way to log them is sketched after this list). Identify any changes or shifts you notice (1) across different tools and (2) across time. Share results with your peers. Reflect on how these evolving perspectives impact the discussion of AI ethics and responsible AI development.
  • Can AI be used for social change? Why or why not? What community support project could generative AI support or help implement? Gather data and information across different media and resources. For example, review published research; review AI tool responses; interview your peers; interview community members. Put your findings together and present a report.
  • What would a one-page guide to AI and social justice look like? Synthesize conversations on AI and social justice. Use research to bolster, prioritize, evaluate, and map the conversation. Draw on a variety of tools and people to implement it. Ensure that diverse perspectives, approaches, and voices are represented.
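
For the response-tracking activity above, a simple log makes comparisons across tools and across time much easier. Below is a minimal Python sketch under stated assumptions: the question, file name, and tool names are placeholders, and get_response is a hypothetical stand-in for however you actually collect each answer (pasting from a chat interface, or an API you have access to).

    # Log each tool's answer to the same ethical question with a date stamp
    # so shifts across tools and over time can be compared later.
    import csv
    from datetime import date
    from pathlib import Path

    LOG_FILE = Path("ai_ethics_responses.csv")
    QUESTION = "Should AI be used in predictive policing? Why or why not?"

    def get_response(tool_name: str, prompt: str) -> str:
        # Hypothetical placeholder: paste in the answer you collected from
        # each tool, or replace this with a call to an API you have access to.
        return input(f"Paste {tool_name}'s response to the question: ")

    def log_responses(tools: list[str]) -> None:
        is_new = not LOG_FILE.exists()
        with LOG_FILE.open("a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(["date", "tool", "question", "response"])
            for tool in tools:
                writer.writerow([date.today().isoformat(), tool, QUESTION,
                                 get_response(tool, QUESTION)])

    if __name__ == "__main__":
        log_responses(["Tool A", "Tool B"])  # placeholder tool names

Running the script on a regular schedule, then sorting the resulting CSV by tool and date, makes changes across tools and over time easy to spot and share with peers.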

FEELING: 

After engaging with the resources on this page, consider these critical self-reflection questions about what you've learned. 

  • How do I feel about the idea that AI can reinforce racial biases? Does it challenge my assumptions about technology being neutral?

  • In what ways might my own experiences or background shape how I perceive AI and its ethical concerns? How might others with different lived experiences see these issues differently?

  • What responsibility do I have as a user to question and challenge bias in AI? How can I take action to mitigate harm and promote accountability in technology?