Biases, Ethics, Racial and Social Justice
Sources:
Abundant intelligences. (2023).
- This is the current research project of Indigenous AI scholar Jason E. Lewis, which explores Indigenous approaches to artificial intelligence. The project began in 2023 and runs until 2029. All information on participants, approach, and research inquiries is available through the linked project website. #Practical #Philosophical
Amini, A. (2021, March 6). MIT 6.S191: AI bias and fairness [Video]. YouTube.
- This 45-minute video lecture with Ava Soleimany, from the MIT course Introduction to Deep Learning, defines algorithmic bias in terms of object classification, shows its connection to income and geography, and explores different manifestations of these biases as well as strategies for mitigating them. A lecture outline with timestamps is linked in the description. #Practical
Arista, N., et al. (2021). Against reduction: Designing a human future with machines. Massachusetts Institute of Technology Press.
- This open access book is a collection of essays that consider what it means to embrace the irreducibility of the relationship between human and machine. Each essay was written in response to Joichi Ito’s manifesto, “Resisting Reduction.” The compilation includes the essay “Making Kin with Machines,” authored by J.E. Lewis, S. Kite, N. Arista, and A. Pechawis, which explores AI through the lens of Indigenous ontologies and ways of knowing. #Philosophical
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.
- In her foundational work, Benjamin evidences the necessary relationship between race and technology and suggests that race itself is a technology, in that it has been used as a social and political tool to separate, stratify, and sanctify, and to convey and create meaning. Technology (its design, execution, marketing, and use) is therefore always already racialized, even as tech companies, creators, and tech evangelists argue for tech’s inherent objectivity and neutrality. Benjamin’s book precedes the explosion of generative AI but includes useful discussion of algorithmic AI, algorithmic bias, and digital justice as they should be considered in AI discourse and policy. [Available in print for check out at the DVC Library.] #Philosophical #Practical
Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. The MIT Press.
- Broussard is the Research Director at the NYU Alliance for Public Interest Technology and an established digital scholar on the topic of artificial intelligence. In this accessible and personal work, she explores the basics of machine learning to convey how bias is built into every aspect of the technologies we use every day. This text is useful for case studies on the inequitable impacts of AI including in the field of healthcare which Broussard investigates through her own lived experience as a breast cancer patient. [Available in print for check out at the DVC Library.] #Practical #Philosophical
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
- Eubanks is a founding member of the Our Data Bodies project and a professor of political science. In this timely work, she examines how algorithmic decision-making systems reinforce discrimination and deepen poverty in the US. She argues that these technologies, used in welfare, healthcare, and housing, rely on biased data and flawed assumptions, often targeting low-income communities with increased surveillance and reduced support. [Available in print for check out at the DVC Library.] #Practical #Philosophical
Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A human rights-based approach to responsible AI [Conference paper]. arXiv.
- This academic paper argues for a human rights framework in considering the impact of AI, focusing less on what the machines do and more on who is harmed by their design and output. #Practical #Philosophical
Haraway, D. (2016). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. University of Minnesota Press.
- This long form essay (66 pages) explores the blurring boundaries between human and machine to subvert traditional boundaries of identity, gender, and nature. #Philosophical
Hsu, T. (2023, August 7). What can you do when AI lies about you? The New York Times.
- This article explores several instances in which artificial intelligence mischaracterized or defamed people and discusses the current lack of legal precedent governing AI operations. #Practical
Indigenous Protocol and Artificial Intelligence Working Group. (2019). Indigenous AI.
- Considers how to use Indigenous epistemologies and perspectives to approach AI and includes an Indigenous AI Resources reading list. Workgroup organizers include Jason Edward Lewis; this workgroup evolved into the Abundant Intelligences research project. #Practical #Philosophical
Montreal AI Ethics Institute. (2021).
- This website hosts The AI Ethics Brief and other open access resources to build knowledge and community around AI. #Practical
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
- This book explores data discrimination and racial bias in the data sets behind search engines such as Google and digital media platforms, demonstrates how misrepresentation can lead to oppression and marginalization by amplifying some voices and silencing others, and critiques the outcomes of monopolistic search engines and their impact on women of color. [Available in print and as an unlimited use e-book for check out at the DVC Library.] #Practical #Philosophical
O’Neil, L. (2023, August 12). These women tried to warn us about AI. Rolling Stone.
PdF YouTube. (2016, June 15). Challenging the algorithms of oppression [Video]. YouTube.
- This 12-minute video lecture by Safiya Noble explains how major search engines create racial biases and reinforce oppressive narratives. #Practical
Radical AI Podcast. (2020-2023).
- Hosted by two digital scholars, this podcast features interviews and discussions with those working at the intersection of technology and social and racial justice. #Practical #Philosophical
Sankin, A., & Mattu, S. (2023, October 2). Predictive policing software terrible at predicting crimes. The Markup.
- This article, authored by data journalists, is an in-depth investigation that includes statistics evidencing the ineffectiveness of algorithmic predictive policing. It specifically examines the work of Geolitica, which rebranded from PredPol in 2021 and rebranded again as SoundThinking in 2024. The article also considers the unintended consequences of incorrect crime prediction and increased police presence on communities of color throughout the US. #Practical #Philosophical
Walker, D. (2024). Deprogramming implicit bias: The case for public interest technology. Daedalus, 123(1), 268-275.
- In this article, Walker explains how public interest technology, as an interdisciplinary and equity-minded field of study, can combat algorithmic injustices, including the consequences of predictive policing, facial recognition software, healthcare assessment, risk assessment used by judges to determine parole and probation, and more, all of which disproportionately impact Black people. He also cautions against technochauvinism, the belief that problems with tech can be fixed with more or better tech. Note: Walker is the current president of the Ford Foundation, and this article promotes the work of the Ford Foundation. #Practical #Philosophical
WCVB Channel 5 Boston. (2024, November 10). Cityline: A conversation with Dr. Joy Buolamwini [Video]. YouTube.
- In Part 1 of this interview, Dr. Buolamwini, founder of the Algorithmic Justice League and poet of code, discusses the project where she first saw the impact of algorithmic bias, the widespread impact algorithmic bias has had on society, and how and why she coined the term ‘coded gaze’. In Part 2, she talks about her background writing code as a 10-year-old and developing robots as an undergraduate, as well as the mission of her organization, the Algorithmic Justice League, and where she sees AI going in the future. #Practical #Philosophical