Embed systemic equity throughout industrial ecology applications: How to address machine learning unfairness and bias

Bibliographic Details
Published in: Journal of Industrial Ecology, 2024-06
Authors: Bozeman, Joe F., Hollauer, Catharina, Ramshankar, Arjun Thangaraj, Nakkasunchi, Shalini, Jambeck, Jenna, Hicks, Andrea, Bilec, Melissa, McCauley, Darren, Heidrich, Oliver
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Recent calls have been made for equity tools and frameworks to be integrated throughout the research and design life cycle—from conception to implementation—with an emphasis on reducing inequity in artificial intelligence (AI) and machine learning (ML) applications. Simply stating that equity should be integrated throughout, however, leaves much to be desired as industrial ecology (IE) researchers, practitioners, and decision‐makers attempt to employ equitable practices. In this forum piece, we use a critical review approach to explain how socioecological inequities emerge in ML applications across their life cycle stages by leveraging the food system. We exemplify the use of a comprehensive questionnaire to delineate unfair ML bias across data bias, algorithmic bias, and selection and deployment bias categories. Finally, we provide consolidated guidance and tailored strategies to help address AI/ML unfair bias and inequity in IE applications. Specifically, the guidance and tools help to address sensitivity, reliability, and uncertainty challenges. There is also discussion on how bias and inequity in AI/ML affect other IE research and design domains besides the food system, such as living labs and circularity. We conclude with an explanation of the future directions IE should take to address unfair bias and inequity in AI/ML. Last, we call for systemic equity to be embedded throughout IE applications to fundamentally understand domain‐specific socioecological inequities, identify potential unfairness in ML, and select mitigation strategies in a manner that translates across different research domains.
ISSN: 1088-1980
1530-9290
DOI: 10.1111/jiec.13509