Fairness and Bias in Multimodal AI: A Survey
Format: Article
Language: English
Online Access: Order full text
Summary: The importance of addressing fairness and bias in artificial intelligence (AI) systems cannot be overemphasized. In recent years, mainstream media has been awash with news of incidents involving stereotypes and other types of bias in many of these systems. In this survey, we fill a gap with regard to the relatively minimal study of fairness and bias in Large Multimodal Models (LMMs) compared to Large Language Models (LLMs), providing 50 examples of datasets and models related to both types of AI, along with the challenges of bias affecting them. We also discuss a less-mentioned category of bias mitigation, preprocessing, with particular attention to its first part, which we call preuse; this category receives less attention in the literature than the two well-known ones, intrinsic and extrinsic mitigation. We critically discuss the various ways researchers are addressing these challenges. Our method involved two slightly different search queries on two reputable search engines, Google Scholar and Web of Science (WoS). For the queries 'Fairness and bias in Large Multimodal Models' and 'Fairness and bias in Large Language Models', the initial results were 33,400 and 538,000 links, respectively, on Scholar, and 4 and 50 links, respectively, on WoS. For reproducibility and verification, we provide links to the search results and citations to all the final reviewed papers. We believe this work contributes to filling this gap and provides insight to researchers and other stakeholders on ways to address the challenges of fairness and bias in multimodal and language AI.
DOI: 10.48550/arxiv.2406.19097