Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation
creator | Wan, Yixin; Subramonian, Arjun; Ovalle, Anaelia; Lin, Zongyu; Suvarna, Ashima; Chance, Christina; Bansal, Hritik; Pattichis, Rebecca; Chang, Kai-Wei |
description | The recent advancement of large and powerful models with Text-to-Image (T2I)
generation abilities -- such as OpenAI's DALL-E 3 and Google's Gemini -- enables users to
generate high-quality images from textual prompts. However, it has become increasingly
evident that even simple prompts can cause T2I models to exhibit conspicuous social bias in
generated images. Such bias might lead to both allocational and representational harms in
society, further marginalizing minority groups. Noting this problem, a large body of recent
work has been dedicated to investigating different dimensions of bias in T2I systems.
However, an extensive review of these studies is lacking, hindering a systematic
understanding of current progress and research gaps. We present the first extensive survey
on bias in T2I generative models. In this survey, we review prior studies on three
dimensions of bias: Gender, Skintone, and Geo-Culture. Specifically, we discuss how these
works define, evaluate, and mitigate different aspects of bias. We find that: (1) while
gender and skintone biases are widely studied, geo-cultural bias remains under-explored;
(2) most works on gender and skintone bias investigate occupational association, while
other aspects are less frequently studied; (3) almost all gender bias works overlook
non-binary identities; (4) evaluation datasets and metrics are scattered, with no unified
framework for measuring biases; and (5) current mitigation methods fail to resolve biases
comprehensively. Based on these limitations, we point out future research directions that
contribute to human-centric definitions, evaluations, and mitigation of biases. We hope to
highlight the importance of studying biases in T2I systems, as well as to encourage future
efforts to holistically understand and tackle biases, building fair and trustworthy T2I
technologies for everyone. |
format | Article |
creationdate | 2024-04-01 |
rights | http://creativecommons.org/publicdomain/zero/1.0 (free to read) |
identifier | DOI: 10.48550/arxiv.2404.01030 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Computers and Society |
title | Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation |
url | https://arxiv.org/abs/2404.01030 |
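
To make the occupational-association evaluations that the abstract refers to more concrete, the following is a minimal illustrative sketch, not the protocol of the surveyed works: it generates images for a few occupation prompts with an open-source T2I pipeline (Stable Diffusion via the diffusers library, standing in for proprietary systems such as DALL-E 3 or Gemini), labels each image with a zero-shot CLIP classifier as a stand-in for the trained attribute classifiers or human annotation used in actual studies, and reports how far the perceived-gender label frequencies deviate from an even split. The occupation list, label prompts, and parity metric are assumptions chosen for illustration.

```python
# Illustrative sketch only: probe occupational-association bias by generating
# images per occupation prompt and measuring how far perceived-gender label
# frequencies deviate from an even split.
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

# Binary labels only; the survey notes that non-binary identities are usually overlooked.
GROUP_LABELS = ["a photo of a man", "a photo of a woman"]

# Open-source stand-in for the proprietary T2I systems named in the abstract.
t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def perceived_group(image) -> int:
    """Zero-shot CLIP stand-in for a perceived-gender classifier."""
    inputs = clip_proc(text=GROUP_LABELS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image  # shape: (1, num_labels)
    return int(logits.argmax(dim=-1))


def parity_deviation(counts: Counter, num_groups: int) -> float:
    """Max absolute deviation of observed group frequencies from a uniform split."""
    total = sum(counts.values())
    return max(abs(counts.get(g, 0) / total - 1 / num_groups) for g in range(num_groups))


occupations = ["nurse", "software developer", "housekeeper", "CEO"]
for occupation in occupations:
    images = t2i(f"a photo of a {occupation}", num_images_per_prompt=16).images
    counts = Counter(perceived_group(img) for img in images)
    print(f"{occupation}: parity deviation = {parity_deviation(counts, len(GROUP_LABELS)):.2f}")
```

The studies covered by the survey differ in prompt sets, attribute taxonomies, and metrics, which is precisely the fragmentation the abstract's finding (4) points to; this sketch only shows the general shape such an evaluation can take.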