GABInsight: Exploring Gender-Activity Binding Bias in Vision-Language Models
Vision-language models (VLMs) are intensively used in many downstream tasks, including those requiring assessments of individuals appearing in the images. While VLMs perform well in simple single-person scenarios, in real-world applications, we often face complex situations in which there are persons of different genders doing different activities.
Saved in:
Published in: | arXiv.org 2024-10 |
---|---|
Main authors: | Abdollahi, Ali; Ghaznavi, Mahdi; Karimi Nejad, Mohammad Reza; Arash Mari Oriyad; Abbasi, Reza; Salesi, Ali; Behjati, Melika; Mohammad Hossein Rohban; Mahdieh Soleymani Baghshah |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
creator | Abdollahi, Ali; Ghaznavi, Mahdi; Karimi Nejad, Mohammad Reza; Arash Mari Oriyad; Abbasi, Reza; Salesi, Ali; Behjati, Melika; Mohammad Hossein Rohban; Mahdieh Soleymani Baghshah |
description | Vision-language models (VLMs) are intensively used in many downstream tasks, including those requiring assessments of individuals appearing in the images. While VLMs perform well in simple single-person scenarios, in real-world applications, we often face complex situations in which there are persons of different genders doing different activities. We show that in such cases, VLMs are biased towards identifying the individual with the expected gender (according to ingrained gender stereotypes in the model or other forms of sample selection bias) as the performer of the activity. We refer to this bias in associating an activity with the gender of its actual performer in an image or text as the Gender-Activity Binding (GAB) bias and analyze how this bias is internalized in VLMs. To assess this bias, we have introduced the GAB dataset with approximately 5500 AI-generated images that represent a variety of activities, addressing the scarcity of real-world images for some scenarios. To have extensive quality control, the generated images are evaluated for their diversity, quality, and realism. We have tested 12 renowned pre-trained VLMs on this dataset in the context of text-to-image and image-to-text retrieval to measure the effect of this bias on their predictions. Additionally, we have carried out supplementary experiments to quantify the bias in VLMs' text encoders and to evaluate VLMs' capability to recognize activities. Our experiments indicate that VLMs experience an average performance decline of about 13.2% when confronted with gender-activity binding bias. |
doi_str_mv | 10.48550/arxiv.2407.21001 |
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
rights | 2024. This work is published under http://creativecommons.org/licenses/by-nc-sa/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2407_21001 |
source | arXiv.org; Free E-Journals |
subjects | Bias; Binding; Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Datasets; Gender; Image quality; Performance evaluation; Quality control; Task complexity |
title | GABInsight: Exploring Gender-Activity Binding Bias in Vision-Language Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T18%3A25%3A18IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=GABInsight:%20Exploring%20Gender-Activity%20Binding%20Bias%20in%20Vision-Language%20Models&rft.jtitle=arXiv.org&rft.au=Abdollahi,%20Ali&rft.date=2024-10-25&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2407.21001&rft_dat=%3Cproquest_arxiv%3E3086454189%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3086454189&rft_id=info:pmid/&rfr_iscdi=true |
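The abstract describes measuring the bias via caption retrieval: for each image, a VLM scores a caption naming the true performer's gender against a stereotype-matching distractor, and the performance drop between stereotype-aligned and stereotype-conflicting images quantifies the bias. A minimal illustrative sketch of that accounting (not the paper's released code; the similarity scores below are hypothetical stand-ins for VLM cosine similarities):

```python
# Illustrative sketch: quantifying gender-activity binding bias in
# caption retrieval. Assumes we already have, per image, a VLM
# similarity score for the correct-gender caption and for the
# stereotype-matching distractor caption.

def retrieval_accuracy(score_pairs):
    """Fraction of images whose correct caption outscores the distractor.

    score_pairs: list of (correct_caption_score, distractor_caption_score).
    """
    hits = sum(1 for correct, distractor in score_pairs if correct > distractor)
    return hits / len(score_pairs)

def bias_decline(aligned_pairs, conflicting_pairs):
    """Accuracy drop from stereotype-aligned to stereotype-conflicting images."""
    return retrieval_accuracy(aligned_pairs) - retrieval_accuracy(conflicting_pairs)

# Hypothetical similarity scores (e.g. CLIP-style cosine similarities).
aligned = [(0.31, 0.24), (0.28, 0.22), (0.30, 0.25), (0.27, 0.21)]
conflicting = [(0.26, 0.29), (0.30, 0.24), (0.23, 0.27), (0.25, 0.28)]

print(retrieval_accuracy(aligned))              # 1.0
print(retrieval_accuracy(conflicting))          # 0.25
print(bias_decline(aligned, conflicting))       # 0.75
```

On real data, the paper reports an average decline of about 13.2% across 12 pre-trained VLMs; the toy numbers above merely exaggerate the effect to make the computation visible.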