Leveraging Diffusion Perturbations for Measuring Fairness in Computer Vision

Detailed Description

Computer vision models have been known to encode harmful biases, leading to the potentially unfair treatment of historically marginalized groups, such as people of color. However, there remains a lack of datasets balanced along demographic traits that can be used to evaluate the downstream fairness of these models. In this work, we demonstrate that diffusion models can be leveraged to create such a dataset. We first use a diffusion model to generate a large set of images depicting various occupations. Subsequently, each image is edited using inpainting to generate multiple variants, where each variant refers to a different perceived race. Using this dataset, we benchmark several vision-language models on a multi-class occupation classification task. We find that images generated with non-Caucasian labels have a significantly higher occupation misclassification rate than images generated with Caucasian labels, and that several misclassifications are suggestive of racial biases. We measure a model's downstream fairness by computing the standard deviation in the probability of predicting the true occupation label across the different perceived identity groups. Using this fairness metric, we find significant disparities between the evaluated vision-and-language models. We hope that our work demonstrates the potential value of diffusion methods for fairness evaluations.
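
One plausible reading of the fairness metric described above (the standard deviation, across perceived identity groups, of the probability a model assigns to the true occupation label) is sketched below in Python. The grouping scheme, group names, and numbers are illustrative assumptions and are not taken from the paper.

import numpy as np

def fairness_std(true_label_probs_by_group):
    # Mean probability of the true occupation label within each perceived
    # identity group, then the standard deviation of those means across groups.
    group_means = [np.mean(probs) for probs in true_label_probs_by_group.values()]
    return float(np.std(group_means))

# Hypothetical probabilities for illustration only (not results from the paper).
example = {
    "Caucasian": [0.92, 0.88, 0.95],
    "Black": [0.81, 0.77, 0.85],
    "Asian": [0.84, 0.80, 0.88],
}
print(fairness_std(example))  # smaller values indicate more uniform treatment across groups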

Bibliographic Details
Main authors: Lui, Nicholas; Chia, Bryan; Berrios, William; Ross, Candace; Kiela, Douwe
Format: Article
Language: English
Published: 2023-11-25
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.2311.15108
Source: arXiv.org
Online access: https://arxiv.org/abs/2311.15108