Don't Judge Me by My Face: An Indirect Adversarial Approach to Remove Sensitive Information From Multimodal Neural Representation in Asynchronous Job Video Interviews

Bibliographic details
Published in: arXiv.org, 2021-10
Main authors: Hemamou, Léo; Guillon, Arthur; Martin, Jean-Claude; Clavel, Chloé
Format: Article
Language: English
Subjects: Annotations; Context; Decision making; Employment interviews; Ethnicity; Gender; Machine learning; Neural networks; Representations; Video
Online access: Full text
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)

Abstract: Use of machine learning for automatic analysis of job interview videos has recently seen increased interest. Despite claims of fair output regarding sensitive information such as gender or ethnicity of the candidates, the current approaches rarely provide proof of unbiased decision-making, or that sensitive information is not used. Recently, adversarial methods have been proved to effectively remove sensitive information from the latent representation of neural networks. However, these methods rely on the use of explicitly labeled protected variables (e.g. gender), which cannot be collected in the context of recruiting in some countries (e.g. France). In this article, we propose a new adversarial approach to remove sensitive information from the latent representation of neural networks without the need to collect any sensitive variable. Using only a few frames of the interview, we train our model to not be able to find the face of the candidate related to the job interview in the inner layers of the model. This, in turn, allows us to remove relevant private information from these layers. Comparing our approach to a standard baseline on a public dataset with gender and ethnicity annotations, we show that it effectively removes sensitive information from the main network. Moreover, to the best of our knowledge, this is the first application of adversarial techniques for obtaining a multimodal fair representation in the context of video job interviews. In summary, our contributions aim at improving fairness of the upcoming automatic systems processing videos of job interviews for equality in job selection.
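
For readers wanting intuition for the approach the abstract describes, the sketch below illustrates one plausible form of such indirect adversarial training: an auxiliary "face matcher" tries to link a candidate's face embedding to the interview's latent representation, and a gradient-reversal layer trains the encoder to make that matching fail, stripping identity-revealing information. This is a minimal sketch, not the authors' released code; the framework (PyTorch), module names, and dimensions are all assumptions.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code) of
# indirect adversarial removal of sensitive information: an adversary tries to
# match a candidate's face embedding to the interview's latent representation;
# a gradient-reversal layer trains the encoder to defeat it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # no gradient for lambd

class FairInterviewModel(nn.Module):
    # feat_dim, face_dim, and layer sizes are illustrative assumptions
    def __init__(self, feat_dim=256, face_dim=128, lambd=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.scorer = nn.Linear(128, 1)  # main task head (e.g. interview score)
        # Adversary: does this face embedding belong to this interview?
        self.matcher = nn.Sequential(
            nn.Linear(128 + face_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.lambd = lambd

    def forward(self, interview_feats, face_emb):
        z = self.encoder(interview_feats)         # latent representation to sanitize
        score = self.scorer(z)
        z_rev = GradReverse.apply(z, self.lambd)  # adversary's gradients flip here
        match_logit = self.matcher(torch.cat([z_rev, face_emb], dim=-1))
        return score, match_logit

# Toy training step: the matcher learns to separate true candidate/interview
# pairs from shuffled ones, while the reversed gradients push the encoder to
# drop face-identifying (hence sensitive) information from z.
model = FairInterviewModel()
feats, faces = torch.randn(8, 256), torch.randn(8, 128)
ratings, is_true_pair = torch.rand(8, 1), torch.ones(8, 1)
score, match_logit = model(feats, faces)
loss = F.mse_loss(score, ratings) + \
       F.binary_cross_entropy_with_logits(match_logit, is_true_pair)
loss.backward()
```

Note how this setup never collects protected attribute labels (gender, ethnicity): a few face frames of the candidate stand in for the sensitive variables, which is the point of the indirect approach in countries where such labels cannot be gathered.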