MODEL BIAS DETECTION

Aspects of the present disclosure provide techniques for detecting latent bias in machine learning models. Embodiments include receiving a data set comprising features of a plurality of individuals. Embodiments include receiving identifying information for each individual of the plurality of individuals. Embodiments include predicting, for each respective individual of the plurality of individuals, a probability that the respective individual belongs to a given class based on the identifying information for the given individual. Embodiments include providing, as inputs to a machine learning model, the features of the plurality of individuals from the data set. Embodiments include receiving outputs from the machine learning model in response to the inputs. Embodiments include determining whether the machine learning model is biased against the given class based on the outputs and the probability that each respective individual of the plurality of individuals belongs to the given class.
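
The abstract describes a three-part flow: estimate, from identifying information alone, the probability that each individual belongs to the class of interest; run the model under test on the individuals' features; then compare the model's outputs for likely class members against likely non-members, weighting by those membership probabilities. A minimal Python sketch of that flow follows; the helper names, the simple probability-weighted rate-disparity metric, and the 0.1 threshold are illustrative assumptions, not details taken from the patent.

    import numpy as np

    def weighted_positive_rate(outputs, weights):
        """Average model output, weighted by class-membership probability."""
        outputs = np.asarray(outputs, dtype=float)
        weights = np.asarray(weights, dtype=float)
        return float(np.dot(outputs, weights) / weights.sum())

    def detect_bias(features, class_probs, model, threshold=0.1):
        """Flag potential bias against a class whose membership is known only
        probabilistically (class_probs[i] = P(individual i belongs to the class)).
        The threshold is an assumed tolerance, not a value from the disclosure."""
        outputs = np.asarray(model(features), dtype=float)
        class_probs = np.asarray(class_probs, dtype=float)
        rate_in_class = weighted_positive_rate(outputs, class_probs)
        rate_out_class = weighted_positive_rate(outputs, 1.0 - class_probs)
        disparity = rate_out_class - rate_in_class
        return disparity > threshold, disparity

    # Toy usage: upstream, identifying information (e.g. names) is assumed to have
    # already been converted into membership probabilities by a separate predictor.
    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 5))
    class_probs = rng.uniform(size=1000)
    model = lambda X: (X[:, 0] > 0).astype(float)  # stand-in for the model under test
    biased, disparity = detect_bias(features, class_probs, model)
    print(f"disparity={disparity:.3f}, flagged_as_biased={biased}")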

Bibliographic Details
Main Authors: HORESH, Yair; MEIR LADOR, Shir; DE SHETLER, Natalie Grace; BEN ARIE, Aviv; MISHRAKY, Elhanan
Format: Patent
Language: English
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; PHYSICS
Publication Number: US2022351068A1
Publication Date: 2022-11-03
Online Access: Full text via esp@cenet, https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20221103&DB=EPODOC&CC=US&NR=2022351068A1