An Exploration of Multicalibration Uniform Convergence Bounds

Recent works have investigated the sample complexity necessary for fair machine learning. The most advanced of these sample complexity bounds are developed by analyzing multicalibration uniform convergence for a given predictor class. We present a framework that yields multicalibration error uniform convergence bounds by reparametrizing sample complexities for Empirical Risk Minimization (ERM) learning. From this framework, we demonstrate that multicalibration error exhibits dependence on the classifier architecture as well as on the underlying data distribution. We perform an experimental evaluation to investigate the behavior of multicalibration error for different families of classifiers, and we compare the results of this evaluation to multicalibration error concentration bounds. Our investigation provides additional perspective on both algorithmic fairness and multicalibration error convergence bounds. Given the prevalence of ERM sample complexity bounds, our proposed framework enables machine learning practitioners to easily understand the convergence behavior of multicalibration error for a myriad of classifier architectures.
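
For context, "multicalibration error" has a standard formalization in the literature (e.g., Hébert-Johnson et al., 2018); the paper's exact definition may differ in details such as how the prediction range is discretized. Given a predictor f and a collection of groups \mathcal{C}, the worst-case calibration gap is

\[
  \mathrm{MCE}(f) \;=\; \max_{S \in \mathcal{C}} \, \max_{v} \; \Bigl| \, \mathbb{E}\bigl[\, y - f(x) \;\bigm|\; x \in S,\ f(x) = v \,\bigr] \Bigr|,
\]

and f is called (\mathcal{C}, \alpha)-multicalibrated when \mathrm{MCE}(f) \le \alpha.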

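A minimal empirical sketch of this quantity in Python, assuming binary labels, probabilistic predictions, and a fixed equal-width binning of the prediction range; the function name, binning scheme, and group encoding are illustrative choices, not the authors' experimental protocol:

import numpy as np

def multicalibration_error(y, p, groups, n_bins=10):
    """Largest empirical calibration gap over (group, prediction-bin) cells.

    y      : (n,) array of binary labels in {0, 1}
    p      : (n,) array of predicted probabilities in [0, 1]
    groups : list of (n,) boolean masks, one per group S in the collection C
    """
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    # Discretize predictions into n_bins equal-width levels; clamp p == 1.0
    # into the top bin so every sample lands in a valid cell.
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    worst = 0.0
    for mask in groups:
        for b in range(n_bins):
            cell = mask & (bins == b)
            if not cell.any():
                continue  # an empty (group, bin) cell contributes no gap
            # Within the cell, mean(y) - mean(p) is the empirical analogue
            # of E[y - f(x) | x in S, f(x) = v].
            gap = abs(y[cell].mean() - p[cell].mean())
            worst = max(worst, gap)
    return worst

# Illustrative usage on synthetic data (not from the paper's experiments):
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)                      # hypothetical predictor outputs
y = (rng.uniform(size=1000) < p).astype(int)    # labels drawn at the predicted rate
groups = [np.ones(1000, dtype=bool),            # whole population
          rng.uniform(size=1000) < 0.5]         # a random subgroup
print(multicalibration_error(y, p, groups))

The per-cell means become noisy when a (group, bin) cell contains few samples, which is precisely where uniform convergence enters: how fast these empirical gaps concentrate depends on the classifier family and the underlying data distribution, matching the dependence described in the abstract.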

Bibliographic Details

Main Authors: Rosenberg, Harrison; Bhattacharjee, Robi; Fawaz, Kassem; Jha, Somesh
Format: Article
Language: English
Published: 2022-02-09
Subjects: Computer Science - Learning
DOI: 10.48550/arxiv.2202.04530
Source: arXiv.org
Online Access: https://arxiv.org/abs/2202.04530