Responsible AI (RAI) Games and Ensembles
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Several recent works have studied the societal effects of AI; these include issues such as fairness, robustness, and safety. In many of these objectives, a learner seeks to minimize its worst-case loss over a set of predefined distributions (known as uncertainty sets), with usual examples being perturbed versions of the empirical distribution. In other words, the aforementioned problems can be written as min-max problems over these uncertainty sets. In this work, we provide a general framework for studying these problems, which we refer to as Responsible AI (RAI) games. We provide two classes of algorithms for solving these games: (a) game-play based algorithms, and (b) greedy stagewise estimation algorithms. The former class is motivated by online learning and game theory, whereas the latter class is motivated by the classical statistical literature on boosting and regression. We empirically demonstrate the applicability and competitive performance of our techniques for solving several RAI problems, particularly around subpopulation shift.
DOI: 10.48550/arxiv.2310.18832
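
The worst-case objective described in the abstract can be written, roughly, as $\min_{h \in \mathcal{H}} \max_{P \in \mathcal{U}} \mathbb{E}_{(x,y)\sim P}\,[\ell(h(x), y)]$, where $\mathcal{U}$ is the uncertainty set of distributions. The sketch below illustrates the flavor of the "game-play based" class of algorithms mentioned in the abstract: a multiplicative-weights adversary reweights predefined subpopulations while the learner repeatedly best-responds with a reweighted least-squares fit. The synthetic data, the step size `eta`, and the least-squares learner are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

# Hedged sketch of a game-play style min-max (worst-group) solver:
# an adversary keeps weights over K predefined groups (the uncertainty set)
# and updates them via multiplicative weights; the learner responds with a
# group-reweighted ridge regression each round. Illustrative only.

rng = np.random.default_rng(0)

K = 3                      # number of subpopulations (groups)
n_per_group, d = 200, 5    # samples per group, feature dimension

# Synthetic groups with different ground-truth linear models.
Xs = [rng.normal(size=(n_per_group, d)) for _ in range(K)]
ws = [rng.normal(size=d) for _ in range(K)]
ys = [X @ w + 0.1 * rng.normal(size=n_per_group) for X, w in zip(Xs, ws)]

def group_losses(theta):
    """Mean squared error of the shared model theta on each group."""
    return np.array([np.mean((X @ theta - y) ** 2) for X, y in zip(Xs, ys)])

def weighted_least_squares(p):
    """Learner's (approximate) best response: ridge fit on group-reweighted data."""
    A = np.zeros((d, d))
    b = np.zeros(d)
    for pk, X, y in zip(p, Xs, ys):
        A += pk * X.T @ X / len(y)
        b += pk * X.T @ y / len(y)
    return np.linalg.solve(A + 1e-6 * np.eye(d), b)

p = np.ones(K) / K          # adversary's distribution over groups
eta = 1.0                   # multiplicative-weights step size
avg_theta = np.zeros(d)

T = 50
for t in range(1, T + 1):
    theta = weighted_least_squares(p)     # learner responds to current weights
    losses = group_losses(theta)
    p *= np.exp(eta * losses)             # adversary up-weights high-loss groups
    p /= p.sum()
    avg_theta += (theta - avg_theta) / t  # average iterate as the returned model

print("worst-group MSE of averaged model:", group_losses(avg_theta).max())
```

Averaging the learner's iterates is a standard way to extract an approximate equilibrium from such no-regret dynamics; the paper's own algorithms and guarantees should be consulted for the precise procedure.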