How to Be Helpful to Multiple People at Once

Bibliographic Details
Published in: Cognitive Science, June 2020, Vol. 44(6), p. e12841
Authors: Gates, Vael; Griffiths, Thomas L.; Dragan, Anca D.
Format: Article
Language: English
Online access: Full text
Abstract

When someone hosts a party, when governments choose an aid program, or when assistive robots decide what meal to serve to a family, decision-makers must determine how to help even when their recipients have very different preferences. Which combination of people's desires should a decision-maker serve? To provide a potential answer, we turned to psychology: What do people think is best when multiple people have different utilities over options? We developed a quantitative model of what people consider desirable behavior, characterizing participants' preferences by inferring which combination of "metrics" (maximax, maxsum, maximin, or inequality aversion [IA]) best explained participants' decisions in a drink-choosing task. We found that participants' behavior was best described by the maximin metric, describing the desire to maximize the happiness of the worst-off person, though participant behavior was also consistent, to a lesser extent, with maximizing group utility (the maxsum metric) and with the IA metric. Participant behavior was consistent across variation in the agents involved and tended to become more maxsum-oriented when participants were told they were players in the task (Experiment 1). In later experiments, participants maintained maximin behavior across multi-step tasks rather than shortsightedly focusing on the individual steps therein (Experiments 2 and 3). By repeatedly asking participants what choices they would hope for in an optimal, just decision-maker, and carefully disambiguating which quantitative metrics describe these nuanced choices, we help constrain the space of what behavior we desire in leaders, artificial intelligence systems helping decision-makers, and the assistive robots and decision-makers of the future.
ISSN: 0364-0213; 1551-6709
DOI: 10.1111/cogs.12841
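The four decision metrics named in the abstract can be sketched as simple social welfare functions over a vector of individual utilities. The sketch below is illustrative, not the paper's implementation: the inequality-aversion form (total utility minus a penalty on pairwise spread) and the example utility values are assumptions, and the paper's exact parameterization may differ.

```python
# Illustrative social welfare metrics over a list of individual utilities.
# The inequality-aversion (IA) form is an assumed spread-penalized sum,
# not necessarily the parameterization used in the paper.

def maximax(utilities):
    # Happiness of the best-off person.
    return max(utilities)

def maxsum(utilities):
    # Total group utility (utilitarian).
    return sum(utilities)

def maximin(utilities):
    # Happiness of the worst-off person (Rawlsian).
    return min(utilities)

def inequality_aversion(utilities, alpha=0.5):
    # Assumed form: total utility minus a penalty on mean pairwise spread.
    n = len(utilities)
    spread = sum(abs(a - b) for a in utilities for b in utilities) / (2 * n)
    return sum(utilities) - alpha * spread

# Hypothetical drink-choosing example: three drinks, three people.
options = {"tea": [3, 1, 1], "soda": [2, 2, 2], "juice": [5, 0, 0]}
best_by_maximin = max(options, key=lambda o: maximin(options[o]))
```

With these example utilities, a maximin decision-maker picks "soda" (worst-off utility 2), while a maximax decision-maker would pick "juice" (best-off utility 5), illustrating how the metrics can disagree on the same choice set.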