Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error

Laboratory experiments are a promising tool for studying how competing institutional arrangements perform and what determines preferences between them. Reliance on enforcement by peers versus formal authorities is a key example. That people incur costs to punish free riders is a well‐documented departure from non‐behavioural game‐theoretic predictions, but how robust is peer punishment to informational problems? We report experimental evidence that reluctance to personally impose punishment when choices are reported unreliably may tip the scales towards rule‐based and algorithmic formal enforcement even when observation by the centre is equally prone to error. We provide new and consonant evidence from treatments in which information quality differs for authority versus peers, and confirmatory patterns in both binary decision and quasi‐continuous decision variants. Since the role of formal authority is assumed by a computer in our experiment, our findings are also relevant to the question of willingness to entrust machines to make morally fraught decisions, a choice increasingly confronting humans in the age of artificial intelligence. This paper is part of the Economica 100 Series. Economica, the LSE "house journal", is now 100 years old. To commemorate this achievement, we are publishing 100 papers by former students, as well as current and former faculty. Thomas Markussen is a Professor of Economics at the University of Copenhagen. He received his MSc in Comparative Politics from the LSE.

Detailed description

Bibliographic details
Published in: Economica (London) 2023-01, Vol.90 (357), p.315-338
Main authors: Markussen, Thomas; Putterman, Louis; Wang, Liangjun
Format: Article
Language: English
Online access: Full text
DOI: 10.1111/ecca.12443
ISSN: 0013-0427
EISSN: 1468-0335
Publisher: Blackwell Publishing Ltd, London
Source: PAIS Index; EBSCOhost Business Source Complete; Wiley Online Library
Subjects: Algorithms; Artificial intelligence; Authority; Enforcement; Experiments; Group decision-making; Peers; Punishment; Sanction; Theory; Free-rider behaviour; Imperfect information; Variants
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T20%3A41%3A51IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Algorithmic%20Leviathan%20or%20Individual%20Choice:%20Choosing%20Sanctioning%20Regimes%20in%20the%20Face%20of%20Observational%20Error&rft.jtitle=Economica%20(London)&rft.au=Markussen,%20Thomas&rft.date=2023-01&rft.volume=90&rft.issue=357&rft.spage=315&rft.epage=338&rft.pages=315-338&rft.issn=0013-0427&rft.eissn=1468-0335&rft_id=info:doi/10.1111/ecca.12443&rft_dat=%3Cproquest_cross%3E2746117289%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2746117289&rft_id=info:pmid/&rfr_iscdi=true