Algorithms and Influence: Artificial Intelligence and Crisis Decision-Making


Published in: International Studies Quarterly, 2022-10, Vol. 66 (4)
Authors: Michael C. Horowitz; Erik Lin-Greenberg
Format: Article
Language: English
Online access: Full text
Abstract: Countries around the world are increasingly investing in artificial intelligence (AI) to automate military tasks that traditionally required human involvement. Despite growing interest in AI-enabled systems, relatively little research explores whether and how AI affects military decision-making. Yet, national security practitioners may perceive the judgments of and actions taken by algorithms differently than those of humans. This variation may subsequently affect decisions on the use of force. Using two original survey experiments fielded on a sample of US national security experts, we find that AI use by both friendly and rival forces affects decision-making during interstate crises. National security experts are less likely to take military action when AI is used to analyze intelligence than when humans conduct the analysis. Experts also viewed an accident involving a rival's AI-enabled weapon that kills American troops as more deserving of retaliation than an accident involving only human operators, suggesting that national security practitioners are less forgiving of errant AI systems than of similarly erring humans. Our findings suggest emerging technologies such as AI can affect decisionmakers' perceptions in ways that shape political outcomes. Even in a world of algorithms, human decisions will still have important consequences for international security.
DOI: 10.1093/isq/sqac069
Published: 2022-10-07
ORCID: Michael C. Horowitz: 0000-0003-3627-9181; Erik Lin-Greenberg: 0000-0002-3067-7157
ISSN: 0020-8833
EISSN: 1468-2478
Source: Oxford University Press Journals All Titles (1996-Current); Political Science Complete