AI Loyalty: A New Paradigm for Aligning Stakeholder Interests
When we consult a doctor, lawyer, or financial advisor, we assume that they are acting in our best interests. But what should we assume when we interact with an artificial intelligence (AI) system? AI-driven personal assistants, such as Alexa and Siri, already serve as interfaces between consumers and information on the Web, and users routinely rely upon these and similar systems to take automated actions or provide information. Superficially, they may appear to be acting in users' interests, but many are designed with embedded conflicts of interest. To address this problem, we introduce the concept of AI loyalty. AI systems are loyal to the degree that they minimize and make transparent conflicts of interest, and act in ways that prioritize the interests of users. Loyal AI products hold obvious appeal for the end user and could promote the alignment of the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is acting in a manner that is loyal to the user, and argue that AI loyalty should be deliberately considered during the technological design process, alongside other important values in AI ethics such as fairness, accountability, privacy, and equity.
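To make the abstract's notion of loyalty as a matter of degree concrete, here is a minimal sketch in Python, assuming an illustrative four-point checklist. The criteria names and the equal weighting are hypothetical assumptions for illustration, not criteria taken from the article itself.

```python
from dataclasses import dataclass

@dataclass
class LoyaltyAssessment:
    """Hypothetical checklist for one AI assistant, loosely inspired by the
    abstract's idea that loyalty is graded. All criteria and the equal
    weighting are illustrative assumptions, not the authors' own criteria."""
    discloses_conflicts: bool   # are conflicts of interest made transparent?
    minimizes_conflicts: bool   # are conflicts designed out where possible?
    prioritizes_user: bool      # do default behaviors favor the user over the vendor?
    user_controls_data: bool    # can the user inspect and limit data use?

    def loyalty_score(self) -> float:
        """Fraction of criteria satisfied, in [0, 1]."""
        checks = (self.discloses_conflicts, self.minimizes_conflicts,
                  self.prioritizes_user, self.user_controls_data)
        return sum(checks) / len(checks)

# Example: an assistant that discloses conflicts but does not minimize them.
assistant = LoyaltyAssessment(True, False, True, False)
print(f"loyalty score: {assistant.loyalty_score():.2f}")  # prints 0.50
```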
Published in: | IEEE Transactions on Technology and Society, 2020-09, Vol. 1 (3), p. 128-137 |
---|---|
Main authors: | Aguirre, Anthony; Dempsey, Gaia; Surden, Harry; Reiner, Peter B. |
Format: | Article |
Language: | English |
Subjects: | AI ethics; artificial intelligence; AI assistants; conflicts of interest; ethics; fiduciary duty; law; loyalty; privacy; task analysis; value alignment |
Online access: | Order full text |
DOI: | 10.1109/TTS.2020.3013490 |
ISSN: | 2637-6415 |
Publisher: | New York: IEEE |
Source: | IEEE Electronic Library (IEL) |