AI Can Stop Mass Shootings, and More
We propose to build directly upon our longstanding, prior R&D in AI/machine ethics in order to attempt to make real the blue-sky idea of AI that can thwart mass shootings by bringing to bear its ethical reasoning. The R&D in question is overtly and avowedly logicist in form, and since we are hardly the only ones who have established a firm foundation in the attempt to imbue AIs with their own ethical sensibility, the pursuit of our proposal by those in different methodological camps should, we believe, be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition to two simulations: one in which the AI saves the lives of innocents by locking out a malevolent human's gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement. Along the way, some objections are anticipated and rebutted.
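Loosely pictured, the two simulations amount to an ethical gate that a weapon-mediating AI consults before enabling a trigger. The minimal sketch below only illustrates that idea under assumed toy rules; the `Role`, `Situation`, and `weapon_may_fire` names and the three rules are hypothetical and do not reproduce the paper's formal logicist machinery.

```python
# Hypothetical sketch (not the authors' system): a toy "ethical gate" that an
# AI-mediated firearm might consult before enabling its trigger. Data model,
# names, and rules are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum, auto


class Role(Enum):
    CIVILIAN = auto()
    LAW_ENFORCEMENT = auto()


@dataclass
class Situation:
    shooter_role: Role
    target_is_innocent: bool          # assumption: a perception module supplies this
    imminent_threat_to_others: bool


def weapon_may_fire(s: Situation) -> bool:
    """Return True only if firing is permissible under the toy rules.

    Toy rules (assumed for illustration):
      1. Firing at an innocent person is always forbidden.
      2. Law enforcement may fire to neutralize an imminent threat.
      3. Everything else defaults to 'locked'.
    """
    if s.target_is_innocent:
        return False                  # rule 1: lock out the gun
    if s.shooter_role is Role.LAW_ENFORCEMENT and s.imminent_threat_to_others:
        return True                   # rule 2: permit neutralization
    return False                      # rule 3: default deny


if __name__ == "__main__":
    # Simulation 1: a malevolent civilian aims at innocents -> weapon locks.
    print(weapon_may_fire(Situation(Role.CIVILIAN, True, False)))            # False
    # Simulation 2: an officer engages the non-innocent attacker -> permitted.
    print(weapon_may_fire(Situation(Role.LAW_ENFORCEMENT, False, True)))     # True
```

The paper's logicist approach would, by contrast, derive such permissions from formally encoded ethical principles rather than hard-coded rules.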
Saved in:
Main authors: | Bringsjord, Selmer; Govindarajulu, Naveen Sundar; Giancola, Michael |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computers and Society |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Bringsjord, Selmer; Govindarajulu, Naveen Sundar; Giancola, Michael |
description | We propose to build directly upon our longstanding, prior R&D in AI/machine ethics in order to attempt to make real the blue-sky idea of AI that can thwart mass shootings by bringing to bear its ethical reasoning. The R&D in question is overtly and avowedly logicist in form, and since we are hardly the only ones who have established a firm foundation in the attempt to imbue AIs with their own ethical sensibility, the pursuit of our proposal by those in different methodological camps should, we believe, be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition to two simulations: one in which the AI saves the lives of innocents by locking out a malevolent human's gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement. Along the way, some objections are anticipated and rebutted. |
doi_str_mv | 10.48550/arxiv.2102.09343 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2102.09343 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2102_09343 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computers and Society |
title | AI Can Stop Mass Shootings, and More |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-30T20%3A31%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=AI%20Can%20Stop%20Mass%20Shootings,%20and%20More&rft.au=Bringsjord,%20Selmer&rft.date=2021-02-05&rft_id=info:doi/10.48550/arxiv.2102.09343&rft_dat=%3Carxiv_GOX%3E2102_09343%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |