A Representative Study on Human Detection of Artificially Generated Media Across Countries
AI-generated media has become a threat to our digital society as we know it. Forgeries can be created automatically and on a large scale based on publicly available technologies. Recognizing this challenge, academics and practitioners have proposed a multitude of automatic detection strategies to detect such artificial media.
Saved in:
Main authors: | Frank, Joel; Herbert, Franziska; Ricker, Jonas; Schonherr, Lea; Eisenhofer, Thorsten; Fischer, Asja; Durmuth, Markus; Holz, Thorsten |
---|---|
Format: | Conference Proceeding |
Language: | eng |
Subjects: | Deepfakes; Generative AI; Media; Reflection; Reviews; Social networking (online); Surveys |
Online Access: | Order full text |
container_end_page | 73 |
---|---|
container_issue | |
container_start_page | 55 |
container_title | |
container_volume | |
creator | Frank, Joel; Herbert, Franziska; Ricker, Jonas; Schonherr, Lea; Eisenhofer, Thorsten; Fischer, Asja; Durmuth, Markus; Holz, Thorsten |
description | AI-generated media has become a threat to our digital society as we know it. Forgeries can be created automatically and on a large scale based on publicly available technologies. Recognizing this challenge, academics and practitioners have proposed a multitude of automatic detection strategies to detect such artificial media. However, in contrast to these technological advances, the human perception of generated media has not been thoroughly studied yet. In this paper, we aim to close this research gap. We conduct the first comprehensive survey on people's ability to detect generated media, spanning three countries (USA, Germany, and China), with 3,002 participants covering audio, image, and text media. Our results indicate that state-of-the-art forgeries are almost indistinguishable from "real" media, with the majority of participants simply guessing when asked to rate them as human- or machine-generated. In addition, AI-generated media is rated as more likely to be human-generated across all media types and all countries. To further understand which factors influence people's ability to detect AI-generated media, we include personal variables, chosen based on a literature review in the domains of deepfake and fake news research. In a regression analysis, we found that generalized trust, cognitive reflection, and self-reported familiarity with deepfakes significantly influence participants' decisions across all media categories. |
doi_str_mv | 10.1109/SP54263.2024.00159 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2375-1207 |
ispartof | 2024 IEEE Symposium on Security and Privacy (SP), 2024, p.55-73 |
issn | 2375-1207 |
language | eng |
recordid | cdi_ieee_primary_10646666 |
source | IEEE Electronic Library (IEL) |
subjects | Deepfakes; Generative AI; Media; Reflection; Reviews; Social networking (online); Surveys |
title | A Representative Study on Human Detection of Artificially Generated Media Across Countries |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T20%3A33%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=A%20Representative%20Study%20on%20Human%20Detection%20of%20Artificially%20Generated%20Media%20Across%20Countries&rft.btitle=2024%20IEEE%20Symposium%20on%20Security%20and%20Privacy%20(SP)&rft.au=Frank,%20Joel&rft.date=2024-05-19&rft.spage=55&rft.epage=73&rft.pages=55-73&rft.eissn=2375-1207&rft.coden=IEEPAD&rft_id=info:doi/10.1109/SP54263.2024.00159&rft_dat=%3Cieee_RIE%3E10646666%3C/ieee_RIE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9798350331301&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10646666&rfr_iscdi=true |