As Good As A Coin Toss: Human detection of AI-generated images, videos, audio, and audiovisual stimuli

As synthetic media becomes progressively more realistic and barriers to using it continue to lower, the technology has been increasingly utilized for malicious purposes, from financial fraud to nonconsensual pornography. Today, the principal defense against being misled by synthetic media relies on the ability of the human observer to visually and auditorily discern between real and fake.

Detailed description

Saved in:
Bibliographic details
Main authors: Cooke, Di, Edwards, Abigail, Barkoff, Sophia, Kelly, Kathryn
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Cooke, Di ; Edwards, Abigail ; Barkoff, Sophia ; Kelly, Kathryn
description As synthetic media becomes progressively more realistic and the barriers to using it continue to fall, the technology has been increasingly utilized for malicious purposes, from financial fraud to nonconsensual pornography. Today, the principal defense against being misled by synthetic media relies on the ability of the human observer to visually and auditorily discern between real and fake. However, it remains unclear just how vulnerable people actually are to deceptive synthetic media in the course of their day-to-day lives. We conducted a perceptual study with 1276 participants to assess how accurately people could distinguish synthetic images, audio-only, video-only, and audiovisual stimuli from authentic ones. To reflect the circumstances under which people would likely encounter synthetic media in the wild, testing conditions and stimuli emulated a typical online platform, and all synthetic media used in the survey was sourced from publicly accessible generative AI technology. We find that, overall, participants struggled to meaningfully discern between synthetic and authentic content. We also find that detection performance worsens when stimuli contain synthetic rather than authentic content, when images feature human faces rather than non-face objects, when stimuli are presented in a single modality rather than multimodally, when audiovisual stimuli are of mixed authenticity rather than fully synthetic, and when stimuli feature foreign languages rather than languages the observer is fluent in. Finally, we also find that prior knowledge of synthetic media does not meaningfully improve participants' detection performance. Collectively, these results indicate that people are highly susceptible to being deceived by synthetic media in their daily lives and that human perceptual detection capabilities can no longer be relied upon as an effective counterdefense.
doi_str_mv 10.48550/arxiv.2403.16760
format Article
date 2024-03-25
rights http://creativecommons.org/licenses/by-nc-sa/4.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2403.16760
language eng
recordid cdi_arxiv_primary_2403_16760
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Human-Computer Interaction
Computer Science - Sound
title As Good As A Coin Toss: Human detection of AI-generated images, videos, audio, and audiovisual stimuli
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T14%3A33%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=As%20Good%20As%20A%20Coin%20Toss:%20Human%20detection%20of%20AI-generated%20images,%20videos,%20audio,%20and%20audiovisual%20stimuli&rft.au=Cooke,%20Di&rft.date=2024-03-25&rft_id=info:doi/10.48550/arxiv.2403.16760&rft_dat=%3Carxiv_GOX%3E2403_16760%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true