How Good (Or Bad) Are LLMs at Detecting Misleading Visualizations?

In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The recent advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments, from initial exploration to detailed analysis, we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. This study demonstrates the applicability of LLMs in addressing the pressing concern of misleading charts.

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics, 2025-01, Vol. 31 (1), p. 1116-1125
Main Authors: Lo, Leo Yu-Ho; Qu, Huamin
Format: Article
Language: English
DOI: 10.1109/TVCG.2024.3456333
Publisher: IEEE (United States)
ISSN: 1077-2626
EISSN: 1941-0506
PMID: 39264775
CODEN: ITVGEA
ORCID iDs: https://orcid.org/0000-0002-3660-3765; https://orcid.org/0000-0002-3344-9694
Source: IEEE Electronic Library (IEL)
Subjects:
Data visualization
Deceptive Visualization
Education
Fake news
Large language models
Libraries
Prompt Engineering
Technological innovation
Visualization