Unlearning Climate Misinformation in Large Language Models

Misinformation regarding climate change is a key roadblock in addressing one of the most serious threats to humanity. This paper investigates factual accuracy in large language models (LLMs) regarding climate information. Using true/false labeled Q&A data for fine-tuning and evaluating LLMs on climate-related claims, we compare open-source models, assessing their ability to generate truthful responses to climate change questions. We investigate the detectability of models intentionally poisoned with false climate information, finding that such poisoning may not affect the accuracy of a model's responses in other domains. Furthermore, we compare the effectiveness of unlearning algorithms, fine-tuning, and Retrieval-Augmented Generation (RAG) for factually grounding LLMs on climate change topics. Our evaluation reveals that unlearning algorithms can be effective for nuanced conceptual claims, despite previous findings suggesting their inefficacy in privacy contexts. These insights aim to guide the development of more factually reliable LLMs and highlight the need for additional work to secure LLMs against misinformation attacks.
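
The abstract compares unlearning algorithms, fine-tuning, and RAG for factually grounding LLMs on climate claims. The sketch below illustrates one common unlearning recipe of this kind: gradient ascent on a "forget" set of false claims, anchored by ordinary fine-tuning on a "retain" set of true claims. It is a minimal illustration under stated assumptions, not the paper's exact method; the model name, example claims, learning rate, and step count are placeholders.

    # Minimal sketch of gradient-ascent unlearning on labeled climate claims.
    # The model name, example claims, learning rate, and step count are
    # illustrative assumptions, not the paper's configuration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; the paper studies open-source chat LLMs
    tok = AutoTokenizer.from_pretrained(model_name)
    tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(model_name)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # "Forget" set: false claims whose likelihood we push down (gradient ascent).
    # "Retain" set: true claims whose likelihood we keep up (ordinary fine-tuning).
    forget_texts = ["Q: Is recent warming caused by the sun alone? A: Yes."]
    retain_texts = ["Q: Do human CO2 emissions drive recent warming? A: Yes."]

    def lm_loss(texts):
        batch = tok(texts, return_tensors="pt", padding=True)
        labels = batch["input_ids"].clone()
        labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss
        return model(**batch, labels=labels).loss

    model.train()
    for step in range(3):  # a handful of steps, purely for illustration
        optimizer.zero_grad()
        # Negating the forget-set loss ascends on it; the retain term anchors
        # behaviour on true claims so the model is not degraded wholesale.
        loss = -lm_loss(forget_texts) + lm_loss(retain_texts)
        loss.backward()
        optimizer.step()

Published unlearning methods typically add further safeguards, such as capping the ascent term or regularizing toward the original model, to limit collateral damage to unrelated capabilities.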

Bibliographic Details
Main Authors: Fore, Michael; Singh, Simranjit; Lee, Chaehong; Pandey, Amritanshu; Anastasopoulos, Antonios; Stamoulis, Dimitrios
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Source: arXiv.org
Online Access: https://arxiv.org/abs/2405.19563
DOI: 10.48550/arxiv.2405.19563
Published: 2024-05-29
Rights: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0)