Understanding Knowledge Drift in LLMs through Misinformation
| | |
|---|---|
| Main Authors: | |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Order full text |
Abstract: Large Language Models (LLMs) have revolutionized numerous applications, making them an integral part of our digital ecosystem. However, their reliability becomes critical, especially when these models are exposed to misinformation. We primarily analyze the susceptibility of state-of-the-art LLMs to factual inaccuracies when they encounter false information in a question-answering (QnA) scenario, an issue that can lead to a phenomenon we refer to as *knowledge drift*, which significantly undermines the trustworthiness of these models. We evaluate the factuality and the uncertainty of the models' responses relying on Entropy, Perplexity, and Token Probability metrics. Our experiments reveal that an LLM's uncertainty can increase by up to 56.6% when the question is answered incorrectly due to the exposure to false information. At the same time, repeated exposure to the same false information can decrease the model's uncertainty again (-52.8% w.r.t. the answers on the untainted prompts), potentially manipulating the underlying model's beliefs and introducing a drift from its original knowledge. These findings provide insights into LLMs' robustness and vulnerability to adversarial inputs, paving the way for developing more reliable LLM applications across various domains. The code is available at https://github.com/afastowski/knowledge_drift.
DOI: 10.48550/arxiv.2409.07085
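The uncertainty metrics named in the abstract (Entropy, Perplexity, and Token Probability) can all be derived from a model's per-token output distribution. The snippet below is a minimal illustrative sketch, not the authors' implementation (their code is in the linked repository): it assumes a Hugging Face causal language model, with "gpt2" used purely as a placeholder, and scores a candidate answer conditioned on a prompt.

```python
# Minimal sketch of per-answer uncertainty scoring (illustrative only; see the
# authors' repository for the actual implementation used in the paper).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates state-of-the-art LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def uncertainty_metrics(prompt: str, answer: str) -> dict:
    """Score the answer tokens conditioned on the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt",
                           add_special_tokens=False).input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=1)

    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab_size)

    # The distribution predicting answer token i sits one position earlier.
    start = prompt_ids.shape[1]
    answer_logits = logits[0, start - 1 : input_ids.shape[1] - 1, :]
    log_probs = torch.log_softmax(answer_logits, dim=-1)
    probs = log_probs.exp()

    # Token Probability: probability the model assigns to each answer token.
    token_log_probs = log_probs.gather(1, answer_ids[0].unsqueeze(1)).squeeze(1)
    mean_token_prob = token_log_probs.exp().mean().item()

    # Entropy: mean entropy of the predictive distribution over answer tokens.
    entropy = -(probs * log_probs).sum(dim=-1).mean().item()

    # Perplexity: exp of the mean negative log-likelihood of the answer tokens.
    perplexity = torch.exp(-token_log_probs.mean()).item()

    return {"entropy": entropy,
            "perplexity": perplexity,
            "mean_token_prob": mean_token_prob}


print(uncertainty_metrics("Q: What is the capital of France?\nA:", " Paris"))
```

In the setting described by the abstract, such scores would be compared between answers to untainted prompts and answers to prompts containing false information, with shifts in uncertainty indicating knowledge drift.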