KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking
Format: Article
Language: English
Online access: Order full text
Abstract: This paper introduces KnowHalu, a novel approach for detecting hallucinations in text generated by large language models (LLMs), utilizing step-wise reasoning, multi-formulation queries, multi-form knowledge for factual checking, and a fusion-based detection mechanism. As LLMs are increasingly applied across various domains, ensuring that their outputs are not hallucinated is critical. Recognizing the limitations of existing approaches that either rely on the self-consistency check of LLMs or perform post-hoc fact-checking without considering the complexity of queries or the form of knowledge, KnowHalu proposes a two-phase process for hallucination detection. In the first phase, it identifies non-fabrication hallucinations: responses that, while factually correct, are irrelevant or non-specific to the query. The second phase, multi-form based factual checking, comprises five key steps: reasoning and query decomposition, knowledge retrieval, knowledge optimization, judgment generation, and judgment aggregation. Our extensive evaluations demonstrate that KnowHalu significantly outperforms SOTA baselines in detecting hallucinations across diverse tasks, e.g., improving by 15.65% in QA tasks and 5.50% in summarization tasks, highlighting its effectiveness and versatility in detecting hallucinations in LLM-generated content.
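The abstract describes a concrete two-phase pipeline, so a minimal sketch may help make the flow of the phases and five steps concrete. This is an illustrative reconstruction only: the `llm` and `retrieve` callables, all prompts, and the function name `detect_hallucination` are hypothetical stand-ins, not the authors' implementation or API.

```python
# Illustrative sketch of the two-phase detection pipeline described in the
# abstract. The `llm` (text-in/text-out model) and `retrieve` (sub-query ->
# passages) callables are assumed interfaces; all prompts are hypothetical.
from typing import Callable, List

LLM = Callable[[str], str]
Retriever = Callable[[str], List[str]]

def detect_hallucination(query: str, answer: str,
                         llm: LLM, retrieve: Retriever) -> str:
    # Phase 1: flag non-fabrication hallucinations -- answers that may be
    # factually correct but are irrelevant or non-specific to the query.
    verdict = llm(f"Does this answer directly and specifically address the "
                  f"question?\nQ: {query}\nA: {answer}\nReply YES or NO.")
    if "NO" in verdict.upper():
        return "non-fabrication hallucination"

    # Phase 2: multi-form based factual checking.
    # Step 1: reasoning and query decomposition into atomic sub-questions.
    sub_queries = llm(f"Decompose into atomic sub-questions, one per "
                      f"line:\n{query}").splitlines()

    judgments = []
    for sub_q in filter(None, sub_queries):
        # Step 2: knowledge retrieval (e.g., from a search index or KB).
        passages = retrieve(sub_q)
        # Step 3: knowledge optimization -- distill raw passages into
        # multiple forms (a short summary plus structured triplets).
        knowledge = llm("Restate as (a) a short summary and (b) "
                        "subject-predicate-object triplets:\n"
                        + "\n".join(passages))
        # Step 4: judgment generation against the optimized knowledge.
        judgments.append(llm(f"Given this knowledge:\n{knowledge}\n"
                             f"Is the answer '{answer}' supported for "
                             f"'{sub_q}'? Reply SUPPORTED, REFUTED, "
                             f"or UNSURE."))

    # Step 5: judgment aggregation (fusing per-form, per-sub-query verdicts).
    if any("REFUTED" in j.upper() for j in judgments):
        return "fabrication hallucination"
    return "not hallucinated"
```

In this reading, the fusion-based mechanism corresponds to the final aggregation step: per-sub-query judgments derived from different knowledge forms are combined into a single verdict, here simplified to a majority-free "any refutation" rule.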
DOI: 10.48550/arxiv.2404.02935