Banishing LLM Hallucinations Requires Rethinking Generalization

Despite their powerful chat, coding, and reasoning abilities, Large Language Models (LLMs) frequently hallucinate. Conventional wisdom suggests that hallucinations are a consequence of a balance between creativity and factuality, which can be mitigated, but not eliminated, by grounding the LLM in external knowledge sources.
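The grounding mitigation the abstract refers to is commonly implemented as retrieval-augmented generation: fetch supporting passages from an external knowledge source and condition the model's answer on them. The sketch below is illustrative only; `retrieve`, `grounded_prompt`, and the toy `KNOWLEDGE_BASE` are hypothetical stand-ins for a real retriever and corpus, not anything defined in the paper.

```python
from collections import Counter

# Toy external knowledge source; in practice this would be a search index
# or vector store (hypothetical example data).
KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by token overlap with the query; a crude stand-in
    for a real retriever."""
    q_tokens = Counter(query.lower().split())
    def score(doc: str) -> int:
        # Multiset intersection counts shared tokens between query and doc.
        return sum((Counter(doc.lower().split()) & q_tokens).values())
    return sorted(KNOWLEDGE_BASE, key=score, reverse=True)[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt instructing the model to answer only from the
    retrieved context, which is the essence of grounding."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below; reply 'unknown' if it is insufficient.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The assembled prompt would be sent to the LLM; the generation call
    # itself is omitted here.
    print(grounded_prompt("When was the Eiffel Tower completed?"))
```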

Bibliographic Details
Published in: arXiv.org 2024-06
Main Authors: Li, Johnny; Consul, Saksham; Zhou, Eda; Wong, James; Farooqui, Naila; Ye, Yuxin; Manohar, Nithyashree; Wei, Zhuxiaona; Wu, Tian; Echols, Ben; Zhou, Sharon; Diamos, Gregory
Format: Article
Language: English
Online Access: Full text