Minimizing Factual Inconsistency and Hallucination in Large Language Models

Large Language Models (LLMs) are widely used in critical fields such as healthcare, education, and finance due to their remarkable proficiency in various language-related tasks. However, LLMs are prone to generating factually incorrect responses, or "hallucinations," which can lead to a loss of...

Bibliographic Details
Published in: arXiv.org, 2023-11
Main authors: Muneeswaran, I; Saxena, Shreya; Prasad, Siva; Sai Prakash, M V; Shankar, Advaith; Varun, V; Vaddina, Vishal; Gopalakrishnan, Saisubramaniam
Format: Article
Language: English
Online access: Full text