DETECTING HALLUCINATION IN A LANGUAGE MODEL
Format: Patent
Language: English
Abstract: Various embodiments discussed herein are directed to improving existing technologies by detecting a likelihood of hallucination arising from one-shot, few-shot, or outside-knowledge contexts. For example, regarding the one-shot or few-shot contexts, some embodiments determine a set of tokens in a language model output that are not found in the target content but are found in at least one example. When such phrases are not very common words, this strongly indicates that the model is hallucinating: the phrases should appear in the target content but are instead drawn from the examples.
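The check described in the abstract lends itself to a short illustration. The following Python sketch is not the patent's implementation; it assumes simple whitespace tokenization and a hypothetical hand-picked common-word list where an actual embodiment would presumably use the model's tokenizer and corpus frequency statistics.

```python
# Hypothetical stop list; a real system would use frequency statistics
# to decide which tokens count as "very common words".
COMMON_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "that", "it"}

def hallucination_signals(output: str, target: str, examples: list[str]) -> set[str]:
    """Return output tokens that are absent from the target content but
    present in at least one few-shot example -- the signal described above."""
    output_tokens = set(output.lower().split())
    target_tokens = set(target.lower().split())
    example_tokens: set[str] = set()
    for example in examples:
        example_tokens |= set(example.lower().split())
    # Tokens the model should have drawn from the target content but that
    # instead appear only in the examples; common words are excluded
    # because they are weak evidence of copying.
    return ((output_tokens - target_tokens) & example_tokens) - COMMON_WORDS


signals = hallucination_signals(
    output="the capital of france is paris",
    target="berlin is the capital of germany",
    examples=["q: what is the capital of france? a: paris"],
)
print(signals)  # e.g. {'france', 'paris'} -- likely copied from the example
```

The set arithmetic mirrors the abstract directly: subtracting the target vocabulary isolates tokens the model did not take from the content it was given, intersecting with the example vocabulary shows they came from the prompt's examples, and removing common words filters out coincidental overlap.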