KEYword based Sampling (KEYS) for Large Language Models
Saved in:
Main author: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Question answering (Q/A) can be formulated as a generative task (Mitra, 2017)
in which the goal is to generate an answer given the question and the passage
(knowledge, if available). Recent advances in the QA task have focused largely
on language model improvements and less on other areas such as sampling
(Krishna et al., 2021; Nakano et al., 2021). Keywords play a very important
role for humans in language generation: humans formulate keywords and use
grammar to connect those keywords into sentences. In the research community,
very little attention is paid to how humans generate answers to a question and
how this behavior can be incorporated into a language model. In this paper, we
explore these two areas combined, i.e., how sampling can be used to generate
answers that are close to human-like behavior and factually correct. Hence, we
argue that the decoding algorithm used for Q/A tasks should also depend on
keywords. These keywords can be obtained from the question, the passage, or
internet results. We use knowledge distillation techniques to extract keywords
and sample using these extracted keywords on top of vanilla decoding
algorithms when formulating the answer, so as to generate a human-like answer.
In this paper, we show that our decoding method outperforms the most commonly
used decoding methods for the Q/A task. |
---|---|
DOI: | 10.48550/arxiv.2305.18679 |
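The keyword-guided sampling idea described in the abstract can be sketched, very loosely, as adding a bonus to the logits of keyword tokens before drawing from the softmax, so that keywords extracted from the question or passage are more likely to appear in the generated answer. The additive `boost` scheme, the function name `keyword_boosted_sample`, and the toy vocabulary below are illustrative assumptions, not the paper's actual formulation.

```python
import math
import random

def keyword_boosted_sample(logits, keyword_ids, boost=2.0,
                           temperature=1.0, rng=None):
    """Sample one token id from `logits`, biasing toward keyword tokens.

    A fixed `boost` is added to each keyword token's logit (an assumed
    mechanism for illustration), then a temperature-scaled softmax is
    sampled. Returns (token_id, probability_list).
    """
    rng = rng or random.Random(0)
    adjusted = [
        (l + boost if i in keyword_ids else l) / temperature
        for i, l in enumerate(logits)
    ]
    # Numerically stable softmax over the adjusted logits.
    m = max(adjusted)
    exps = [math.exp(a - m) for a in adjusted]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Multinomial draw from the boosted distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs
```

With uniform logits, a boosted keyword token ends up with a strictly higher probability than the non-keyword tokens, which is the intended bias; in practice the same adjustment would be applied on top of a vanilla decoding method (e.g. top-k or nucleus sampling) rather than plain multinomial sampling.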