How to Make the Most of LLMs' Grammatical Knowledge for Acceptability Judgments
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: The grammatical knowledge of language models (LMs) is often measured using a benchmark of linguistic minimal pairs, where LMs are presented with a pair of acceptable and unacceptable sentences and required to judge which is acceptable. The existing dominant approach, however, naively calculates and compares the probabilities of paired sentences using LMs. Additionally, large language models (LLMs) have yet to be thoroughly examined in this field. We thus investigate how to make the most of LLMs' grammatical knowledge to comprehensively evaluate it. Through extensive experiments with nine judgment methods in English and Chinese, we demonstrate that a probability readout method, in-template LP, and a prompting-based method, Yes/No probability computing, achieve particularly high performance, surpassing the conventional approach. Our analysis reveals their different strengths; e.g., Yes/No probability computing is robust against token-length bias, suggesting that they harness different aspects of LLMs' grammatical knowledge. Consequently, we recommend using diverse judgment methods to evaluate LLMs comprehensively.
DOI: 10.48550/arxiv.2408.09639
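To make the two method families contrasted in the abstract concrete, the sketch below compares the conventional approach (summing and comparing the log-probabilities of the paired sentences) with a prompting-based Yes/No probability comparison. This is a minimal illustration, not the paper's implementation: the model (gpt2), the prompt wording, and the minimal pair are assumptions chosen for brevity, and the paper's in-template LP and Yes/No probability computing methods may differ in detail.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-in model; the paper evaluates LLMs, not GPT-2.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Sum of token log-probabilities of `sentence` under the LM
    (the quantity compared by the conventional approach)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position t predict token t+1, hence the shift.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs[torch.arange(targets.size(0)), targets].sum().item()

def prefers_yes(sentence: str) -> bool:
    """Prompting-based judgment: ask about acceptability and compare
    the probabilities of ' Yes' vs. ' No' as the next token.
    The prompt wording here is an assumption, not the paper's template."""
    prompt = (
        "Is the following sentence grammatically acceptable? "
        f'Answer Yes or No.\nSentence: "{sentence}"\nAnswer:'
    )
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        next_logits = model(ids).logits[0, -1]
    yes_id = tokenizer(" Yes").input_ids[0]
    no_id = tokenizer(" No").input_ids[0]
    # Softmax is monotone, so comparing logits compares probabilities.
    return bool(next_logits[yes_id] > next_logits[no_id])

# A hypothetical minimal pair (not taken from the paper's benchmark).
good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."

print("Conventional:", sentence_log_prob(good) > sentence_log_prob(bad))
print("Yes/No prompting:", prefers_yes(good), prefers_yes(bad))
```

Note that the conventional comparison depends on raw sentence probabilities, which shrink with sentence length; this is one way to see why the abstract singles out Yes/No probability computing as robust against token-length bias.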