Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether analogical reasoning is a task in itself that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of those used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline, and find that after training, models approach human performance.
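As context for the abstract's point that analogies are a standard evaluation for word embeddings in NLP, the sketch below illustrates the conventional vector-offset ("3CosAdd") analogy test. The toy vectors and vocabulary are placeholder assumptions for illustration only; they are not the dataset, models, or training objectives used in this paper.

```python
import numpy as np

# Minimal sketch of the vector-offset ("3CosAdd") analogy test commonly used
# to evaluate word embeddings. The tiny vectors below are illustrative
# placeholders, not real embeddings from the paper.
embeddings = {
    "man":    np.array([0.9, 0.1, 0.0]),
    "woman":  np.array([0.9, 0.9, 0.0]),
    "king":   np.array([0.1, 0.1, 0.9]),
    "queen":  np.array([0.1, 0.9, 0.9]),
    "prince": np.array([0.5, 0.1, 0.9]),
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, vocab):
    # Answer "a is to b as c is to ?" by ranking candidates against b - a + c,
    # excluding the three query words themselves.
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(solve_analogy("man", "woman", "king", embeddings))  # expected: "queen"
```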
DOI: 10.48550/arxiv.2310.05597