MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning
Since commonsense information is recorded far less often than it actually occurs, language models pre-trained on text generation have difficulty learning sufficient commonsense knowledge. Several studies have leveraged text retrieval to augment the models' commonsense ability. Unlike text, images capture commonsense information inherently, yet little effort has been made to utilize them effectively. In this work, we propose MORE, a novel Multi-mOdal REtrieval augmentation framework that leverages both text and images to enhance the commonsense ability of language models. Extensive experiments on the CommonGen task demonstrate the efficacy of MORE with pre-trained models of both single and multiple modalities.
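The abstract describes the general recipe of retrieval-augmented generation with evidence from two modalities. The sketch below illustrates that recipe only in outline, assuming retrieved images are represented by their captions; it is not the paper's actual architecture, and every name in it (`Retrieved`, `build_prompt`, the stub retriever and language model) is hypothetical.

```python
# Minimal sketch of multi-modal retrieval augmentation for generative
# commonsense reasoning: retrieved text snippets and image captions are
# fused into the prompt before generation. Illustrative only; not the
# method proposed in the paper.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Retrieved:
    texts: list[str]     # top-k retrieved text snippets
    captions: list[str]  # captions standing in for top-k retrieved images


def build_prompt(concepts: list[str], evidence: Retrieved) -> str:
    """Concatenate multi-modal evidence with a CommonGen-style concept set."""
    ctx_lines = [f"[text] {t}" for t in evidence.texts]
    ctx_lines += [f"[image] {c}" for c in evidence.captions]
    return "\n".join(ctx_lines) + f"\nGenerate a sentence using: {', '.join(concepts)}"


def generate(concepts: list[str],
             retrieve: Callable[[list[str]], Retrieved],
             lm: Callable[[str], str]) -> str:
    # Retrieve evidence for the concepts, then condition the LM on it.
    return lm(build_prompt(concepts, retrieve(concepts)))


if __name__ == "__main__":
    # Stubs in place of a real retriever and language model.
    stub_retrieve = lambda cs: Retrieved(
        texts=["A dog leaps to catch a frisbee in the park."],
        captions=["a dog jumping for a frisbee on grass"],
    )
    stub_lm = lambda prompt: prompt.splitlines()[-1]  # placeholder "model"
    print(generate(["dog", "frisbee", "catch"], stub_retrieve, stub_lm))
```

In a real system, the stubs would be replaced by dense retrievers over text and image corpora and a pre-trained generator; the key design point the abstract emphasizes is that both modalities feed the same generation step.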
Saved in:
| Main Authors | , , , |
|---|---|
| Format | Article |
| Language | English |
| Subjects | |
| Online Access | Order full text |
| DOI | 10.48550/arxiv.2402.13625 |