MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning
Since commonsense information is recorded far less frequently than it occurs, language models pre-trained on text generation have difficulty learning sufficient commonsense knowledge. Several studies have leveraged text retrieval to augment the models' commonsense ability. Unlike text, images capture commonsense information inherently, but little effort has been made to utilize them effectively. In this work, we propose a novel Multi-mOdal REtrieval (MORE) augmentation framework to leverage both text and images to enhance the commonsense ability of language models. Extensive experiments on the Common-Gen task have demonstrated the efficacy of MORE based on pre-trained models of both single and multiple modalities.
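This record gives no implementation details, so the following is only a minimal, hypothetical sketch of the general retrieval-augmented generation pattern the abstract describes: retrieve evidence for a concept set from both text and image sources, then condition a pre-trained generator on that evidence. The retrieval helpers (`retrieve_text_snippets`, `retrieve_image_captions`), the `t5-base` checkpoint, and the prompt format are illustrative assumptions, not the authors' MORE architecture.

```python
# Hypothetical sketch only -- not the authors' MORE implementation.
# It illustrates the generic pattern from the abstract: retrieve commonsense
# evidence from text and from images (verbalized here as captions), then
# condition a pre-trained generator on it for a CommonGen-style
# concept-to-sentence task.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def retrieve_text_snippets(concepts, k=3):
    """Placeholder text retriever (in practice, e.g. BM25 or a dense retriever)."""
    corpus = ["A dog leaps up and catches a frisbee thrown across the park."]
    return corpus[:k]


def retrieve_image_captions(concepts, k=3):
    """Placeholder image retriever; retrieved images are verbalized as captions
    so that a text-only generator can consume them."""
    captions = ["Photo: a dog in mid-air catching a red frisbee on a lawn."]
    return captions[:k]


def generate_with_multimodal_context(concepts):
    # t5-base is an arbitrary stand-in for whichever single- or multi-modal
    # pre-trained model is being augmented.
    tokenizer = AutoTokenizer.from_pretrained("t5-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

    # Concatenate evidence from both modalities with the input concepts.
    evidence = " ".join(retrieve_text_snippets(concepts) + retrieve_image_captions(concepts))
    prompt = f"generate a sentence with: {', '.join(concepts)}. context: {evidence}"

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


print(generate_with_multimodal_context(["dog", "frisbee", "catch", "throw"]))
```

In MORE itself the fusion of retrieved images is presumably learned rather than done by naive caption concatenation; the sketch only shows where multi-modal evidence enters the generation step.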
Saved in:
Main authors: | Cui, Wanqing; Bi, Keping; Guo, Jiafeng; Cheng, Xueqi |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language |
Online access: | Order full text |
creator | Cui, Wanqing; Bi, Keping; Guo, Jiafeng; Cheng, Xueqi |
description | Since commonsense information is recorded far less frequently than it occurs, language models pre-trained on text generation have difficulty learning sufficient commonsense knowledge. Several studies have leveraged text retrieval to augment the models' commonsense ability. Unlike text, images capture commonsense information inherently, but little effort has been made to utilize them effectively. In this work, we propose a novel Multi-mOdal REtrieval (MORE) augmentation framework to leverage both text and images to enhance the commonsense ability of language models. Extensive experiments on the Common-Gen task have demonstrated the efficacy of MORE based on pre-trained models of both single and multiple modalities. |
doi_str_mv | 10.48550/arxiv.2402.13625 |
format | Article |
creationdate | 2024-02-21 |
link | https://arxiv.org/abs/2402.13625 |
rights | http://creativecommons.org/licenses/by-nc-sa/4.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2402.13625 |
language | eng |
recordid | cdi_arxiv_primary_2402_13625 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T17%3A24%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=MORE:%20Multi-mOdal%20REtrieval%20Augmented%20Generative%20Commonsense%20Reasoning&rft.au=Cui,%20Wanqing&rft.date=2024-02-21&rft_id=info:doi/10.48550/arxiv.2402.13625&rft_dat=%3Carxiv_GOX%3E2402_13625%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |