Language Models As or For Knowledge Bases
DL4KG 2021. Pre-trained language models (LMs) have recently gained attention for their potential as an alternative to (or proxy for) explicit knowledge bases (KBs). In this position paper, we examine this hypothesis, identify strengths and limitations of both LMs and KBs, and discuss the complementary nature of the two paradigms. In particular, we offer qualitative arguments that latent LMs are not suitable as a substitute for explicit KBs, but could play a major role in augmenting and curating KBs.
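To make the paradigm in the abstract concrete: treating an LM as a KB means replacing a structured triple lookup with a cloze-style query against a pre-trained model. The following is a minimal sketch, assuming the HuggingFace transformers library is installed; the model choice and prompt are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of the "LM as KB" paradigm: query a masked language
# model with a cloze-style prompt instead of looking up a triple in an
# explicit KB. Model and prompt are assumptions for illustration only.
from transformers import pipeline

# Any masked LM works here; bert-base-cased is just a common default.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# A KB would store an explicit triple such as (Winston Churchill, bornIn, ?).
# The LM is instead probed with a natural-language template.
for result in fill_mask("Winston Churchill was born in [MASK].", top_k=3):
    print(f"{result['token_str']:>12}  p={result['score']:.3f}")
```

Unlike a KB lookup, the output is a ranked probability distribution over vocabulary tokens rather than a single curated, attributable fact, which is one reason the substitution question the paper examines is non-trivial.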
Saved in:
Main Authors: | Razniewski, Simon; Yates, Andrew; Kassner, Nora; Weikum, Gerhard |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Databases |
Online Access: | Order full text |
creator | Razniewski, Simon; Yates, Andrew; Kassner, Nora; Weikum, Gerhard |
description | DL4KG 2021. Pre-trained language models (LMs) have recently gained attention for their potential as an alternative to (or proxy for) explicit knowledge bases (KBs). In this position paper, we examine this hypothesis, identify strengths and limitations of both LMs and KBs, and discuss the complementary nature of the two paradigms. In particular, we offer qualitative arguments that latent LMs are not suitable as a substitute for explicit KBs, but could play a major role in augmenting and curating KBs. |
doi_str_mv | 10.48550/arxiv.2110.04888 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2110.04888 |
language | eng |
recordid | cdi_arxiv_primary_2110_04888 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Databases |
title | Language Models As or For Knowledge Bases |
url | https://arxiv.org/abs/2110.04888 |