Knowledge Graph-Enhanced Large Language Models via Path Selection
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications. However, they are known to generate factually inaccurate outputs, a.k.a. the hallucination problem. In recent years, incorporating external knowledge extracted from Knowledge Graphs (KGs) has become...
Saved in:
Main authors: | Liu, Haochen ; Wang, Song ; Zhu, Yaochen ; Dong, Yushun ; Li, Jundong |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence ; Computer Science - Computation and Language |
Online access: | Order full text |
creator | Liu, Haochen ; Wang, Song ; Zhu, Yaochen ; Dong, Yushun ; Li, Jundong |
description | Large Language Models (LLMs) have shown unprecedented performance in various
real-world applications. However, they are known to generate factually
inaccurate outputs, a.k.a. the hallucination problem. In recent years,
incorporating external knowledge extracted from Knowledge Graphs (KGs) has
become a promising strategy to improve the factual accuracy of LLM-generated
outputs. Nevertheless, most existing approaches rely on LLMs themselves to
perform KG knowledge extraction, which is highly inflexible because LLMs can
only provide a binary judgment on whether a given piece of knowledge (e.g., a
knowledge path in the KG) should be used. In addition, LLMs tend to select only
knowledge that has a direct semantic relationship to the input text, so
potentially useful knowledge with indirect semantics may be ignored. In this
work, we propose KELP, a principled three-stage framework that addresses these
problems. Specifically, KELP achieves finer-grained, flexible knowledge
extraction by scoring knowledge paths against the input text via latent
semantic matching. Meanwhile, knowledge paths with only an indirect semantic
relationship to the input text can also be considered, via a trained encoding
between selected KG paths and the input text. Experiments on real-world
datasets validate the effectiveness of KELP. |
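The abstract's core idea of scoring candidate KG paths against an input text, rather than asking an LLM for a binary keep/drop judgment, can be illustrated with a minimal sketch. Everything here is a stand-in assumption: the paths are hand-written verbalized triples, and the "latent" encoder is a toy bag-of-words embedding, not the trained path-text encoder KELP actually uses.

```python
# Hypothetical sketch of path selection by semantic scoring.
# NOT the paper's method: the embedding below is a toy bag-of-words
# stand-in for KELP's trained latent path-text encoder.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "latent" representation: lowercase word counts
    # (underscores and punctuation are split away).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_paths(input_text: str, paths: list[str], k: int = 2) -> list[str]:
    # Score every candidate path against the input and keep the top-k,
    # giving a graded ranking instead of a binary judgment per path.
    q = embed(input_text)
    ranked = sorted(paths, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

paths = [
    "Paris capital_of France",
    "France located_in Europe",
    "Einstein born_in Ulm",
]
print(select_paths("What is the capital of France?", paths))
# → ['Paris capital_of France', 'France located_in Europe']
```

With a trained encoder in place of `embed`, the same top-k ranking could also surface paths whose relevance is only indirect, which a purely lexical or binary filter would miss.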
doi_str_mv | 10.48550/arxiv.2406.13862 |
format | Article |
creationdate | 2024-06-19 |
rights | http://creativecommons.org/licenses/by/4.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2406.13862 |
language | eng |
recordid | cdi_arxiv_primary_2406_13862 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence ; Computer Science - Computation and Language |
title | Knowledge Graph-Enhanced Large Language Models via Path Selection |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T11%3A01%3A40IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Knowledge%20Graph-Enhanced%20Large%20Language%20Models%20via%20Path%20Selection&rft.au=Liu,%20Haochen&rft.date=2024-06-19&rft_id=info:doi/10.48550/arxiv.2406.13862&rft_dat=%3Carxiv_GOX%3E2406_13862%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |