Multi-domain Knowledge Graph Collaborative Pre-training and Prompt Tuning for Diverse Downstream Tasks

Knowledge graphs (KGs) provide reliable external knowledge for a wide variety of AI tasks in the form of structured triples. Knowledge graph pre-training (KGP) aims to pre-train neural networks on large-scale KGs and provide unified interfaces to enhance different downstream tasks, which is a key direction for KG management, maintenance, and applications. Existing works often focus purely on research questions in open domains, or are not open source due to data security and privacy concerns in real-world scenarios. Moreover, existing studies have not explored the training efficiency and transferability of KGP models in depth. To address these problems, we propose MuDoK, a framework for multi-domain collaborative pre-training and efficient prefix prompt tuning that serves diverse downstream tasks such as recommendation and text understanding. Our design is a plug-and-play prompt learning approach that can be flexibly adapted to different downstream task backbones. In response to the lack of open-source benchmarks, we constructed a new multi-domain KGP benchmark, KPI, with two large-scale KGs and six sub-domain tasks, and open-sourced it for subsequent research. We evaluated our approach on the constructed KPI benchmark using diverse backbone models on heterogeneous downstream tasks. The experimental results show that our framework brings significant performance gains while demonstrating generality, efficiency, and transferability.
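
The abstract describes prefix prompt tuning, where representations from a pre-trained KG encoder are injected as prompt tokens into a frozen downstream backbone. The following is a minimal sketch of that general idea, assuming a PyTorch-style setup; it is not the authors' released code, and all module, parameter, and variable names here are illustrative assumptions.

# Minimal sketch (illustrative only): project pre-trained KG entity embeddings
# into learnable prefix tokens and prepend them to a downstream backbone's input.
import torch
import torch.nn as nn

class KGPrefixPrompt(nn.Module):
    def __init__(self, kg_entity_emb: torch.Tensor, hidden_dim: int):
        super().__init__()
        # Entity embeddings assumed to come from a KG pre-training stage; kept frozen.
        self.kg_emb = nn.Embedding.from_pretrained(kg_entity_emb, freeze=True)
        # Small trainable projection into the backbone's hidden space (the only tuned part).
        self.proj = nn.Sequential(
            nn.Linear(kg_entity_emb.size(1), hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, entity_ids: torch.Tensor, token_embeds: torch.Tensor) -> torch.Tensor:
        # entity_ids: (batch, prefix_len) KG entities linked to each input sample.
        # token_embeds: (batch, seq_len, hidden_dim) input embeddings of the frozen backbone.
        prefix = self.proj(self.kg_emb(entity_ids))      # (batch, prefix_len, hidden_dim)
        return torch.cat([prefix, token_embeds], dim=1)  # prepend prompt tokens

# Usage sketch with made-up sizes:
kg_emb = torch.randn(10_000, 128)               # hypothetical pre-trained entity embeddings
prompt = KGPrefixPrompt(kg_emb, hidden_dim=768)
entity_ids = torch.randint(0, 10_000, (4, 5))    # 5 linked entities per sample
token_embeds = torch.randn(4, 32, 768)           # backbone input embeddings
augmented = prompt(entity_ids, token_embeds)     # shape (4, 37, 768)

Because only the small projection is trained while both the KG embeddings and the backbone stay frozen, such a design can be attached to different task backbones in a plug-and-play manner, which matches the efficiency and transferability claims in the abstract.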

Bibliographic Details
Published in: arXiv.org, 2024-05
Main Authors: Zhang, Yichi; Hu, Binbin; Chen, Zhuo; Guo, Lingbing; Liu, Ziqi; Zhang, Zhiqiang; Liang, Lei; Chen, Huajun; Zhang, Wen
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Benchmarks; Collaboration; Knowledge representation; Neural networks
Online Access: Full text