HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction
Citation networks are critical in modern science, and predicting which previous papers (candidates) a new paper (query) will cite is a central problem. However, the roles of a paper's citations vary significantly, ranging from foundational knowledge to superficial context. Distinguishing these roles requires a deeper understanding of the logical relationships among papers, beyond the simple edges of a citation network. The emergence of LLMs with textual reasoning capabilities offers new possibilities for discerning these relationships, but there are two major challenges. First, in practice, a new paper may select its citations from a gigantic pool of existing papers, whose texts exceed the context length of LLMs. Second, logical relationships between papers are implicit, and directly prompting an LLM to predict citations may yield matches based on surface-level textual similarity rather than deeper logical reasoning. In this paper, we introduce the novel concept of the core citation, which identifies the critical references that go beyond superficial mentions. We thereby elevate citation prediction from simple binary classification to distinguishing core citations from both superficial citations and non-citations. To address this, we propose \(\textbf{HLM-Cite}\), a \(\textbf{H}\)ybrid \(\textbf{L}\)anguage \(\textbf{M}\)odel workflow for citation prediction that combines embedding and generative LMs. We design a curriculum finetuning procedure to adapt a pretrained text embedding model to coarsely retrieve high-likelihood core citations from vast candidate sets, and then design an agentic LLM workflow that ranks the retrieved papers through one-shot reasoning, revealing the implicit relationships among papers. With this pipeline, we can scale candidate sets to 100K papers. We evaluate HLM-Cite across 19 scientific fields, demonstrating a 17.6% performance improvement compared with SOTA methods.
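The first stage of the workflow described in this record is coarse retrieval: a text embedding model scores every paper in a large candidate pool against the query so that only a short list goes to the LLM. As a minimal sketch of that stage — assuming generic cosine-similarity retrieval over precomputed embeddings, not the paper's actual finetuned model — top-k retrieval can be written as:

```python
import numpy as np

def retrieve_top_k(query_emb, candidate_embs, k=10):
    """Coarse retrieval: rank candidates by cosine similarity to the query.

    query_emb: shape (d,) embedding of the query paper's text.
    candidate_embs: shape (n, d) embeddings of the candidate pool.
    Returns the indices of the k highest-scoring candidates, best first.
    """
    # Normalize so that the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    scores = c @ q                    # shape (n,): similarity per candidate
    return np.argsort(-scores)[:k]    # top-k indices in descending score order
```

Because the scoring is a single matrix-vector product over precomputed embeddings, this stage scales to candidate pools of 100K papers, which the full texts alone could never fit into an LLM's context window.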
Saved in:
Published in: | arXiv.org 2024-10 |
---|---|
Main authors: | Qianyue Hao; Fan, Jingyang; Xu, Fengli; Yuan, Jian; Li, Yong |
Format: | Article |
Language: | eng |
Subjects: | Citations; Cognition & reasoning; Embedding; Large language models; Predictions; Reasoning; Workflow |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Qianyue Hao; Fan, Jingyang; Xu, Fengli; Yuan, Jian; Li, Yong |
description | Citation networks are critical in modern science, and predicting which previous papers (candidates) a new paper (query) will cite is a central problem. However, the roles of a paper's citations vary significantly, ranging from foundational knowledge to superficial context. Distinguishing these roles requires a deeper understanding of the logical relationships among papers, beyond the simple edges of a citation network. The emergence of LLMs with textual reasoning capabilities offers new possibilities for discerning these relationships, but there are two major challenges. First, in practice, a new paper may select its citations from a gigantic pool of existing papers, whose texts exceed the context length of LLMs. Second, logical relationships between papers are implicit, and directly prompting an LLM to predict citations may yield matches based on surface-level textual similarity rather than deeper logical reasoning. In this paper, we introduce the novel concept of the core citation, which identifies the critical references that go beyond superficial mentions. We thereby elevate citation prediction from simple binary classification to distinguishing core citations from both superficial citations and non-citations. To address this, we propose \(\textbf{HLM-Cite}\), a \(\textbf{H}\)ybrid \(\textbf{L}\)anguage \(\textbf{M}\)odel workflow for citation prediction that combines embedding and generative LMs. We design a curriculum finetuning procedure to adapt a pretrained text embedding model to coarsely retrieve high-likelihood core citations from vast candidate sets, and then design an agentic LLM workflow that ranks the retrieved papers through one-shot reasoning, revealing the implicit relationships among papers. With this pipeline, we can scale candidate sets to 100K papers. We evaluate HLM-Cite across 19 scientific fields, demonstrating a 17.6% performance improvement compared with SOTA methods. |
format | Article |
fullrecord | ProQuest record 3116752462 (XML source format). Published 2024-10-10 by Cornell University Library, arXiv.org (Ithaca); EISSN 2331-8422; open access (free for read), licensed under http://creativecommons.org/licenses/by/4.0/. Title, authors, abstract, and subjects as given in the fields above. |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3116752462 |
source | Free E-Journals |
subjects | Citations; Cognition & reasoning; Embedding; Large language models; Predictions; Reasoning; Workflow |
title | HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T21%3A39%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=HLM-Cite:%20Hybrid%20Language%20Model%20Workflow%20for%20Text-based%20Scientific%20Citation%20Prediction&rft.jtitle=arXiv.org&rft.au=Qianyue%20Hao&rft.date=2024-10-10&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3116752462%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3116752462&rft_id=info:pmid/&rfr_iscdi=true |
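The second stage of the workflow described in this record hands the retrieved short list to a generative LLM, which ranks candidates through one-shot reasoning. As a minimal sketch of how such a prompt could be assembled — the instruction wording, field names, and layout below are illustrative assumptions, not the paper's actual template — a one-shot ranking prompt builder looks like this:

```python
def build_rank_prompt(query_abstract, retrieved, example):
    """Assemble a one-shot prompt asking an LLM to rank retrieved candidates
    by how likely each is a core citation of the query paper.

    `example` is a dict with keys "query", "candidates", and "ranking",
    serving as the single worked demonstration (the "one shot").
    """
    lines = [
        "Rank the candidate papers by how likely each is a core citation of",
        "the query paper, i.e. a reference the query logically builds on",
        "rather than mentions superficially. Answer with candidate numbers,",
        "best first.",
        "",
        "Example:",
        f"Query: {example['query']}",
    ]
    # Number the demonstration candidates so the ranking can refer to them.
    for i, cand in enumerate(example["candidates"], start=1):
        lines.append(f"[{i}] {cand}")
    lines.append(f"Ranking: {example['ranking']}")
    lines.append("")
    # The actual task: the new query plus its retrieved candidates.
    lines.append(f"Query: {query_abstract}")
    for i, cand in enumerate(retrieved, start=1):
        lines.append(f"[{i}] {cand}")
    lines.append("Ranking:")   # left open for the LLM to complete
    return "\n".join(lines)
```

Keeping the candidate list short (only the embedding model's top-k survivors) is what lets this stage fit within the LLM's context window while the overall pipeline still covers a 100K-paper candidate set.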