FltLM: An Integrated Long-Context Large Language Model for Effective Context Filtering and Understanding
Main authors: Deng, Jingyang; Shen, Zhengyang; Wang, Boyang; Su, Lixin; Cheng, Suqi; Nie, Ying; Wang, Junfeng; Yin, Dawei; Ma, Jinwen
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online access: Order full text
creator | Deng, Jingyang; Shen, Zhengyang; Wang, Boyang; Su, Lixin; Cheng, Suqi; Nie, Ying; Wang, Junfeng; Yin, Dawei; Ma, Jinwen |
description | The development of Long-Context Large Language Models (LLMs) has markedly
advanced natural language processing by facilitating the processing of textual
data across long documents and multiple corpora. However, Long-Context LLMs
still face two critical challenges: the "lost in the middle" phenomenon, where
crucial mid-context information is likely to be missed, and the distraction
issue, in which the model loses focus due to overly extended contexts. To
address these challenges, we propose the Context Filtering Language Model
(FltLM), a novel integrated Long-Context LLM that improves performance on
multi-document question-answering (QA) tasks. Specifically, FltLM incorporates
a context filter with a soft mask mechanism, identifying and dynamically
excluding irrelevant content so the model can concentrate on pertinent
information for better comprehension and reasoning. Our approach not only
mitigates both challenges but also lets the model operate conveniently in a
single forward pass. Experimental results demonstrate that FltLM significantly
outperforms supervised fine-tuning and retrieval-based methods in complex QA
scenarios, suggesting a promising solution for more accurate and reliable
long-context natural language understanding applications. |
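
As an illustration of the soft-mask idea the abstract describes, the sketch below shows one plausible shape for such a filter. This is a hypothetical minimal example, not the authors' implementation: the function name `soft_mask_filter`, the per-document relevance scores, the span layout, and the temperature parameter are all assumptions; the only element taken from the abstract is that irrelevant context is down-weighted by a continuous mask rather than hard-deleted, so everything happens in one forward pass.

```python
import torch

def soft_mask_filter(doc_scores: torch.Tensor,
                     doc_token_spans: list,
                     seq_len: int,
                     temperature: float = 1.0) -> torch.Tensor:
    """Turn per-document relevance logits into a per-token soft mask in [0, 1].

    doc_scores:      shape (num_docs,), one relevance logit per context document
                     (hypothetically produced by a filtering head on the LLM).
    doc_token_spans: [(start, end), ...] token index range of each document.
    seq_len:         total context length in tokens.
    """
    # Sigmoid gives soft weights instead of a hard keep/drop decision.
    weights = torch.sigmoid(doc_scores / temperature)
    # Tokens outside any document span (e.g. the question) stay fully visible.
    mask = torch.ones(seq_len)
    for w, (start, end) in zip(weights, doc_token_spans):
        mask[start:end] = w
    return mask

# Toy usage: three concatenated documents; the middle one scores as irrelevant.
scores = torch.tensor([2.5, -3.0, 1.8])
spans = [(0, 100), (100, 220), (220, 300)]
mask = soft_mask_filter(scores, spans, seq_len=320)

# One way to apply the mask in a single pass: add log(mask) to the attention
# logits, so near-zero weights act like masking and weights near 1 are no-ops.
attn_bias = torch.log(mask.clamp_min(1e-6))
print(mask[150].item(), attn_bias[150].item())
```

Adding `log(mask)` to the attention logits is one way to read "dynamically excluding irrelevant content": a weight near zero behaves like a large negative attention bias, while a weight near one leaves the logits unchanged, and no second filtering pass over the context is needed.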
doi_str_mv | 10.48550/arxiv.2410.06886 |
format | Article |
creationdate | 2024-10-09 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2410.06886 |
language | eng |
recordid | cdi_arxiv_primary_2410_06886 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | FltLM: An Integrated Long-Context Large Language Model for Effective Context Filtering and Understanding |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-03T05%3A25%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=FltLM:%20An%20Intergrated%20Long-Context%20Large%20Language%20Model%20for%20Effective%20Context%20Filtering%20and%20Understanding&rft.au=Deng,%20Jingyang&rft.date=2024-10-09&rft_id=info:doi/10.48550/arxiv.2410.06886&rft_dat=%3Carxiv_GOX%3E2410_06886%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |