Long-context Language Models Are Not Good At Retrieval Without Enough Steps

Bibliographic Details
Published in: arXiv.org 2024-12
Main authors: Yu, Yijiong; Ma, Xiufa; Fang, Jianwei; Xu, Zhi; Su, Guangyao; Wang, Jiancheng; Huang, Yongfeng; Qi, Zhixiao; Wang, Wei; Liu, Weifeng; Chen, Ran; Ji, Pei
Format: Article
Language: eng
Online access: Full text
Description
Summary: Long-context language models (LCLMs), characterized by their extensive context windows, are becoming increasingly popular. However, although they are nearly perfect at standard long-context retrieval, we find they are actually not good at all such tasks. Specifically, we identify two basic cases, "multi-matching retrieval" and "logic-based retrieval", which LLMs struggle to solve under normal settings. Moreover, we find these cases can only be well addressed by specific chain-of-thought (CoT) prompting with enough reasoning steps. This finding reminds developers and users of LCLMs that relying on LCLMs to directly perform even basic retrieval tasks may be unreliable; rather, a sufficiently long reasoning process is necessary.
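
The summary contrasts direct answering with CoT prompting on the two retrieval patterns it names. As a minimal sketch (not code from the paper: the haystack construction, question wording, and prompt wording are illustrative assumptions), the two task types and the two prompting styles can be made concrete in Python:

import random

random.seed(0)

# Synthetic long-context haystack: 2000 shuffled key-value lines.
pairs = [(f"item-{i:04d}", i) for i in range(2000)]
random.shuffle(pairs)
context = "\n".join(f"The value of {key} is {value}." for key, value in pairs)

# Multi-matching retrieval: several lines satisfy the query, and all of
# them must be returned (here, the 20 items whose values are 100-119).
multi_matching_q = "List ALL items whose value is at least 100 and at most 119."

# Logic-based retrieval: the target line is identified by a condition that
# must be reasoned about, not by a literal string to pattern-match.
logic_based_q = "Which item has the largest value?"

def direct_prompt(question: str) -> str:
    # Baseline setting: ask for the answer in one shot.
    return f"{context}\n\nQuestion: {question}\nAnswer directly, with no explanation."

def cot_prompt(question: str) -> str:
    # The summary reports these cases are only solved by specific CoT
    # prompting with enough reasoning steps; this exact wording is hypothetical.
    return (
        f"{context}\n\nQuestion: {question}\n"
        "Work through the context step by step, checking each item against "
        "the condition, then state the final answer."
    )

Under the paper's finding, a model that fails on direct_prompt(multi_matching_q) or direct_prompt(logic_based_q) but succeeds with the corresponding cot_prompt would match the reported pattern.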
ISSN: 2331-8422