Tree-of-Table: Unleashing the Power of LLMs for Enhanced Large-Scale Table Understanding
Saved in:
Main authors: | , , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | The ubiquity and value of tables as semi-structured data across various domains necessitate advanced methods for understanding their complexity and vast amounts of information. Despite the impressive capabilities of large language models (LLMs) in advancing the natural language understanding frontier, their application to large-scale tabular data presents significant challenges, specifically regarding table size and intricate relationships. Existing works have shown promise with small-scale tables but often flounder when tasked with the complex reasoning required by larger, interconnected tables found in real-world scenarios. To address this gap, we introduce "Tree-of-Table", a novel approach designed to enhance LLMs' reasoning capabilities over large and complex tables. Our method employs Table Condensation and Decomposition to distill and reorganize relevant data into a manageable format, followed by the construction of a hierarchical Table-Tree that facilitates tree-structured reasoning. Through a meticulous Table-Tree Execution process, we systematically unravel the tree-structured reasoning chain to derive the solutions. Experiments across diverse datasets, including WikiTQ, TableFact, FeTaQA, and BIRD, demonstrate that Tree-of-Table sets a new benchmark with superior performance, showcasing remarkable efficiency and generalization capabilities in large-scale table reasoning. |
DOI: | 10.48550/arxiv.2411.08516 |
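The abstract describes decomposing a large table into a hierarchical "Table-Tree" and then executing a tree-structured reasoning chain over it. The following is only a minimal sketch of what such tree-structured table reasoning could look like; it is not the authors' implementation, and the node layout, the `llm` callable, and the serialized sub-tables are all assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of tree-structured table reasoning, loosely following the
# abstract's pipeline (condensation -> decomposition -> Table-Tree execution).
# Node fields and the `llm` callable are assumptions, not the paper's API.

@dataclass
class TableTreeNode:
    operation: str                      # sub-question or table operation for this node
    sub_table: str                      # condensed slice of the table, serialized as text
    children: List["TableTreeNode"] = field(default_factory=list)

def execute_table_tree(node: TableTreeNode, question: str,
                       llm: Callable[[str], str]) -> str:
    """Post-order traversal: solve the children first, then combine their
    answers with this node's sub-table to resolve the node's operation."""
    child_answers = [execute_table_tree(c, question, llm) for c in node.children]
    prompt = (
        f"Question: {question}\n"
        f"Step: {node.operation}\n"
        f"Relevant table slice:\n{node.sub_table}\n"
        f"Intermediate results from sub-steps: {child_answers}\n"
        "Answer this step concisely:"
    )
    return llm(prompt)

# Example usage with a stand-in LLM that just echoes the step it was given.
if __name__ == "__main__":
    tree = TableTreeNode(
        operation="Which team scored the most points in 2023?",
        sub_table="team | points\nA | 91\nB | 87",
        children=[
            TableTreeNode("Filter rows to season 2023",
                          "team | season | points\n..."),
        ],
    )
    fake_llm = lambda prompt: prompt.splitlines()[1]  # placeholder, returns the 'Step:' line
    print(execute_table_tree(tree,
                             "Which team scored the most points in 2023?",
                             fake_llm))
```

In the method as described, the condensation and decomposition steps that produce each node's sub-table would themselves be model-driven; here they are assumed to have already been performed.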