Linear Transformer Topological Masking with Graph Random Features
Format: Article
Language: English
Abstract: When training transformers on graph-structured data, incorporating information about the underlying topology is crucial for good performance. Topological masking, a type of relative position encoding, achieves this by upweighting or downweighting attention depending on the relationship between the query and keys in a graph. In this paper, we propose to parameterise topological masks as a learnable function of a weighted adjacency matrix -- a novel, flexible approach which incorporates a strong structural inductive bias. By approximating this mask with graph random features (for which we prove the first known concentration bounds), we show how this can be made fully compatible with linear attention, preserving $\mathcal{O}(N)$ time and space complexity with respect to the number of input tokens. The fastest previous alternative was $\mathcal{O}(N \log N)$ and only suitable for specific graphs. Our efficient masking algorithms provide strong performance gains for tasks on image and point cloud data, including with $>30$k nodes.
DOI: 10.48550/arxiv.2410.03462
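Since only the abstract is available here, the NumPy sketch below is an illustrative assumption rather than the authors' algorithm: it shows why a low-rank factorisation of a topological mask preserves $\mathcal{O}(N)$ time and space in linear attention, because neither the $N \times N$ mask nor the attention matrix is ever materialised. The placeholder factors `mask_A` and `mask_B` stand in for graph-random-feature estimates of a mask defined as a learnable function of the weighted adjacency matrix; the function name `masked_linear_attention` and the elu+1 feature map are choices made for this sketch, not details taken from the paper.

```python
# Minimal sketch (assumed, not the authors' code): masked linear attention
# with an implicit mask M ~= mask_A @ mask_B.T, computed without ever
# forming any N x N matrix.
import numpy as np

def feature_map(x):
    # Positive feature map for softmax-free (linear) attention; elu(x) + 1
    # is a common choice, used here purely for illustration.
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0.0)))

def masked_linear_attention(Q, K, V, mask_A, mask_B, eps=1e-6):
    """Shapes: Q, K: (N, d); V: (N, d_v); mask_A, mask_B: (N, m).
    Cost is O(N * m * f * d_v), i.e. linear in the number of tokens N."""
    phi_Q, phi_K = feature_map(Q), feature_map(K)            # (N, f)
    # Per mask-feature summaries over all keys.
    S = np.einsum('jr,jf,jd->rfd', mask_B, phi_K, V)         # (m, f, d_v)
    Z = np.einsum('jr,jf->rf', mask_B, phi_K)                # (m, f)
    # Recombine with the query-side mask factors.
    num = np.einsum('ir,if,rfd->id', mask_A, phi_Q, S)       # (N, d_v)
    den = np.einsum('ir,if,rf->i', mask_A, phi_Q, Z)         # (N,)
    return num / (den[:, None] + eps)

# Usage: N tokens on a graph, m mask features per node.
N, d, d_v, m = 1024, 32, 32, 16
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(N, d)), rng.normal(size=(N, d))
V = rng.normal(size=(N, d_v))
# Non-negative placeholder factors standing in for graph random features.
mask_A, mask_B = rng.random((N, m)), rng.random((N, m))
out = masked_linear_attention(Q, K, V, mask_A, mask_B)       # (N, d_v)
```

Expanding the einsums, the output for token $i$ is $\sum_j (\sum_r A_{ir} B_{jr})\, \phi(q_i)^\top \phi(k_j)\, v_j$ divided by the corresponding normaliser, which is exactly low-rank-masked linear attention; the key point the abstract makes is that such a factorisation of the topological mask (via graph random features) is what keeps the whole computation linear in $N$.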