Investigating Neuron Ablation in Attention Heads: The Case for Peak Activation Centering
Format: | Article |
Language: | eng |
Abstract: | The use of transformer-based models is growing rapidly throughout society. With this growth, it is important to understand how they work, and in particular, how the attention mechanisms represent concepts. Though there are many interpretability methods, many of them examine models through their neuronal activations, which are poorly understood. We describe different lenses through which to view neuron activations, and investigate their effectiveness in language models and vision transformers through various methods of neural ablation: zero ablation, mean ablation, activation resampling, and a novel approach we term 'peak ablation'. Through experimental analysis, we find that in different regimes and models, each method can offer the lowest degradation of model performance relative to the others, with resampling usually causing the most significant performance deterioration. We make our code available at https://github.com/nickypro/investigating-ablation. |
DOI: | 10.48550/arxiv.2408.17322 |
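The abstract names four ablation methods without defining them here. The sketch below gives one plausible reading of each: zero ablation sets activations to zero, mean ablation replaces them with the per-neuron mean over a reference set, resampling swaps in activations drawn from other inputs, and 'peak ablation' is assumed (from the title's "peak activation centering") to center activations on the modal value of each neuron's activation distribution. Function names, array shapes, and the histogram-based peak estimate are illustrative assumptions rather than the paper's implementation; see the linked repository for the actual code.

```python
# Illustrative sketch (not the paper's code) of four ablation variants.
# `acts`: activations to ablate, shape (batch, n_neurons).
# `ref`:  reference activations collected over a dataset, shape (n_samples, n_neurons).
# All names and shapes are hypothetical.
import numpy as np

def zero_ablation(acts):
    """Replace every activation with zero."""
    return np.zeros_like(acts)

def mean_ablation(acts, ref):
    """Replace each neuron's activation with its mean over the reference set."""
    return np.broadcast_to(ref.mean(axis=0), acts.shape).copy()

def resample_ablation(acts, ref, rng=None):
    """Replace activations with those of randomly drawn reference samples."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, ref.shape[0], size=acts.shape[0])
    return ref[idx].copy()

def peak_ablation(acts, ref, bins=100):
    """Replace each neuron's activation with the peak (mode) of its reference
    activation distribution, estimated from a histogram. This is an assumed
    reading of 'peak activation centering', not the paper's definition."""
    peaks = np.empty(ref.shape[1])
    for j in range(ref.shape[1]):
        counts, edges = np.histogram(ref[:, j], bins=bins)
        k = counts.argmax()
        peaks[j] = 0.5 * (edges[k] + edges[k + 1])
    return np.broadcast_to(peaks, acts.shape).copy()
```

Each function returns a fresh array of the same shape as `acts`, so the result can be patched back in place of the original activations (for example via a forward hook) when measuring how much each ablation degrades model performance.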