Modularity in Transformers: Investigating Neuron Separability & Specialization
Format: Article
Language: English
Online access: Order full text
Abstract: Transformer models are increasingly prevalent in various applications, yet our understanding of their internal workings remains limited. This paper investigates the modularity and task specialization of neurons within transformer architectures, focusing on both vision (ViT) and language (Mistral 7B) models. Using a combination of selective pruning and MoEfication clustering techniques, we analyze the overlap and specialization of neurons across different tasks and data subsets. Our findings reveal evidence of task-specific neuron clusters, with varying degrees of overlap between related tasks. We observe that neuron importance patterns persist to some extent even in randomly initialized models, suggesting an inherent structure that training refines. Additionally, we find that neuron clusters identified through MoEfication correspond more strongly to task-specific neurons in the earlier and later layers of the models. This work contributes to a more nuanced understanding of transformer internals and offers insights into potential avenues for improving model interpretability and efficiency.
DOI: 10.48550/arxiv.2408.17324
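
To make the analysis described in the abstract concrete, here is a minimal, illustrative sketch of a neuron-overlap measurement: score each FFN neuron's importance per task, keep the top-k neurons per task, and compare the resulting sets. The scoring rule (mean absolute activation), the synthetic data, and all names are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch of task-specific neuron overlap analysis (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def neuron_importance(activations: np.ndarray) -> np.ndarray:
    """Mean absolute activation per neuron over a task's examples.

    activations: (num_examples, num_neurons) hidden activations of one
    FFN layer, collected while running the model on task data.
    """
    return np.abs(activations).mean(axis=0)

def top_k_neurons(importance: np.ndarray, k: int) -> set[int]:
    """Indices of the k most important neurons for a task."""
    return set(np.argsort(importance)[-k:].tolist())

def jaccard(a: set[int], b: set[int]) -> float:
    """Overlap between two tasks' neuron sets (1.0 = identical)."""
    return len(a & b) / len(a | b)

# Synthetic stand-in for activations recorded on two tasks.
num_neurons, k = 4096, 256
acts_task_a = rng.normal(size=(512, num_neurons))
acts_task_b = rng.normal(size=(512, num_neurons))

set_a = top_k_neurons(neuron_importance(acts_task_a), k)
set_b = top_k_neurons(neuron_importance(acts_task_b), k)
print(f"Task A/B neuron overlap (Jaccard): {jaccard(set_a, set_b):.3f}")
```

With activations recorded from real task data in place of the random arrays here, a higher Jaccard index between related tasks and a lower one between unrelated tasks would mirror the overlap pattern the paper reports.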