Modularity in Transformers: Investigating Neuron Separability & Specialization

Transformer models are increasingly prevalent in various applications, yet our understanding of their internal workings remains limited. This paper investigates the modularity and task specialization of neurons within transformer architectures, focusing on both vision (ViT) and language (Mistral 7B) models. Using a combination of selective pruning and MoEfication clustering techniques, we analyze the overlap and specialization of neurons across different tasks and data subsets. Our findings reveal evidence of task-specific neuron clusters, with varying degrees of overlap between related tasks. We observe that neuron importance patterns persist to some extent even in randomly initialized models, suggesting an inherent structure that training refines. Additionally, we find that neuron clusters identified through MoEfication correspond more strongly to task-specific neurons in earlier and later layers of the models. This work contributes to a more nuanced understanding of transformer internals and offers insights into potential avenues for improving model interpretability and efficiency.
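The core analysis the abstract describes — scoring neurons for importance on a task's data subset, keeping the top scorers, and measuring how much two tasks' top-neuron sets overlap — can be sketched in a few lines. This is a minimal illustration, not the authors' code: the mean-absolute-activation importance proxy, the choice of k, and the toy random activations are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def neuron_importance(activations):
    # Score each MLP neuron by its mean absolute activation over a task's
    # data subset -- one common importance proxy used in selective pruning.
    return np.abs(activations).mean(axis=0)

def top_k_neurons(importance, k):
    # Indices of the k most important neurons for a task, as a set.
    return set(np.argsort(importance)[-k:].tolist())

def overlap(task_a, task_b):
    # Jaccard overlap between the two tasks' top-neuron sets.
    return len(task_a & task_b) / len(task_a | task_b)

# Toy stand-in for per-task activation matrices: (samples, neurons).
acts_a = rng.normal(size=(128, 512))
acts_b = rng.normal(size=(128, 512))

top_a = top_k_neurons(neuron_importance(acts_a), k=64)
top_b = top_k_neurons(neuron_importance(acts_b), k=64)
print(f"Jaccard overlap of top neurons: {overlap(top_a, top_b):.2f}")
```

In the paper's setting the activations would come from real task data passed through a ViT or Mistral 7B layer, and an overlap well above the random baseline between related tasks would be the evidence of shared, task-specific neuron clusters.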

Detailed description

Bibliographic details
Main authors: Pochinkov, Nicholas; Jones, Thomas; Rahman, Mohammed Rashidur
Format: Article
Language: English
Online access: Order full text
DOI: 10.48550/arxiv.2408.17324
Date: 2024-08-30
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning