Post-hoc analysis of Arabic transformer models
Arabic is a Semitic language that is widely spoken in many dialects. Given the success of pre-trained language models, many transformer models trained on Arabic and its dialects have surfaced. While these models have been evaluated extrinsically on downstream NLP tasks, no work has analyzed and compared their internal representations. We probe how linguistic information is encoded in transformer models trained on different Arabic dialects. We perform a layer and neuron analysis on the models using morphological tagging tasks for several Arabic dialects and a dialect identification task. Our analysis reveals interesting findings: i) word morphology is learned at the lower and middle layers; ii) syntactic dependencies are predominantly captured at the higher layers; iii) despite a large overlap in their vocabulary, MSA-based models fail to capture the nuances of Arabic dialects; and iv) neurons in the embedding layer are polysemous in nature, while neurons in the middle layers are exclusive to specific properties.
Saved in:
Main authors: | Abdelali, Ahmed; Durrani, Nadir; Dalvi, Fahim; Sajjad, Hassan |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computation and Language |
Online access: | Order full text |
creator | Abdelali, Ahmed ; Durrani, Nadir ; Dalvi, Fahim ; Sajjad, Hassan |
description | Arabic is a Semitic language that is widely spoken in many dialects. Given
the success of pre-trained language models, many transformer models trained on
Arabic and its dialects have surfaced. While these models have been evaluated
extrinsically on downstream NLP tasks, no work has analyzed and compared their
internal representations. We probe how linguistic information is encoded in
transformer models trained on different Arabic dialects. We perform a layer and
neuron analysis on the models using morphological tagging tasks for several
Arabic dialects and a dialect identification task. Our analysis reveals
interesting findings: i) word morphology is learned at the lower and middle
layers; ii) syntactic dependencies are predominantly captured at the higher
layers; iii) despite a large overlap in their vocabulary, MSA-based models fail
to capture the nuances of Arabic dialects; and iv) neurons in the embedding
layer are polysemous in nature, while neurons in the middle layers are
exclusive to specific properties |
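The layer-wise probing the abstract describes can be illustrated with a toy example: freeze the model, extract per-layer token representations, train a simple classifier on each layer's activations, and compare held-out accuracy across layers. The sketch below uses synthetic activations and made-up labels rather than the paper's actual models, tasks, or data, and assumes a scikit-learn logistic-regression probe; all names and numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen per-layer token representations:
# n_tokens activations of size `hidden` for each of n_layers layers.
n_tokens, hidden, n_layers = 200, 32, 4
labels = rng.integers(0, 3, size=n_tokens)  # e.g. coarse morphological tags

def layer_accuracy(layer_idx):
    # Fabricate one layer that encodes the property more strongly than
    # the others, mimicking the finding that certain layers capture
    # morphology better. This is purely synthetic.
    signal = 3.0 if layer_idx == 1 else 0.1
    X = rng.normal(size=(n_tokens, hidden))
    X[:, 0] += signal * labels  # inject label signal into one dimension

    # Train the probe on the first 150 tokens, evaluate on the rest.
    clf = LogisticRegression(max_iter=1000).fit(X[:150], labels[:150])
    return clf.score(X[150:], labels[150:])

accuracies = [layer_accuracy(i) for i in range(n_layers)]
best_layer = int(np.argmax(accuracies))
```

A probe that scores well on a layer is evidence (though not proof) that the layer's representations encode the probed property; comparing `accuracies` across layers gives the kind of layer-wise picture the paper reports.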
doi_str_mv | 10.48550/arxiv.2210.09990 |
format | Article |
creationdate | 2022-10-18 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2210.09990 |
language | eng |
recordid | cdi_arxiv_primary_2210_09990 |
source | arXiv.org |
subjects | Computer Science - Computation and Language |
title | Post-hoc analysis of Arabic transformer models |