Post-hoc analysis of Arabic transformer models
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Arabic is a widely spoken Semitic language with many dialects. Given the success of pre-trained language models, many transformer models trained on Arabic and its dialects have surfaced. While there has been extrinsic evaluation of these models on downstream NLP tasks, no work has been carried out to analyze and compare their internal representations. We probe how linguistic information is encoded in transformer models trained on different Arabic dialects. We perform a layer and neuron analysis on the models using morphological tagging tasks for different dialects of Arabic and a dialectal identification task. Our analysis reveals interesting findings such as: i) word morphology is learned at the lower and middle layers, ii) syntactic dependencies are predominantly captured at the higher layers, iii) despite a large overlap in their vocabulary, the MSA-based models fail to capture the nuances of Arabic dialects, and iv) neurons in the embedding layer are polysemous in nature, while neurons in the middle layers are exclusive to specific properties. |
DOI: | 10.48550/arxiv.2210.09990 |
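The abstract describes a layer-wise probing analysis of transformer representations. As a minimal sketch of what such layer-wise probing can look like in practice, the snippet below extracts per-layer hidden states from a HuggingFace checkpoint and fits a simple scikit-learn probe per layer; the model name, the toy Arabic sentences and labels, and the sentence-level mean pooling are illustrative assumptions and do not reproduce the paper's token-level morphological tagging or dialectal identification setup.

```python
# Minimal layer-wise probing sketch (illustrative; not the paper's exact setup).
# Assumptions: an Arabic BERT checkpoint from the HuggingFace Hub, toy
# sentence-level labels, and mean pooling over tokens as a simple feature.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "aubmindlab/bert-base-arabertv2"  # assumed checkpoint; any Arabic model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def layer_representations(sentence):
    """Return one (seq_len, hidden_dim) array per layer, embeddings included."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return [h[0].numpy() for h in out.hidden_states]

# Toy probing data: (sentence, label) pairs standing in for real annotations.
train = [("كتب الولد الدرس", "VERB"), ("القطة نائمة", "NOUN")]
test = [("قرأت البنت الكتاب", "VERB"), ("الكتاب جديد", "NOUN")]

def featurize(data, layer):
    # Mean-pool token vectors at the chosen layer into a fixed-size feature.
    X = np.stack([layer_representations(s)[layer].mean(axis=0) for s, _ in data])
    y = [label for _, label in data]
    return X, y

n_layers = model.config.num_hidden_layers + 1  # +1 for the embedding layer
for layer in range(n_layers):
    X_tr, y_tr = featurize(train, layer)
    X_te, y_te = featurize(test, layer)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"layer {layer:2d}  probe accuracy = {probe.score(X_te, y_te):.2f}")
```

In the setting the abstract describes, a probe whose accuracy peaks at the lower and middle layers would indicate that the probed property (e.g., word morphology) is encoded there, while properties best predicted from the higher layers (e.g., syntactic dependencies) are captured later in the network.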