Federated Named Entity Recognition

Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | We present an analysis of the performance of Federated Learning in a paradigmatic natural-language processing task: Named-Entity Recognition (NER). For our evaluation, we use the language-independent CoNLL-2003 dataset as our benchmark dataset and a Bi-LSTM-CRF model as our benchmark NER model. We show that federated training reaches almost the same performance as the centralized model, though with some performance degradation as the learning environments become more heterogeneous. We also report the convergence rate of federated models for NER. Finally, we discuss existing challenges of Federated Learning for NLP applications that can foster future research directions. |
DOI: | 10.48550/arxiv.2203.15101 |
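
As a rough illustration of the federated training the abstract refers to, the sketch below shows FedAvg-style weight averaging across clients. The abstract does not name the aggregation algorithm, so FedAvg, the parameter names (`embedding`, `crf_transitions`), and the client sizes used here are assumptions for illustration only, not the paper's actual setup.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameters weighted by
    the number of local training examples each client holds.

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   list of local training-set sizes (same order as above)
    """
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return aggregated

# Toy usage with three hypothetical clients; the parameter names are made up
# and merely stand in for a Bi-LSTM-CRF's tensors.
clients = [
    {"embedding": np.full((4, 2), v), "crf_transitions": np.eye(3) * v}
    for v in (1.0, 2.0, 3.0)
]
sizes = [100, 300, 600]  # heterogeneous data shares across clients
global_weights = fedavg(clients, sizes)
print(global_weights["embedding"][0])  # -> [2.5 2.5], the weighted mean
```

In each communication round, clients would train this aggregated model locally and send their updated parameters back for the next round of averaging; heterogeneity across clients (as discussed in the abstract) shows up as increasingly divergent local updates.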