A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation

Bibliographic Details
Main Authors: Adelani, David Ifeoluwa, Alabi, Jesujoba Oluwadara, Fan, Angela, Kreutzer, Julia, Shen, Xiaoyu, Reid, Machel, Ruiter, Dana, Klakow, Dietrich, Nabende, Peter, Chang, Ernie, Gwadabe, Tajuddeen, Sackey, Freshia, Dossou, Bonaventure F. P., Emezue, Chris Chinenye, Leong, Colin, Beukman, Michael, Muhammad, Shamsuddeen Hassan, Jarso, Guyo Dub, Yousuf, Oreen, Rubungo, Andre Niyongabo, Hacheme, Gilles, Wairagala, Eric Peter, Nasir, Muhammad Umair, Ajibade, Benjamin Ayoade, Ajayi, Tunde Oluwaseyi, Gitau, Yvonne Wambui, Abbott, Jade, Ahmed, Mohamed, Ochieng, Millicent, Aremu, Anuoluwapo, Ogayo, Perez, Mukiibi, Jonathan, Kabore, Fatoumata Ouoba, Kalipe, Godson Koffi, Mbaye, Derguene, Tapo, Allahsera Auguste, Koagne, Victoire Memdjokam, Munkoh-Buabeng, Edwin, Wagner, Valencia, Abdulmumin, Idris, Awokoya, Ayodele, Buzaaba, Happy, Sibanda, Blessing, Bukula, Andiswa, Manthalu, Sam
Format: Article
Language: eng
Subjects:
Description
Summary: Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out of these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls used to create datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pre-training? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a new African news corpus covering 16 languages, of which eight are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both to additional languages and to additional domains is to fine-tune large pre-trained models on small quantities of high-quality translation data.
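The summary's central recipe is to fine-tune a large pre-trained multilingual translation model on a few thousand high-quality, in-domain sentence pairs. The sketch below illustrates that recipe with the Hugging Face Transformers library; the M2M-100 checkpoint, the English-to-Swahili direction, and the news_parallel.jsonl file are illustrative assumptions rather than details taken from this record.

# Minimal sketch: fine-tune a pre-trained multilingual model on a small,
# high-quality parallel corpus (a few thousand sentence pairs).
# The checkpoint, language pair, and data file are assumptions for illustration.
from transformers import (
    M2M100ForConditionalGeneration,
    M2M100Tokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    DataCollatorForSeq2Seq,
)
from datasets import load_dataset

model_name = "facebook/m2m100_418M"  # assumed pre-trained checkpoint
tokenizer = M2M100Tokenizer.from_pretrained(model_name, src_lang="en", tgt_lang="sw")
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical JSONL file with one {"en": "...", "sw": "..."} pair per line.
raw = load_dataset("json", data_files={"train": "news_parallel.jsonl"})

def preprocess(batch):
    # Tokenize source sentences; text_target tokenizes the target side as labels.
    return tokenizer(batch["en"], text_target=batch["sw"],
                     max_length=128, truncation=True)

train_data = raw["train"].map(preprocess, batched=True,
                              remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="m2m100-en-sw-news",
    per_device_train_batch_size=8,
    learning_rate=3e-5,
    num_train_epochs=3,
    save_total_limit=1,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

At inference time, M2M-100 additionally needs the target-language token forced at the start of generation, e.g. model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("sw")).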
DOI: 10.48550/arxiv.2205.02022