Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning

Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the finetuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets. However, existing datasets are almost all in the English language. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances through templating and translating existing datasets across 114 languages. In total, we contribute four key resources: we develop and open-source the Aya Annotation Platform, the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as a valuable framework for future research collaborations that aim to bridge gaps in resources.
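
For readers who want to work with the released data directly, below is a minimal sketch of loading the Aya Dataset with the Hugging Face datasets library. The repository id CohereForAI/aya_dataset and the field names mentioned in the comments are assumptions based on the project's public release, not details stated in this record.

# Minimal sketch (assumptions flagged above): load the Aya Dataset
# from the Hugging Face Hub and inspect one instruction/completion pair.
from datasets import load_dataset

aya = load_dataset("CohereForAI/aya_dataset", split="train")  # repo id assumed
print(aya.num_rows)  # number of human-curated examples
print(aya[0])        # one record; expected fields include the instruction
                     # ("inputs"), the completion ("targets"), and "language"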

Bibliographic Details
Published in: arXiv.org, 2024-02
Main Authors: Singh, Shivalika; Vargus, Freddie; Dsouza, Daniel; Karlsson, Börje F; Mahendiran, Abinaya; Wei-Yin, Ko; Shandilya, Herumb; Patel, Jay; Mataciunas, Deividas; OMahony, Laura; Zhang, Mike; Hettiarachchi, Ramith; Wilson, Joseph; Machado, Marina; Luisa Souza Moura; Krzemiński, Dominik; Fadaei, Hakimeh; Ergün, Irem; Okoh, Ifeoma; Alaagib, Aisha; Mudannayake, Oshan; Alyafeai, Zaid; Chien, Vu Minh; Ruder, Sebastian; Guthikonda, Surya; Alghamdi, Emad A; Gehrmann, Sebastian; Muennighoff, Niklas; Bartolo, Max; Kreutzer, Julia; Üstün, Ahmet; Fadaee, Marzieh; Hooker, Sara
Format: Article
Language: English
Subjects: Annotations; Artificial intelligence; Collection; Datasets; English language; Large language models; Multilingualism; Natural language processing; Translating
Online Access: Full text
EISSN: 2331-8422
Source: Freely Accessible Journals
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-15T17%3A31%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Aya%20Dataset:%20An%20Open-Access%20Collection%20for%20Multilingual%20Instruction%20Tuning&rft.jtitle=arXiv.org&rft.au=Singh,%20Shivalika&rft.date=2024-02-09&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2925279457%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2925279457&rft_id=info:pmid/&rfr_iscdi=true