Build a Robust QA System with Transformer-based Mixture of Experts

Bibliographic Details

Main Authors: Zhou, Yu Qing; Liu, Xixuan Julie; Dong, Yuanzhe
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
DOI: 10.48550/arxiv.2204.09598
Published: 2022-03-19
Source: arXiv.org
Online Access: https://arxiv.org/abs/2204.09598

Detailed Description

In this paper, we aim to build a robust question answering system that can adapt to out-of-domain datasets. A single network may overfit to superficial correlations in the training distribution, but with a meaningful number of expert sub-networks, a gating network that selects a sparse combination of experts for each input, and careful balancing of the importance of the expert sub-networks, the Mixture-of-Experts (MoE) model allows us to train a multi-task learner that generalizes to out-of-domain datasets. We also explore the possibility of moving the MoE layers into the middle of DistilBERT and replacing the dense feed-forward network with sparsely-activated Switch FFN layers, similar to the Switch Transformer architecture, which simplifies the MoE routing algorithm and reduces communication and computational costs. In addition to model architectures, we explore data augmentation techniques, including Easy Data Augmentation (EDA) and back translation, to create more meaningful variance in the small out-of-domain training data, thereby boosting the performance and robustness of our models. We show that the combination of our best architecture and data augmentation techniques achieves a 53.477 F1 score in the out-of-domain evaluation, a 9.52% performance gain over the baseline. On the final test set, we report higher scores of 59.506 F1 and 41.651 EM. We successfully demonstrate the effectiveness of the Mixture-of-Experts architecture in a Robust QA task.
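
To make the gating mechanism concrete, here is a minimal PyTorch sketch of a sparse Mixture-of-Experts layer in the spirit described above. It is an illustration, not the authors' implementation: the class name, the expert shape, and the squared-importance balance term are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Sparse MoE layer: a gating network picks the top-k experts per token
    and mixes their outputs with the (renormalized) gate weights."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):                          # x: (batch, seq, d_model)
        logits = self.gate(x)                      # (batch, seq, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Loop over experts for clarity; real implementations batch by expert.
        for rank in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., rank] == e         # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., rank][mask].unsqueeze(-1) * expert(x[mask])
        # Importance-balancing term: sum of squared mean gate probabilities,
        # scaled so perfectly uniform routing gives 1.0.
        importance = F.softmax(logits, dim=-1).mean(dim=(0, 1))
        aux_loss = (importance * importance).sum() * len(self.experts)
        return out, aux_loss
```

During training, the auxiliary term would be added to the QA loss with a small coefficient so that no single expert dominates, which corresponds to the careful balancing of expert importance the abstract refers to.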
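
The Switch-style variant mentioned above simplifies this routing to top-1: each token is sent to exactly one expert FFN. The sketch below, again an assumption-laden illustration rather than the paper's code, shows such a Switch FFN with the fraction-of-tokens times mean-probability load-balancing loss from the Switch Transformer paper. In the Hugging Face DistilBERT implementation the dense FFN lives at model.distilbert.transformer.layer[i].ffn; swapping this module in there would also require handing the auxiliary loss back to the training loop, which this sketch simply returns to the caller.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFFN(nn.Module):
    """Switch Transformer style FFN: top-1 routing, so each token passes
    through a single expert instead of a weighted mixture."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.n_experts = n_experts

    def forward(self, x):                            # x: (batch, seq, d_model)
        probs = F.softmax(self.router(x), dim=-1)    # routing probabilities
        gate, idx = probs.max(dim=-1)                # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                # Scaling by the gate value keeps the router differentiable.
                out[mask] = gate[mask].unsqueeze(-1) * expert(x[mask])
        # Load-balancing loss: fraction of tokens routed to each expert times
        # its mean router probability, summed and scaled; equals 1.0 when uniform.
        frac = F.one_hot(idx, self.n_experts).float().mean(dim=(0, 1))
        prob = probs.mean(dim=(0, 1))
        aux_loss = self.n_experts * (frac * prob).sum()
        return out, aux_loss
```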
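
For the data augmentation side, the sketch below shows simplified versions of the two techniques named above. The EDA pass uses a toy synonym table for self-containment (the published EDA draws synonyms from WordNet), and the back translation round-trips English through German with publicly available MarianMT checkpoints. Because answer spans in a QA context must survive augmentation, this sketch only rewrites questions; whether the authors applied the operations more broadly is not specified in the abstract.

```python
import random
from transformers import MarianMTModel, MarianTokenizer

# Toy synonym table, a stand-in for WordNet.
SYNONYMS = {"build": ["construct"], "robust": ["resilient"], "system": ["framework"]}

def eda(question: str, p: float = 0.1, rng: random.Random = random.Random(0)) -> str:
    """Simplified EDA: synonym replacement, random insertion, swap, deletion."""
    words = question.split()
    # Synonym replacement.
    words = [rng.choice(SYNONYMS[w]) if w in SYNONYMS and rng.random() < p else w
             for w in words]
    # Random insertion of a synonym of some word in the sentence.
    pool = [s for w in words if w in SYNONYMS for s in SYNONYMS[w]]
    if pool and rng.random() < p:
        words.insert(rng.randrange(len(words) + 1), rng.choice(pool))
    # Random swap.
    if len(words) > 1 and rng.random() < p:
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    # Random deletion, keeping the original if everything would be dropped.
    kept = [w for w in words if rng.random() > p]
    return " ".join(kept or words)

def back_translate(texts,
                   en_de="Helsinki-NLP/opus-mt-en-de",
                   de_en="Helsinki-NLP/opus-mt-de-en"):
    """Paraphrase by round-trip machine translation (en -> de -> en)."""
    def translate(batch, name):
        tok = MarianTokenizer.from_pretrained(name)
        model = MarianMTModel.from_pretrained(name)
        enc = tok(batch, return_tensors="pt", padding=True, truncation=True)
        return tok.batch_decode(model.generate(**enc), skip_special_tokens=True)
    return translate(translate(texts, en_de), de_en)
```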