MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts

We introduce MoMa, a novel modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa processes images and text in arbitrary sequences by dividing expert modules into modality-specific groups. These groups exclusively process designated tokens while employing learned routing within each group to maintain semantically informed adaptivity. Our empirical results reveal substantial pre-training efficiency gains through this modality-specific parameter allocation. Under a 1-trillion-token training budget, the MoMa 1.4B model, featuring 4 text experts and 4 image experts, achieves impressive FLOPs savings: 3.7x overall, with 2.6x for text and 5.2x for image processing compared to a compute-equivalent dense baseline, measured by pre-training loss. This outperforms the standard expert-choice MoE with 8 mixed-modal experts, which achieves 3x overall FLOPs savings (3x for text, 2.8x for image). Combining MoMa with mixture-of-depths (MoD) further improves pre-training FLOPs savings to 4.2x overall (text: 3.4x, image: 5.3x), although this combination hurts performance in causal inference due to increased sensitivity to router accuracy. These results demonstrate MoMa's potential to significantly advance the efficiency of mixed-modal, early-fusion language model pre-training, paving the way for more resource-efficient and capable multimodal AI systems.
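
The following is a minimal, illustrative sketch (in PyTorch) of the core idea in the abstract: expert modules are partitioned into modality-specific groups, text tokens are routed only to text experts and image tokens only to image experts, with learned routing inside each group. It is not the authors' implementation; the class names (ExpertGroup, ModalityAwareMoE), the tiny dimensions, and the simple top-1 token-choice router standing in for the paper's within-group routing are all assumptions made for brevity.

```python
# Hypothetical sketch of a modality-aware MoE layer in the spirit of MoMa.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertGroup(nn.Module):
    """A pool of feed-forward experts with top-1 learned routing (simplified)."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model) -- tokens of a single modality
        gates = F.softmax(self.router(x), dim=-1)   # (tokens, experts)
        weight, choice = gates.max(dim=-1)          # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = choice == i
            if mask.any():
                out[mask] = weight[mask, None] * expert(x[mask])
        return out


class ModalityAwareMoE(nn.Module):
    """Sends text tokens to text experts and image tokens to image experts."""

    def __init__(self, d_model: int = 64, d_ff: int = 256,
                 n_text_experts: int = 4, n_image_experts: int = 4):
        super().__init__()
        self.text_experts = ExpertGroup(d_model, d_ff, n_text_experts)
        self.image_experts = ExpertGroup(d_model, d_ff, n_image_experts)

    def forward(self, x: torch.Tensor, is_image: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, d_model); is_image: (seq_len,) bool mask over the
        # interleaved mixed-modal token sequence.
        out = torch.empty_like(x)
        out[~is_image] = self.text_experts(x[~is_image])
        out[is_image] = self.image_experts(x[is_image])
        return out


if __name__ == "__main__":
    layer = ModalityAwareMoE()
    tokens = torch.randn(10, 64)  # interleaved text/image token embeddings
    modality = torch.tensor([0, 0, 1, 1, 1, 0, 1, 0, 0, 1], dtype=torch.bool)
    print(layer(tokens, modality).shape)  # torch.Size([10, 64])
```

The 4-text/4-image expert split mirrors the MoMa 1.4B configuration described above; the hard modality partition is what distinguishes this layout from a standard MoE in which all 8 experts would compete for every token regardless of modality.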

Bibliographic Details
Main authors: Lin, Xi Victoria; Shrivastava, Akshat; Luo, Liang; Iyer, Srinivasan; Lewis, Mike; Ghosh, Gargi; Zettlemoyer, Luke; Aghajanyan, Armen
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
DOI: 10.48550/arxiv.2407.21770
Source: arXiv.org
Online access: https://arxiv.org/abs/2407.21770