Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities

Model merging is an efficient technique for empowering models in the machine learning community that requires neither the collection of raw training data nor expensive computation. As model merging becomes increasingly prevalent across various fields, it is crucial to understand the available...

Detailed description

Saved in:
Bibliographic details
Main authors: Yang, Enneng, Shen, Li, Guo, Guibing, Wang, Xingwei, Cao, Xiaochun, Zhang, Jie, Tao, Dacheng
Format: Article
Language: eng
Subjects:
Online access: Order full text
Creators: Yang, Enneng
Shen, Li
Guo, Guibing
Wang, Xingwei
Cao, Xiaochun
Zhang, Jie
Tao, Dacheng
Description: Model merging is an efficient technique for empowering models in the machine learning community that requires neither the collection of raw training data nor expensive computation. As model merging becomes increasingly prevalent across various fields, it is crucial to understand the available model merging techniques comprehensively. However, there is a significant gap in the literature regarding a systematic and thorough review of these techniques. This survey provides a comprehensive overview of model merging methods and theories, their applications in various domains and settings, and future research directions. Specifically, we first propose a new taxonomic approach that exhaustively discusses existing model merging methods. Secondly, we discuss the application of model merging techniques in large language models, multimodal large language models, and 10+ machine learning subfields, including continual learning, multi-task learning, few-shot learning, etc. Finally, we highlight the remaining challenges of model merging and discuss future research directions. A comprehensive list of papers about model merging is available at \url{https://github.com/EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications}.
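The simplest family of methods the abstract alludes to is plain parameter averaging of fine-tuned models that share an architecture. The toy `merge_state_dicts` helper below is a hypothetical illustration of that idea only (not code from the survey, and not a full merging method); it treats state dicts as plain dicts of float lists standing in for real weight tensors.

```python
# Hypothetical sketch of model merging via (weighted) parameter averaging.
# Real merging operates on framework tensors (e.g. PyTorch state_dicts);
# plain float lists are used here to keep the example self-contained.

def merge_state_dicts(state_dicts, weights=None):
    """Average several models' parameters, optionally with per-model weights."""
    if weights is None:
        # Uniform averaging: each model contributes equally.
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    assert abs(sum(weights) - 1.0) < 1e-9, "merge weights should sum to 1"
    merged = {}
    for key in state_dicts[0]:
        params = [sd[key] for sd in state_dicts]
        # Element-wise weighted sum over all models for this parameter.
        merged[key] = [
            sum(w * p[i] for w, p in zip(weights, params))
            for i in range(len(params[0]))
        ]
    return merged

# Two fine-tuned "models" with the same architecture (same keys and shapes).
model_a = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]}
model_b = {"layer.weight": [3.0, 4.0], "layer.bias": [2.0]}

merged = merge_state_dicts([model_a, model_b])
print(merged)  # {'layer.weight': [2.0, 3.0], 'layer.bias': [1.0]}
```

Non-uniform `weights` (e.g. `[0.7, 0.3]`) give one model more influence, which is the knob many merging methods tune per task or per layer.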
DOI: 10.48550/arxiv.2408.07666
Full record (arXiv:2408.07666): published 2024-08-14; rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0; open access (free to read).
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning