Composition of Experts: A Modular Compound AI System Leveraging Large Language Models

Large Language Models (LLMs) have achieved remarkable advancements, but their monolithic nature presents challenges in terms of scalability, cost, and customization. This paper introduces the Composition of Experts (CoE), a modular compound AI system that leverages multiple expert LLMs. CoE uses a router to dynamically select the most appropriate expert for a given input, enabling efficient utilization of resources and improved performance.

Detailed Description

Bibliographic Details
Main Authors: Jain, Swayambhoo; Raju, Ravi; Li, Bo; Csaki, Zoltan; Li, Jonathan; Liang, Kaizhao; Feng, Guoyao; Thakkar, Urmish; Sampat, Anand; Prabhakar, Raghu; Jairath, Sumati
Format: Article
Language: eng
description Large Language Models (LLMs) have achieved remarkable advancements, but their monolithic nature presents challenges in terms of scalability, cost, and customization. This paper introduces the Composition of Experts (CoE), a modular compound AI system that leverages multiple expert LLMs. CoE uses a router to dynamically select the most appropriate expert for a given input, enabling efficient utilization of resources and improved performance. We formulate the general problem of training a CoE and discuss the inherent complexities associated with it. We propose a two-step routing approach to address these complexities: a router first classifies the input into distinct categories, and a category-to-expert mapping then selects the desired expert. CoE offers a flexible and cost-effective solution for building compound AI systems. Our empirical evaluation demonstrates the effectiveness of CoE in achieving superior performance with reduced computational overhead. Because CoE comprises many expert LLMs, it has unique system requirements for cost-effective serving. We present an efficient implementation of CoE that leverages the unique three-tiered memory architecture of SambaNova SN40L RDUs. CoEs built from the open-weight LLMs Qwen/Qwen2-7B-Instruct, google/gemma-2-9b-it, google/gemma-2-27b-it, meta-llama/Llama-3.1-70B-Instruct, and Qwen/Qwen2-72B-Instruct achieve a score of $59.4$ with merely $31$ billion average active parameters on Arena-Hard, and a score of $9.06$ with $54$ billion average active parameters on MT-Bench.
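The two-step routing described in the abstract can be sketched as follows. Only the expert model names and the notion of average active parameters come from the record above; the categories, the keyword-based classifier, the category-to-expert assignments, and the parameter counts used here are illustrative assumptions, not details from the paper (the actual router is trained, not rule-based).

```python
# Hypothetical sketch of CoE's two-step routing:
#   step 1: a router classifies the input into a category;
#   step 2: a fixed category-to-expert mapping selects the expert LLM.
# Category names and their expert assignments are invented for illustration.

CATEGORY_TO_EXPERT = {
    "coding": "Qwen/Qwen2-72B-Instruct",
    "math": "meta-llama/Llama-3.1-70B-Instruct",
    "general": "google/gemma-2-9b-it",
}

# Approximate parameter counts (billions), inferred from the model names.
EXPERT_PARAMS_B = {
    "Qwen/Qwen2-72B-Instruct": 72,
    "meta-llama/Llama-3.1-70B-Instruct": 70,
    "google/gemma-2-9b-it": 9,
}

def classify(prompt: str) -> str:
    """Stand-in for the trained router: a trivial keyword rule."""
    text = prompt.lower()
    if "def " in text or "function" in text:
        return "coding"
    if any(tok in text for tok in ("integral", "solve", "equation")):
        return "math"
    return "general"

def route(prompt: str) -> str:
    """Two-step routing: classify, then map the category to an expert."""
    return CATEGORY_TO_EXPERT[classify(prompt)]

def average_active_params(prompts: list[str]) -> float:
    """Average active parameters: mean size of the experts actually invoked.
    This is how a figure like '31 billion average active parameters' arises
    even when the pool contains much larger models."""
    return sum(EXPERT_PARAMS_B[route(p)] for p in prompts) / len(prompts)
```

Because each input activates only one expert, the average active parameter count is a traffic-weighted mean of expert sizes, which is why it can sit far below the total size of the expert pool.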
doi_str_mv 10.48550/arxiv.2412.01868
format Article
date 2024-12-02
rights http://creativecommons.org/licenses/by/4.0 (open access)
link https://arxiv.org/abs/2412.01868
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2412.01868
language eng
recordid cdi_arxiv_primary_2412_01868
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
Statistics - Machine Learning
title Composition of Experts: A Modular Compound AI System Leveraging Large Language Models
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-26T18%3A59%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Composition%20of%20Experts:%20A%20Modular%20Compound%20AI%20System%20Leveraging%20Large%20Language%20Models&rft.au=Jain,%20Swayambhoo&rft.date=2024-12-02&rft_id=info:doi/10.48550/arxiv.2412.01868&rft_dat=%3Carxiv_GOX%3E2412_01868%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true