MoCaE: Mixture of Calibrated Experts Significantly Improves Object Detection

Combining the strengths of many existing predictors to obtain a Mixture of Experts which is superior to its individual components is an effective way to improve performance without having to develop new architectures or train a model from scratch. However, surprisingly, we find that naïvely combining expert object detectors in a similar way to Deep Ensembles can often lead to degraded performance. We identify that the primary cause of this issue is that the predictions of the experts do not match their performance, a term referred to as miscalibration. Consequently, the most confident detector dominates the final predictions, preventing the mixture from leveraging all the predictions from the experts appropriately. To address this, when constructing the Mixture of Experts, we propose to combine their predictions in a manner which reflects the individual performance of the experts; an objective we achieve by first calibrating the predictions before filtering and refining them. We term this approach the Mixture of Calibrated Experts and demonstrate its effectiveness through extensive experiments on five different detection tasks using a variety of detectors, showing that it: (i) improves object detectors on COCO and instance segmentation methods on LVIS by up to $\sim 2.5$ AP; (ii) reaches state-of-the-art on COCO test-dev with $65.1$ AP and on DOTA with $82.62$ $\mathrm{AP_{50}}$; (iii) outperforms single models consistently on recent detection tasks such as Open Vocabulary Object Detection.
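
At a high level, the abstract describes a two-step recipe: calibrate each expert's confidences so they reflect that expert's actual performance, then pool and filter the detections. The sketch below illustrates that idea only; it is not the authors' implementation, and the isotonic-regression calibrator, the greedy NMS fusion step, and the function names (calibrate_expert, fuse_experts) are all illustrative assumptions.

# Minimal sketch (assumption, not the paper's code): calibrate each expert's
# confidence scores on held-out detections, then pool the calibrated detections
# and apply greedy NMS so a single over-confident expert cannot dominate.
import numpy as np
from sklearn.isotonic import IsotonicRegression


def calibrate_expert(val_scores, val_quality):
    # Fit a monotone map from raw confidence to a performance-aligned score.
    # val_quality could be the matched IoU (or 0/1 correctness) of each
    # held-out detection; the isotonic choice is an illustrative assumption.
    calibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    calibrator.fit(val_scores, val_quality)
    return calibrator


def iou(box, boxes):
    # IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format.
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)


def fuse_experts(detections_per_expert, calibrators, iou_thr=0.5):
    # detections_per_expert: list of (boxes[N, 4], raw_scores[N]), one per expert.
    boxes, scores = [], []
    for (b, s), cal in zip(detections_per_expert, calibrators):
        boxes.append(b)
        scores.append(cal.predict(s))    # replace raw scores with calibrated ones
    boxes, scores = np.concatenate(boxes), np.concatenate(scores)

    order, keep = scores.argsort()[::-1], []
    while order.size:                    # greedy NMS over the pooled detections
        i = order[0]
        keep.append(i)
        order = order[1:][iou(boxes[i], boxes[order[1:]]) < iou_thr]
    return boxes[keep], scores[keep]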

Bibliographic Details
Main Authors: Oksuz, Kemal; Kuzucu, Selim; Joy, Tom; Dokania, Puneet K
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.2309.14976
Published: 2023-09-26
Source: arXiv.org