Segment Any Object Model (SAOM): Real-to-Simulation Fine-Tuning Strategy for Multi-Class Multi-Instance Segmentation

Bibliographic Details
Main authors: Khan, Mariia; Qiu, Yue; Cong, Yuren; Abu-Khalaf, Jumana; Suter, David; Rosenhahn, Bodo
Format: Article
Language: English
Description: Multi-class multi-instance segmentation is the task of identifying masks for multiple object classes and multiple instances of the same class within an image. The foundational Segment Anything Model (SAM) is designed for promptable multi-class multi-instance segmentation but tends to output part or sub-part masks in the "everything" mode for various real-world applications. Whole object segmentation masks play a crucial role for indoor scene understanding, especially in robotics applications. We propose a new domain invariant Real-to-Simulation (Real-Sim) fine-tuning strategy for SAM. We use object images and ground truth data collected from Ai2Thor simulator during fine-tuning (real-to-sim). To allow our Segment Any Object Model (SAOM) to work in the "everything" mode, we propose the novel nearest neighbour assignment method, updating point embeddings for each ground-truth mask. SAOM is evaluated on our own dataset collected from Ai2Thor simulator. SAOM significantly improves on SAM, with a 28% increase in mIoU and a 25% increase in mAcc for 54 frequently-seen indoor object classes. Moreover, our Real-to-Simulation fine-tuning strategy demonstrates promising generalization performance in real environments without being trained on the real-world data (sim-to-real). The dataset and the code will be released after publication.
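SAM's "everything" mode prompts the model with a regular grid of points, and the abstract describes pairing each such point with a ground-truth mask via nearest-neighbour assignment. The sketch below illustrates that idea under a simplifying assumption: each grid point is matched to the mask with the nearest centroid. The function name and the centroid-distance rule are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def assign_points_to_masks(grid_points, gt_masks):
    """Assign each "everything"-mode prompt point to a ground-truth mask.

    grid_points: (P, 2) array of (x, y) prompt coordinates.
    gt_masks:    list of (H, W) boolean arrays, one per object.
    Returns a (P,) array of mask indices (nearest centroid; an
    assumption standing in for the paper's assignment rule).
    """
    # Centroid (x, y) of each ground-truth mask.
    centers = []
    for m in gt_masks:
        ys, xs = np.nonzero(m)
        centers.append((xs.mean(), ys.mean()))
    centers = np.asarray(centers)  # shape (M, 2)

    # Nearest-neighbour assignment: point i -> argmin_j ||p_i - c_j||.
    dists = np.linalg.norm(
        grid_points[:, None, :] - centers[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)
```

With the assignment in hand, every grid point can supervise the whole-object mask it was matched to, rather than whatever part-level mask SAM would otherwise prefer for that point.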
DOI: 10.48550/arxiv.2403.10780
Published: 2024-03-15 (arXiv)
Rights: http://creativecommons.org/licenses/by/4.0
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition