Robot Instance Segmentation with Few Annotations for Grasping
The ability of robots to manipulate objects relies heavily on their aptitude for visual perception. In domains characterized by cluttered scenes and high object variability, most methods call for vast labeled datasets, laboriously hand-annotated, with the aim of training capable models. Once deployed, the challenge of generalizing to unfamiliar objects implies that the model must evolve alongside its domain. To address this, we propose a novel framework that combines Semi-Supervised Learning (SSL) with Learning Through Interaction (LTI), allowing a model to learn by observing scene alterations and to leverage visual consistency despite temporal gaps, without requiring curated data of interaction sequences. As a result, our approach exploits partially annotated data through self-supervision and incorporates temporal context using pseudo-sequences generated from unlabeled still images. We validate our method on two common benchmarks, ARMBench mix-object-tote and OCID, where it achieves state-of-the-art performance. Notably, on ARMBench, we attain an $\text{AP}_{50}$ of $86.37$, almost a $20\%$ improvement over existing work, and obtain remarkable results in scenarios with extremely low annotation, achieving an $\text{AP}_{50}$ score of $84.89$ with just $1\%$ of annotated data, compared to the $72$ reported in ARMBench on the fully annotated counterpart.
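The abstract describes injecting temporal context by building pseudo-sequences from unlabeled still images and enforcing visual consistency across them. As a rough sketch of that general idea only (this is not the authors' implementation; `make_pseudo_sequence`, the augmentation choices, and every parameter below are assumptions), one way to simulate such a sequence is to chain mild augmentations of a single image:

```python
# Illustrative sketch, not the paper's code: fabricate a short "pseudo-sequence"
# from one unlabeled still image by chaining mild photometric and geometric
# augmentations, so a segmentation model can be trained to give consistent
# predictions across the synthetic frames. All names and parameters are hypothetical.
from PIL import Image
import torchvision.transforms as T

def make_pseudo_sequence(image: Image.Image, length: int = 4) -> list:
    """Return `length` views of `image` that mimic a short interaction sequence."""
    jitter = T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05)
    warp = T.RandomAffine(degrees=8, translate=(0.05, 0.05), scale=(0.95, 1.05))
    frames = [image]  # frame 0: the original still image
    for _ in range(length - 1):
        # Each new "frame" perturbs the previous one, imitating the small scene
        # changes a robot interaction would cause.
        frames.append(warp(jitter(frames[-1])))
    return frames

# Usage: predictions over the frames of one pseudo-sequence can then be tied
# together with a consistency (self-supervision) loss on the unlabeled data, e.g.
# seq = make_pseudo_sequence(Image.open("tote.jpg"))
```

For context on the reported numbers, the "almost $20\%$" gain is relative: $(86.37 - 72)/72 \approx 0.20$.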
Saved in:
Main authors: | Kimhi, Moshe; Vainshtein, David; Baskin, Chaim; Di Castro, Dotan |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics |
Online access: | Order full text |
Field | Value |
---|---|
creator | Kimhi, Moshe; Vainshtein, David; Baskin, Chaim; Di Castro, Dotan |
description | The ability of robots to manipulate objects relies heavily on their aptitude for visual perception. In domains characterized by cluttered scenes and high object variability, most methods call for vast labeled datasets, laboriously hand-annotated, with the aim of training capable models. Once deployed, the challenge of generalizing to unfamiliar objects implies that the model must evolve alongside its domain. To address this, we propose a novel framework that combines Semi-Supervised Learning (SSL) with Learning Through Interaction (LTI), allowing a model to learn by observing scene alterations and to leverage visual consistency despite temporal gaps, without requiring curated data of interaction sequences. As a result, our approach exploits partially annotated data through self-supervision and incorporates temporal context using pseudo-sequences generated from unlabeled still images. We validate our method on two common benchmarks, ARMBench mix-object-tote and OCID, where it achieves state-of-the-art performance. Notably, on ARMBench, we attain an $\text{AP}_{50}$ of $86.37$, almost a $20\%$ improvement over existing work, and obtain remarkable results in scenarios with extremely low annotation, achieving an $\text{AP}_{50}$ score of $84.89$ with just $1\%$ of annotated data, compared to the $72$ reported in ARMBench on the fully annotated counterpart. |
doi_str_mv | 10.48550/arxiv.2407.01302 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2407.01302 |
language | eng |
recordid | cdi_arxiv_primary_2407_01302 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics |
title | Robot Instance Segmentation with Few Annotations for Grasping |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-30T13%3A28%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Robot%20Instance%20Segmentation%20with%20Few%20Annotations%20for%20Grasping&rft.au=Kimhi,%20Moshe&rft.date=2024-07-01&rft_id=info:doi/10.48550/arxiv.2407.01302&rft_dat=%3Carxiv_GOX%3E2407_01302%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |