SOTR: Segmenting Objects with Transformers

Most recent transformer-based models show impressive performance on vision tasks, even better than Convolutional Neural Networks (CNNs). In this work, we present a novel, flexible, and effective transformer-based model for high-quality instance segmentation. The proposed method, Segmenting Objects with TRansformers (SOTR), simplifies the segmentation pipeline, building on an alternative CNN backbone appended with two parallel subtasks: (1) predicting per-instance categories via a transformer and (2) dynamically generating segmentation masks with a multi-level upsampling module. SOTR can effectively extract lower-level feature representations and capture long-range context dependencies through the Feature Pyramid Network (FPN) and the twin transformer, respectively. Meanwhile, compared with the original transformer, the proposed twin transformer is time- and resource-efficient, since it attends only along a row and a column to encode pixels. Moreover, SOTR is easy to combine with various CNN backbones and transformer variants, yielding considerable improvements in segmentation accuracy and training convergence. Extensive experiments show that our SOTR performs well on the MS COCO dataset and surpasses state-of-the-art instance segmentation approaches. We hope our simple but strong framework can serve as a preferred baseline for instance-level recognition. Our code is available at https://github.com/easton-cau/SOTR.
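The efficiency claim in the abstract rests on attending along a single row and a single column of the feature map instead of over all pixel pairs. Below is a minimal PyTorch sketch of that row-and-column ("twin") attention idea. It is not the authors' implementation: the module name TwinAttention and all shapes and hyperparameters are hypothetical, chosen only to illustrate why factorized attention is cheaper, roughly O(H*W^2 + W*H^2) pairwise interactions rather than O((H*W)^2) for full 2-D self-attention.

# Hypothetical sketch of row-and-column ("twin") attention, NOT the SOTR code.
import torch
import torch.nn as nn

class TwinAttention(nn.Module):
    """Multi-head self-attention applied along rows, then along columns."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map, e.g. one FPN level.
        b, c, h, w = x.shape

        # Row attention: each of the B*H rows is a sequence of W pixel tokens.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)

        # Column attention: each of the B*W columns is a sequence of H tokens.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)

if __name__ == "__main__":
    feat = torch.randn(2, 256, 32, 32)          # dummy FPN-level feature map
    print(TwinAttention(dim=256)(feat).shape)   # torch.Size([2, 256, 32, 32])

In the actual model, such a block would presumably sit on top of the FPN features, ahead of the class-prediction and mask-generation branches; treat this purely as an illustration of the factorized-attention pattern, not as a reconstruction of SOTR.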

Bibliographic Details

Main Authors: Guo, Ruohao; Niu, Dantong; Qu, Liao; Li, Zhenbo
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Published: 2021-08-15
DOI: 10.48550/arxiv.2108.06747
Source: arXiv.org
Rights: CC BY 4.0 (free to read)
Online Access: Order full text