Conformer: Local Features Coupling Global Representations for Recognition and Detection

With convolution operations, Convolutional Neural Networks (CNNs) are good at extracting local features but have difficulty capturing global representations. With cascaded self-attention modules, vision transformers can capture long-distance feature dependencies but unfortunately deteriorate local feature details. In this paper, we propose a hybrid network structure, termed Conformer, to take advantage of both convolution operations and self-attention mechanisms for enhanced representation learning. Conformer is rooted in the feature coupling of CNN local features and transformer global representations under different resolutions in an interactive fashion. Conformer adopts a dual structure so that local details and global dependencies are retained to the maximum extent. We also propose a Conformer-based detector (ConformerDet), which learns to predict and refine object proposals by performing region-level feature coupling in an augmented cross-attention fashion. Experiments on the ImageNet and MS COCO datasets validate Conformer's superiority for visual recognition and object detection, demonstrating its potential to be a general backbone network. Code is available at https://github.com/pengzhiliang/Conformer.
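The authors' implementation is at the GitHub link in the abstract; as a purely illustrative sketch (not the paper's code), the dual-branch feature-coupling idea can be shown in a few lines of NumPy. All function names, shapes, and the single-step coupling here are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Single-head self-attention over (N, D) tokens: the "global" branch.
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    weights = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return weights @ v

def pool_to_tokens(fmap, patch):
    # CNN feature map (C, H, W) -> patch tokens (N, C) via average pooling.
    C, H, W = fmap.shape
    hp, wp = H // patch, W // patch
    pooled = fmap.reshape(C, hp, patch, wp, patch).mean(axis=(2, 4))
    return pooled.reshape(C, hp * wp).T

def tokens_to_map(tokens, patch, H, W):
    # Patch tokens (N, C) -> feature map (C, H, W) via nearest upsampling.
    N, C = tokens.shape
    hp, wp = H // patch, W // patch
    fmap = tokens.T.reshape(C, hp, wp)
    return np.repeat(np.repeat(fmap, patch, axis=1), patch, axis=2)

# Toy shapes: an 8-channel 8x8 CNN map and 16 transformer tokens of dim 8.
C, H, W, patch = 8, 8, 8, 2
fmap = rng.normal(size=(C, H, W))                           # local (CNN) branch
tokens = rng.normal(size=((H // patch) * (W // patch), C))  # global branch
Wq, Wk, Wv = [rng.normal(size=(C, C)) * 0.1 for _ in range(3)]

# One interactive coupling step, in both directions:
tokens = tokens + pool_to_tokens(fmap, patch)     # local -> global
tokens = self_attention(tokens, Wq, Wk, Wv)       # global dependency modeling
fmap = fmap + tokens_to_map(tokens, patch, H, W)  # global -> local

print(fmap.shape, tokens.shape)
```

In the paper, the analogous bridge between branches (the feature coupling unit) is learned rather than fixed pooling/upsampling; this toy version only shows how local maps and global tokens can exchange information at each stage while both branches are retained.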

Bibliographic Details

Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023-08, Vol. 45 (8), p. 1-15
Main authors: Peng, Zhiliang; Guo, Zonghao; Huang, Wei; Wang, Yaowei; Xie, Lingxi; Jiao, Jianbin; Tian, Qi; Ye, Qixiang
Format: Article
Language: English
Online access: Order full text
DOI: 10.1109/TPAMI.2023.3243048
ISSN: 0162-8828
EISSN: 1939-3539, 2160-9292
PMID: 37022836
Source: IEEE Electronic Library (IEL)
Subjects: Artificial neural networks; Computer networks; Convolution; Coupling; Couplings; Detectors; Feature extraction; Feature Fusion; Image Recognition; Machine learning; Object detection; Object recognition; Representations; Transformers; Vision Transformer; Visualization