Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation
Recognizing objects in low-resolution images is a challenging task due to the lack of informative details. Recent studies have shown that knowledge distillation approaches can effectively transfer knowledge from a high-resolution teacher model to a low-resolution student model by aligning cross-resolution representations. However, these approaches still face limitations in adapting to situations where the recognized objects exhibit significant representation discrepancies between training and testing images. In this study, we propose a cross-resolution relational contrastive distillation approach to facilitate low-resolution object recognition. Our approach enables the student model to mimic the behavior of a well-trained teacher model that delivers high accuracy in identifying high-resolution objects. To extract sufficient knowledge, the student's learning is supervised with a contrastive relational distillation loss, which preserves the similarities of various relational structures in the contrastive representation space. In this manner, the capability of recovering missing details of familiar low-resolution objects is effectively enhanced, leading to better knowledge transfer. Extensive experiments on low-resolution object classification and low-resolution face recognition clearly demonstrate the effectiveness and adaptability of our approach.
Saved in:
Published in: IEEE transactions on circuits and systems for video technology, 2024-04, Vol. 34 (4), p. 1-1
Main authors: Zhang, Kangkai; Ge, Shiming; Shi, Ruixin; Zeng, Dan
Format: Article
Language: English
Subjects: Object recognition; knowledge distillation; Low-resolution face recognition; low-resolution object classification; domain adaptation
Online access: Order full text
container_end_page | 1 |
container_issue | 4 |
container_start_page | 1 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | 34 |
creator | Zhang, Kangkai; Ge, Shiming; Shi, Ruixin; Zeng, Dan |
description | Recognizing objects in low-resolution images is a challenging task due to the lack of informative details. Recent studies have shown that knowledge distillation approaches can effectively transfer knowledge from a high-resolution teacher model to a low-resolution student model by aligning cross-resolution representations. However, these approaches still face limitations in adapting to situations where the recognized objects exhibit significant representation discrepancies between training and testing images. In this study, we propose a cross-resolution relational contrastive distillation approach to facilitate low-resolution object recognition. Our approach enables the student model to mimic the behavior of a well-trained teacher model that delivers high accuracy in identifying high-resolution objects. To extract sufficient knowledge, the student's learning is supervised with a contrastive relational distillation loss, which preserves the similarities of various relational structures in the contrastive representation space. In this manner, the capability of recovering missing details of familiar low-resolution objects is effectively enhanced, leading to better knowledge transfer. Extensive experiments on low-resolution object classification and low-resolution face recognition clearly demonstrate the effectiveness and adaptability of our approach. |
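As a rough illustration of the idea in the abstract (not the authors' exact formulation), a relational contrastive distillation loss can be sketched as follows: pairwise cosine similarities within a batch define, for each sample, a contrastive distribution over the other samples, and the student is penalized (via cross-entropy) for deviating from the teacher's relational distributions. The function names, the temperature value, and the choice of cosine similarity are assumptions made for this sketch.

```python
import numpy as np

def relation_matrix(feats, temperature=0.5):
    """Row-wise softmax over pairwise cosine similarities (self-pairs
    masked out): each row is a contrastive distribution describing how
    one sample relates to every other sample in the batch."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    logits = normed @ normed.T / temperature
    np.fill_diagonal(logits, -np.inf)            # exclude self-similarity
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

def relational_distillation_loss(teacher_feats, student_feats, temperature=0.5):
    """Cross-entropy between teacher and student relational distributions,
    averaged over the batch; minimized when the student reproduces the
    teacher's relational structure. Diagonal entries are exactly zero in
    both matrices, so they contribute nothing to the sum."""
    p_t = relation_matrix(teacher_feats, temperature)
    p_s = relation_matrix(student_feats, temperature)
    return float(-(p_t * np.log(p_s + 1e-12)).sum(axis=1).mean())

# Toy batch: the teacher embeds high-resolution inputs, the student
# embeds their low-resolution counterparts.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 16))
student_good = teacher + 0.01 * rng.normal(size=(8, 16))  # near-perfect mimic
student_bad = rng.normal(size=(8, 16))                    # unrelated features
loss_good = relational_distillation_loss(teacher, student_good)
loss_bad = relational_distillation_loss(teacher, student_bad)
print(loss_good < loss_bad)  # a faithful student yields the lower loss
```

Because cross-entropy decomposes as entropy plus a non-negative KL term, the loss is bounded below by the entropy of the teacher's relational distributions, and a student that mirrors those distributions attains that bound.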
doi_str_mv | 10.1109/TCSVT.2023.3310042 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2024-04, Vol.34 (4), p.1-1 |
issn | 1051-8215; 1558-2205 |
language | eng |
recordid | cdi_proquest_journals_3033619089 |
source | IEEE Electronic Library (IEL) |
subjects | Adaptation models; Distillation; domain adaptation; Face recognition; Germanium; High resolution; Image resolution; knowledge distillation; Knowledge management; Knowledge transfer; Low-resolution face recognition; low-resolution object classification; Object recognition; Representations; Teachers; Training; Visualization |
title | Low-Resolution Object Recognition with Cross-Resolution Relational Contrastive Distillation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-18T14%3A23%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Low-Resolution%20Object%20Recognition%20with%20Cross-Resolution%20Relational%20Contrastive%20Distillation&rft.jtitle=IEEE%20transactions%20on%20circuits%20and%20systems%20for%20video%20technology&rft.au=Zhang,%20Kangkai&rft.date=2024-04-01&rft.volume=34&rft.issue=4&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1051-8215&rft.eissn=1558-2205&rft.coden=ITCTEM&rft_id=info:doi/10.1109/TCSVT.2023.3310042&rft_dat=%3Cproquest_RIE%3E3033619089%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3033619089&rft_id=info:pmid/&rft_ieee_id=10234434&rfr_iscdi=true |