Few-Shot Object Detection Based on Self-Knowledge Distillation

In many fields, the lack of large-scale training data prevents traditional object detection methods from performing well in practice; the main reasons are overfitting and poor generalization. In this work, we propose a general method to alleviate the overfitting problem in few-shot object detection. Our work extends Faster R-CNN with a self-knowledge distillation algorithm and designs a loss function with an attention mechanism, which improves true detections in the foreground. In this way, the object detector can learn an approximate mapping from few samples, which gives the network stronger generalization ability when only a few images are available. Through extensive comparative experiments, we demonstrate that our method is general and effective on the VOC and COCO benchmark datasets under different settings. We provide a new idea for solving the few-shot object detection problem and achieve excellent recall performance on few-shot object detection.
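The abstract's key technical ingredients are a self-knowledge distillation term layered on Faster R-CNN and a loss weighted by a foreground attention mechanism. The paper's code is not reproduced in this record; purely as an illustrative sketch of what such a combination can look like, the snippet below implements a generic attention-weighted self-distillation loss in PyTorch. The "teacher" logits are assumed to come from a detached copy (for example an EMA snapshot) of the same classification head, and `foreground_scores` is a hypothetical per-proposal attention weight; none of these names or hyperparameters are taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation): an attention-weighted
# self-distillation loss for a detector's classification head.
import torch
import torch.nn.functional as F


def self_distillation_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           foreground_scores: torch.Tensor,
                           temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened student and teacher class distributions,
    weighted per proposal by a foreground attention score in [0, 1]."""
    # Soften both distributions with the distillation temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    # Per-proposal KL divergence (summed over classes).
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=-1)
    # Emphasize proposals the attention mechanism marks as foreground.
    weights = foreground_scores.clamp(0.0, 1.0)
    return (weights * kl).sum() / weights.sum().clamp(min=1e-6) * temperature ** 2


# Toy usage: 8 region proposals, 21 classes (20 VOC classes + background).
if __name__ == "__main__":
    student = torch.randn(8, 21, requires_grad=True)
    teacher = torch.randn(8, 21)   # e.g. an EMA snapshot of the same head
    attention = torch.rand(8)      # per-proposal foreground attention scores
    loss = self_distillation_loss(student, teacher, attention)
    loss.backward()
    print(float(loss))
```

In a Faster R-CNN setting a term like this would typically be added to the standard classification and box-regression losses; the temperature and the weighting scheme above are assumptions for illustration, not values from the paper.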

Detailed description

Saved in:
Bibliographic details
Published in: IEEE intelligent systems, 2024, p.1-8
Main authors: Li, Yang; Gong, Yicheng; Zhang, Zhuo
Format: Article
Language: English
Subjects: Intelligent systems; Knowledge transfer; Object detection; Task analysis; Training; Training data; Transfer learning
Online access: Order full text
container_end_page 8
container_issue
container_start_page 1
container_title IEEE intelligent systems
container_volume
creator Li, Yang; Gong, Yicheng; Zhang, Zhuo
description In many fields, the lack of large-scale training data prevents traditional object detection methods from performing well in practice; the main reasons are overfitting and poor generalization. In this work, we propose a general method to alleviate the overfitting problem in few-shot object detection. Our work extends Faster R-CNN with a self-knowledge distillation algorithm and designs a loss function with an attention mechanism, which improves true detections in the foreground. In this way, the object detector can learn an approximate mapping from few samples, which gives the network stronger generalization ability when only a few images are available. Through extensive comparative experiments, we demonstrate that our method is general and effective on the VOC and COCO benchmark datasets under different settings. We provide a new idea for solving the few-shot object detection problem and achieve excellent recall performance on few-shot object detection.
doi_str_mv 10.1109/MIS.2022.3205686
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 1541-1672
ispartof IEEE intelligent systems, 2024, p.1-8
issn 1541-1672
1941-1294
language eng
recordid cdi_crossref_primary_10_1109_MIS_2022_3205686
source IEEE Electronic Library (IEL)
subjects Intelligent systems
Knowledge transfer
Object detection
Task analysis
Training
Training data
Transfer learning
title Few-Shot Object Detection Based on Self-Knowledge Distillation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T18%3A04%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Few-Shot%20Object%20Detection%20Based%20on%20Self-Knowledge%20Distillation&rft.jtitle=IEEE%20intelligent%20systems&rft.au=Li,%20Yang&rft.date=2024&rft.spage=1&rft.epage=8&rft.pages=1-8&rft.issn=1541-1672&rft.eissn=1941-1294&rft.coden=IISYF7&rft_id=info:doi/10.1109/MIS.2022.3205686&rft_dat=%3Ccrossref_RIE%3E10_1109_MIS_2022_3205686%3C/crossref_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9887857&rfr_iscdi=true