Improving Resnet-9 Generalization Trained on Small Datasets

This paper presents our proposed approach, which won first prize at the ICLR competition on Hardware Aware Efficient Training. The challenge is to achieve the highest possible accuracy on an image classification task in less than 10 minutes. Training is done on a small dataset of 5,000 images picked randomly from the CIFAR-10 dataset. Evaluation is performed by the competition organizers on a secret dataset of 1,000 images of the same size. Our approach applies a series of techniques for improving the generalization of ResNet-9, including sharpness-aware optimization, label smoothing, gradient centralization, input patch whitening, and meta-learning based training. Our experiments show that ResNet-9 can achieve an accuracy of 88% while trained on only a 10% subset of the CIFAR-10 dataset in less than 10 minutes.
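Two of the generalization techniques named in the abstract are simple enough to sketch directly. The following is an illustrative NumPy sketch, not the authors' implementation; the function names and the smoothing factor eps=0.1 are assumptions for illustration only:

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Label smoothing: replace each one-hot target with a mixture of
    the one-hot vector and the uniform distribution over classes."""
    one_hot = np.eye(num_classes)[labels]
    return (1.0 - eps) * one_hot + eps / num_classes

def centralize_gradient(grad):
    """Gradient centralization: subtract the mean over all non-output
    dimensions, so each filter's weight gradient has zero mean.
    1-D gradients (e.g. biases) are conventionally left unchanged."""
    if grad.ndim > 1:
        axes = tuple(range(1, grad.ndim))
        return grad - grad.mean(axis=axes, keepdims=True)
    return grad
```

For example, `smooth_labels(np.array([0]), 3)` turns the hard target [1, 0, 0] into roughly [0.93, 0.03, 0.03], which discourages overconfident predictions on a small training set; `centralize_gradient` would be applied to each weight gradient before the optimizer step.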

Published in: arXiv.org, 2023-09
Main authors: Omar Mohamed Awad, Habib Hajimolahoseini, Michael Lim, Gurpreet Gosal, Walid Ahmed, Yang Liu, Gordon Deng
Format: Article
Language: English
Subjects: Datasets; Image classification; Optimization; Training
Online access: Full text
description This paper presents our proposed approach, which won first prize at the ICLR competition on Hardware Aware Efficient Training. The challenge is to achieve the highest possible accuracy on an image classification task in less than 10 minutes. Training is done on a small dataset of 5,000 images picked randomly from the CIFAR-10 dataset. Evaluation is performed by the competition organizers on a secret dataset of 1,000 images of the same size. Our approach applies a series of techniques for improving the generalization of ResNet-9, including sharpness-aware optimization, label smoothing, gradient centralization, input patch whitening, and meta-learning based training. Our experiments show that ResNet-9 can achieve an accuracy of 88% while trained on only a 10% subset of the CIFAR-10 dataset in less than 10 minutes.
format Article
publisher Ithaca: Cornell University Library, arXiv.org
date 2023-09-07
rights 2023. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-09
issn 2331-8422
language eng
recordid cdi_proquest_journals_2863617674
source Free E-Journals
subjects Datasets
Image classification
Optimization
Training
title Improving Resnet-9 Generalization Trained on Small Datasets
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-09T15%3A56%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Improving%20Resnet-9%20Generalization%20Trained%20on%20Small%20Datasets&rft.jtitle=arXiv.org&rft.au=Omar%20Mohamed%20Awad&rft.date=2023-09-07&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2863617674%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2863617674&rft_id=info:pmid/&rfr_iscdi=true