MDMLP: Image Classification from Scratch on Small Datasets with MLP

The attention mechanism has become a go-to technique for natural language processing and computer vision tasks. Recently, the MLP-Mixer and other architectures based simply on multi-layer perceptrons (MLPs) have proven competitive with CNNs and attention-based techniques, opening a new research direction. However, the high capability of MLP-based networks relies heavily on large volumes of training data, and these networks lack explanation ability compared to the Vision Transformer (ViT) or ConvNets. When trained on small datasets, they usually achieve results inferior to ConvNets. To resolve this, we present (i) the multi-dimensional MLP (MDMLP), a conceptually simple and lightweight MLP-based architecture that nevertheless achieves state-of-the-art results when trained from scratch on small datasets, and (ii) the multi-dimension MLP Attention Tool (MDAttnTool), a novel and efficient attention mechanism based on MLPs. Even without strong data augmentation, MDMLP achieves 90.90% accuracy on CIFAR-10 with only 0.3M parameters, while the well-known MLP-Mixer achieves 85.45% with 17.1M parameters. In addition, the lightweight MDAttnTool highlights objects in images, indicating its explanation power. Our code is available at https://github.com/Amoza-Theodore/MDMLP.
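The abstract does not spell out the architecture, but the name suggests MLP blocks that mix information along several axes of the patch-embedding tensor (height, width, channels) rather than only along tokens and channels as in MLP-Mixer. The PyTorch sketch below is a minimal illustration of that idea under these assumptions; the class names (`MixAlongDim`, `MultiDimMLPBlock`) and all hyperparameters are hypothetical, and the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class MixAlongDim(nn.Module):
    """A two-layer MLP applied along one chosen axis of the input tensor."""
    def __init__(self, dim_size: int, dim: int, hidden: int):
        super().__init__()
        self.dim = dim
        self.net = nn.Sequential(
            nn.Linear(dim_size, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim_size),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Move the target axis last, mix it with the MLP, move it back;
        # a residual connection keeps training stable.
        x_t = x.transpose(self.dim, -1)
        x_t = x_t + self.net(x_t)
        return x_t.transpose(self.dim, -1)

class MultiDimMLPBlock(nn.Module):
    """One block that mixes a (batch, height, width, channels) tensor of
    patch embeddings along its height, width, and channel axes in turn."""
    def __init__(self, h: int, w: int, c: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(c)
        self.mix_h = MixAlongDim(h, dim=1, hidden=hidden)
        self.mix_w = MixAlongDim(w, dim=2, hidden=hidden)
        self.mix_c = MixAlongDim(c, dim=3, hidden=hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm(x)
        return self.mix_c(self.mix_w(self.mix_h(x)))

# Example: a 4x4 grid of patch embeddings with 32 channels.
x = torch.randn(2, 4, 4, 32)
block = MultiDimMLPBlock(h=4, w=4, c=32)
print(block(x).shape)  # torch.Size([2, 4, 4, 32])
```

Mixing each axis with its own small MLP keeps the parameter count low, which is consistent with the 0.3M-parameter figure quoted above, though the exact block ordering and dimensions here are guesses.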

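Likewise, the abstract describes MDAttnTool only as a lightweight MLP-based attention mechanism that highlights objects in images. One way such a tool could plausibly work is an MLP that scores each spatial position and re-weights the features; the sketch below is an assumption-based illustration (the name `MLPAttentionMap` and its structure are not from the paper).

```python
import torch
import torch.nn as nn

class MLPAttentionMap(nn.Module):
    """Scores each spatial position with a small MLP, producing a weight in
    [0, 1] that is multiplied back onto the features; the returned map can
    be visualized as a heatmap over the image."""
    def __init__(self, channels: int, hidden: int = 16):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.GELU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, height, width, channels)
        attn = self.score(x)                    # (batch, height, width, 1)
        return x * attn, attn.squeeze(-1)

img_feats = torch.randn(1, 8, 8, 32)
weighted, attn_map = MLPAttentionMap(32)(img_feats)
print(attn_map.shape)  # torch.Size([1, 8, 8])
```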

Bibliographic details
Main authors: Lv, Tian; Bai, Chongyang; Wang, Chaojie
Format: Article
Language: English
Published: 2022-05-28
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition
DOI: 10.48550/arxiv.2205.14477
Online access: Full text at https://arxiv.org/abs/2205.14477 (arXiv.org)