Uncertainty Quantification for MLP-Mixer Using Bayesian Deep Learning
Published in: Applied Sciences 2023-04, Vol. 13 (7), p. 4547
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Convolutional neural networks (CNNs) have become a popular choice for various image classification applications. However, the multi-layer perceptron mixer (MLP-Mixer) architecture has been proposed as a promising alternative, particularly for large datasets. Despite its advantages in handling large datasets and models, MLP-Mixer models have limitations when dealing with small datasets. This study aimed to quantify and evaluate the uncertainty associated with MLP-Mixer models for small datasets using Bayesian deep learning (BDL) methods, and to compare the results to existing CNN models. In particular, we examined the use of variational inference and Monte Carlo dropout methods. The results indicated that BDL can improve the performance of MLP-Mixer models by 9.2 to 17.4% in terms of accuracy across different mixer models. By contrast, the results suggest that CNN models tend to show limited improvement, or even decreased performance in some cases, when using BDL. These findings suggest that BDL is a promising approach to improving the performance of MLP-Mixer models, especially for small datasets.
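The Monte Carlo dropout method mentioned in the abstract keeps dropout active at inference time and averages predictions over several stochastic forward passes; the spread across passes serves as an uncertainty estimate. The sketch below illustrates only this general idea, assuming a toy one-layer forward pass with hypothetical weights; it is not the paper's model or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(forward, x, n_samples=200, p_drop=0.1):
    """Run n_samples stochastic forward passes with dropout enabled.

    Returns the predictive mean and per-output variance; the variance
    is the (approximate) uncertainty signal MC dropout provides.
    """
    preds = np.stack([forward(x, rng, p_drop) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)

# Hypothetical weights for a toy linear layer (illustrative only).
W = rng.normal(size=(4, 3))

def toy_forward(x, rng, p_drop):
    # Bernoulli dropout mask, kept active at test time.
    mask = rng.random(x.shape) >= p_drop
    # Inverted-dropout scaling keeps the expected activation unchanged.
    h = (x * mask) / (1.0 - p_drop)
    return h @ W

x = np.ones(4)
mean, var = mc_dropout_predict(toy_forward, x)
```

In a real MLP-Mixer, the same loop would wrap the full network's forward pass with its dropout layers left in training mode; everything else here is a placeholder.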
ISSN: 2076-3417
DOI: 10.3390/app13074547