Using Generative Adversarial Networks in Classification Tasks with Very Small Amounts of Training Data
Published in: WSEAS TRANSACTIONS ON COMPUTER RESEARCH 2023-07, Vol.11, p.135-142
Main authors: ,
Format: Article
Language: eng
Online access: Full text
Abstract: Deep learning algorithms are powerful and have achieved impressive results in various classification tasks. However, one of their limitations is their dependence on large amounts of training data. When limited training data is available, the standard approach is to enlarge the dataset through data augmentation and train a neural network on the expanded dataset. This method can be effective, but it requires significant computational resources and may not always be feasible. In our work, we propose a new approach to the problem of limited training data. Instead of relying solely on increasing the dataset size, we train Generative Adversarial Networks (GANs) to generate the distributions of the individual categories. The unknown element is then classified using the distributions generated by the trained GANs. We propose four methods that compare the unknown element with elements generated by the trained GANs and establish estimates of the conditional probabilities that the unknown element belongs to the individual categories. These conditional probabilities are then used to classify the unknown element into the individual categories. This approach enables informed decisions and accurate classification results even with limited training data.
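The abstract does not spell out the four comparison methods, but the general scheme it describes, comparing an unknown element against per-category GAN samples and turning those comparisons into conditional-probability estimates, can be sketched as follows. This is a minimal illustration under assumptions of our own: one plausible comparison method (softmax over negative mean Euclidean distances), with random Gaussian clusters standing in for the outputs of the trained GANs; the function name `class_probabilities` and the `temperature` parameter are hypothetical, not taken from the paper.

```python
import numpy as np

def class_probabilities(x, generated, temperature=1.0):
    """Estimate P(class | x) from per-class generated samples.

    `generated` maps a class label to an array of shape (n_samples, d)
    of samples drawn from that class's (GAN-modelled) distribution.
    Scores each class by the mean Euclidean distance of its samples
    to x, then converts scores to probabilities with a softmax.
    """
    labels = sorted(generated)
    scores = []
    for label in labels:
        dists = np.linalg.norm(generated[label] - x, axis=1)
        scores.append(-dists.mean() / temperature)
    scores = np.array(scores)
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    return dict(zip(labels, probs))

# Toy stand-in for GAN outputs: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
generated = {
    "A": rng.normal(0.0, 0.1, size=(200, 2)),
    "B": rng.normal(5.0, 0.1, size=(200, 2)),
}
x = np.array([0.1, -0.05])        # unknown element, near cluster "A"
probs = class_probabilities(x, generated)
pred = max(probs, key=probs.get)  # classify by the largest estimate
```

In the paper's setting the stand-in arrays would be replaced by samples drawn from the per-category GAN generators, and the distance-based score by whichever of the four proposed comparison methods is in use.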
ISSN: 1991-8755, 2415-1521
DOI: 10.37394/232018.2023.11.12