Robust Deep Neural Network for Learning in Noisy Multi-Label Food Images

Bibliographic Details
Published in: Sensors (Basel, Switzerland), 2024-04, Vol. 24 (7), p. 2034
Main Authors: Morales, Roberto; Martinez-Arroyo, Angela; Aguilar, Eduardo
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Deep networks can facilitate the monitoring of a balanced diet to help prevent various health problems related to eating disorders. Large, diverse, and clean data are essential for training these types of algorithms. Although data can be collected automatically, the data cleaning process is time-consuming. This study aims to give the model the ability to learn even when the data are not completely clean. For this purpose, we extend the Attentive Feature MixUp method to enable learning on noisy multi-label food data. The extension is based on the hypothesis that when a pair of images is mixed during the MixUp phase, the resulting soft labels should differ per ingredient: larger for ingredients mixed with the background of the other image, since these remain better distinguishable than ingredients that overlap with other ingredients. Furthermore, to address data perturbation, the incorporation of the Laplace approximation as a post-hoc method was analyzed. The proposed method was evaluated on two food datasets, yielding notable improvements in Jaccard index and F1 score and validating the stated hypothesis. With the proposed MixUp, our method reduces the memorization of noisy multi-labels, thereby improving performance.
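
The per-ingredient soft-label hypothesis from the abstract can be illustrated with a minimal sketch. This is not the paper's Attentive Feature MixUp, which mixes attended feature maps; it is a simplified pixel-level reading, assuming PyTorch, and the function name multilabel_mixup and the boost parameter are hypothetical illustrations, not taken from the paper.

import torch

def multilabel_mixup(x_a, x_b, y_a, y_b, lam, boost=0.2):
    # Standard pixel-level MixUp of the two images.
    x_mix = lam * x_a + (1.0 - lam) * x_b
    # Plain interpolation of the binary multi-label vectors.
    y_mix = lam * y_a + (1.0 - lam) * y_b
    # An ingredient present in only one image is blended with the other
    # image's background, so it stays easier to recognize; raise its soft
    # target above the plain interpolation value (the hypothesis above).
    only_one = (y_a > 0) ^ (y_b > 0)
    return x_mix, torch.where(only_one, torch.clamp(y_mix + boost, max=1.0), y_mix)

# Example: with lam = 0.6, plain interpolation gives [1.0, 0.4, 0.6, 0.0];
# the two ingredients present in only one image are boosted to [1.0, 0.6, 0.8, 0.0].
x_a, x_b = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
y_a = torch.tensor([1., 0., 1., 0.])  # ingredients in image a
y_b = torch.tensor([1., 1., 0., 0.])  # ingredients in image b
x_mix, y_mix = multilabel_mixup(x_a, x_b, y_a, y_b, lam=0.6)

In this sketch the boost is a fixed additive term; the paper derives its soft labels from attention weights, so the constant here only stands in for the idea that background-mixed ingredients deserve larger targets.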
ISSN: 1424-8220
DOI: 10.3390/s24072034