Computer vision model for food identification in meals from the segmentation obtained by a set of fully convolutional networks

Bibliographic details
Published in: Journal of Ambient Intelligence and Humanized Computing, 2023-12, Vol. 14 (12), p. 16879-16890
Main authors: Carvalho, Marcos A., Pimenta, Tales C., Silvério, Alessandra C. P., Carvalho, Jaqueline C. S.
Format: Article
Language: English
Online access: Full text
Description
Abstract: The strategy of counting the carbohydrates in consumed foods is recommended by scientific societies as a way to improve the quality of life of diabetes patients. Monitoring food intake can be facilitated by a mobile application that automatically recognizes the foods in a meal. Automatic recognition of food images is a challenging task for computer vision because of the visual similarity between foods. The challenge grows when the goal is to classify foods from a specific region using a dataset that contains only foods from that region and is therefore small compared to public datasets from other countries. For this task, this work presents a model that uses a set of Fully Convolutional Networks (FCNs) to generate segmentations of the foods in a meal. These segmentations are then processed by an algorithm based on digital image processing techniques to identify the foods. The model has low training costs because it is scalable: it can be trained to recognize a new food without retraining the entire model. In tests with foods consumed in Brazil, the model achieved an accuracy of 98% and a recall of 88%.
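
As a rough illustration of the scalable per-food design described in the abstract (not the authors' actual implementation), the sketch below assumes one binary-segmentation network per food class and a simple area-based rule for deciding which foods are present; the FOOD_FCNS registry, the dummy segmenters, and the min_area_ratio parameter are all hypothetical placeholders.

    import numpy as np

    # Hypothetical stand-ins for per-food binary segmentation FCNs.
    # In the described model, each food class has its own trained network;
    # here each "network" is a callable returning a boolean mask for an image.
    def make_dummy_fcn(threshold):
        def fcn(image):
            # Pretend segmentation: mark pixels whose mean intensity exceeds a threshold.
            return image.mean(axis=-1) > threshold
        return fcn

    FOOD_FCNS = {
        "rice": make_dummy_fcn(0.7),
        "beans": make_dummy_fcn(0.3),
        "beef": make_dummy_fcn(0.5),
    }

    def identify_foods(image, min_area_ratio=0.02):
        """Run every per-food segmenter and keep foods whose mask covers enough of the image.

        Adding a new food only requires registering one more segmenter,
        mirroring the 'train a new food without retraining the whole model' idea.
        """
        total_pixels = image.shape[0] * image.shape[1]
        detected = {}
        for food, fcn in FOOD_FCNS.items():
            mask = fcn(image)
            area_ratio = mask.sum() / total_pixels
            if area_ratio >= min_area_ratio:
                detected[food] = area_ratio
        return detected

    if __name__ == "__main__":
        meal = np.random.rand(256, 256, 3)  # placeholder for a meal photograph
        print(identify_foods(meal))

In this sketch the per-food masks stay independent, so extending the menu does not touch the already-trained segmenters; only the post-processing step that combines the masks needs to consider the new class.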
ISSN: 1868-5137; 1868-5145
DOI: 10.1007/s12652-023-04703-9