Orange Recognition on Tree Using Image Processing Method Based on Lighting Density Pattern

Published in: Māshīnʹhā-yi kishāvarzī 2015-03, Vol. 5 (1), p. 92-100
Authors: H. R. Ahmadi, J. Amiri Parian
Format: Article
Language: English
Abstract: Within the last few years, there has been a growing trend toward robotic harvesting of oranges and other citrus fruits. The first step in robotic harvesting is accurate recognition and positioning of the fruit. Detection through image processing with color cameras and a computer is currently the most common method. A harvesting robot operates under natural conditions, so detection must work across varying lighting conditions and environments. This study attempted to develop a suitable algorithm for recognizing orange fruits on the tree. To evaluate the proposed algorithm, 500 images were taken under different conditions of canopy, lighting, and distance to the tree. The algorithm included sub-routines for optimization, segmentation, size filtering, separation of fruits based on the lighting density method, and coordinate determination. An MLP neural network (with 3 hidden layers) was used for segmentation and achieved a correct-detection accuracy of 88.2%. Since a high percentage of the oranges in the images appear in clusters, any algorithm that aims to detect oranges on trees must first separate these clustered fruits. A new method based on light and shade density was applied and evaluated in this research. Finally, the accuracies for differentiation and recognition were 89.5% and 88.2%, respectively.
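The pipeline the abstract describes (segment fruit pixels, size-filter the blobs, split clustered fruits via lighting density, report coordinates) can be illustrated with a minimal Python/OpenCV sketch. To be clear about what is assumed: the paper's MLP segmentation is replaced here by a simple HSV color threshold, its lighting-density separation is approximated by local brightness maxima (one highlight per fruit under natural light), and every threshold, the neighborhood size, and the filename orange_tree.jpg are illustrative placeholders, not values from the paper.

```python
# Illustrative sketch of the detection pipeline from the abstract.
# Assumptions (not from the paper): HSV threshold stands in for the MLP
# segmenter; local brightness maxima stand in for the lighting-density
# cluster separation; all numeric parameters are placeholders.
import cv2
import numpy as np

def segment_oranges(bgr, min_area=400):
    """Return a binary mask of orange-colored pixels, size-filtered."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Rough hue band for orange fruit (assumed values).
    mask = cv2.inRange(hsv, (5, 120, 80), (25, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Size filter: drop connected components smaller than min_area pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            keep[labels == i] = 255
    return keep

def split_clusters(bgr, mask, min_dist=25):
    """Approximate the lighting-density idea: each fruit tends to carry one
    bright highlight, so local brightness maxima inside the fruit mask are
    taken as fruit centers (an assumed reading of the method)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (15, 15), 0)
    gray[mask == 0] = 0
    # A pixel is a center candidate if it equals the maximum brightness
    # within a (2*min_dist+1)-pixel neighborhood.
    dilated = cv2.dilate(gray, np.ones((2 * min_dist + 1,) * 2, np.uint8))
    ys, xs = np.where((gray == dilated) & (gray > 0))
    return list(zip(xs.tolist(), ys.tolist()))  # (x, y) fruit coordinates

if __name__ == "__main__":
    img = cv2.imread("orange_tree.jpg")  # placeholder input image
    if img is None:
        raise SystemExit("orange_tree.jpg not found")
    fruit_mask = segment_oranges(img)
    centres = split_clusters(img, fruit_mask)
    print(f"Detected {len(centres)} fruit center(s): {centres}")
```

In the paper itself the segmentation step classifies pixels with a trained 3-hidden-layer MLP, which the fixed color threshold above only crudely imitates; the sketch is meant to show the shape of the pipeline, not to reproduce the reported 88.2% accuracy.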
ISSN: 2228-6829, 2423-3943
DOI: 10.22067/jam.v5i1.23852