Using Learning from Demonstration (LfD) to perform the complete apple harvesting task



Bibliographic Details
Published in: Computers and Electronics in Agriculture, 2024-09, Vol. 224, Article 109195
Authors: van de Ven, Robert; Leylavi Shoushtari, Ali; Nieuwenhuizen, Ard; Kootstra, Gert; van Henten, Eldert J.
Format: Article
Language: English
Online access: Full text
Abstract
Learning from Demonstration (LfD) can be used to make it easier for robots to learn tasks such as apple harvesting. The required harvesting motion can be split into four steps: 1. Approaching, 2. Grasping, 3. Detaching, 4. Placing. So far, only parts of the apple harvesting task have been learned using LfD. In our work, we learned an abstracted version of the complete apple harvesting task using a cube. We used two methods and tested their sensitivity to model parameters and to the number of demonstrations: (1) a Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR) and (2) a Hidden Markov Model (HMM) with a Linear Quadratic Regulator (LQR). Both methods use Task Parameterization (TP), which allows a single model to combine the multiple phases of the apple harvesting task; combining these phases is important for effective LfD. We analysed the performance of these algorithms on four metrics: the grasp pose, the detachment motion, the place pose, and the success rate on the real robot. A high grasp pose accuracy was achieved, with a position error of 0.004 and an orientation error of 0.004 for the GMM with GMR trained on 100 demonstrations with 9 Gaussian components. The detachment motion was performed, but with a reduced range; the best-performing model here was the GMM with GMR trained on 60 demonstrations with 5 Gaussian components. A high place pose accuracy was also achieved, with a position error of 0.003 and an orientation error of 0.003 for the GMM with GMR trained on 100 demonstrations with 15 Gaussian components. The task was executed successfully by a real robot, with a success rate of 67% for the HMM with LQR trained on 40 demonstrations with 5 states and a scaling factor of 1.0. No clear relation was found for the number of Gaussian components, nor for the number of states in the HMM with LQR. For the scaling factor a clear relation was found: for this task, a value in the range of -1.0 to 1.0 should be used. For the number of demonstrations, there was an optimum at 40 demonstrations. Further improvements are needed to deal with challenges such as apple shift and detecting detachment of the fruit.
Highlights:
•Learning from Demonstration was used to learn the apple harvesting task.
•Methods were trained on at most 100 demonstrations.
•High accuracy at grasp pose and place pose.
•Effect of training with fewer demonstrations investigated.
•Sensitivity to model parameters investigated.
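The GMM-with-GMR pipeline named in the abstract can be illustrated with a minimal sketch: a joint GMM is fitted over time-stamped demonstration samples and then conditioned on time to regress a reference trajectory. This is not the authors' implementation; the 9-component setting mirrors one configuration reported above, and the (t, x, y) demonstration layout is an assumption for illustration only.

```python
# Minimal GMM + GMR sketch (assumed data layout, not the paper's code).
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gmm(demos, n_components=9, seed=0):
    """Stack all demonstration samples and fit a joint GMM over (t, pose)."""
    data = np.vstack(demos)                      # shape (N, 1 + output_dim)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(data)
    return gmm

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: condition the joint GMM on time t."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    dim_out = means.shape[1] - 1
    traj = np.zeros((len(t_query), dim_out))
    for i, t in enumerate(t_query):
        # Responsibility of each component for this time step.
        h = np.array([w * norm.pdf(t, m[0], np.sqrt(c[0, 0]))
                      for w, m, c in zip(weights, means, covs)])
        h /= h.sum()
        # Conditional mean of the output given t, per component.
        mu_cond = np.array([m[1:] + c[1:, 0] / c[0, 0] * (t - m[0])
                            for m, c in zip(means, covs)])
        traj[i] = h @ mu_cond                    # weighted combination
    return traj

# Usage (hypothetical): demos is a list of (T_i, 3) arrays with columns [t, x, y].
# t_query = np.linspace(0, 1, 200); reference = gmr(fit_gmm(demos), t_query)
```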
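The Task Parameterization (TP) step mentioned in the abstract can also be sketched in rough form: Gaussians learned relative to each task frame (for example an apple frame and a bin frame; these frame names are hypothetical) are mapped into the robot frame with the frame transforms and fused by a product of Gaussians. In a TP model this fusion is applied per component and per reproduction situation, which is roughly what lets one model cover approaching, grasping, detaching, and placing with different frames dominating in different phases.

```python
# Sketch of fusing frame-local Gaussians into the robot frame (illustrative only).
import numpy as np

def fuse_frames(frame_gaussians, frame_transforms):
    """frame_gaussians: list of (mu, Sigma) expressed in each local task frame.
       frame_transforms: list of (A, b) mapping each frame into the robot frame."""
    precisions, weighted_means = [], []
    for (mu, Sigma), (A, b) in zip(frame_gaussians, frame_transforms):
        mu_w = A @ mu + b                 # mean transformed into the robot frame
        Sigma_w = A @ Sigma @ A.T         # covariance transformed into the robot frame
        P = np.linalg.inv(Sigma_w)
        precisions.append(P)
        weighted_means.append(P @ mu_w)
    # Product of Gaussians: precisions add, means are precision-weighted.
    Sigma_fused = np.linalg.inv(sum(precisions))
    mu_fused = Sigma_fused @ sum(weighted_means)
    return mu_fused, Sigma_fused
```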
ISSN: 0168-1699
eISSN: 1872-7107
DOI: 10.1016/j.compag.2024.109195