Optimized vision-based robot motion planning from multiple demonstrations



Bibliographic Details
Published in: Autonomous Robots, 2018-08, Vol. 42 (6), pp. 1117–1132
Main authors: Shen, Tiantian; Radmard, Sina; Chan, Ambrose; Croft, Elizabeth A.; Chesi, Graziano
Format: Article
Language: English
Online Access: Full text
Description
Summary: This paper combines workspace models with optimization techniques to simultaneously address whole-arm collision avoidance, joint limits, and camera field of view (FOV) limits for vision-based motion planning of a robot manipulator. A small number of user demonstrations are used to generate a feasible domain over which the whole robot arm can servo without violating joint limits or colliding with obstacles. Our algorithm utilizes these demonstrations to generate new feasible trajectories that keep the target in the camera’s FOV and achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. To fulfill these requirements, a set of control points are selected within the feasible domain. Camera trajectories that traverse these control points are modeled and optimized using either quintic splines (for fast computation) or general polynomials (for better constraint satisfaction). Experiments with a seven-degree-of-freedom articulated arm validate the proposed scheme.
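The summary names quintic splines as the fast option for modeling camera trajectories through control points. As a minimal sketch of that standard technique, and not a reproduction of the paper's actual formulation, the following Python snippet solves for the coefficients of a single quintic segment from boundary conditions on position, velocity, and acceleration; the function name quintic_segment and the numeric control-point values are hypothetical.

import numpy as np

def quintic_segment(p0, v0, acc0, pT, vT, accT, T):
    # Solve for coefficients a0..a5 of p(t) = sum(a_k * t**k) so that
    # position, velocity, and acceleration match the boundary
    # conditions at t = 0 and t = T.
    A = np.array([
        [1.0, 0.0, 0.0,  0.0,     0.0,      0.0],       # p(0)   = p0
        [0.0, 1.0, 0.0,  0.0,     0.0,      0.0],       # p'(0)  = v0
        [0.0, 0.0, 2.0,  0.0,     0.0,      0.0],       # p''(0) = acc0
        [1.0, T,   T**2, T**3,    T**4,     T**5],      # p(T)   = pT
        [0.0, 1.0, 2*T,  3*T**2,  4*T**3,   5*T**4],    # p'(T)  = vT
        [0.0, 0.0, 2.0,  6*T,     12*T**2,  20*T**3],   # p''(T) = accT
    ])
    b = np.array([p0, v0, acc0, pT, vT, accT])
    return np.linalg.solve(A, b)

# Hypothetical example: one camera coordinate moving from 0.2 m to
# 0.5 m in 2 s, starting and ending at rest.
coeffs = quintic_segment(0.2, 0.0, 0.0, 0.5, 0.0, 0.0, T=2.0)
t = np.linspace(0.0, 2.0, 50)
positions = sum(c * t**k for k, c in enumerate(coeffs))

Chaining such segments across the selected control points, with matching derivatives at the junctions, yields a smooth spline trajectory; per the summary, the general-polynomial alternative trades this closed-form speed for better constraint satisfaction.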
ISSN: 0929-5593, 1573-7527
DOI: 10.1007/s10514-017-9667-4