Extension to End-effector Position and Orientation Control of a Learning-based Neurocontroller for a Humanoid Arm
Saved in:

Main authors: , , , ,
Format: Conference paper
Language: English
Subjects:
Online access: Order full text
Abstract: This paper presents a self-organizing neural network model for visuo-motor coordination of a redundant humanoid robot arm in reaching tasks. The proposed approach is based on a biologically inspired model that replicates some characteristics of human control: learning occurs through an action-perception cycle and does not require explicit knowledge of the manipulator's geometry. The learned transformation is a mapping from spatial movement direction to joint rotation. During learning, the system creates relations between the motor data associated with endogenous movements performed by the robotic arm and the sensory consequences of those motor actions, i.e. the final position and orientation of the end effector. The learned relations are stored in the neural map structure and are then used, after learning, to generate motor commands aimed at reaching a given point in 3D space. The work extends (E. Guglielmelli, et al.) by adding end-effector orientation control. Experimental trials confirmed the system's capability to control the end-effector position and orientation, and to manage the redundancy of the robotic manipulator when reaching the 3D target point even under additional constraints, such as one or more clamped joints, without additional learning phases.
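The abstract's core idea, learning a map from spatial movement direction to joint rotation through an action-perception cycle rather than from an explicit geometric model, can be illustrated with a minimal sketch. This is not the authors' neural-map model: it replaces the self-organizing map with a finite-difference estimate of the local motion map obtained from small exploratory joint movements, and uses a planar 3-joint arm with assumed link lengths. All names (`fk`, `estimate_map`, `reach`) are illustrative.

```python
# Hedged sketch (not the paper's method): learn a local spatial-direction ->
# joint-rotation map for a redundant planar 3-joint arm by small exploratory
# ("babbling") movements, then use it in closed loop to reach a 2D target.
import numpy as np

L = np.array([1.0, 0.8, 0.6])  # link lengths (assumed, for illustration only)

def fk(q):
    """End-effector position of a planar arm with 3 revolute joints."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def estimate_map(q, eps=1e-4):
    """Action-perception cycle: perturb each joint slightly, observe the
    spatial effect, and assemble a local direction-to-rotation map as a
    damped pseudo-inverse of the estimated Jacobian."""
    J = np.zeros((2, 3))
    for i in range(3):
        dq = np.zeros(3)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q)) / eps
    lam = 1e-3  # damping copes with redundancy (3 joints, 2D task space)
    return J.T @ np.linalg.inv(J @ J.T + lam * np.eye(2))

def reach(target, q0, steps=200, gain=0.5):
    """Closed-loop reaching: map the remaining spatial error to a joint
    rotation at each step, without any explicit kinematic inversion."""
    q = q0.copy()
    for _ in range(steps):
        err = target - fk(q)
        if np.linalg.norm(err) < 1e-3:
            break
        q += gain * estimate_map(q) @ err
    return q

target = np.array([1.2, 0.9])
q_final = reach(target, np.full(3, 0.1))
print(np.linalg.norm(fk(q_final) - target))  # small residual error
```

Clamping a joint, as in the paper's redundancy experiments, would correspond here to zeroing that joint's row of the estimated map so the spatial error is redistributed over the remaining joints.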
ISSN: 2153-0858, 2153-0866
DOI: 10.1109/IROS.2006.281904