A Reinforcement Learning Model of Reaching Integrating Kinematic and Dynamic Control in a Simulated Arm Robot

Models proposed in the motor-control literature have polarised around two classes of controllers that differ in the variables they control: Force-Control Models (FCMs), based on dynamic control, and Equilibrium-Point Models (EPMs), based on kinematic control. This paper proposes a bio-inspired model that aims to exploit the strengths of both classes. The model is tested with a 3D physical simulator of a 2-DOF arm robot engaged in a reaching task that can only be solved by producing curved trajectories. The model is based on an actor-critic reinforcement-learning algorithm that uses neural maps to represent both percepts and actions, the latter encoded as desired joint-angle equilibrium points (EPs), together with a noise generator suitable for fine-tuning the exploration/exploitation ratio. The tests show that the model exploits the simplicity and learning speed of EPMs as well as the flexibility of FCMs in generating curved trajectories. Overall, the model represents a first step towards models that combine the strengths of EPMs and FCMs, and it has the potential to serve as a new tool for investigating phenomena related to the organisation and learning of motor behaviour in organisms.
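The abstract summarises the model only at a high level; the sketch below illustrates, in schematic form, how an actor-critic learner can command joint-angle equilibrium points with tunable exploration noise, which is the combination the abstract describes. It is a minimal toy illustration, not the paper's implementation: the radial-basis encoding, the arm_step() relaxation dynamics, the TARGET configuration, and all parameter values are hypothetical stand-ins for the paper's neural maps, 3D physics simulator, and tuned constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- All names, sizes, and constants below are hypothetical stand-ins ---
N_FEATURES = 50   # units in the map encoding the percept (joint angles)
N_JOINTS = 2      # 2 DOF, as in the paper's simulated arm

# Fixed random centres for a radial-basis "neural map" over joint space
centres = rng.uniform(-np.pi, np.pi, size=(N_FEATURES, N_JOINTS))

def encode(angles):
    """Population encoding of the arm's joint angles."""
    d2 = ((angles - centres) ** 2).sum(axis=1)
    phi = np.exp(-d2 / 0.5)
    return phi / phi.sum()

# Linear critic V(s) and actor mu(s); the actor outputs desired
# joint-angle equilibrium points (EPs) rather than torques
critic_w = np.zeros(N_FEATURES)
actor_w = np.zeros((N_JOINTS, N_FEATURES))

ALPHA_V, ALPHA_A = 0.5, 0.1    # learning rates (hypothetical)
GAMMA = 0.95                   # discount factor
SIGMA = 0.1                    # noise scale: tuning it sets the
                               # exploration/exploitation ratio

TARGET = np.array([0.8, -0.4])  # goal joint configuration (toy stand-in)

def arm_step(angles, ep):
    """Toy stand-in for the EP controller: the arm relaxes towards the EP."""
    new_angles = angles + 0.5 * (ep - angles)
    reward = -np.linalg.norm(new_angles - TARGET)
    return new_angles, reward

for episode in range(200):
    angles = rng.uniform(-np.pi, np.pi, N_JOINTS)
    for t in range(30):
        phi = encode(angles)
        ep = actor_w @ phi                         # deterministic EP command
        noise = rng.normal(0.0, SIGMA, N_JOINTS)   # exploratory perturbation
        angles_next, reward = arm_step(angles, ep + noise)
        # TD(0) error drives both the critic and the actor updates
        delta = (reward + GAMMA * critic_w @ encode(angles_next)
                 - critic_w @ phi)
        critic_w += ALPHA_V * delta * phi
        actor_w += ALPHA_A * delta * np.outer(noise, phi)
        angles = angles_next
```

The actor update correlates the TD error with the exploration noise: perturbations of the commanded EPs that improve the critic's evaluation shift the policy in their direction, which is one standard way a noise generator can drive exploration in an actor-critic scheme.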

Publication Type: 
Conference proceedings contribution
Author or Creator: 
Caligiore D.
Guglielmelli E.
Borghi A.M.
Parisi D.
Baldassarre G.
Publisher: 
IEEE, New York, USA
Source: 
IEEE International Conference on Development and Learning (ICDL2010), Ann Arbor, MI, USA, 18-21/08/2010
Date: 
2010
Resource Identifier: 
http://www.cnr.it/prodotto/i/140240
Language: 
English