Integrating Reinforcement Learning, Equilibrium Points, and Minimum Variance to Understand the Development of Reaching: A Computational Model

Despite the extensive literature on reaching behavior, a clear account of the motor control processes underlying its development in infants is still lacking. This article addresses this gap by proposing a computational model based on three key hypotheses: (a) trial-and-error learning processes drive the progressive development of reaching; (b) movement control based on equilibrium points allows the model to quickly find an initial approximate solution to the problem of making contact with target objects; (c) the demand for end-movement precision in the presence of muscular noise drives the progressive refinement of reaching behavior. Tests of the model, based on a simulated dynamical arm with two degrees of freedom, show that it reproduces a large number of empirical findings, most derived from longitudinal studies with children: the developmental trajectory of several dynamical and kinematic variables of reaching movements, the time evolution of the submovements composing reaching, the progressive development of a bell-shaped speed profile, and the evolution of the management of redundant degrees of freedom. The model also produces testable predictions on several of these phenomena. Most of these empirical data have never been investigated by previous computational models and, more importantly, have never been accounted for by a single model. In this respect, analysis of the model's functioning reveals that all these results are ultimately explained, sometimes in unexpected ways, by the same developmental trajectory emerging from the interplay of the three hypotheses: The model first quickly learns to perform coarse movements that ensure contact of the hand with the target (an achievement with great adaptive value) and then slowly refines the detailed control of the dynamical aspects of movement to increase accuracy.

Publication type: 
Article
Author or Creator: 
Caligiore, Daniele
Parisi, Domenico
Baldassarre, Gianluca
Publisher: 
American Psychological Association, Washington, DC, United States of America
Source: 
Psychological Review 121 (2014): 389–421. doi:10.1037/a0037016
Date: 
2014
Resource Identifier: 
http://www.cnr.it/prodotto/i/312119
https://dx.doi.org/10.1037/a0037016
Language: 
English