Evolving internal reinforcers for an intrinsically motivated reinforcement-learning robot

Intrinsically Motivated Reinforcement Learning (IMRL) has been proposed as a framework in which agents exploit "internal reinforcement" to acquire general-purpose building-block behaviors ("skills") that can later be combined to solve several specific tasks. The architectures proposed so far within this framework are limited in two ways: (1) they use hardwired "salient events" to form and train skills, which limits the agents' autonomy; (2) they are applicable only to problems with abstract states and actions, such as grid-world problems. This paper proposes solutions to these problems in the form of a hierarchical reinforcement-learning architecture that: (1) exploits the ideas and techniques of Evolutionary Robotics to allow the system to discover "salient events" autonomously; (2) uses neural networks to allow the system to cope with continuous states and noisy environments. The paper also begins to explore a new way of producing intrinsic motivations based on the learning progress of skills. The viability of the proposed approach is demonstrated in a simulated robotic scenario.
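
The paper's mechanism is only summarized above, but the core idea of an intrinsic reinforcer driven by learning progress can be illustrated with a minimal sketch: a skill receives intrinsic reward in proportion to how quickly its prediction (TD) error has been shrinking. The class and parameter names below (LearningProgressReward, window, etc.) are illustrative assumptions, not the authors' implementation.

from collections import deque

import numpy as np

class LearningProgressReward:
    """A sketch of a learning-progress reinforcer, assuming progress is
    measured as the decrease of a skill's average absolute TD error over
    a sliding window (an assumption, not the paper's exact formulation)."""

    def __init__(self, n_skills, window=50):
        # One bounded error history per skill.
        self.errors = [deque(maxlen=window) for _ in range(n_skills)]

    def update(self, skill, td_error):
        # Record the magnitude of the skill's latest TD error.
        self.errors[skill].append(abs(td_error))

    def intrinsic_reward(self, skill):
        # Compare the older half of the window with the recent half:
        # a positive difference means the skill's error is shrinking,
        # i.e., the skill is making learning progress.
        errs = list(self.errors[skill])
        if len(errs) < 2:
            return 0.0
        half = len(errs) // 2
        old, recent = np.mean(errs[:half]), np.mean(errs[half:])
        return max(0.0, float(old - recent))

# Usage: after each value-function update of skill 2, feed its TD error
# in and read off the intrinsic reward for the skill-selection level.
lp = LearningProgressReward(n_skills=4)
lp.update(skill=2, td_error=0.8)
lp.update(skill=2, td_error=0.3)
print(lp.intrinsic_reward(2))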

Publication type: 
Contribution in a volume (book chapter)
Author or Creator: 
Schembri M.
Mirolli M.
Baldassarre G.
Publisher: 
Imperial College, London, GBR
Source: 
IEEE 6th International Conference on Development and Learning (ICDL2007), edited by Demiris Y.; Scassellati B.; Mareschal D., pp. 282–287. London: Imperial College, 2007
Date: 
2007
Resource Identifier: 
http://www.cnr.it/prodotto/i/139992
https://dx.doi.org/10.1109/DEVLRN.2007.4354052
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4354052
urn:isbn:978-1-4244-1116-0
Language: 
English