Autonomous Open-Ended Learning of Interdependent Tasks

Autonomy is fundamental for artificial agents acting in complex real-world scenarios. The acquisition of many different skills is pivotal to foster versatile autonomous behaviour and is thus a main objective of robotics and machine learning. Intrinsic motivations have proven to generate a task-agnostic signal that can drive the autonomous acquisition of multiple policies in settings requiring the learning of multiple tasks. However, in real-world scenarios tasks may be interdependent, so that some of them may constitute the precondition for learning others. Although different strategies have been used to tackle the acquisition of interdependent/hierarchical tasks, fully autonomous open-ended learning in these scenarios is still an open question. Building on previous research within the framework of intrinsically motivated open-ended learning, we propose an architecture for robot control that tackles this problem from the point of view of decision making, i.e. treating the selection of tasks as a Markov Decision Process in which the system selects the policies to be trained so as to maximise its competence over all the tasks. The system is then tested with a humanoid robot solving multiple interdependent reaching tasks.
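
As a rough illustration of the general idea of competence-based intrinsic motivation for task selection (not the architecture described in the paper, whose details are in the linked preprint), the sketch below assumes a hypothetical CompetenceBasedTaskSelector class: the agent tracks its recent success rate on each task and preferentially selects the task whose competence is currently improving the fastest, which naturally shifts practice toward tasks whose preconditions have already been mastered.

```python
import math
import random


class CompetenceBasedTaskSelector:
    """Minimal sketch: select the task whose competence (recent success rate)
    is improving fastest, using competence improvement as an intrinsic reward.
    All names and parameters here are illustrative assumptions."""

    def __init__(self, n_tasks, temperature=0.1, window=20):
        self.n_tasks = n_tasks
        self.temperature = temperature  # softmax temperature for task choice
        self.window = window            # number of trials used to estimate competence
        # Per-task history of outcomes (1.0 = success, 0.0 = failure).
        self.outcomes = [[] for _ in range(n_tasks)]

    def _competence_improvement(self, task):
        """Intrinsic signal: change in success rate between the last `window`
        trials and the `window` trials before them."""
        history = self.outcomes[task]
        if len(history) < 2 * self.window:
            return 1.0  # optimistic value so under-explored tasks get tried
        recent = history[-self.window:]
        older = history[-2 * self.window:-self.window]
        return (sum(recent) - sum(older)) / self.window

    def select_task(self):
        """Softmax choice over per-task competence-improvement signals."""
        signals = [self._competence_improvement(t) for t in range(self.n_tasks)]
        exps = [math.exp(s / self.temperature) for s in signals]
        total = sum(exps)
        threshold, cumulative = random.random() * total, 0.0
        for task, weight in enumerate(exps):
            cumulative += weight
            if threshold <= cumulative:
                return task
        return self.n_tasks - 1

    def record_outcome(self, task, success):
        """Store the outcome of one training trial on the selected task."""
        self.outcomes[task].append(1.0 if success else 0.0)
```

In use, the agent would repeatedly call select_task(), train the corresponding policy for one trial, and feed the result back via record_outcome(); this is only a conceptual example of the competence-maximisation objective mentioned in the abstract, not the authors' method.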

Publication type: 
Contribution to conference proceedings
Author or Creator: 
Vieri Giuliano Santucci
Emilio Cartoni
Bruno Castro da Silva
Gianluca Baldassarre
Source: 
The 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making, Montreal, Quebec, Canada, 07-10/07/2019
Date: 
2019
Resource Identifier: 
http://www.cnr.it/prodotto/i/403240
https://arxiv.org/pdf/1905.02690.pdf
Language: 
Eng
ISTC Author: 
Emilio Cartoni