Can a robot predict long-term consequences?

Forecasting the future, avoiding dangers and anticipating opportunities: these skills are still very limited in current robotic systems. But at ISTC the Goal-Oriented Agents Laboratory (GOAL) is walking this path. The challenge is to build robots able to predict the long-term consequences of their actions. This research line aims to replace reactive robots, which simply respond to environmental stimuli, with goal-directed ones.

"What will happen if...?" Everyone makes predictions about the future by trying to imagine upcoming events. These anticipatory mechanisms are a crucial aspect of human cognition. Likewise, many animals do not passively attend stimuli, but predict those that will probably arrive.

Cognitive scientists consider such anticipatory capabilities a precondition for autonomous mental life, since they allow cognitive agents to build mental representations and pursue specific goals.

At ISTC the Goal-Oriented Agents Laboratory (GOAL) carries out pioneering studies on anticipatory behaviour. The idea is to explore these cognitive mechanisms from a computational point of view. The group aims to contribute to a general understanding of anticipatory behaviour by modelling it in artificial cognitive systems. In practice, this means developing robots that can predict the consequences of their own actions.

Current robotic systems can select adequate responses to their stimuli, but they have limited abilities to predict the long-term effects of their actions. This makes them poorly adaptive in open-ended scenarios, where task achievement can depend on factors that are not perceptually available.

The Goal-Oriented Agents Laboratory is starting to devise proactive robots, which can go beyond the here-and-now of current perception and behave as if guided by internally generated goals. This will make the empirical study of advanced decision-making possible: goal-directed robots will help to better understand some features of human reasoning. The cognitive mechanisms underlying questions like "What will happen if...?" could thus be understood.
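The contrast between the two kinds of agent can be made concrete with a toy sketch (this is an illustrative example, not the laboratory's actual code): a reactive policy responds only to the current stimulus, while a goal-directed agent uses a forward model to simulate the long-term consequences of candidate action sequences before committing to one. All names, the one-dimensional world, and the rewards are assumptions made for illustration.

```python
from itertools import product

# Toy 1D world: the agent moves along a line; position 3 is a "pit"
# (large penalty), position 5 is the goal. The +2 action is a jump
# that lets a planning agent clear the pit.
ACTIONS = (-1, 0, +1, +2)
PIT, GOAL = 3, 5

def forward_model(state, action):
    """Predict the next state for a given action (exact, in this toy)."""
    return state + action

def reward(state):
    if state == GOAL:
        return 10
    if state == PIT:
        return -100
    return -1                      # small cost per step

def reactive_policy(state):
    """Reactive: greedily step toward the goal, blind to the pit ahead."""
    return +1 if state < GOAL else (-1 if state > GOAL else 0)

def goal_directed_policy(state, horizon=3):
    """Goal-directed: simulate every action sequence up to `horizon`
    steps with the forward model, score its predicted consequences,
    and return the first action of the best sequence."""
    best_score, best_first = float("-inf"), 0
    for seq in product(ACTIONS, repeat=horizon):
        s, score = state, 0
        for a in seq:
            s = forward_model(s, a)
            score += reward(s)
            if s in (PIT, GOAL):   # terminal states end the rollout
                break
        if score > best_score:
            best_score, best_first = score, seq[0]
    return best_first

state = 2                          # one step short of the pit
print(reactive_policy(state))      # +1: walks straight into the pit
print(goal_directed_policy(state)) # +2: jumps the pit, then reaches the goal
```

From position 2, the reactive agent steps right into the pit, while the goal-directed agent's simulated rollouts reveal the danger and it jumps over it instead: exactly the "What will happen if...?" reasoning described above, in miniature.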

Contact: Giovanni Pezzulo

ISTC Group: Goal-Oriented Agents Laboratory

Relevant Publications

Gigliotta, O.; Pezzulo, G. & Nolfi, S. (2011) Evolution of a Predictive Internal Model in an Embodied and Situated Agent. Theory in Biosciences.

Pezzulo, G.; Butz, M. V.; Castelfranchi, C. & Falcone, R., eds. (2008), The Challenge of Anticipation: A Unifying Framework for the Analysis and Design of Artificial Cognitive Systems. Springer LNAI 5225.