An Event-Based Conversational System for the Nao Robot
Abstract:
Conversational systems play an important role in scenarios without a keyboard, e.g., talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel, in order to interpret and produce synchronized behaviours. Developing such systems involves integrating potentially many components and ensuring complex interaction and synchronization among them. Most work in spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly coupled or tightly coupled architectures). The latter are more promising for building adaptive systems, one of the goals of contemporary research systems. In this paper we present an event-based approach for integrating a conversational HRI system. The approach has been instantiated using the Urbi middleware [6] on a Nao robot, which serves as a testbed for investigating child-robot interaction in the ALIZ-E project. We focus on the implementation of two scenarios: an imitation game of arm movements and a quiz game.
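The abstract describes an event-based integration style in which components (speech, gestures, game logic) run in parallel and communicate through events rather than a fixed pipeline. The paper's actual implementation uses the Urbi middleware; as a rough, hypothetical illustration of the general idea only, a minimal publish/subscribe event bus can be sketched in Python, with a verbal and a non-verbal component reacting in parallel to the same event (all names here are invented for the sketch, not taken from the paper):

```python
import queue
import threading
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: components interact only via events."""

    def __init__(self):
        self._handlers = defaultdict(list)
        self._lock = threading.Lock()

    def subscribe(self, event_type, handler):
        """Register a component callback for a given event type."""
        with self._lock:
            self._handlers[event_type].append(handler)

    def publish(self, event_type, payload=None):
        """Dispatch an event to all subscribers, each in its own thread,
        so verbal and non-verbal components react in parallel."""
        with self._lock:
            handlers = list(self._handlers[event_type])
        threads = [threading.Thread(target=h, args=(payload,)) for h in handlers]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

# Two hypothetical components reacting to one recognized utterance:
results = queue.Queue()

def verbal_component(utterance):
    # e.g., dialogue manager produces a spoken answer
    results.put(("speech", f"answer to '{utterance}'"))

def gesture_component(utterance):
    # e.g., motor controller produces an accompanying gesture
    results.put(("gesture", "nod"))

bus = EventBus()
bus.subscribe("utterance.recognized", verbal_component)
bus.subscribe("utterance.recognized", gesture_component)
bus.publish("utterance.recognized", "what is the capital of Italy?")
```

In contrast to a pipeline, neither component calls the other: both observe the same event, which is what allows the synchronized verbal and non-verbal behaviour the abstract refers to.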
Publication type:
Book chapter
Publisher:
Springer Science+Business Media, New York, USA
Source:
Proceedings of the Paralinguistic Information and its Integration in Spoken Dialogue Systems Workshop, edited by Ramón López-Cózar Delgado and Tetsunori Kobayashi, pp. 125–132. New York: Springer Science+Business Media, 2011
Date:
2011
Resource Identifier:
http://www.cnr.it/prodotto/i/203900
https://dx.doi.org/10.1007/978-1-4614-1335-6_14
http://www.springerlink.com/content/g1v270v17517j27h
urn:isbn:978-1-4614-1334-9
Language:
English