GMM modelling of voice quality for FESTIVAL/MBROLA emotive TTS synthesis

Voice quality is recognized to play an important role in the rendering of emotions in verbal communication. In this paper we explore the effectiveness of a processing framework for voice transformations aimed at the analysis and synthesis of emotive speech. We use a GMM-based model to compute the differences between an MBROLA voice and an angry voice, and we address the modification of the MBROLA voice spectra by means of a set of spectral conversion functions trained on the data. We propose to organize the speech data for training in such a way that the target emotive speech data and the diphone database used for text-to-speech synthesis both come from the same speaker. A copy-synthesis procedure is used to produce synthesized speech utterances in which the pitch patterns, phoneme durations, and principal speaker characteristics are the same as in the target emotive utterances. This results in a better isolation of the voice quality differences due to the emotive arousal. Three different models to represent voice quality differences are applied and compared, all based on a GMM representation of the acoustic space. The performance of these models is discussed, and the experimental results and assessment are presented.
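
As a rough illustration of the spectral conversion step mentioned in the abstract (not the authors' implementation), the sketch below shows a joint-density GMM mapping of the kind commonly used for voice conversion: a full-covariance GMM is fitted on paired source/target spectral frames, and each source frame is converted with a posterior-weighted per-component linear regression. Feature extraction, time alignment between the copy-synthesis and the emotive utterances, and waveform resynthesis are omitted; the use of scikit-learn and SciPy, and all function names, are assumptions made for illustration only.

    # Minimal sketch of joint-density GMM spectral conversion (assumed tooling:
    # NumPy, SciPy, scikit-learn). Source frames would come from the MBROLA
    # copy-synthesis, target frames from the time-aligned emotive utterances.
    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture


    def train_joint_gmm(X_src, Y_tgt, n_components=8):
        """Fit a full-covariance GMM on stacked [source; target] frames."""
        Z = np.hstack([X_src, Y_tgt])              # shape (n_frames, 2 * dim)
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full", random_state=0)
        gmm.fit(Z)
        return gmm


    def convert(gmm, X_src):
        """Map source frames to target frames with the usual MMSE regression:
        y_hat = sum_m P(m|x) * (mu_y_m + S_yx_m S_xx_m^-1 (x - mu_x_m))."""
        dim = X_src.shape[1]
        # Split the joint GMM parameters into source/target blocks.
        mu_x = gmm.means_[:, :dim]
        mu_y = gmm.means_[:, dim:]
        S_xx = gmm.covariances_[:, :dim, :dim]
        S_yx = gmm.covariances_[:, dim:, :dim]

        # Posterior P(m|x) computed from the marginal source-side GMM.
        log_p = np.stack([
            np.log(w) + multivariate_normal.logpdf(X_src, m, c)
            for w, m, c in zip(gmm.weights_, mu_x, S_xx)
        ], axis=1)                                  # (n_frames, n_components)
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
        post = np.exp(log_p)
        post /= post.sum(axis=1, keepdims=True)

        # Posterior-weighted per-component linear regression.
        Y_hat = np.zeros_like(X_src)
        for m in range(gmm.n_components):
            A = S_yx[m] @ np.linalg.inv(S_xx[m])
            Y_hat += post[:, m:m + 1] * (mu_y[m] + (X_src - mu_x[m]) @ A.T)
        return Y_hat

In this style of conversion, the neutral synthetic frames are pushed towards the emotive target frame by frame, which is consistent with the paper's idea of isolating voice quality differences once pitch, duration, and speaker identity have already been matched by copy-synthesis.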

Publication Type: 
Conference proceedings contribution
Author or Creator: 
Mauro Nicolao
Carlo Drioli
Piero Cosi
Publisher: 
ISCA - International Speech Communication Association, Baixas, France
Source: 
Interspeech 2006 -- ICSLP, 9th International Conference on Spoken Language Processing, pp. 1794-1797, Pittsburgh, PA, USA, 17-21 September 2006
Date: 
2006
Resource Identifier: 
http://www.cnr.it/prodotto/i/180055
http://www.isca-speech.org/archive/interspeech_2006/i06_1597.html
urn:isbn:978-1-60423-449-7
Language: 
English