A system for recognizing emotions from
speech analysis can have interesting applications in
human-robot interaction. In this paper, we carry out an
exploratory study on the possibility of using a proposed
methodology to recognize basic emotions (sadness, surprise,
happiness, anger, fear and disgust) based on phonetic
and acoustic properties of emotive speech with the
minimal use of signal processing algorithms. We set up
an experimental test, consisting of choosing three types
of speakers, namely: (i) five adult European speakers, (ii)
five Asian (Middle East) adult speakers and (iii) five adult
American speakers. The speakers had to repeat six sentences
in English (with durations typically between 1 s and
3 s) in order to emphasize rising-falling intonation and
pitch movement. Intensity, peak and range of pitch and
speech rate have been evaluated. The proposed methodology
consists of generating and analyzing graphs of formants,
pitch and intensity, using the open-source Praat
program. From the experimental results, it was possible to
recognize the basic emotions in most cases.
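The abstract does not specify the extraction algorithms, since the authors rely on Praat for the acoustic analysis. As a minimal sketch of one of the measured features, the fragment below estimates pitch (fundamental frequency) from a single voiced frame via autocorrelation; the function name, parameters, and the synthetic 200 Hz test tone are illustrative assumptions, not part of the paper's methodology.

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) of a voiced frame
    by locating the autocorrelation peak within a plausible
    pitch range (fmin..fmax), as a stand-in for Praat's
    more robust pitch tracker."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags only
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)  # shortest candidate period
    lag_max = int(sr / fmin)  # longest candidate period
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / best_lag

# Synthetic 200 Hz tone as a stand-in for a 50 ms voiced speech frame
sr = 16000
t = np.arange(int(0.05 * sr)) / sr
frame = np.sin(2 * np.pi * 200.0 * t)
f0 = estimate_pitch(frame, sr)  # close to 200 Hz
```

Pitch peak and range, as evaluated in the study, would then follow by applying such an estimator frame by frame over an utterance and taking the maximum and max-min spread of the resulting contour.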