Real-time stress detection model and voice analysis: an integrated VR based game for training public speaking skills
Conference Publication

Abstract
This paper describes a work-in-progress Virtual Reality (VR) based serious game, integrated with an algorithmic classification model for detecting stress during public speaking in real time by analysing the speaker’s voice. The designed VR game offers real-time virtual social support/feedback for the training of public speaking skills. We developed a stress detection model that recognises stress as distinct from a normal, confident state, based on 24 actors’ voice expressions from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Three different classifier models are constructed for each actor by extracting and identifying the overall significant voice features. The results show that random forest classification using the features Amplitude Envelope (AE), Root-Mean-Square (RMS) energy, and Mel-Frequency Cepstral Coefficients (MFCCs) provides high accuracy for stress detection, while many more features could still be explored.
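The abstract does not give implementation details, but the pipeline it describes (extract AE, RMS, and MFCC features from speech and feed them to a random forest) can be sketched as follows. This is a minimal illustration, not the authors' code; the librosa-based feature extraction, the frame sizes, the mean/standard-deviation pooling, and the label scheme are all assumptions.

```python
# Sketch of an AE + RMS + MFCC feature extractor with a random forest
# classifier, assuming librosa and scikit-learn. Frame sizes, pooling,
# and labels (0 = confident, 1 = stressed) are illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

FRAME_LENGTH = 1024
HOP_LENGTH = 512

def extract_features(wav_path: str) -> np.ndarray:
    """Return a fixed-length feature vector (AE, RMS, MFCC statistics) for one clip."""
    y, sr = librosa.load(wav_path, sr=None)

    # Amplitude envelope: maximum absolute amplitude per frame.
    frames = librosa.util.frame(y, frame_length=FRAME_LENGTH, hop_length=HOP_LENGTH)
    ae = np.max(np.abs(frames), axis=0)

    # Root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y, frame_length=FRAME_LENGTH, hop_length=HOP_LENGTH)[0]

    # 13 Mel-frequency cepstral coefficients per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=HOP_LENGTH)

    # Pool frame-level features into one clip-level vector.
    return np.concatenate([
        [ae.mean(), ae.std()],
        [rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

def train_stress_classifier(clips):
    """clips: iterable of (wav_path, label) pairs, e.g. from RAVDESS recordings."""
    X = np.vstack([extract_features(path) for path, _ in clips])
    y = np.array([label for _, label in clips])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf
```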
Journal
N/A
Publication Name
3rd IEEE Conference on Games
Volume
N/A
ISBN/ISSN
N/A
Edition
N/A
Issue
N/A
Page Count
4
Location
Copenhagen, Denmark
Publisher
Institute of Electrical and Electronics Engineers
Publisher Url
N/A
Publisher Location
Piscataway, NJ, USA
Publish Date
N/A
Url
N/A
Date
N/A
EISSN
N/A
DOI
N/A