Speech Emotion Recognition from Variable-Length Inputs with Triplet Loss Function
Huang, Jian (2,3); Li, Ya (3); Tao, Jianhua (1,2,3); Lian, Zheng (2,3)
Date: 2018-09
Conference dates: 2018.9.2-2018.9.6
Conference venue: Hyderabad, India
Abstract

Automatic emotion recognition is a crucial element in understanding human behavior and interaction. Prior work on speech emotion recognition has focused on exploring various feature sets and models. In contrast, we propose a triplet framework based on the Long Short-Term Memory (LSTM) neural network for speech emotion recognition. The system learns a mapping from acoustic features to discriminative embedding features, which serve as the basis for classification with an SVM at test time. The proposed model is trained with a triplet loss and a supervised loss simultaneously: the triplet loss shortens intra-class distances and lengthens inter-class distances, while the supervised loss incorporates class label information. To handle variable-length inputs, we explore three different strategies, which also make better use of temporal dynamics. Our experimental results on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database show that the proposed methods improve performance. We demonstrate the promise of the triplet framework for speech emotion recognition and present our analysis.
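The abstract describes a joint objective (a triplet loss plus a supervised classification loss) on top of an LSTM encoder, with the learned embeddings later classified by an SVM. The sketch below illustrates that training setup; it is not the authors' code. The paper does not name a framework, so PyTorch is assumed here, all dimensions and the margin are placeholder values, and sequence packing is shown as one generic way to handle variable-length inputs (the paper itself compares three strategies that are not detailed in this abstract).

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pack_padded_sequence

class TripletLSTM(nn.Module):
    """LSTM encoder mapping a variable-length acoustic feature sequence to a
    fixed-size embedding, plus a softmax head for the supervised loss."""
    def __init__(self, feat_dim=40, hidden_dim=128, embed_dim=64, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.embed = nn.Linear(hidden_dim, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x, lengths):
        # Packing lets the LSTM skip padded frames: one generic way to handle
        # variable-length inputs (the paper compares three strategies).
        packed = pack_padded_sequence(x, lengths, batch_first=True,
                                      enforce_sorted=False)
        _, (h_n, _) = self.lstm(packed)
        emb = F.normalize(self.embed(h_n[-1]), dim=-1)  # unit-norm embedding
        return emb, self.classifier(emb)

model = TripletLSTM()
triplet_loss = nn.TripletMarginLoss(margin=0.2)  # margin is a placeholder
ce_loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(anchor, positive, negative, len_a, len_p, len_n, labels):
    """One joint update: the triplet loss pulls same-emotion embeddings
    together and pushes different-emotion ones apart, while cross-entropy
    on the anchor injects class labels (the abstract's supervised loss)."""
    emb_a, logits_a = model(anchor, len_a)
    emb_p, _ = model(positive, len_p)
    emb_n, _ = model(negative, len_n)
    loss = triplet_loss(emb_a, emb_p, emb_n) + ce_loss(logits_a, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy call with random data: 8 padded utterances of 40-dim frames each.
xs = [torch.randn(8, 100, 40) for _ in range(3)]
lens = [torch.randint(50, 101, (8,)) for _ in range(3)]
labels = torch.randint(0, 4, (8,))
print(train_step(xs[0], xs[1], xs[2], lens[0], lens[1], lens[2], labels))

At test time, the abstract says the embeddings are classified with an SVM; a hypothetical scikit-learn version, continuing from the toy data above (the kernel and parameters are assumptions, not the paper's configuration):

from sklearn.svm import SVC

with torch.no_grad():  # embed the (toy) training utterances
    train_emb = model(xs[0], lens[0])[0].numpy()
svm = SVC(kernel="linear")          # kernel choice is a placeholder
svm.fit(train_emb, labels.numpy())
print(svm.predict(train_emb[:2]))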

Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/39301
Collection: National Laboratory of Pattern Recognition_Intelligent Interaction
Author affiliations:
1. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Recommended citation (GB/T 7714):
Huang, Jian, Li, Ya, Tao, Jianhua, et al. Speech Emotion Recognition from Variable-Length Inputs with Triplet Loss Function[C]. In: . Hyderabad, India, 2018.9.2-2018.9.6.