Multi-Cue Guided Semi-Supervised Learning Toward Target Speaker Separation in Real Environments
Xu, Jiaming1; Cui, Jian2,3; Hao, Yunzhe2,3; Xu, Bo2,3,4
Journal | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING
Year | 2024
Volume | 32
Pages | 151-163
Keywords | Cocktail party problem ; target speaker separation ; multi-cue guided separation ; semi-supervised learning
ISSN | 2329-9290
DOI | 10.1109/TASLP.2023.3323856 |
Corresponding Author | Xu, Bo (xubo@ia.ac.cn)
Abstract | To address the cocktail party problem in real multi-talker environments, this article proposes a multi-cue guided semi-supervised target speaker separation method (MuSS). MuSS integrates three target speaker-related cues: spatial, visual, and voiceprint. Guided by these cues, the target speaker is separated into a predefined output channel, and the interfering sources are separated into the remaining output channels under the optimal permutation. Both synthetic mixtures and real mixtures are used for semi-supervised training. Specifically, for synthetic mixtures, the separated target source and the separated interfering sources are trained to reconstruct the ground-truth references; for real mixtures, the sum of two real mixtures is fed into the separation model, and the separated sources are remixed to reconstruct the two original real mixtures. In addition, to facilitate fine-tuning and evaluation of the estimated sources on real mixtures, we introduce RealMuSS, a real multi-modal speech separation dataset collected in real-world scenarios that comprises more than one hundred hours of multi-talker mixtures with high-quality pseudo references of the target speakers. Experimental results show that the pseudo references improve fine-tuning efficiency and enable the model to be trained and evaluated on real mixtures, and that various cue-driven separation models achieve substantial gains in signal-to-noise ratio and speech recognition accuracy under our semi-supervised learning framework.
Funding Projects | National Key Research and Development Program of China [2021ZD0201500] ; Strategic Priority Research Program of the Chinese Academy of Sciences [XDB32070000]
WOS Keywords | SPEECH RECOGNITION ; EXTRACTION
WOS Research Areas | Acoustics ; Engineering
Language | English
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record No. | WOS:001097062800011
Funding Organizations | National Key Research and Development Program of China ; Strategic Priority Research Program of the Chinese Academy of Sciences
Content Type | Journal Article
Source URL | [http://ir.ia.ac.cn/handle/173211/55152]
Collection | Laboratory of Cognition and Decision-Making for Complex Systems
Author Affiliations | 1. Xiaomi Corp, Beijing 100085, Peoples R China ; 2. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China ; 3. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408, Peoples R China ; 4. Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Shanghai 200031, Peoples R China
Recommended Citation (GB/T 7714) | Xu, Jiaming, Cui, Jian, Hao, Yunzhe, et al. Multi-Cue Guided Semi-Supervised Learning Toward Target Speaker Separation in Real Environments[J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32: 151-163.
APA | Xu, Jiaming, Cui, Jian, Hao, Yunzhe, & Xu, Bo. (2024). Multi-Cue Guided Semi-Supervised Learning Toward Target Speaker Separation in Real Environments. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 32, 151-163.
MLA | Xu, Jiaming, et al. "Multi-Cue Guided Semi-Supervised Learning Toward Target Speaker Separation in Real Environments". IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING 32 (2024): 151-163.