Multi-Domain and Multi-Task Learning for Human Action Recognition
Authors: Xu, Ning (3); Zhang, Yong-Dong (1,2); Su, Yu-Ting (3); Nie, Wei-Zhi (3); Liu, An-An (3)
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date: 2019-02-01
Volume: 28, Issue: 2, Pages: 853-867
Keywords: Domain-invariant learning; multi-task learning; human action recognition
ISSN: 1057-7149
DOI: 10.1109/TIP.2018.2872879
Abstract: Domain-invariant (view-invariant and modality-invariant) feature representation is essential for human action recognition. Moreover, given a discriminative visual representation, it is critical to discover the latent correlations among multiple actions in order to facilitate action modeling. To address these problems, we propose a multi-domain and multi-task learning (MDMTL) method to: 1) extract domain-invariant information for multi-view and multi-modal action representation and 2) explore the relatedness among multiple action categories. Specifically, we present a sparse transfer learning-based method to co-embed multi-domain (multi-view and multi-modality) data into a single common space for discriminative feature learning. Additionally, visual feature learning is incorporated into the multi-task learning framework, with the Frobenius-norm regularization term and the sparse constraint term, for joint task modeling and task relatedness-induced feature learning. To the best of our knowledge, MDMTL is the first supervised framework to jointly realize domain-invariant feature learning and task modeling for multi-domain action recognition. Experiments conducted on the INRIA Xmas Motion Acquisition Sequences data set, the MSR Daily Activity 3D (DailyActivity3D) data set, and the Multi-modal & Multi-view & Interactive data set, which is the most recent and largest multi-view and multi-modal action recognition data set, demonstrate the superiority of MDMTL over the state-of-the-art approaches.
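Illustration (not the paper's exact formulation; the squared loss, weight-matrix layout, and regularization weights below are assumptions): a multi-task objective combining a Frobenius-norm regularizer with a row-sparsity constraint, of the kind the abstract describes, can be sketched as

\min_{W = [W_1, \dots, W_T]} \; \sum_{t=1}^{T} \left\| X_t W_t - Y_t \right\|_F^2 \; + \; \lambda_1 \left\| W \right\|_F^2 \; + \; \lambda_2 \left\| W \right\|_{2,1},
\qquad \left\| W \right\|_{2,1} = \sum_{i} \left\| w^{i} \right\|_2 ,

where X_t and Y_t are the (domain-invariant) features and labels of action category (task) t, the Frobenius term controls model complexity, and the \ell_{2,1} term drives entire feature rows of W toward zero so that related tasks select a shared subset of discriminative features.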
Funding: National Natural Science Foundation of China [61772359]; National Natural Science Foundation of China [61472275]; National Natural Science Foundation of China [61525206]; National Natural Science Foundation of China [61872267]; National Natural Science Foundation of China [61502337]; National Key Research and Development Program of China [2017YFC0820600]; National Defense Science and Technology Fund for Distinguished Young Scholars [2017-JCJQ-ZQ-022]
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number: WOS:000448501800002
Document Type: Journal Article
Source URL: http://119.78.100.204/handle/2XEOYT63/3650
Collection: Institute of Computing Technology, Chinese Academy of Sciences, Journal Articles (English)
Corresponding Authors: Xu, Ning; Nie, Wei-Zhi; Liu, An-An
作者单位1.Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
2.Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China
3.Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
Recommended Citation:
GB/T 7714: Xu, Ning, Zhang, Yong-Dong, Su, Yu-Ting, et al. Multi-Domain and Multi-Task Learning for Human Action Recognition[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28(2): 853-867.
APA: Xu, Ning, Zhang, Yong-Dong, Su, Yu-Ting, Nie, Wei-Zhi, & Liu, An-An. (2019). Multi-Domain and Multi-Task Learning for Human Action Recognition. IEEE TRANSACTIONS ON IMAGE PROCESSING, 28(2), 853-867.
MLA: Xu, Ning, et al. "Multi-Domain and Multi-Task Learning for Human Action Recognition." IEEE TRANSACTIONS ON IMAGE PROCESSING 28.2 (2019): 853-867.