Motion Complementary Network for Efficient Action Recognition
Cheng, Ke (2,3); Zhang, Yifan (2,3); Li, Chenghua (2,3); Cheng, Jian (1,2,3); Lu, Hanqing (2,3)
2021-01
Conference Date: January 2021
Conference Venue: Online
Abstract (English)

Both two-stream ConvNets and 3D ConvNets are widely used in action recognition. However, neither method is efficient for deployment: computing optical flow is very slow, while 3D convolution is computationally expensive. Our key insight is that the motion information from optical flow maps is complementary to the motion information from 3D ConvNets. Instead of simply combining the two methods, we propose two novel techniques that enhance performance at lower computational cost: fixed-motion-accumulation and balanced-motion-policy. With these two techniques, we propose a novel framework called the Efficient Motion Complementary Network (EMC-Net), which enjoys both high efficiency and high performance. We conduct extensive experiments on the Kinetics, UCF101, and Jester datasets. We achieve notably higher performance while consuming 4.7× less computation than I3D, 11.6× less than ECO, and 17.8× less than R(2+1)D. On Kinetics, we achieve 2.6% better performance than the recently proposed TSM, with 1.4× fewer FLOPs and a 10 ms lower latency on a K80 GPU.
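The abstract does not detail how fixed-motion-accumulation is computed. A common low-cost stand-in for optical flow is to accumulate temporal differences between adjacent RGB frames inside the network; the PyTorch sketch below illustrates that general idea only. The module name FixedMotionAccumulation, the window size, and the accumulation scheme are illustrative assumptions, not the authors' published implementation.

# A minimal sketch, assuming fixed-motion-accumulation resembles
# accumulated frame differences (a cheap approximation of motion,
# avoiding per-frame optical flow). All names and shapes here are
# assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn

class FixedMotionAccumulation(nn.Module):
    """Accumulate adjacent-frame differences over a fixed temporal window."""

    def __init__(self, window: int = 4):
        super().__init__()
        self.window = window  # fixed number of differences summed per motion map

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        diffs = clip[:, 1:] - clip[:, :-1]  # adjacent-frame differences
        b, t, c, h, w = diffs.shape
        # Sum consecutive differences in fixed-size windows, producing one
        # motion map per window instead of one per frame.
        t_trim = (t // self.window) * self.window
        diffs = diffs[:, :t_trim].reshape(b, -1, self.window, c, h, w)
        return diffs.sum(dim=2)  # (batch, t // window, c, h, w)

if __name__ == "__main__":
    clip = torch.randn(2, 17, 3, 112, 112)            # 17 RGB frames
    motion = FixedMotionAccumulation(window=4)(clip)
    print(motion.shape)                               # torch.Size([2, 4, 3, 112, 112])

Under this assumption, the saving comes from producing one accumulated motion map per window rather than a dense flow field per frame pair, so the motion stream adds only element-wise subtractions and additions on top of the backbone.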

Language: English
Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/45075
Collection: Research on Brain-inspired Chips and Systems
Corresponding Author: Zhang, Yifan
Author Affiliations:
1. Research Center for Brain-inspired Intelligence, CASIA
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. NLPR & AIRIA, Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Cheng, Ke, Zhang, Yifan, Li, Chenghua, et al. Motion Complementary Network for Efficient Action Recognition [C]. In: . Online. January 2021.