Data-Distortion Guided Self-Distillation for Deep Neural Networks
Xu, Ting-Bing 1,2; Liu, Cheng-Lin 1,2,3
2019
Conference date: 2019-01-27
Conference venue: Hawaii, USA
Abstract

Knowledge distillation is an effective technique that has been widely used for transferring knowledge from one network to another. Although it effectively improves network performance, the dependence on accompanying assistive models complicates the training of a single network and incurs large memory and time costs. In this paper, we design a more elegant self-distillation mechanism that transfers knowledge between different distorted versions of the same training data without relying on accompanying models. Specifically, the potential capacity of a single network is exploited by learning consistent global feature distributions and posterior distributions (class probabilities) across these distorted versions of the data. Extensive experiments on multiple datasets (i.e., CIFAR-10/100 and ImageNet) demonstrate that the proposed method effectively improves the generalization performance of various network architectures (such as AlexNet, ResNet, Wide ResNet, and DenseNet), outperforming existing distillation methods with little extra training effort.
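The training objective sketched in the abstract can be illustrated as follows. This is a minimal PyTorch sketch assuming a model that returns both a global feature vector and class logits; the distortion pipeline, temperature, loss weights, and the use of a simple L2 feature-consistency term (standing in for the paper's feature-distribution matching) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(model, x1, x2, labels, alpha=1.0, beta=1.0, T=3.0):
    """Self-distillation across two distorted views of the same batch.

    x1, x2: two differently distorted versions of the same images.
    Assumes model(x) returns (global_feature, logits); this is a sketch, not
    the authors' implementation.
    """
    feat1, logits1 = model(x1)
    feat2, logits2 = model(x2)

    # Supervised cross-entropy on both distorted views.
    ce = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)

    # Posterior consistency: symmetric KL between softened class probabilities.
    logp1 = F.log_softmax(logits1 / T, dim=1)
    logp2 = F.log_softmax(logits2 / T, dim=1)
    kl = 0.5 * (F.kl_div(logp1, logp2.exp(), reduction="batchmean")
                + F.kl_div(logp2, logp1.exp(), reduction="batchmean")) * (T * T)

    # Feature consistency: here a simple L2 distance between global features.
    feat = F.mse_loss(feat1, feat2)

    return ce + alpha * kl + beta * feat
```

In a training loop, x1 and x2 would be produced by sampling the random distortion (e.g., crop and flip) twice for the same image batch, so a single network teaches itself to be consistent across the two views instead of relying on a separate teacher model.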

Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/26227
Collection: Institute of Automation_National Laboratory of Pattern Recognition_Pattern Analysis and Learning Group
Corresponding author: Xu, Ting-Bing
Author affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation of Chinese Academy of Sciences, Beijing, China
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3. CAS Center for Excellence of Brain Science and Intelligence Technology, Beijing, China
Recommended citation:
GB/T 7714
Xu, Ting-Bing, Liu, Cheng-Lin. Data-Distortion Guided Self-Distillation for Deep Neural Networks[C]. Hawaii, USA, 2019-01-27.