Pushing and Bounding Loss for Training Deep Super-Resolution Network
Shang Li [1,2]; Guixuan Zhang [2]; Jie Liu [2]; Shuwu Zhang [2]
Date: 2020-10
Conference Date: October 30-31, 2020
Conference Location: Beijing, China
Country: China
Abstract

Deep neural networks (DNNs) are hard to train because of vanishing gradients, so intermediate supervision is typically used to help optimize the earlier layers. Such deeply supervised methods have proved beneficial for tasks such as classification and pose estimation, but they are rarely applied to image super-resolution (SR). The reason is that intermediate supervision requires a set of intermediate labels, which are hard to define for SR. Experiments show that reusing identical labels across the whole network, as is done for classification, causes inconsistency and harms the final performance. We argue that 'mediately accurate' labels, i.e., relatively soft labels, are better suited to intermediate supervision of SR networks; however, the labels available in SR are either entirely high-resolution or entirely low-resolution. To address this problem, we propose what we call the pushing and bounding loss, which forces the network to learn better features as it goes deeper. In this way, no 'mediately accurate' labels need to be given explicitly, yet all internal layers can still be directly supervised. Extensive experiments show that deep SR networks trained under this scheme achieve a stable gain without adding any extra modules.
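The abstract does not give the loss in closed form, so the following is only a minimal sketch of one plausible formulation, written in PyTorch. It assumes an SR backbone split into sequential blocks with a shared reconstruction head, and it encodes 'pushing' as a hinge that asks each deeper block's intermediate reconstruction error to fall below the previous block's, and 'bounding' as a cap that tolerates intermediate error up to a threshold instead of forcing early layers to match the high-resolution target exactly. All names and both threshold terms (margin, bound) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a pushing-and-bounding style intermediate loss.
# Assumes: `blocks` is a list of feature-extraction stages, `head` is a
# shared upsampling/reconstruction layer, `lr`/`hr` are the low- and
# high-resolution images. NOT the authors' code; the exact form of the
# margin and cap terms is an assumption for illustration only.
import torch
import torch.nn.functional as F

def pushing_bounding_loss(blocks, head, lr, hr, margin=0.01, bound=0.1):
    feats = lr
    per_depth = []
    for block in blocks:
        feats = block(feats)
        sr_i = head(feats)                     # intermediate reconstruction
        per_depth.append(F.l1_loss(sr_i, hr))  # fidelity at this depth

    total = per_depth[-1]  # ordinary reconstruction loss on the final output
    for prev, curr in zip(per_depth[:-1], per_depth[1:]):
        # "Pushing": a deeper block should beat the shallower one by `margin`;
        # `prev` is detached so the hinge only pushes the deeper block down.
        total = total + torch.clamp(curr - prev.detach() + margin, min=0.0)
    for l in per_depth[:-1]:
        # "Bounding": intermediate error is only penalized above `bound`,
        # so early layers are supervised without chasing the HR target exactly.
        total = total + torch.clamp(l - bound, min=0.0)
    return total
```

Under this reading, no explicit intermediate label is ever constructed: the hinge terms supervise every internal block directly through relative constraints, which is consistent with the abstract's claim that all internal layers can be supervised without explicitly given 'mediately accurate' labels.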

Source Affiliations: Communication University of China; Institute of Automation, Chinese Academy of Sciences
Rights Rank: 1
Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/47523
Collection: Research Center for Digital Content Technology and Services / New Media Service and Management Technology
Corresponding Author: Shang Li
Author Affiliations:
1. University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Shang Li, Guixuan Zhang, Jie Liu, et al. Pushing and Bounding Loss for Training Deep Super-Resolution Network[C]. Beijing, China, October 30-31, 2020.