CNQ: Compressor-Based Non-uniform Quantization of Deep Neural Networks
Authors | Yuan, Yong 2,3; Chen, Chen 2,3; Hu, Xiyuan 4; Peng, Silong 1,2,3
Journal | CHINESE JOURNAL OF ELECTRONICS
Publication Date | 2020-11-01
Volume | 29  Issue: 6  Pages: 1126-1133
Keywords | entropy; image classification; learning (artificial intelligence); neural nets; object detection; optimisation; quantisation (signal); network structure; DNN; low-bit quantization; time-consuming training; compressor-based fast non-uniform quantization method; quantization model; post-training quantization methods; deep neural networks; network quantization; compressor-based non-uniform quantization; CNQ; non-uniform quantization; knowledge distillation; unlabeled samples; network compression
ISSN | 1022-4653
DOI | 10.1049/cje.2020.09.014 |
Corresponding Author | Chen, Chen (chen.chen@ia.ac.cn)
Abstract | Deep neural networks (DNNs) have achieved state-of-the-art performance in a number of domains but suffer from intensive complexity. Network quantization can effectively reduce computation and memory costs without changing the network structure, facilitating the deployment of DNNs on mobile devices. While existing methods can obtain good performance, low-bit quantization without time-consuming training or access to the full dataset remains a challenging problem. In this paper, we develop a novel method named Compressor-based non-uniform quantization (CNQ) to achieve non-uniform quantization of DNNs with few unlabeled samples. Firstly, we present a compressor-based fast non-uniform quantization method, which accomplishes non-uniform quantization without iterations. Secondly, we propose to align the feature maps of the quantized model with those of the pre-trained model for accuracy recovery. Considering the property differences between activation channels, we utilize the per-channel weighted entropy to optimize the alignment loss. In the experiments, we evaluate the proposed method on image classification and object detection. Our results outperform existing post-training quantization methods, which demonstrates the effectiveness of the proposed approach.
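The abstract describes non-uniform quantization obtained through a compressor function, without iterative optimization. This record gives no implementation details of the paper's CNQ algorithm, so the following is only a generic, minimal sketch of the compressor-based idea using a classical μ-law companding function as the assumed compressor: weights are compressed, quantized on a uniform grid in the compressed domain, then expanded back, which yields a non-uniform grid in the original domain in a single pass. The function name and all parameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def mu_law_quantize(w, bits=4, mu=255.0):
    """Hypothetical compressor-based non-uniform quantizer (mu-law sketch)."""
    s = float(np.max(np.abs(w))) or 1.0        # scale factor; guard all-zero input
    x = w / s                                  # normalize to [-1, 1]
    # Compressor: mu-law expands small magnitudes, so the uniform grid
    # below becomes a non-uniform grid in the original weight domain.
    comp = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    levels = 2 ** (bits - 1) - 1               # symmetric signed grid
    q = np.round(comp * levels) / levels       # uniform quantization, no iterations
    # Expander: invert the compressor to map back to the weight domain.
    expd = np.sign(q) * ((1.0 + mu) ** np.abs(q) - 1.0) / mu
    return expd * s

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)            # toy weight tensor
wq = mu_law_quantize(w, bits=4)                # at most 15 distinct values
```

Because weights in trained networks concentrate near zero, a compressor of this kind places more quantization levels at small magnitudes, which is the usual motivation for non-uniform over uniform post-training quantization.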
Funding Projects | National Natural Science Foundation of China [61906194]; National Natural Science Foundation of China [61571438]
WOS Research Area | Engineering
Language | English
Publisher | TECHNOLOGY EXCHANGE LIMITED HONG KONG
WOS Record Number | WOS:000609935600016
Funding Agency | National Natural Science Foundation of China
Content Type | Journal Article
Source URL | [http://ir.ia.ac.cn/handle/173211/42894]
Collection | Institute of Automation, Intelligent Manufacturing Technology and Systems Research Center, Multi-dimensional Data Analysis Team
Author Affiliations | 1. Beijing Visyst Co Ltd, Beijing 100083, Peoples R China; 2. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China; 3. Univ Chinese Acad Sci, Beijing 100049, Peoples R China; 4. Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Recommended Citation (GB/T 7714) | Yuan, Yong, Chen, Chen, Hu, Xiyuan, et al. CNQ: Compressor-Based Non-uniform Quantization of Deep Neural Networks[J]. CHINESE JOURNAL OF ELECTRONICS, 2020, 29(6): 1126-1133.
APA | Yuan, Yong, Chen, Chen, Hu, Xiyuan, & Peng, Silong. (2020). CNQ: Compressor-Based Non-uniform Quantization of Deep Neural Networks. CHINESE JOURNAL OF ELECTRONICS, 29(6), 1126-1133.
MLA | Yuan, Yong, et al. "CNQ: Compressor-Based Non-uniform Quantization of Deep Neural Networks". CHINESE JOURNAL OF ELECTRONICS 29.6 (2020): 1126-1133.