Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip
Weilun Chen (1,2); Zhaoxiang Zhang (1,2,6); Xiaolin Hu (3); Baoyuan Wu (4,5)
Conference Date: 2020-08
Conference Venue: UK
Abstract

Decision-based black-box adversarial attacks (decision-based attacks) pose a severe threat to current deep neural networks, as they require only the predicted label of the target model to craft adversarial examples. However, existing decision-based attacks perform poorly in the $ l_\infty $ setting, and the enormous number of queries they require casts a shadow over their practicality. In this paper, we show that randomly flipping the signs of a small number of entries in adversarial perturbations can significantly boost attack performance. We name this simple and highly efficient decision-based $ l_\infty $ attack the Sign Flip Attack. Extensive experiments on CIFAR-10 and ImageNet show that the proposed method outperforms existing decision-based attacks by large margins and can serve as a strong baseline for evaluating the robustness of defensive models. We further demonstrate the applicability of the proposed method to real-world systems.
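The abstract describes the sign-flip operation only at a high level. The sketch below is one plausible reading of the core idea, not the authors' exact algorithm: a hard-label $ l_\infty $ loop that proposes flipping the signs of a small random fraction of perturbation entries and keeps a proposal whenever the perturbed input is still misclassified. The function names, the flip probability `flip_prob`, and the accept-if-adversarial rule are illustrative assumptions; the published Sign Flip Attack additionally shrinks the perturbation budget over iterations, which this minimal sketch omits.

```python
import numpy as np

def random_sign_flip(delta, flip_prob=0.001):
    """Flip the signs of a small random subset of perturbation entries."""
    mask = np.random.rand(*delta.shape) < flip_prob  # entries selected for flipping
    return np.where(mask, -delta, delta)

def sign_flip_attack(predict, x, y, epsilon, n_queries=1000, flip_prob=0.001):
    """Toy hard-label l_inf attack loop built on random sign flips.

    predict: callable returning only the model's top-1 label (decision-based setting).
    x: clean input with values in [0, 1]; y: its true label; epsilon: l_inf budget.
    """
    # Start from a random vertex of the l_inf ball of radius epsilon.
    delta = epsilon * np.sign(np.random.randn(*x.shape))
    for _ in range(n_queries):
        candidate = random_sign_flip(delta, flip_prob)
        x_adv = np.clip(x + candidate, 0.0, 1.0)
        if predict(x_adv) != y:  # keep the proposal only if it is adversarial
            delta = candidate
    return np.clip(x + delta, 0.0, 1.0)
```

Because only the predicted label is consulted, each loop iteration costs exactly one query, which is why small per-step changes such as sparse sign flips matter for query efficiency in this setting.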

Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/44323
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Corresponding Author: Zhaoxiang Zhang
Author Affiliations:
1. Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA)
2. Center for Excellence in Brain Science and Intelligence Technology, CAS
3. The Chinese University of Hong Kong, Shenzhen
4. Tencent AI Lab
5. School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS)
6. Tsinghua University
Recommended Citation (GB/T 7714):
Weilun Chen, Zhaoxiang Zhang, Xiaolin Hu, et al. Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip[C]. UK, 2020-08.