Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu; Yao Ma; Hao-Chen Liu; Debayan Deb; Hui Liu; Ji-Liang Tang; Anil K. Jain
Journal: International Journal of Automation and Computing
Year: 2020
Volume: 17, Issue: 2, Pages: 151-178
Keywords: Adversarial example; model safety; robustness; defenses; deep learning
ISSN: 1476-8186
DOI: 10.1007/s11633-019-1211-x
Abstract: Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks across various domains. However, the existence of adversarial examples raises concerns about adopting deep learning in safety-critical applications. As a result, there has been increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs, and text. It is therefore necessary to provide a systematic and comprehensive overview of the main attack threats and the success of the corresponding countermeasures. In this survey, we review state-of-the-art algorithms for generating adversarial examples and the countermeasures against them, for the three most popular data types: images, graphs, and text.
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/42295
Collection: Institute of Automation_Academic Journals_International Journal of Automation and Computing
Author affiliation: Department of Computer Science and Engineering, Michigan State University, Michigan 48823, USA
Recommended citation:
GB/T 7714: Han Xu, Yao Ma, Hao-Chen Liu, et al. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review[J]. International Journal of Automation and Computing, 2020, 17(2): 151-178.
APA: Han Xu, Yao Ma, Hao-Chen Liu, Debayan Deb, Hui Liu, Ji-Liang Tang, & Anil K. Jain. (2020). Adversarial Attacks and Defenses in Images, Graphs and Text: A Review. International Journal of Automation and Computing, 17(2), 151-178.
MLA: Han Xu, et al. "Adversarial Attacks and Defenses in Images, Graphs and Text: A Review." International Journal of Automation and Computing 17.2 (2020): 151-178.