Logic Traps in Evaluating Attribution Scores
Ju YM (鞠一鸣); Zhang YZ (张元哲); Yang C (杨朝); Jiang ZT (江忠涛); Liu K (刘康); Zhao J (赵军)
2022-05
Conference Date: 22–27 May 2022
Conference Venue: Dublin
Pages: 5911–5922
Abstract

Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models predict. This goal is usually approached with attribution methods, which assess the influence of features on model predictions. As explanation methods, the evaluation criterion for attribution methods is how accurately they reflect the actual reasoning process of the model (faithfulness). Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to demonstrate their arguments. However, some crucial logic traps in these evaluation methods are ignored in most works, causing inaccurate evaluation and unfair comparison. This paper systematically reviews existing methods for evaluating attribution scores and summarizes the logic traps in these methods. We further conduct experiments to demonstrate the existence of each logic trap. Through both theoretical and experimental analysis, we hope to increase attention on the inaccurate evaluation of attribution scores. Moreover, with this paper, we suggest that the community stop focusing on improving performance under unreliable evaluation systems and start working to reduce the impact of the proposed logic traps.
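To make the abstract's terms concrete, here is a minimal sketch (not the paper's code; the toy model and the function names `leave_one_out_attribution` and `deletion_curve` are illustrative assumptions) of an erasure-style attribution method and a deletion-based faithfulness evaluation, the kind of evaluation scheme whose logic traps the paper examines.

```python
# Minimal illustrative sketch, assuming a toy linear scorer in place of
# a real deep model. Not the paper's implementation.
import numpy as np

def model(x: np.ndarray) -> float:
    """Toy stand-in for an opaque model f(x): a fixed linear scorer."""
    w = np.array([0.9, -0.4, 0.1, 0.6])
    return float(w @ x)

def leave_one_out_attribution(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Attribution score per feature: the drop in the prediction when
    that feature is replaced by a baseline value (erasure/occlusion)."""
    full = model(x)
    scores = np.empty_like(x)
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline
        scores[i] = full - model(x_masked)
    return scores

def deletion_curve(x: np.ndarray, scores: np.ndarray, baseline: float = 0.0) -> list:
    """Deletion-based faithfulness check: erase features from most to
    least important and record the prediction after each erasure. If the
    scores are faithful, the prediction should drop quickly."""
    order = np.argsort(-scores)  # most important first
    x_masked = x.copy()
    curve = [model(x_masked)]
    for i in order:
        x_masked[i] = baseline
        curve.append(model(x_masked))
    return curve

if __name__ == "__main__":
    x = np.array([1.0, 2.0, 3.0, 4.0])
    scores = leave_one_out_attribution(x)
    print("attribution scores:", scores)
    print("deletion curve:", deletion_curve(x, scores))
```

Note that this style of evaluation itself illustrates the problem: masking features pushes inputs off the training distribution, so a falling prediction may reflect out-of-distribution behavior rather than the model's actual reasoning, which is one of the logic traps this line of work highlights.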

Language: English
Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/52277
Collection: National Laboratory of Pattern Recognition / Natural Language Processing
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
2. National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
Recommended Citation (GB/T 7714)
Ju YM, Zhang YZ, Yang C, et al. Logic Traps in Evaluating Attribution Scores[C]. In: . Dublin, 22–27 May 2022: 5911–5922.