SDTP: Semantic-aware Decoupled Transformer Pyramid for Dense Image Prediction
Authors | Li Zekun5,6; Li Yufan5,6; Li Bing4,5,6; Feng Bailan1; Wu Kebin1; Peng Chengwei2; Hu Weiming3,5,6
Journal | IEEE Transactions on Circuits and Systems for Video Technology
Year | 2022
Pages | 14
Abstract | Although transformers have achieved great progress on computer vision tasks, scale variation in dense image prediction remains a key challenge. Few effective multi-scale techniques have been applied to transformers, and current methods have two main limitations. On the one hand, the self-attention module in the vanilla transformer fails to sufficiently exploit the diversity of semantic information because of its rigid mechanism. On the other hand, it is difficult to build attention and interaction among different levels due to the heavy computational burden. To alleviate this problem, we first revisit the multi-scale problem in dense prediction, verifying the significance of diverse semantic representation and multi-scale interaction, and exploring the adaptation of the transformer to a pyramidal structure. Inspired by these findings, we propose a novel Semantic-aware Decoupled Transformer Pyramid (SDTP) for dense image prediction, consisting of Intra-level Semantic Promotion (ISP), Cross-level Decoupled Interaction (CDI), and an Attention Refinement Function (ARF). ISP explores the semantic diversity in different receptive spaces through a more flexible self-attention strategy. CDI builds global attention and interaction among different levels in a decoupled space, which also addresses the heavy computational cost. In addition, ARF further refines the attention in the transformer. Experimental results demonstrate the validity and generality of the proposed method, which outperforms the state-of-the-art by a significant margin on dense image prediction tasks. Furthermore, the proposed components are all plug-and-play and can be embedded in other methods.
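The record gives no implementation details of SDTP's modules. Purely as an illustration of the general idea of cross-level interaction that the abstract describes (attention between features from different pyramid levels), here is a minimal sketch using plain scaled dot-product attention in numpy. All names, shapes, and the attention formulation here are hypothetical and are not the paper's actual CDI module:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_level_attention(queries, keys_values, dim):
    """Generic cross-attention between two feature levels.

    queries:     (Nq, dim) tokens from one pyramid level (e.g. fine).
    keys_values: (Nk, dim) tokens from another level (e.g. coarse).
    Returns:     (Nq, dim) queries updated with coarse-level context.
    """
    scores = queries @ keys_values.T / np.sqrt(dim)  # (Nq, Nk) similarity
    attn = softmax(scores, axis=-1)                  # rows sum to 1
    return attn @ keys_values                        # weighted mix of coarse tokens

# Toy example: a fine level with 16 tokens attends to a coarse level with 4 tokens.
rng = np.random.default_rng(0)
fine = rng.standard_normal((16, 8))
coarse = rng.standard_normal((4, 8))
out = cross_level_attention(fine, coarse, 8)
print(out.shape)  # (16, 8)
```

Attending from many fine tokens to a few coarse tokens keeps the attention matrix small (16x4 rather than 16x16 here), which is one common way pyramid methods reduce the computational burden the abstract mentions.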
Content Type | Journal article
Source URL | http://ir.ia.ac.cn/handle/173211/48833
Collection | Institute of Automation, National Laboratory of Pattern Recognition, Video Content Security Group
Corresponding Author | Li Bing
Affiliations | 1. Noah's Ark Lab, Huawei Technologies 2. National Computer Network Emergency Response Technical Team/Coordination Center of China 3. CAS Center for Excellence in Brain Science and Intelligence Technology 4. PeopleAI Inc 5. School of Artificial Intelligence, University of Chinese Academy of Sciences 6. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Li Zekun, Li Yufan, Li Bing, et al. SDTP: Semantic-aware Decoupled Transformer Pyramid for Dense Image Prediction[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022: 14.
APA | Li Zekun, Li Yufan, Li Bing, Feng Bailan, Wu Kebin, ... & Hu Weiming. (2022). SDTP: Semantic-aware Decoupled Transformer Pyramid for Dense Image Prediction. IEEE Transactions on Circuits and Systems for Video Technology, 14.
MLA | Li Zekun, et al. "SDTP: Semantic-aware Decoupled Transformer Pyramid for Dense Image Prediction." IEEE Transactions on Circuits and Systems for Video Technology (2022): 14.