DPT: Deformable Patch-based Transformer for Visual Recognition
Chen, Zhiyang (1,2); Zhu, Yousong (2); Zhao, Chaoyang (2); Hu, Guosheng (3); Zeng, Wei (4); Wang, Jinqiao (1,2); Tang, Ming (2)
2021-10
Conference Date: 2021-10-20
Conference Venue: Chengdu, China
Abstract

Transformers have achieved great success in computer vision, but how to split an image into patches remains a problem. Existing methods usually use a fixed-size patch embedding, which might destroy the semantics of objects. To address this problem, we propose a new Deformable Patch (DePatch) module, which learns to adaptively split images into patches with different positions and scales in a data-driven way rather than using predefined fixed patches. In this way, our method can well preserve the semantics in patches. The DePatch module works as a plug-and-play component that can easily be incorporated into different transformers for end-to-end training. We term this DePatch-embedded transformer the Deformable Patch-based Transformer (DPT) and conduct extensive evaluations of DPT on image classification and object detection. Results show that DPT achieves 81.9% top-1 accuracy on ImageNet classification, 43.7% box mAP with RetinaNet, and 44.3% with Mask R-CNN on MSCOCO object detection. Code is available at: https://github.com/CASIA-IVA-Lab/DPT.
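
The abstract describes the mechanism only at a high level: each patch gets a data-driven position offset and scale, and the pixels inside the deformed patch region are gathered to form the token. The sketch below is a minimal conceptual illustration of that idea in PyTorch, not the authors' implementation; the module name DeformablePatchEmbed, the 4-channel offset/scale prediction head, and the k x k bilinear sampling grid are assumptions chosen for clarity (see the official repository above for the actual code).

```python
# Conceptual sketch of a deformable patch embedding (not the official DePatch code).
# Each patch on a regular grid predicts an offset and a scale, then re-samples
# a k x k grid of points inside the deformed region via bilinear interpolation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformablePatchEmbed(nn.Module):
    def __init__(self, in_chans=3, embed_dim=96, patch_size=4, sample_points=4):
        super().__init__()
        self.patch_size = patch_size
        self.k = sample_points  # k x k sampling points per deformed patch
        # Predict (dx, dy, sw, sh) for every patch from a coarse view of the input.
        self.offset_scale = nn.Conv2d(in_chans, 4, kernel_size=patch_size, stride=patch_size)
        # Project the k*k sampled pixels of each patch to the embedding dimension.
        self.proj = nn.Linear(in_chans * sample_points * sample_points, embed_dim)

    def forward(self, x):
        B, C, H, W = x.shape
        ph, pw = H // self.patch_size, W // self.patch_size

        # Regular grid of patch centers in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1 + 1 / ph, 1 - 1 / ph, ph, device=x.device)
        xs = torch.linspace(-1 + 1 / pw, 1 - 1 / pw, pw, device=x.device)
        cy, cx = torch.meshgrid(ys, xs, indexing="ij")
        centers = torch.stack([cx, cy], dim=-1)                     # (ph, pw, 2)

        # Data-driven offset and scale, one per patch.
        pred = self.offset_scale(x)                                 # (B, 4, ph, pw)
        base_half = torch.tensor([self.patch_size / W, self.patch_size / H], device=x.device)
        offset = torch.tanh(pred[:, :2]).permute(0, 2, 3, 1) * base_half       # shift up to one patch half-extent
        scale = torch.sigmoid(pred[:, 2:]).permute(0, 2, 3, 1) * 2 * base_half  # half-extent up to 2x a regular patch

        # Relative k x k sampling positions inside each shifted, rescaled patch.
        rel = torch.linspace(-1.0, 1.0, self.k, device=x.device)
        ry, rx = torch.meshgrid(rel, rel, indexing="ij")
        rel_grid = torch.stack([rx, ry], dim=-1).view(1, 1, 1, self.k * self.k, 2)

        base = (centers + offset).unsqueeze(3)                      # (B, ph, pw, 1, 2)
        grid = base + rel_grid * scale.unsqueeze(3)                 # (B, ph, pw, k*k, 2)
        grid = grid.view(B, ph, pw * self.k * self.k, 2)

        # Bilinearly sample the deformed patches and flatten them into tokens.
        sampled = F.grid_sample(x, grid, align_corners=False)       # (B, C, ph, pw*k*k)
        sampled = sampled.view(B, C, ph, pw, self.k * self.k)
        tokens = sampled.permute(0, 2, 3, 1, 4).reshape(B, ph * pw, C * self.k * self.k)
        return self.proj(tokens)                                    # (B, num_patches, embed_dim)


if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)
    embed = DeformablePatchEmbed()
    print(embed(x).shape)  # torch.Size([2, 3136, 96])
```

Because the module keeps the same token-grid layout and output shape as a standard fixed-size patch embedding, it can be swapped in as a drop-in replacement and trained end-to-end, which is the plug-and-play property the abstract refers to.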
 

Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/47414
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Group
Corresponding Author: Zhao, Chaoyang
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
3. AnyVision, Belfast, UK
4. Peking University, Beijing, China
Recommended Citation (GB/T 7714):
Chen, Zhiyang, Zhu, Yousong, Zhao, Chaoyang, et al. DPT: Deformable Patch-based Transformer for Visual Recognition[C]. Chengdu, China, 2021-10-20.