PDNet: Toward Better One-Stage Object Detection With Prediction Decoupling
Yang, Li [4,5]; Xu, Yan [3]; Wang, Shaoru [4,5]; Yuan, Chunfeng [5]; Zhang, Ziqi [4,5]; Li, Bing [2,5]; Hu, Weiming [1,4,5]
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Year: 2022
Volume: 31; Pages: 5121-5133
Keywords: Object detection; prediction decoupling; convolutional neural network
ISSN: 1057-7149
DOI: 10.1109/TIP.2022.3193223
Corresponding author: Yuan, Chunfeng (cfyuan@nlpr.ia.ac.cn)
Abstract: Recent one-stage object detectors follow a per-pixel prediction approach that predicts both the object category scores and boundary positions from every single grid location. However, the most suitable positions for inferring different targets, i.e., the object category and boundaries, are generally different. Predicting all these targets from the same grid location thus may lead to sub-optimal results. In this paper, we analyze the suitable inference positions for object category and boundaries, and propose a prediction-target-decoupled detector named PDNet to establish a more flexible detection paradigm. Our PDNet with the prediction decoupling mechanism encodes different targets separately in different locations. A learnable prediction collection module is devised with two sets of dynamic points, i.e., dynamic boundary points and semantic points, to collect and aggregate the predictions from the favorable regions for localization and classification. We adopt a two-step strategy to learn these dynamic point positions, where the prior positions are estimated for different targets first, and the network further predicts residual offsets to these positions with a better perception of the object properties. Extensive experiments on the MS COCO benchmark demonstrate the effectiveness and efficiency of our method. With a single ResNeXt-64x4d-101-DCN as the backbone, our detector achieves 50.1 AP with single-scale testing, which outperforms the state-of-the-art methods by an appreciable margin under the same experimental settings. Moreover, our detector is highly efficient as a one-stage framework. Our code is publicly available at https://github.com/yangli18/PDNet.
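The prediction-collection mechanism described in the abstract can be made concrete with a small sketch. The following is a minimal, illustrative PyTorch module, not the authors' released implementation (see the GitHub link above): the class name PredictionCollection, the assumption that prior point positions arrive in normalized [-1, 1] grid coordinates, and the mean aggregation are hypothetical simplifications of the two-step scheme (prior positions plus predicted residual offsets, followed by sampling of the prediction maps at the refined points).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictionCollection(nn.Module):
    """Gather per-pixel predictions from dynamic points instead of one cell.

    Hypothetical sketch of the prediction-collection idea; NOT the authors'
    implementation (see https://github.com/yangli18/PDNet).
    """

    def __init__(self, in_channels: int, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        # Step 2 of the two-step scheme: predict residual (dx, dy) offsets
        # that refine the prior position of each dynamic point.
        self.offset_conv = nn.Conv2d(in_channels, 2 * num_points, 3, padding=1)

    def forward(self, feat, pred_map, prior_points):
        # feat:         (B, C, H, W) features used for offset prediction
        # pred_map:     (B, K, H, W) per-pixel predictions (e.g. class scores)
        # prior_points: (B, 2P, H, W) prior point positions, assumed here to
        #               be in normalized [-1, 1] grid coordinates (step 1)
        B, K, H, W = pred_map.shape
        P = self.num_points
        points = prior_points + self.offset_conv(feat)  # refined dynamic points
        # Turn the points into a grid_sample grid of shape (B*P, H, W, 2).
        grid = (points.view(B, P, 2, H, W)
                      .permute(0, 1, 3, 4, 2)
                      .reshape(B * P, H, W, 2))
        # Replicate the prediction map once per point and sample bilinearly.
        pred = (pred_map.unsqueeze(1).expand(B, P, K, H, W)
                        .reshape(B * P, K, H, W))
        collected = F.grid_sample(pred, grid, align_corners=False)
        # Aggregate the predictions collected at the P dynamic points.
        return collected.view(B, P, K, H, W).mean(dim=1)  # (B, K, H, W)

# Illustrative usage with random tensors:
feat = torch.randn(2, 256, 32, 32)
scores = torch.randn(2, 80, 32, 32)
priors = torch.zeros(2, 8, 32, 32)  # 4 points, all at the map center (toy init)
out = PredictionCollection(in_channels=256)(feat, scores, priors)  # (2, 80, 32, 32)

In the paper's framework, separate point sets (dynamic boundary points and semantic points) would drive the localization and classification branches respectively; the sketch shows the shared mechanics for a single branch.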
Funding projects: National Key Research and Development Program of China [2020AAA0106800]; Beijing Natural Science Foundation [JQ21017]; Beijing Natural Science Foundation [4224091]; National Natural Science Foundation of China [61972397]; National Natural Science Foundation of China [62036011]; National Natural Science Foundation of China [62192782]; National Natural Science Foundation of China [61721004]; National Natural Science Foundation of China [61906192]; Key Research Program of Frontier Sciences, CAS [QYZDJ-SSW-JSC040]; China Postdoctoral Science Foundation [2021M693402]
WOS research areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS accession number: WOS:000835774000011
Funding agencies: National Key Research and Development Program of China; Beijing Natural Science Foundation; National Natural Science Foundation of China; Key Research Program of Frontier Sciences, CAS; China Postdoctoral Science Foundation
Document type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/49821
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Video Content Security Team
Affiliations:
1. CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China
2. PeopleAI Inc, Beijing 100190, China
3. Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, China
4. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
5. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Recommended citation:
GB/T 7714: Yang, Li, Xu, Yan, Wang, Shaoru, et al. PDNet: Toward Better One-Stage Object Detection With Prediction Decoupling[J]. IEEE Transactions on Image Processing, 2022, 31: 5121-5133.
APA: Yang, L., Xu, Y., Wang, S., Yuan, C., Zhang, Z., Li, B., & Hu, W. (2022). PDNet: Toward Better One-Stage Object Detection With Prediction Decoupling. IEEE Transactions on Image Processing, 31, 5121-5133.
MLA: Yang, Li, et al. "PDNet: Toward Better One-Stage Object Detection With Prediction Decoupling." IEEE Transactions on Image Processing 31 (2022): 5121-5133.