Improving Visual Grounding With Visual-Linguistic Verification and Iterative Reasoning
Li Yang (3,4); Yan Xu (2); Chunfeng Yuan (4); Wei Liu (4); Bing Li (4); Weiming Hu (1,3,4)
2022-06
Conference Date: 2022-06
Conference Venue: New Orleans, Louisiana
Abstract

Visual grounding is the task of locating the target object indicated by a natural language expression. Existing methods extend generic object detection frameworks to this problem: they base visual grounding on features from pre-generated proposals or anchors, and fuse these features with text embeddings to locate the target mentioned by the text. However, modeling visual features at these predefined locations may fail to fully exploit the visual context and the attribute information in the text query, which limits their performance. In this paper, we propose a transformer-based framework for accurate visual grounding that establishes text-conditioned discriminative features and performs multi-stage cross-modal reasoning. Specifically, we develop a visual-linguistic verification module that focuses the visual features on regions relevant to the textual description while suppressing unrelated areas. A language-guided feature encoder is also devised to aggregate the visual contexts of the target object and improve its distinctiveness. To retrieve the target from the encoded visual features, we further propose a multi-stage cross-modal decoder that iteratively infers the correlations between the image and text for accurate target localization. Extensive experiments on five widely used datasets validate the efficacy of the proposed components and demonstrate state-of-the-art performance.
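The record contains no code, but the verification idea described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch-style example of text-conditioned modulation of visual features; the class name, feature dimensions, and the max-over-words scoring rule are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class VisualLinguisticVerification(nn.Module):
    """Illustrative sketch (not the authors' code): score each visual token
    against the text embeddings and use the score to emphasize regions that
    are relevant to the expression while suppressing unrelated ones."""

    def __init__(self, d_model=256):
        super().__init__()
        self.vis_proj = nn.Linear(d_model, d_model)  # project visual tokens
        self.txt_proj = nn.Linear(d_model, d_model)  # project text tokens

    def forward(self, vis_tokens, txt_tokens):
        # vis_tokens: (B, N_v, d) flattened feature-map tokens from the image
        # txt_tokens: (B, N_t, d) word embeddings of the expression
        v = self.vis_proj(vis_tokens)
        t = self.txt_proj(txt_tokens)
        # similarity of every visual token to every word (scaled dot product)
        sim = torch.einsum('bnd,bmd->bnm', v, t) / (v.shape[-1] ** 0.5)
        # per-token relevance in (0, 1): how strongly a region matches any word
        score = sim.max(dim=-1).values.sigmoid().unsqueeze(-1)  # (B, N_v, 1)
        # emphasize text-relevant regions, damp the rest
        return vis_tokens * score

A multi-stage decoder in the same spirit would then repeatedly cross-attend from a learnable target query to these modulated features to refine the localization; that part is omitted from the sketch.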

Proceedings Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/52140
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Video Content Security Team
Corresponding Author: Chunfeng Yuan
Author Affiliations:
1. CAS Center for Excellence in Brain Science and Intelligence Technology
2. The Chinese University of Hong Kong
3. School of Artificial Intelligence, University of Chinese Academy of Sciences
4. NLPR, Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Li Yang, Yan Xu, Chunfeng Yuan, et al. Improving Visual Grounding With Visual-Linguistic Verification and Iterative Reasoning[C]. In: New Orleans, Louisiana, 2022-06.