Pedestrian Attribute Recognition by Joint Visual-semantic Reasoning and Knowledge Distillation
Li QZ(李乔哲); Zhao X(赵鑫); He R(赫然); Huang KQ(黄凯奇)
2019-08
Conference date | 2019-08
Conference venue | Macau, China
Abstract | Pedestrian attribute recognition in surveillance is a challenging task in computer vision due to significant pose variation, viewpoint change, and poor image quality. To achieve effective recognition, this paper presents a graph-based global reasoning framework that jointly models potential visual-semantic relations of attributes and distills auxiliary human-parsing knowledge to guide the relational learning. The reasoning framework models attribute groups as nodes of a graph and learns a projection function to adaptively assign local visual features to those nodes. After feature projection, graph convolution performs global reasoning between the attribute groups to model their mutual dependencies. The learned node features are then projected back to visual space to facilitate knowledge transfer. An additional regularization term is proposed that distills human-parsing knowledge from a pre-trained teacher model to enhance feature representations. The proposed framework is verified on three large-scale pedestrian attribute datasets: PETA, RAP, and PA100k. Experiments show that our method achieves state-of-the-art results.
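The projection-reasoning-backprojection pipeline described in the abstract can be sketched numerically. The code below is a minimal illustration, not the paper's implementation: all sizes (49 local features, 5 attribute-group nodes, 64 channels), the uniform adjacency matrix, and the random weights are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 49 local visual features
# (e.g. a 7x7 feature map), 5 attribute-group nodes, 64 channels.
L, N, C = 49, 5, 64

X = rng.standard_normal((L, C))       # local visual features
W_proj = rng.standard_normal((C, N))  # learned projection weights

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Soft-assign each local feature to the attribute-group nodes.
P = softmax(X @ W_proj, axis=1)       # (L, N) assignment weights
V = P.T @ X                           # (N, C) node features

# Global reasoning: one graph-convolution step over a placeholder
# fully connected, row-normalized adjacency between attribute groups.
A = np.ones((N, N)) / N
W_gc = rng.standard_normal((C, C))
V_reasoned = np.maximum(A @ V @ W_gc, 0.0)  # ReLU activation

# Project the reasoned node features back to visual space,
# yielding enhanced per-location features of the original shape.
X_out = P @ V_reasoned               # (L, C)
```

In a trainable model, `W_proj`, `W_gc`, and the adjacency would be learned end-to-end; the parsing-distillation regularizer from the teacher model is omitted here.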
Content type | Conference paper
Source URL | [http://ir.ia.ac.cn/handle/173211/28373]
Collection | Institute of Automation, Chinese Academy of Sciences
Corresponding author | Huang KQ(黄凯奇)
Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714) | Li QZ, Zhao X, He R, et al. Pedestrian Attribute Recognition by Joint Visual-semantic Reasoning and Knowledge Distillation[C]. Macau, China, 2019-08.