Explanation guided cross-modal social image clustering
Yan, Xiaoqiang3; Mao, Yiqiao3; Ye, Yangdong3; Yu, Hui4; Wang, Fei-Yue1,2,5
Journal: INFORMATION SCIENCES
Publication date: 2022-05-01
Volume: 593, Pages: 1-16
Keywords: Social image clustering; Human explanation; Side information; Information maximization; Interactive optimization
ISSN: 0020-0255
DOI: 10.1016/j.ins.2022.01.065
Corresponding author: Ye, Yangdong (ieydye@zzu.edu.cn)
Abstract: The integration of visual and semantic information has been found to play a role in increasing the accuracy of social image clustering methods. However, existing approaches are limited by the heterogeneity gap between the visual and semantic modalities, and their performance degrades significantly due to the commonly sparse and incomplete tags in the semantic modality. To address these problems, we propose a novel clustering framework to discover reasonable categories in unlabeled social images under the guidance of human explanations. First, a novel Explanation Generation Model (EGM) is proposed to automatically boost textual information for the sparse and incomplete tags based on an extra lexical database with human knowledge. Then, a novel clustering algorithm called Group Constrained Information Maximization (GCIM) is proposed to learn image categories. In this algorithm, a new type of constraint, named group-level side information, is defined for the first time to bridge the well-known heterogeneity gap between the visual and textual modalities. Finally, an interactive draw-and-merge optimization method is proposed to ensure an optimal solution. Extensive experiments on several social image datasets including NUS-Wide, IAPRTC, MIRFlickr, ESP-Game and COCO demonstrate the superiority of the proposed approach to state-of-the-art baselines. (c) 2022 Elsevier Inc. All rights reserved.
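The Explanation Generation Model (EGM) described in the abstract enriches sparse, incomplete tags with an external lexical database before clustering. The sketch below is only a rough illustration of that tag-expansion idea, not the authors' implementation: it assumes WordNet (via NLTK) as the lexical database, and the function expand_tags together with its expansion rules (synonyms plus one level of hypernyms) is hypothetical.

# Minimal sketch of lexical tag expansion, assuming WordNet via NLTK
# (run nltk.download('wordnet') once beforehand).
from nltk.corpus import wordnet as wn

def expand_tags(tags, max_new_per_tag=3):
    """Augment a sparse tag list with synonyms and hypernyms from WordNet."""
    expanded = set(tags)
    for tag in tags:
        candidates = []
        for synset in wn.synsets(tag):
            # Synonyms from the same synset.
            candidates.extend(l.name().replace('_', ' ') for l in synset.lemmas())
            # One level of more general concepts, e.g. "dog" -> "canine".
            for hyper in synset.hypernyms():
                candidates.extend(l.name().replace('_', ' ') for l in hyper.lemmas())
        new_terms = [c for c in dict.fromkeys(candidates) if c not in expanded]
        expanded.update(new_terms[:max_new_per_tag])
    return sorted(expanded)

# Example: an image tagged only with "dog" and "beach" gains related terms
# that give the semantic modality more to work with during clustering.
print(expand_tags(["dog", "beach"]))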
Funding projects: National Natural Science Foundation of China [61906172]; National Natural Science Foundation of China [62176239]; Postdoctoral Research Foundation of China [2020M682357]; EPSRC through project "4D Facial Sensing and Modelling" [EP/N025849/1]
WoS keywords: INFORMATION BOTTLENECK; MULTIVIEW
WoS research area: Computer Science
Language: English
Publisher: ELSEVIER SCIENCE INC
WoS accession number: WOS:000770686400001
Funding agencies: National Natural Science Foundation of China; Postdoctoral Research Foundation of China; EPSRC through project "4D Facial Sensing and Modelling"
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/48228
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Advanced Control and Automation Team
Author affiliations:
1.Macau Univ Sci & Technol, Inst Syst Engn, Taipa 999078, Macao, Peoples R China
2.Qingdao Acad Intelligent Ind, Qingdao 266109, Peoples R China
3.Zhengzhou Univ, Sch Informat Engn, Zhengzhou 450052, Peoples R China
4.Univ Portsmouth, Sch Creat Technol, Portsmouth PO1 2DJ, Hants, England
5.Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
Recommended citation:
GB/T 7714: Yan, Xiaoqiang, Mao, Yiqiao, Ye, Yangdong, et al. Explanation guided cross-modal social image clustering[J]. INFORMATION SCIENCES, 2022, 593: 1-16.
APA: Yan, Xiaoqiang, Mao, Yiqiao, Ye, Yangdong, Yu, Hui, & Wang, Fei-Yue. (2022). Explanation guided cross-modal social image clustering. INFORMATION SCIENCES, 593, 1-16.
MLA: Yan, Xiaoqiang, et al. "Explanation guided cross-modal social image clustering". INFORMATION SCIENCES 593 (2022): 1-16.