Contrastive Multi-Modal Knowledge Graph Representation Learning
Fang, Quan [2]; Zhang, Xiaowei [1]; Hu, Jun [2]; Wu, Xian [3]; Xu, Changsheng [2,4]
Journal: IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Publication Date: 2023-09-01
Volume: 35, Issue: 9, Pages: 8983-8996
Keywords: Knowledge graph; multimedia; graph neural network; contrastive learning
ISSN: 1041-4347
DOI: 10.1109/TKDE.2022.3220625
Corresponding Author: Xu, Changsheng (csxu@nlpr.ia.ac.cn)
Abstract: Representation learning of knowledge graphs (KGs) aims to embed both entities and relations as vectors in a continuous low-dimensional space, which has facilitated various applications such as link prediction and entity retrieval. Most existing KG embedding methods model the structured fact triples independently, ignoring both the multi-type relations among triples and the variety of data types (e.g., texts and images) associated with entities in KGs, and thus fail to capture the complex, multi-modal information inherent in entity-relation triples. In this paper, we propose a novel approach for knowledge graph embedding named Contrastive Multi-modal Graph Neural Network (CMGNN), which encapsulates comprehensive features from multi-modal content descriptions of entities and from high-order connectivity structures. Specifically, CMGNN first learns entity embeddings from multi-modal content and then contrasts encodings from multi-relational local neighbors with those from high-order connectivities to obtain latent representations of entities and relations simultaneously. Experimental results demonstrate that CMGNN effectively models the multi-modalities and multi-type structures in KGs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the tasks of link prediction and entity classification.
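Note: The abstract describes contrasting entity encodings from a multi-relational local-neighbor view against encodings from a high-order connectivity view. The sketch below is an illustrative InfoNCE-style contrastive loss in PyTorch under that reading; it is not the authors' released code, and the names info_nce, z_local, z_high, and the temperature tau are assumptions for illustration.

```python
# Illustrative sketch only: an InfoNCE-style contrastive loss between two
# views of entity encodings, as the abstract describes (local-neighbor view
# vs. high-order connectivity view). Not the authors' implementation.
import torch
import torch.nn.functional as F


def info_nce(z_local: torch.Tensor, z_high: torch.Tensor,
             tau: float = 0.2) -> torch.Tensor:
    """Each entity's local-view encoding should match its own high-order-view
    encoding (positive pair on the diagonal) and repel the other entities'
    encodings in the batch (negatives)."""
    z_local = F.normalize(z_local, dim=-1)   # (N, d) local-neighbor encodings
    z_high = F.normalize(z_high, dim=-1)     # (N, d) high-order encodings
    logits = z_local @ z_high.t() / tau      # (N, N) scaled cosine similarities
    targets = torch.arange(z_local.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


# Toy usage: 8 entities, 16-dimensional encodings from each view.
if __name__ == "__main__":
    torch.manual_seed(0)
    z_local = torch.randn(8, 16)
    z_high = torch.randn(8, 16)
    print(info_nce(z_local, z_high))
```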
Funding Projects: National Natural Science Foundation of China [62072456, 62036012, 62106262]; Open Research Projects of Zhejiang Lab [2021KE0AB05]
WOS Keywords: NETWORK
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE COMPUTER SOC
WOS Record No.: WOS:001045704800021
Funding Organizations: National Natural Science Foundation of China; Open Research Projects of Zhejiang Lab
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/53969
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
作者单位1.Zhengzhou Univ, Zhengzhou 450001, Peoples R China
2.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
3.Tencent Med AI Lab, Beijing 100080, Peoples R China
4.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Recommended Citation:
GB/T 7714: Fang, Quan, Zhang, Xiaowei, Hu, Jun, et al. Contrastive Multi-Modal Knowledge Graph Representation Learning[J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35(9): 8983-8996.
APA: Fang, Quan, Zhang, Xiaowei, Hu, Jun, Wu, Xian, & Xu, Changsheng. (2023). Contrastive Multi-Modal Knowledge Graph Representation Learning. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 35(9), 8983-8996.
MLA: Fang, Quan, et al. "Contrastive Multi-Modal Knowledge Graph Representation Learning". IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING 35.9 (2023): 8983-8996.