Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey
Author: Xiao Wang (affiliations 3, 4)
Journal: Machine Intelligence Research
Year: 2023
Volume: 20, Issue: 4, Pages: 447-482
Keywords: Multi-modal (MM), pre-trained model (PTM), information fusion, representation learning, deep learning
ISSN: 2731-538X
DOI: 10.1007/s11633-022-1410-8
Abstract: With the urgent demand for generalized deep models, many large pre-trained models have been proposed, such as bidirectional encoder representations (BERT), vision transformer (ViT), and generative pre-trained transformers (GPT). Inspired by the success of these models in single domains (such as computer vision and natural language processing), multi-modal pre-trained big models have drawn increasing attention in recent years. In this work, we give a comprehensive survey of these models and hope this paper provides new insights and helps new researchers track the most cutting-edge work. Specifically, we first introduce the background of multi-modal pre-training by reviewing conventional deep learning and pre-training work in natural language processing, computer vision, and speech. Then, we introduce the task definition, key challenges, and advantages of multi-modal pre-trained models (MM-PTMs), and discuss MM-PTMs with a focus on data, objectives, network architectures, and knowledge-enhanced pre-training. After that, we introduce the downstream tasks used for the validation of large-scale MM-PTMs, including generative, classification, and regression tasks. We also give a visualization and analysis of the model parameters and results on representative downstream tasks. Finally, we point out possible research directions for this topic that may benefit future work. In addition, we maintain a continuously updated paper list for large-scale pre-trained multi-modal big models: https://github.com/wangxiao5791509/MultiModal_BigModels_Survey.
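As background for the pre-training objectives mentioned in the abstract, the following is a minimal sketch (in PyTorch) of a CLIP-style symmetric image-text contrastive loss, one common objective in multi-modal pre-training; the function name, temperature value, and tensor shapes are illustrative assumptions and not a method prescribed by the survey.

# Minimal sketch of a CLIP-style image-text contrastive objective.
# Purely illustrative: names, shapes, and the temperature are assumptions.
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # Normalize so the dot products below are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # (batch, batch) similarity matrix; the diagonal holds the true pairs.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the image-to-text and text-to-image cross-entropy terms.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage, with random embeddings standing in for encoder outputs:
# loss = image_text_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))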
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/52345
Collection: Institute of Automation_Academic Journals_International Journal of Automation and Computing
Author affiliations:
1. College of Computer Science, Sichuan University, Chengdu 610065, China
2. School of Computer Science, Peking University, Beijing 100871, China
3. School of Computer Science and Technology, Anhui University, Hefei 230601, China
4. Peng Cheng Laboratory, Shenzhen 518055, China
Recommended citation formats:
GB/T 7714: Xiao Wang. Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey[J]. Machine Intelligence Research, 2023, 20(4): 447-482.
APA: Xiao Wang. (2023). Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey. Machine Intelligence Research, 20(4), 447-482.
MLA: Xiao Wang. "Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey." Machine Intelligence Research 20.4 (2023): 447-482.