Fusion of heterogeneous attention mechanisms in multi-view convolutional neural network for text classification
Liang, Yunji [2]; Li, Huihui [2]; Guo, Bin [2]; Yu, Zhiwen [2]; Zheng, Xiaolong [2,3,4]; Samtani, Sagar [1]; Zeng, Daniel D. [3,4]
Journal: INFORMATION SCIENCES
Publication date: 2021-02-16
Volume: 548, Pages: 295-312
Keywords: View attention; Spatial attention; Multi-view representation; Series and parallel connection; Convolutional neural network; Text classification
ISSN: 0020-0255
DOI: 10.1016/j.ins.2020.10.021
Corresponding authors: Liang, Yunji (liangyunji@nwpu.edu.cn); Zheng, Xiaolong (xiaolong.zheng@ia.ac.cn)
Abstract: The rapid proliferation of user-generated content has given rise to large volumes of text corpora. Increasingly, scholars, researchers, and organizations employ text classification to mine novel insights for high-impact applications. Despite their prevalence, conventional text classification methods rely on labor-intensive, task-specific feature engineering, omit long-term relationships, and are not suitable for rapidly evolving domains. While a growing body of deep learning and attention-mechanism literature aims to address these issues, extant methods often represent text as a single view and omit multiple sets of features at varying levels of granularity. Recognizing that these issues often degrade performance, we propose a novel Spatial View Attention Convolutional Neural Network (SVA-CNN). SVA-CNN leverages a carefully designed combination of multi-view representation learning, heterogeneous attention mechanisms, and CNN-based operations to automatically extract and weight fine-grained representations at multiple granularities. Rigorous evaluation against prevailing text classification methods on five large-scale benchmark datasets indicates that SVA-CNN outperforms extant deep learning-based classification methods in both performance and training time for document classification, sentiment analysis, and thematic identification applications. To facilitate reproducibility and extension, SVA-CNN's source code is available via GitHub. (c) 2020 Elsevier Inc. All rights reserved.
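The abstract outlines the architecture at a high level: per-view spatial attention weights token positions within each view, CNN operations extract local features, and view attention fuses the views before classification. As a rough illustration only, the following PyTorch sketch wires those three pieces together; every module name, dimension, and connection choice is an assumption made for exposition, not the authors' released implementation (see their GitHub repository for that).

```python
# Minimal sketch of fusing spatial attention and view attention over
# multi-view text representations, in the spirit of the SVA-CNN abstract.
# All names, dimensions, and wiring are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Re-weights token positions within a single view."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                              # x: (batch, seq_len, dim)
        weights = F.softmax(self.score(x), dim=1)      # (batch, seq_len, 1)
        return x * weights                             # attention-weighted positions

class ViewAttention(nn.Module):
    """Weights whole views against each other and fuses them."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, views):                          # views: (batch, n_views, dim)
        weights = F.softmax(self.score(views), dim=1)  # (batch, n_views, 1)
        return (views * weights).sum(dim=1)            # fused (batch, dim)

class ToySVACNN(nn.Module):
    def __init__(self, vocab=10000, dim=128, n_views=3, n_classes=5):
        super().__init__()
        # One embedding table per view stands in for "multi-view representation".
        self.embeds = nn.ModuleList([nn.Embedding(vocab, dim) for _ in range(n_views)])
        self.spatial = SpatialAttention(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.view_attn = ViewAttention(dim)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, tokens):                         # tokens: (batch, seq_len)
        pooled_views = []
        for embed in self.embeds:
            v = self.spatial(embed(tokens))            # spatial attention per view
            v = F.relu(self.conv(v.transpose(1, 2)))   # CNN over positions
            pooled_views.append(v.max(dim=2).values)   # max-pool to (batch, dim)
        views = torch.stack(pooled_views, dim=1)       # (batch, n_views, dim)
        return self.classifier(self.view_attn(views))  # fuse views, then classify

# Two toy documents of 20 token ids each -> logits of shape (2, 5).
logits = ToySVACNN()(torch.randint(0, 10000, (2, 20)))
```

Note the ordering this sketch assumes: spatial attention and convolution operate inside each view in series, while the views themselves run in parallel and are merged by view attention, echoing the "series and parallel connection" keyword above.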
Funding projects: National Key Research and Development Program of China [2019YFB2102200]; Ministry of Health of China [2017ZX10303401-002]; Ministry of Health of China [2017YFC1200302]; Natural Science Foundation of China [61902320]; Natural Science Foundation of China [71472175]; Natural Science Foundation of China [71602184]; Natural Science Foundation of China [71621002]; National Science Foundation [CNS-1850362]; National Science Foundation [OAC-1917117]; Fundamental Research Funds for the Central Universities [31020180QD140]
WOS research area: Computer Science
Language: English
Publisher: ELSEVIER SCIENCE INC
WOS accession number: WOS:000596057300017
Funding organizations: National Key Research and Development Program of China; Ministry of Health of China; Natural Science Foundation of China; National Science Foundation; Fundamental Research Funds for the Central Universities
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/42817
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Research Center for Internet Big Data and Security Informatics
Author affiliations:
1. Indiana Univ, Kelley Sch Business, Operat & Decis Technol Dept, Bloomington, IN 47405 USA
2. Northwestern Polytech Univ, Sch Comp Sci, Xian, Shaanxi, Peoples R China
3. Univ Chinese Acad Sci, Beijing, Peoples R China
4. Chinese Acad Sci, State Key Lab Management & Control Complex Syst, Inst Automat, Beijing, Peoples R China
Recommended citation formats:
GB/T 7714: Liang, Yunji, Li, Huihui, Guo, Bin, et al. Fusion of heterogeneous attention mechanisms in multi-view convolutional neural network for text classification[J]. INFORMATION SCIENCES, 2021, 548: 295-312.
APA: Liang, Yunji, Li, Huihui, Guo, Bin, Yu, Zhiwen, Zheng, Xiaolong, Samtani, Sagar, & Zeng, Daniel D. (2021). Fusion of heterogeneous attention mechanisms in multi-view convolutional neural network for text classification. INFORMATION SCIENCES, 548, 295-312.
MLA: Liang, Yunji, et al. "Fusion of heterogeneous attention mechanisms in multi-view convolutional neural network for text classification." INFORMATION SCIENCES 548 (2021): 295-312.