Learnable Gated Convolutional Neural Network for Semantic Segmentation in Remote-Sensing Images
Guo, Shichen2,3; Jin, Qizhao1; Wang, Hongzhen1; Wang, Xuezhi2; Wang, Yangang2; Xiang, Shiming1
Journal: REMOTE SENSING
Publication Date: 2019-08-01
Volume: 11; Issue: 16; Pages: 22
Keywords: semantic segmentation; CNN; deep learning; remote sensing; gate function; multiscale feature fusion
DOI: 10.3390/rs11161922
Corresponding Author: Wang, Xuezhi (wxz@cnic.cn)
Abstract: Semantic segmentation in high-resolution remote-sensing (RS) images is a fundamental task for RS-based urban understanding and planning. However, the variety of artificial objects in urban areas makes this task quite challenging. Recently, the use of Deep Convolutional Neural Networks (DCNNs) with multiscale information fusion has demonstrated great potential for enhancing performance. Technically, however, existing fusions are usually implemented by summing or concatenating feature maps in a straightforward way; few works consider the spatial importance of pixels for global-to-local context-information aggregation. This paper proposes a Learnable-Gated CNN (L-GCNN) to address this issue. Methodologically, the Taylor expansion of the information-entropy function is first parameterized to design the gate function, which generates pixelwise weights for coarse-to-fine refinement in the L-GCNN. Accordingly, a Parameterized Gate Module (PGM) is designed to achieve this goal. Then, the single PGM and its densely connected extension are embedded into different levels of the encoder in the L-GCNN to help identify the discriminative feature maps at different scales. With these designs, the L-GCNN is finally organized as a self-cascaded, end-to-end architecture that sequentially aggregates context information for fine segmentation. The proposed model was evaluated on two challenging public benchmarks, the ISPRS 2D semantic segmentation challenge Potsdam dataset and the Massachusetts building dataset. The experimental results demonstrate that the proposed method achieves significant improvement over several related segmentation networks, including FCN, SegNet, RefineNet, PSPNet, DeepLab and GSN. For example, on the Potsdam dataset, our method achieved a 93.65% F1 score and an 88.06% IoU score for the segmentation of tiny cars in high-resolution RS images. In conclusion, the proposed model shows potential for segmenting buildings, impervious surfaces, low vegetation, trees and cars in urban RS images, objects which vary largely in size and have confusing appearances.
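As a rough illustration of the gating idea described in the abstract, the sketch below shows a pixelwise gate module in PyTorch: each pixel of a feature map is scored, a learnable truncated (Taylor-style) polynomial is evaluated on the score, and the resulting gate weights fuse coarse and fine feature maps. This is a minimal sketch based only on the abstract; the module name, polynomial order, score layer and fusion rule are assumptions, not the authors' released implementation of the PGM.

```python
import torch
import torch.nn as nn

class ParameterizedGateModule(nn.Module):
    """Pixelwise gate whose shape is a learnable truncated polynomial (assumption)."""

    def __init__(self, channels: int, order: int = 3):
        super().__init__()
        # 1x1 convolution collapses the feature map to one score per pixel in (0, 1).
        self.score = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        # Learnable coefficients a_0..a_K of the truncated (Taylor-style) polynomial.
        self.coeffs = nn.Parameter(0.1 * torch.randn(order + 1))

    def forward(self, coarse: torch.Tensor, fine: torch.Tensor) -> torch.Tensor:
        p = self.score(fine)                                   # (N, 1, H, W)
        # g(p) = sigmoid(sum_k a_k * p^k), evaluated per pixel.
        powers = torch.stack([p ** k for k in range(self.coeffs.numel())], dim=0)
        gate = torch.sigmoid((self.coeffs.view(-1, 1, 1, 1, 1) * powers).sum(dim=0))
        # Gated coarse-to-fine fusion: high gate values keep the fine-scale response.
        return gate * fine + (1.0 - gate) * coarse


if __name__ == "__main__":
    pgm = ParameterizedGateModule(channels=256)
    coarse = torch.randn(2, 256, 64, 64)    # e.g., upsampled deep features
    fine = torch.randn(2, 256, 64, 64)      # e.g., shallow encoder features
    print(pgm(coarse, fine).shape)          # torch.Size([2, 256, 64, 64])
```

In the paper, such modules are embedded at several encoder levels (singly or densely connected) so that context is aggregated sequentially from coarse to fine; the sketch above only shows one fusion step under the stated assumptions.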
Funding Projects: Strategic Priority Research Program of the Chinese Academy of Sciences [XDA19020103]; National Key Research and Development Project [2017YFB0202202]; National Natural Science Foundation of China [91646207]
WOS Keywords: CLASSIFICATION
WOS Research Area: Remote Sensing
Language: English
Publisher: MDPI
WOS Record Number: WOS:000484387600085
Funding Organizations: Strategic Priority Research Program of the Chinese Academy of Sciences; National Key Research and Development Project; National Natural Science Foundation of China
Content Type: Journal Article
Source URL: http://ir.ia.ac.cn/handle/173211/27326
Collection: Institute of Automation, Chinese Academy of Sciences
Corresponding Author: Wang, Xuezhi
Author Affiliations:
1.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
2.Chinese Acad Sci, Comp Network Informat Ctr, 4 Zhongguancun Nansi St, Beijing 100190, Peoples R China
3.Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China
Recommended Citation Formats
GB/T 7714
Guo, Shichen,Jin, Qizhao,Wang, Hongzhen,et al. Learnable Gated Convolutional Neural Network for Semantic Segmentation in Remote-Sensing Images[J]. REMOTE SENSING,2019,11(16):22.
APA Guo, Shichen, Jin, Qizhao, Wang, Hongzhen, Wang, Xuezhi, Wang, Yangang, & Xiang, Shiming. (2019). Learnable Gated Convolutional Neural Network for Semantic Segmentation in Remote-Sensing Images. REMOTE SENSING, 11(16), 22.
MLA Guo, Shichen, et al. "Learnable Gated Convolutional Neural Network for Semantic Segmentation in Remote-Sensing Images." REMOTE SENSING 11.16 (2019): 22.