Medical lesion segmentation by combining multimodal images with modality weighted UNet
Zhu, Xiner1; Wu, Yichao1; Hu, Haoji1; Zhuang, Xianwei1; Yao, Jincao2,3; Ou, Di2,3; Li, Wei2,3; Song, Mei2,3; Feng, Na2,3; Xu, Dong2,3
Journal | MEDICAL PHYSICS
Publication date | 2022-04-07
Keywords | attention; deep neural networks; medical image segmentation; multimodality fusion
ISSN | 0094-2405
DOI | 10.1002/mp.15610 |
Corresponding authors | Hu, Haoji (haoji_hu@zju.edu.cn); Xu, Dong (xudong@zjcc.org.cn)
Abstract | Purpose: Automatic segmentation of medical lesions is a prerequisite for efficient clinical analysis. Segmentation algorithms for multimodal medical images have received much attention in recent years, and different strategies for multimodal combination (or fusion), such as probability theory, fuzzy models, belief functions, and deep neural networks, have been developed. In this paper, we propose the modality weighted UNet (MW-UNet) and an attention-based fusion method to combine multimodal images for medical lesion segmentation. Methods: MW-UNet is a multimodal fusion method based on UNet, but it uses shallower layers and fewer feature-map channels to reduce the number of network parameters, together with a new multimodal fusion mechanism called fusion attention, which combines feature maps in intermediate layers through a weighted sum rule. During training, all the fusion weight parameters are updated through backpropagation like the other parameters in the network. We also incorporate residual blocks into MW-UNet to further improve segmentation performance. The agreement between the automatic multimodal lesion segmentations and the manual contours was quantified by (1) five metrics: Dice, 95% Hausdorff distance (HD95), volumetric overlap error (VOE), relative volume difference (RVD), and mean intersection over union (mIoU); and (2) the number of parameters and FLOPs, to measure the complexity of the network. Results: The proposed method is verified on ZJCHD, a data set of contrast-enhanced computed tomography (CECT) scans for liver lesion segmentation collected at the Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China. For accuracy evaluation, we use 120 patients with liver lesions from ZJCHD, of whom 100 are used for fourfold cross-validation (CV) and 20 for a hold-out (HO) test.
The mean Dice was 90.55 ± 14.44% and 89.31 ± 19.07% for the HO and CV tests, respectively. The corresponding HD95, VOE, RVD, and mIoU of the two tests are 1.95 ± 1.83 and 2.67 ± 3.35 mm, 13.11 ± 15.83% and 13.13 ± 18.52%, 12.20 ± 18.20% and 13.00 ± 21.82%, and 83.79 ± 15.83% and 82.35 ± 20.03%, respectively. Our method has 4.04 M parameters and 18.36 G FLOPs. Conclusions: The results show that our method performs well on multimodal liver lesion segmentation. It can be easily extended to other multimodal data sets and other networks for multimodal fusion, and it has the potential to provide doctors with multimodal annotations and to assist them in clinical diagnosis.
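The abstract describes the core fusion step only at a high level: per-modality intermediate feature maps are combined by a weighted sum whose weights are learned by backpropagation. As a rough illustration under assumed details (the function name `fuse_modalities` and the softmax normalization of the raw weights are assumptions, not the authors' code), the weighted-sum rule might look like this NumPy sketch:

```python
import numpy as np

def softmax(raw_weights):
    # Normalize raw learnable scalars so modality contributions sum to 1.
    e = np.exp(raw_weights - np.max(raw_weights))
    return e / e.sum()

def fuse_modalities(feature_maps, raw_weights):
    """Weighted-sum fusion of per-modality feature maps.

    feature_maps: list of arrays, one per modality, all the same shape
                  (e.g. C x H x W at some intermediate layer).
    raw_weights:  1-D array of learnable scalars, one per modality;
                  during training these would be updated by backpropagation
                  together with the rest of the network.
    """
    w = softmax(raw_weights)
    fused = np.zeros_like(feature_maps[0], dtype=float)
    for wi, fm in zip(w, feature_maps):
        fused += wi * fm
    return fused

# Two toy "modality" feature maps (e.g. two CECT phases).
a = np.ones((4, 8, 8))
b = 3 * np.ones((4, 8, 8))
fused = fuse_modalities([a, b], np.array([0.0, 0.0]))
# Equal raw weights give softmax weights of 0.5 each,
# so the fused map is the element-wise mean (all values 2.0).
```

In the paper this sum is further modulated by the proposed fusion attention; the sketch shows only the plain weighted-sum baseline.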
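The overlap metrics reported above follow standard definitions: Dice = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B| (mIoU averages IoU over cases or classes, and VOE is 1 − IoU). A minimal sketch for binary lesion masks, with `dice_and_iou` being an illustrative name rather than code from the paper:

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and IoU for two binary segmentation masks.

    pred, gt: arrays of the same shape; nonzero entries mark lesion voxels.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

# Toy example: a 4x4 predicted square shifted by one voxel from the
# 4x4 ground-truth square, so 9 of 16 voxels overlap.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True
d, iou = dice_and_iou(pred, gt)
# d = 2*9/32 = 0.5625; iou = 9/23; VOE would be 1 - 9/23.
```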
Funding projects | Zhejiang Provincial Key R&D Program of China [2021C01119]; University Cancer Foundation via the Sister Institution Network Fund at the University of Texas MD Anderson Cancer Center; National Natural Science Foundation of China [82071946]; National Natural Science Foundation of China [U21B2004]; Zhejiang Provincial Natural Science Foundation of China [LSD19H180001]; Zhejiang Provincial Natural Science Foundation of China [LZY21F030001]; Project of Zhejiang Medical and Health Science and Technology Plan [2021KY099]; Project of Zhejiang Medical and Health Science and Technology Plan [2022KY110]
WOS keywords | CONVOLUTIONAL NEURAL-NETWORKS
WOS research area | Radiology, Nuclear Medicine & Medical Imaging
Language | English
Publisher | WILEY
WOS accession number | WOS:000778973500001
Funding organizations | Zhejiang Provincial Key R&D Program of China; University Cancer Foundation via the Sister Institution Network Fund at the University of Texas MD Anderson Cancer Center; National Natural Science Foundation of China; Zhejiang Provincial Natural Science Foundation of China; Project of Zhejiang Medical and Health Science and Technology Plan
Content type | Journal article
Source URL | [http://ir.hfcas.ac.cn:8080/handle/334002/128647]
Collection | Hefei Institutes of Physical Science, Chinese Academy of Sciences
Affiliations | 1. Zhejiang Univ, Coll Informat Sci & Elect Engn, Yuquan Campus, 38 Zheda Rd, Hangzhou 310027, Zhejiang, Peoples R China; 2. Univ Chinese Acad Sci, Canc Hosp (Zhejiang Canc Hosp), 1 East Banshan Rd, Hangzhou 310022, Peoples R China; 3. Chinese Acad Sci, Inst Basic Med & Canc (IBMC), Hangzhou, Peoples R China
Recommended citation (GB/T 7714) | Zhu, Xiner, Wu, Yichao, Hu, Haoji, et al. Medical lesion segmentation by combining multimodal images with modality weighted UNet[J]. MEDICAL PHYSICS, 2022.
APA | Zhu, Xiner, Wu, Yichao, Hu, Haoji, Zhuang, Xianwei, Yao, Jincao, ... & Xu, Dong. (2022). Medical lesion segmentation by combining multimodal images with modality weighted UNet. MEDICAL PHYSICS.
MLA | Zhu, Xiner, et al. "Medical lesion segmentation by combining multimodal images with modality weighted UNet". MEDICAL PHYSICS (2022).