Surgical instrument segmentation for endoscopic vision with data fusion of CNN prediction and kinematic pose
Fangbo Qin1,2; Yangming Li3; Yun-Hsuan Su4; De Xu1,2; Blake Hannaford4
2019-08
Conference date | 2019-5-20
Conference venue | Montreal, Canada
DOI | 10.1109/ICRA.2019.8794122 |
Abstract | Real-time and robust surgical instrument segmentation is an important issue in endoscopic vision. We propose an instrument segmentation method that fuses convolutional neural network (CNN) prediction with kinematic pose information. First, the CNN model ToolNet-C is designed, which cascades a convolutional feature extractor trained on numerous unlabeled images with a pixel-wise segmentor trained on a few labeled images. Second, the silhouette of the instrument body is projected onto the endoscopic image based on the measured kinematic pose. Third, a particle filter with a shape-matching likelihood and weight suppression is proposed for data fusion, whose estimate refines the kinematic pose. The refined pose determines an accurate silhouette mask, which is the final segmentation output. Experiments are conducted with a surgical navigation system, several animal-tissue backgrounds, and a debrider instrument.
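The abstract's third step, a particle filter whose shape-matching likelihood compares the pose-projected silhouette against the CNN mask, can be sketched in miniature. This is only an illustrative stand-in, not the paper's implementation: the paper does not specify its likelihood function or weight-suppression rule here, so this sketch assumes IoU overlap as the shape-matching score, a simple cap on dominant weights as "weight suppression", a 2-D translational pose, and an idealized noise-free CNN mask. The names `render_silhouette`, `shape_matching_likelihood`, and `particle_filter_refine` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_silhouette(pose, size=32, half=5):
    """Toy stand-in for silhouette projection: render a square instrument
    'body' centered at the (x, y) pose onto a size x size image grid."""
    ys, xs = np.mgrid[0:size, 0:size]
    return (np.abs(xs - pose[0]) <= half) & (np.abs(ys - pose[1]) <= half)

def shape_matching_likelihood(pose, cnn_mask):
    """Score a pose hypothesis by the IoU overlap (an assumed likelihood
    form) between its projected silhouette and the CNN-predicted mask."""
    sil = render_silhouette(pose)
    inter = np.logical_and(sil, cnn_mask).sum()
    union = np.logical_or(sil, cnn_mask).sum()
    return inter / union if union > 0 else 1e-6

def particle_filter_refine(kin_pose, cnn_mask,
                           n_particles=100, noise_std=2.0, n_iters=3):
    """Refine a biased kinematic pose by particle filtering against the
    CNN mask; returns the weighted-mean pose estimate."""
    # Initialize particles around the (possibly biased) kinematic pose.
    particles = kin_pose + noise_std * rng.standard_normal(
        (n_particles, kin_pose.size))
    est = kin_pose
    for _ in range(n_iters):
        w = np.array([shape_matching_likelihood(p, cnn_mask)
                      for p in particles])
        # "Weight suppression" (assumed rule): cap dominant weights so a
        # few particles cannot collapse the distribution prematurely.
        w = np.minimum(w, 3.0 * w.mean() + 1e-12)
        w = w / w.sum()
        est = (w[:, None] * particles).sum(axis=0)
        # Resample proportionally to weight, then re-diffuse survivors.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + 0.5 * noise_std * rng.standard_normal(
            particles.shape)
    return est
```

In this toy setup, a CNN mask rendered at a true pose of (16, 16) pulls a kinematic pose reading of (19, 20) back toward the true pose over a few filter iterations, and the refined pose in turn renders the final silhouette mask, mirroring the fusion pipeline the abstract describes.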
Proceedings publisher | IEEE
Language | English
Content type | Conference paper
Source URL | http://ir.ia.ac.cn/handle/173211/25772
Collection | Research Center for Precision Sensing and Control
Corresponding author | Blake Hannaford
Author affiliations | 1. Institute of Automation, Chinese Academy of Sciences; 2. University of Chinese Academy of Sciences; 3. Rochester Institute of Technology, Rochester, NY 14623, USA; 4. University of Washington, Seattle, WA 98195-2500, USA
Recommended citation (GB/T 7714) | Fangbo Qin, Yangming Li, Yun-Hsuan Su, et al. Surgical instrument segmentation for endoscopic vision with data fusion of CNN prediction and kinematic pose[C]. In: 2019 IEEE International Conference on Robotics and Automation (ICRA). Montreal, Canada, 2019-5-20.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.