SAU-Net: Medical Image Segmentation Method Based on U-Net and Self-Attention
Abstract
Biomedical image segmentation based on deep learning can better help doctors make accurate diagnoses owing to its improved accuracy. Current mainstream U-Net-based segmentation models extract local features through multi-layer convolutions, which lack global information and make the segmentation overly local, producing errors. This paper improves the U-Net model through a self-attention mechanism and a decomposed-convolution strategy and proposes a new deep segmentation network, SAU-Net. The model uses a self-attention module to add global information and replaces the concatenation structure of the original U-Net with pixel-wise addition, reducing the dimensionality and the computational cost. A fast and concise decomposed convolution method is also proposed, which factorizes a traditional convolution into two one-dimensional convolution paths and adds a residual connection to strengthen context information. Experimental results on the BRATS and Kaggle brain tumor datasets show that SAU-Net achieves better performance in both parameter count and Dice coefficient.
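The architectural changes described in the abstract can be made concrete with a short PyTorch-style sketch: a convolution factorized into a k x 1 pass followed by a 1 x k pass with a residual connection, a non-local-style spatial self-attention block for global context, and skip fusion by pixel-wise addition instead of concatenation. This is a minimal illustration under assumed design choices (kernel size, normalization, activation, attention formulation); the names DecomposedConvBlock, SelfAttention2d, and fuse_skip are hypothetical and do not come from the paper.

    # Illustrative sketch only; not the authors' implementation.
    import torch
    import torch.nn as nn


    class DecomposedConvBlock(nn.Module):
        """Replace a k x k convolution with a k x 1 then a 1 x k convolution,
        plus a residual connection (approximation of the described idea)."""

        def __init__(self, channels: int, k: int = 3):
            super().__init__()
            pad = k // 2
            self.conv_h = nn.Conv2d(channels, channels, kernel_size=(k, 1), padding=(pad, 0))
            self.conv_w = nn.Conv2d(channels, channels, kernel_size=(1, k), padding=(0, pad))
            self.bn = nn.BatchNorm2d(channels)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = self.conv_w(self.conv_h(x))   # factorized k x k receptive field
            return self.act(self.bn(out) + x)   # residual connection strengthens context


    class SelfAttention2d(nn.Module):
        """Non-local-style spatial self-attention (a common formulation, assumed here)."""

        def __init__(self, channels: int):
            super().__init__()
            reduced = max(channels // 8, 1)
            self.query = nn.Conv2d(channels, reduced, kernel_size=1)
            self.key = nn.Conv2d(channels, reduced, kernel_size=1)
            self.value = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual gate

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)          # B x HW x C'
            k = self.key(x).flatten(2)                            # B x C' x HW
            attn = torch.softmax(q @ k, dim=-1)                   # B x HW x HW affinities
            v = self.value(x).flatten(2).transpose(1, 2)          # B x HW x C
            out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)  # aggregate global context
            return self.gamma * out + x


    def fuse_skip(encoder_feat: torch.Tensor, decoder_feat: torch.Tensor) -> torch.Tensor:
        """Pixel-wise addition of same-shaped feature maps in place of concatenation."""
        return encoder_feat + decoder_feat


    if __name__ == "__main__":
        x = torch.randn(1, 64, 56, 56)
        y = SelfAttention2d(64)(DecomposedConvBlock(64)(x))
        print(fuse_skip(x, y).shape)  # torch.Size([1, 64, 56, 56])

In this sketch, pixel-wise addition keeps the fused feature at the encoder's channel width instead of doubling it as concatenation would, so the following convolutions see fewer input channels; this is where the reduction in dimensionality and computation mentioned in the abstract comes from.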
Source: 电子学报 (Acta Electronica Sinica), 2022, 50(10): 2433-2442 [CSCD Core Library]
DOI: 10.12263/DZXB.20200984
Keywords: self-attention; decomposed convolution; medical image segmentation; deep learning; U-Net
Address: College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, Shandong, China
Language: Chinese
Document Type: Research article
ISSN: 0372-2112
Subject: Automation and Computer Technology
Funding: Shandong Provincial Youth Innovation Talent Introduction and Cultivation Program for Higher Education Institutions, "Artificial Intelligence and Medical Image Analysis Innovation Team" project
Accession Number: CSCD:7318662