Prostate MR image segmentation network with edge information enhancement
Abstract
Objective Accurate segmentation of prostate images is essential for assessing patient health and formulating treatment plans. However, the traditional U-Net model suffers from overfitting and loss of edge information in prostate MR (magnetic resonance) image segmentation. To address these problems, an improved 2D U-Net segmentation model is proposed to enhance edge information, reduce the influence of noise, and thereby improve prostate segmentation. Method To alleviate overfitting, the new model modifies the standard U-Net architecture by replacing ordinary convolutions with depthwise separable convolutions (sketched below) and redesigning the encoder and decoder structures, thereby reducing the number of model parameters. To preserve edge information in the segmentation results, the new model refines the U-Net decoder features with an ECA (efficient channel attention) mechanism to amplify and retain information on small-scale targets, and it introduces an edge information module and an edge information pyramid module to recover and enhance edge information, mitigating the edge degradation caused by repeated downsampling and the semantic gap between encoder and decoder features. An atrous spatial pyramid pooling (ASPP) module resamples the features to enlarge the receptive field and suppress feature noise. Result The effectiveness of the model is verified on the PROMISE12 (prostate MR image segmentation 2012) dataset and compared with six U-Net-based image segmentation methods. Experiments show that the segmentation results improve on the Dice coefficient (DC), HD95 (95% Hausdorff distance), recall, Jaccard coefficient, and accuracy: the DC is 8.87% higher than that of U-Net, and the HD95 is 12.04 mm and 3.03 mm lower than those of U-Net++ and Attention U-Net, respectively. Conclusion A prostate MR image segmentation network based on edge information enhancement (attention mechanism and marginal information fusion U-Net, AIM-U-Net) is proposed. The segmentation maps it produces contain rich edge and spatial information, and both its subjective results and objective evaluation metrics are superior to those of comparable methods, helping to improve the accuracy of clinical diagnosis.
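The replacement of ordinary convolutions with depthwise separable convolutions is the main parameter-saving device mentioned above. The following is a minimal PyTorch sketch of such a block, assuming a 3 × 3 depthwise stage followed by a 1 × 1 pointwise stage with batch normalization and ReLU; the layer names and hyperparameters are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise stage: one filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        # Pointwise stage: 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))


# A 3x3 depthwise-separable block with 64 -> 128 channels uses
# 64*3*3 + 64*128 = 8,768 weights, versus 64*128*3*3 = 73,728 for a
# standard 3x3 convolution, which is where the parameter saving comes from.
x = torch.randn(1, 64, 128, 128)
y = DepthwiseSeparableConv(64, 128)(x)   # -> (1, 128, 128, 128)
```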
Keywords
Prostate MR image segmentation network with edge information enhancement
Zhang Die1, Huang Hui1, Ma Yan1, Huang Bingcang2, Lu Weiping2 (1. College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201400, China; 2. Department of Radiology, Gongli Hospital of Shanghai Pudong New Area, Shanghai 200120, China)
Abstract
Objective Prostate cancer, an epithelial malignancy arising in the prostate, is one of the most common malignant diseases. Early detection of a potentially cancerous prostate is important for reducing prostate cancer mortality. Magnetic resonance imaging (MRI) is one of the most widely used imaging modalities for examining the prostate in clinical practice and is commonly used for the detection, localization, and segmentation of prostate cancer; it also supports the formulation of suitable treatment plans for patients and postoperative follow-up. In computer-aided diagnosis, extracting the prostate region from the image and computing the corresponding characteristics are often necessary for physiological analysis and pathological research, assisting clinicians in making accurate judgments. Current methods for MRI prostate segmentation can be divided into two categories: traditional methods and deep-learning-based methods. Traditional segmentation methods analyze features extracted from the image using image-processing knowledge; their effectiveness depends on the quality of the extracted features, and they sometimes require manual interaction. In recent years, with the continuous development of computer technology, deep learning has been widely applied to image segmentation. Unlike visible-light images, medical images have special characteristics: a large grayscale range, unclear boundaries, and relatively stable spatial distributions of human organs. Considering these characteristics, U-Net, a fully convolutional neural network model for medical image segmentation, was first proposed in 2015. Compared with other networks, U-Net has clear advantages for medical image segmentation, but it still has weaknesses that must be overcome. On the one hand, medical image datasets are not large, whereas the traditional U-Net model has numerous parameters, which easily leads to overfitting. On the other hand, edge information is lost during feature extraction, small-scale information about the target object is difficult to preserve, and the feature maps obtained through U-Net's skip connections usually contain noise, resulting in low segmentation accuracy. To solve these problems, this paper proposes an improved 2D U-Net prostate segmentation model, AIM-U-Net, which enhances the edge information between tissues and organs and reduces the influence of image noise, thereby improving prostate segmentation. Method To address overfitting, we redesign the encoder and decoder structures of the original U-Net and replace ordinary convolutions with depthwise separable convolutions, which effectively reduce the number of network parameters and thereby improve the computational efficiency, generalization ability, and accuracy of the model. In addition, we refine the decoder features through the efficient channel attention (ECA) module to amplify and retain information on small-scale targets.
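For reference, ECA is commonly implemented as global average pooling followed by a 1D convolution across channels and a sigmoid gate, without channel dimensionality reduction. The sketch below follows that common formulation; the adaptive kernel-size rule and the exact placement of the module on the decoder features are assumptions rather than the authors' verified design.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: channel weights from a 1D convolution
    over globally pooled channel descriptors (no dimensionality reduction)."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Adaptive kernel size, as in the common ECA-Net formulation
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (B, C, H, W) -> per-channel descriptor of shape (B, C, 1, 1)
        y = self.avg_pool(x)
        # Treat the channels as a sequence and apply the 1D convolution
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # Re-weight the decoder feature map channel by channel
        return x * self.sigmoid(y)


feat = torch.randn(1, 256, 32, 32)    # e.g., a decoder feature map
refined = ECA(256)(feat)              # same shape, channels re-weighted
```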
Moreover, edge information can provide fine-grained constraints that guide feature extraction during segmentation. The features of shallow coding units retain sufficient edge information because of their high resolution, whereas the features extracted by the deep coding units capture global information. Therefore, we design an edge information module (EIM) that integrates the shallow encoder features with high-level semantic information to obtain and enhance edge information, so that the resulting feature map carries both rich edge information and high-level semantics. The EIM has two main functions: first, it provides edge information to guide the segmentation process along the decoding path; second, the edge detection loss of the early convolutional layers is supervised through a deep supervision mechanism. Moreover, the features extracted by different modules have their own advantages. The features of the deep coding units capture the global, high-level discriminative information of the prostate, which is extremely helpful for segmenting small lesions; the multi-scale features of the decoding units carry rich spatial semantic information that improves segmentation accuracy; and the fused features produced by the EIM carry rich edge information and high-level semantics. Therefore, we design an edge information pyramid module (EIPM) that comprehensively exploits these sources by fusing the edge information, the deep features of the coding units, and the multi-scale features of the decoding units, so that the segmentation model can understand the image more comprehensively and improve the accuracy and robustness of segmentation. The EIPM guides the segmentation process along the decoding path by fusing multi-scale information and supervises the region segmentation losses of the decoder's convolutional layers through the deep supervision mechanism. In neural network segmentation tasks, the feature maps obtained by feature fusion usually contain noise, which decreases segmentation accuracy. To solve this problem, we use atrous spatial pyramid pooling (ASPP) to process the enhanced edge feature map produced by the EIPM, and the resulting multi-scale features are concatenated. ASPP resamples the fused feature map through dilated convolutions with different dilation rates, capturing multi-scale context, suppressing the noise in the multi-scale features, and yielding a more accurate representation of the prostate. The segmentation result is then obtained through a 1 × 1 convolution with one output channel, whose spatial dimensions match those of the input image. Finally, to accelerate network convergence, we design a deep supervision mechanism realized through 1 × 1 convolutions and activation functions. For the loss function of the whole model, we use a hybrid of Dice loss and cross-entropy loss; the total loss comprises the final segmentation loss, the edge segmentation loss, and the four region segmentation losses.
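As a point of reference, a hybrid Dice plus cross-entropy term for a single-channel sigmoid output can be written as below. The equal weighting of the two terms and the use of binary cross-entropy are assumptions; in the full model, a term of this kind would be applied to the final output, the edge output, and the four deeply supervised region outputs and then summed.

```python
import torch
import torch.nn as nn

class DiceCELoss(nn.Module):
    """Hybrid of soft Dice loss and (binary) cross-entropy loss for a
    single-channel segmentation logit map."""
    def __init__(self, dice_weight=1.0, ce_weight=1.0, eps=1e-6):
        super().__init__()
        self.dice_weight = dice_weight
        self.ce_weight = ce_weight
        self.eps = eps
        self.ce = nn.BCEWithLogitsLoss()

    def forward(self, logits, target):
        # logits, target: (B, 1, H, W); target values in {0, 1}
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(1, 2, 3))
        union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = (2 * inter + self.eps) / (union + self.eps)
        dice_loss = 1.0 - dice.mean()
        return self.dice_weight * dice_loss + self.ce_weight * self.ce(logits, target.float())


criterion = DiceCELoss()
logits = torch.randn(2, 1, 64, 64)                   # predicted logit map
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()      # binary ground-truth mask
loss = criterion(logits, mask)
```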
Result We use the PROMISE12 dataset to verify the effectiveness of the model and compare the results with those of six other U-Net-based medical image segmentation methods. The experimental results show that the segmented images improve markedly in Dice coefficient (DC), 95% Hausdorff distance (HD95), recall, Jaccard coefficient (Jac), and accuracy. The DC is 8.87% higher than that of U-Net, and the HD95 is 12.04 mm and 3.03 mm lower than those of U-Net++ and Attention U-Net, respectively. Conclusion The edges of the prostate segmented by our proposed AIM-U-Net are more refined than those produced by other methods. By utilizing the EIM and the EIPM, AIM-U-Net extracts more edge details of the prostate and effectively suppresses similar background information and noise surrounding the prostate.
Keywords