Brain Tumor MRI Image Segmentation Fusing Cross-Stage Deep Learning
Abstract
Objective As a non-invasive soft-tissue contrast imaging modality, magnetic resonance imaging (MRI) provides valuable information on the shape, size, and location of brain tumors; it is the primary examination method for brain tumor patients and plays an important role in brain tumor segmentation. The complex and variable morphology of brain tumors, their blurred boundaries, low contrast, and complicated sample gradients make high-precision brain tumor MRI image segmentation very challenging. At present, segmentation mainly relies on manual delineation by professional physicians, which is time-consuming and poorly reproducible. This paper therefore proposes an improved U-Net based model, the cross stage partial U-Net (CSPU-Net) brain tumor segmentation network, to achieve high-precision brain tumor MRI image segmentation. Method CSPU-Net adds two kinds of cross stage partial (CSP) structures to the up-sampling and down-sampling paths of the U-Net to extract image features, and combines the general Dice loss (GDL) and weighted cross entropy (WCE) loss functions to address the class imbalance of the training samples. Result Experiments were conducted on the BraTS (brain tumor segmentation) 2018 and BraTS 2019 datasets. On BraTS 2018, the segmentation accuracies for the whole tumor, tumor core, and enhancing tumor are 87.9%, 80.6%, and 77.3%, respectively, improvements of 0.80%, 1.60%, and 2.20% over ResU-Net, an improved variant of the traditional U-Net. On BraTS 2019, the corresponding accuracies are 87.8%, 77.9%, and 70.7%, improvements of 0.70%, 1.30%, and 1.40% over ResU-Net. Conclusion By increasing the number of gradient paths and reducing information loss, the proposed cross stage partial structure effectively improves brain tumor segmentation accuracy; the experimental results demonstrate the module's effectiveness for the brain tumor segmentation task.
Keywords
Brain tumor MRI image segmentation fusing cross-stage deep learning
Xia Feng1, Shao Haijian1,2, Deng Xing1,2 (1. School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang 212003, China; 2. Key Laboratory of Complex Engineering System Measurement and Control, Ministry of Education, School of Automation, Southeast University, Nanjing 210009, China)
Abstract
Objective Human brain tumors are masses of mutated cells in the brain or skull. They can be classified as benign or malignant according to their growth characteristics and their influence on the human body. Gliomas are among the most frequent malignant brain tumors, accounting for approximately 40% to 50% of all brain tumors. Depending on the degree of invasion, a glioma is classified as high-grade glioma (HGG) or low-grade glioma (LGG). LGG is a well-differentiated glioma with a favorable prognosis, whereas HGG is a poorly differentiated glioma with a poor prognosis. Gliomas with different degrees of differentiation present with varying degrees of peritumoral edema, edema types, and necrosis, and the boundary between a glioma and normal tissue is often blurred. It is therefore difficult to identify the extent of the lesion and the surgical area, which significantly affects surgical quality and patient prognosis. As a non-invasive imaging tool with clear soft-tissue contrast, magnetic resonance imaging (MRI) can provide vital information on the shape, size, and location of brain tumors. High-precision brain tumor MRI image segmentation is challenging because of the complicated and variable morphology, fuzzy borders, low contrast, and complicated sample gradients of brain tumors, while manual segmentation is time-consuming and inconsistent. The Brain Tumor Segmentation (BraTS) challenge of the Medical Image Computing and Computer Assisted Intervention (MICCAI) society is a global medical image segmentation challenge concentrating on the evaluation of automatic segmentation methods for human brain tumors. Automatic brain tumor segmentation algorithms fall into four categories: supervised learning, semi-supervised learning, unsupervised learning, and hybrid learning. Supervised learning is currently the most effective approach.
Various deep neural network models for computer vision problems, such as the Visual Geometry Group network (VGGNet), GoogLeNet, ResNet, and DenseNet, have been presented in recent years. These models offer novel approaches to MRI brain image segmentation and have significantly advanced the development of deep learning based brain tumor diagnosis methods. Deep learning is therefore well suited to the task of automatic segmentation of brain tumor MRI images. Method Our method integrates low-resolution and high-resolution information via the U-Net structure, a fully convolutional neural network. The improved cross stage partial U-Net (CSPU-Net) brain tumor segmentation network derived from U-Net achieves high-precision brain tumor MRI image segmentation. The basic idea of the cross stage partial (CSP) module is to split the base-layer feature map into two parts: 1) divide the gradient flow so that it propagates along distinct network paths, and then 2) fuse the two parts across the stage hierarchy. Through the alternating split and transition operations, the propagated gradient information can exhibit large correlation differences. To extract image features, CSPU-Net adds two types of cross stage partial structures to the up-sampling and down-sampling paths of the U-Net. The split-and-merge strategy increases the number of gradient paths and mitigates the drawbacks of connecting layers by explicit feature-map duplication, enhancing the model's feature learning capability. To overcome the class imbalance of the samples, two loss functions, general Dice loss (GDL) and weighted cross entropy (WCE), are combined. Finally, the cross stage partial structure is compared with ResU-Net, which instead adds residual blocks, to verify the effectiveness of the cross stage partial structure in the brain tumor segmentation task.
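The split-and-merge idea of the CSP module can be sketched as follows. This is a minimal illustrative PyTorch block, not the paper's exact configuration: the channel split ratio, the depth of the processed sub-path, and the 1x1 transition convolution are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class CSPBlock(nn.Module):
    """Illustrative cross stage partial (CSP) block.

    The input feature map is split channel-wise into two halves:
    one half passes through a convolutional sub-path, the other is
    carried across the stage unchanged; the two parts are then
    concatenated and fused by a 1x1 transition convolution, creating
    two distinct gradient paths instead of one.
    """

    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        # Sub-path that processes one half of the channels.
        self.partial_path = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=3, padding=1),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
            nn.Conv2d(half, half, kernel_size=3, padding=1),
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )
        # 1x1 transition that fuses the two gradient paths.
        self.transition = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        half = x.shape[1] // 2
        part1, part2 = x[:, :half], x[:, half:]  # split the gradient flow
        part1 = self.partial_path(part1)         # processed path
        # Cross-stage merge: concatenate the two paths, then fuse.
        return self.transition(torch.cat([part1, part2], dim=1))
```

Because only half the channels traverse the convolutional sub-path, the duplicated gradient information flowing through dense skip connections is reduced, which is the motivation the CSP design gives for its split-and-merge strategy.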
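The combined loss can likewise be sketched. The generalized Dice loss weights each class by its inverse squared label volume, so small tumor sub-regions count as much as the large background; the weighted cross entropy applies per-class weights directly. The mixing coefficient `alpha` and the flattened `(num_classes, num_voxels)` layout are assumptions for illustration, since the abstract does not state the exact combination rule.

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss.

    probs, onehot: arrays of shape (num_classes, num_voxels).
    Class weights are the inverse squared label volume, so rare
    tumor sub-regions are not dominated by the background class.
    """
    w = 1.0 / (onehot.sum(axis=1) ** 2 + eps)
    intersect = (w * (probs * onehot).sum(axis=1)).sum()
    denom = (w * (probs + onehot).sum(axis=1)).sum()
    return 1.0 - 2.0 * intersect / (denom + eps)

def weighted_cross_entropy(probs, onehot, class_weights, eps=1e-6):
    """Cross entropy with per-class weights, averaged over voxels."""
    ce = -(onehot * np.log(probs + eps))              # (C, N)
    return (class_weights[:, None] * ce).sum(axis=0).mean()

def combined_loss(probs, onehot, class_weights, alpha=0.5):
    """Hypothetical weighted sum of GDL and WCE; alpha=0.5 is an
    assumption, not the paper's stated mixing rule."""
    return (alpha * generalized_dice_loss(probs, onehot)
            + (1.0 - alpha) * weighted_cross_entropy(probs, onehot, class_weights))
```

Combining an overlap-based term (GDL) with a voxel-wise term (WCE) is a common remedy for class imbalance: the Dice term keeps small structures from being ignored, while the cross-entropy term stabilizes early training.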
Result Experiments on the BraTS 2018 and BraTS 2019 datasets demonstrate the advantage of the CSPU-Net model. On the BraTS 2018 dataset, it achieved 87.9% accuracy for whole tumor segmentation, 80.6% for core tumor segmentation, and 77.3% for enhancing tumor segmentation, improving the segmentation accuracy of brain tumor MRI images by 0.80%, 1.60%, and 2.20%, respectively. On the BraTS 2019 dataset, the whole tumor, core tumor, and enhancing tumor segmentation accuracies are 87.8%, 77.9%, and 70.7%, improvements of 0.70%, 1.30%, and 1.40%, respectively, compared with ResU-Net, an improved variant of the traditional U-Net. Conclusion This research presents a cross-stage deep learning based 2D segmentation network for human brain tumor MRI images. By using cross stage partial structures in the U-Net up-sampling and down-sampling paths, the model improves brain tumor segmentation accuracy through gradient path expansion and information loss reduction. The results show the potential of our model for 2D segmentation on the BraTS datasets and demonstrate the module's effectiveness in the brain tumor segmentation task.
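The region scores above are computed per nested tumor region. As a point of reference, BraTS-style evaluation binarizes the predicted label map against each region's label set and scores the overlap; a minimal sketch of that per-region overlap score, under the assumption of plain binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice similarity coefficient between two binary masks.

    In BraTS-style evaluation, one such score is reported per nested
    tumor region (whole tumor, tumor core, enhancing tumor), each
    obtained by binarizing the label map against that region's labels.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A perfect prediction scores 1 and a disjoint one scores 0, which is why small gains such as the 2.20% enhancing-tumor improvement are meaningful: the enhancing region is the smallest and hardest of the three.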
Keywords