Parallel decomposition adaptive fusion model for cross-modal image fusion of lung tumors
Zhou Tao1,2, Liu Shan1, Dong Yali1, Bai Jing1,2, Lu Huiling3 (1. School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China; 2. Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China; 3. School of Science, Ningxia Medical University, Yinchuan 750004, China) Abstract
Objective Cross-modal pixel-level medical image fusion is a research hotspot in precision medicine. To address the low contrast and poorly preserved edge details of fused images produced by traditional pixel-level fusion algorithms, this paper proposes a parallel decomposition adaptive image fusion model. Method First, the non-subsampled contourlet transform (NSCT) is used to extract the directional detail information of the source images, decomposing them into low-frequency and high-frequency sub-bands; in parallel, latent low-rank representation (LatLRR) is used to extract the salient energy information, yielding a low-rank part, a salient part, and a noise part. Then, for the low-frequency sub-bands: the low-frequency sub-bands obtained by NSCT decomposition contain the main energy of the source images, and the fusion process involves a many-to-one fuzzy mapping; therefore, a fuzzy-logic-based adaptive rule is adopted, with a Gaussian membership function representing the fuzzy relationship of the images. For the high-frequency sub-bands: the high-frequency sub-band coefficients obtained by NSCT decomposition exhibit strong structural similarity and contain the contour and edge information of the images; therefore, an adaptive fusion method based on the Piella framework is adopted, in which the averaged structural similarity is introduced as the match measure, the regional variance serves as the activity measure, and an adaptive weighted decision factor is designed to fuse the high-frequency sub-bands. Result Tests on five groups of CT (computed tomography) pulmonary window/PET (positron emission tomography) images and five groups of CT mediastinal window/PET images show that, compared with the comparison methods, the average gradient of the fused images is improved by 66.6%, the edge intensity by 64.4%, the edge preservation by 52.7%, and the spatial frequency by 47.3%. Conclusion The fused images generated by the proposed method achieve good results in both subjective and objective evaluations, helping physicians perform faster and more accurate diagnosis and treatment.
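To make the low-frequency rule concrete, the following Python sketch shows one way a Gaussian membership function could serve as the adaptive weighting coefficient for two low-frequency sub-bands. It is a minimal illustration under the assumption that each band's own mean and standard deviation parameterize the membership function; the helper names (gaussian_membership, fuse_low_frequency) are hypothetical and do not reproduce the paper's exact formulation.

```python
import numpy as np

def gaussian_membership(band, mu=None, sigma=None):
    """Gaussian membership degree of each low-frequency coefficient.
    mu/sigma default to the band's own mean/standard deviation
    (an assumption; the abstract does not state the parameters)."""
    band = np.asarray(band, dtype=float)
    mu = band.mean() if mu is None else mu
    sigma = (band.std() + 1e-12) if sigma is None else sigma
    return np.exp(-((band - mu) ** 2) / (2.0 * sigma ** 2))

def fuse_low_frequency(low_a, low_b):
    """Fuzzy-logic adaptive weighted fusion of two low-frequency sub-bands:
    each pixel is weighted by its membership degree, then normalized."""
    w_a = gaussian_membership(low_a)
    w_b = gaussian_membership(low_b)
    return (w_a * low_a + w_b * low_b) / (w_a + w_b + 1e-12)
```

In the full model, low_a and low_b would be the NSCT low-frequency sub-bands of the CT and PET source images.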
Keywords
image fusion; Piella framework; non-subsampled contourlet transform (NSCT); latent low-rank representation (LatLRR); positron emission tomography/computed tomography (PET/CT)
Parallel decomposition adaptive fusion model: cross-modal image fusion of lung tumors
Zhou Tao1,2, Liu Shan1, Dong Yali1, Bai Jing1,2, Lu Huiling3(1.School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China;2.Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China;3.School of Science, Ningxia Medical University, Yinchuan 750004, China) Abstract
Objective Cross-modal medical image fusion has become a key technique for precision diagnosis. Positron emission tomography (PET) and computed tomography (CT) are the principal imaging modalities for detecting lung tumors. The high resolution of CT benefits the diagnosis of bone tissue, but lesions are imaged poorly and, in particular, tumor infiltration cannot be displayed clearly. PET images show soft tissue clearly, but bone tissue is imaged weakly. Medical image fusion can integrate the anatomical and functional information of the lesion region and locate lung tumors more accurately. To resolve the low contrast and poor edge-detail retention of traditional pixel-level image fusion, this paper develops a parallel decomposition adaptive fusion model. Method First, the non-subsampled contourlet transform (NSCT) is used to extract multi-directional detail information, and latent low-rank representation (LatLRR) is used in parallel to extract key feature information. The low-frequency fusion rule is designed from four considerations. First, image fusion is a many-to-one mapping of gray values, and uncertainty exists in this mapping. Second, image noise caused by respiratory motion, blood flow, and the overlap of organs and tissues blurs contour features and magnifies the ambiguity of the image. Third, NSCT decomposes the source image into low-frequency and high-frequency sub-bands; the low-frequency sub-band retains the main energy information, such as the contour and background of the source image, and also retains the uncertain mapping relationship, so a reasonable fusion rule is required to handle it. Fourth, fusion rules based on fuzzy set theory represent the whole image as a fuzzy matrix that is solved algorithmically, which effectively handles the fuzziness arising in the fusion process. The Gaussian membership function describes the contextual fuzzy information of the low-frequency sub-band well; it is therefore used as the adaptive weighting coefficient, and a fuzzy-logic-based adaptive weighted fusion rule is adopted for the low-frequency sub-bands. The high-frequency fusion rule is likewise designed from several considerations. First, the high-frequency sub-bands contain the contours and edge details of the tissues and organs of the source image; they exhibit structural similarity, and their coefficients are strongly correlated. Second, the structural similarity index measure (SSIM) quantifies the similarity between two images and reflects the correlation between high-frequency sub-band coefficients well, so the averaged SSIM is used to measure the coefficient correlation between the two high-frequency sub-bands. Third, the lesion region of a lung tumor usually spans no more than one hundred pixels, and region-based fusion rules preserve the characteristics of the lesion region more completely. The regional variance represents the degree of gray-value variation in a local region; the larger the variance, the richer the detail information of the image. Therefore, the regional variance is selected as the basis for computing image activity. Because the high-frequency sub-bands carry the contour and edge information of the image, they are fused under the Piella framework: the averaged structural similarity is introduced as the match measure, the regional variance is used as the activity measure, and an adaptive weighted decision factor is designed to fuse the high-frequency sub-bands. Finally, the effectiveness of the algorithm is verified through comparative experiments. Result Five groups of CT pulmonary window/PET images and five groups of CT mediastinal window/PET images are tested. The experiment with compressed sensing-integrated NSCT is carried out, and six objective evaluation indexes are selected to evaluate the quality of the fused images. The experimental results show that the average gradient, edge intensity, and spatial frequency of the fused images are improved by 66.6%, 64.4%, and 80.3%, respectively. Conclusion The proposed method effectively improves the contrast of fused images and retains edge details, which can assist physicians in faster and more accurate diagnosis and treatment.
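As a sketch of how the high-frequency rule could be realized, the code below implements one common activity/match/decision instantiation of the Piella framework, with the local (regional) variance as the activity measure and an SSIM-style local similarity as the match measure. The window size, the threshold, and the Burt-Kolczynski-style weighting are assumptions for illustration and do not reproduce the paper's adaptive weighted decision factor exactly.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats(x, size=7):
    """Local mean and variance over a size-by-size sliding window."""
    mean = uniform_filter(x, size)
    var = uniform_filter(x * x, size) - mean ** 2
    return mean, np.maximum(var, 0.0)

def fuse_high_frequency(h_a, h_b, size=7, threshold=0.75, c=1e-6):
    """Activity/match/decision fusion of two high-frequency sub-bands
    (a common instantiation of the Piella framework, not the paper's exact rule)."""
    h_a = np.asarray(h_a, dtype=float)
    h_b = np.asarray(h_b, dtype=float)
    mean_a, var_a = local_stats(h_a, size)   # regional variance = activity measure
    mean_b, var_b = local_stats(h_b, size)
    cov = uniform_filter(h_a * h_b, size) - mean_a * mean_b
    # SSIM-style structural match between the two sub-bands (assumed form)
    match = (2.0 * cov + c) / (var_a + var_b + c)
    a_wins = var_a >= var_b                  # which band is locally more active
    # weighted-average weights when local structures agree (match > threshold)
    w_min = 0.5 - 0.5 * (1.0 - match) / (1.0 - threshold)
    w_a = np.where(match > threshold,
                   np.where(a_wins, 1.0 - w_min, w_min),  # blend toward the active band
                   a_wins.astype(float))                  # otherwise select it outright
    return w_a * h_a + (1.0 - w_a) * h_b
```

Here h_a and h_b stand for corresponding NSCT high-frequency sub-bands of the two source images.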
Keywords
image fusion; Piella framework; non-subsampled contourlet transform (NSCT); latent low-rank representation (LatLRR); positron emission tomography/computed tomography (PET/CT)
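For reference, two of the objective indexes reported in the Result, average gradient and spatial frequency, are commonly computed as in the sketch below; these are the standard definitions and not necessarily the exact implementation used in the experiments.

```python
import numpy as np

def average_gradient(img):
    """Average gradient: mean magnitude of horizontal/vertical gray-level differences."""
    img = np.asarray(img, dtype=float)
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img):
    """Spatial frequency: root of summed squared row and column frequencies."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```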