Overall detail-enhanced multi-exposure image fusion
Abstract
Objective Exposure fusion, which merges multiple images captured at different exposure times into a single well-exposed image, may introduce halo artifacts, blurred edges, and loss of detail in the final output. To address these problems, this paper proposes an overall detail-enhanced exposure fusion algorithm based on the principle of detail enhancement. Method After analyzing the cause of halo artifacts, the classic guided image filter is improved from the new perspective of aggregation, which markedly strengthens its edge-preserving behavior and thus effectively removes or reduces halos. The improved guided filter is used to extract the detail information of the differently exposed images, and the resulting detail maps are fused according to well-exposedness to obtain the overall detail information of the captured scene. The extracted and fused overall detail is then integrated into a preliminary fused image produced by a classic exposure fusion algorithm, yielding a final, overall detail-enhanced fusion result. Result Experiments were conducted on 17 high-quality multi-exposure image sequences. Compared with other algorithms, the fused images produced by the proposed algorithm preserve edges better and look more natural; in terms of objective metrics, the proposed algorithm outperforms other fusion algorithms in information entropy, mutual information, and average gradient. Averaged over the 17 sequences, it improves on the classic Laplacian-pyramid fusion algorithm by 14.13% in information entropy, 0.03% in mutual information entropy, and 16.45% in average gradient. Conclusion The proposed overall detail-enhanced exposure fusion algorithm applies weighted aggregating guided filtering to compute the detail information of a multi-exposure image sequence and merges this detail into an intermediate image obtained by a classic exposure fusion algorithm to produce the final fused image. The resulting images contain more detail, reduce or avoid halo and gradient-reversal artifacts, and are visually more pleasing.
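The pipeline summarized above can be sketched in a few lines. This is a minimal illustration, not the authors' reference implementation: it assumes OpenCV's Mertens fusion as the "classic exposure fusion" step, uses OpenCV's plain guided filter as a stand-in for the paper's improved filter, and the function name fuse_with_detail_enhancement, the Gaussian well-exposedness weight (sigma = 0.2), and the filter parameters are all illustrative assumptions.

```python
import cv2  # needs opencv-contrib-python for cv2.ximgproc
import numpy as np

def fuse_with_detail_enhancement(images, radius=8, eps=0.01):
    """Hypothetical sketch of the pipeline: (1) preliminary fusion,
    (2) per-exposure detail extraction, (3) well-exposedness-weighted
    detail fusion, (4) detail integration into the preliminary image."""
    # Step 1: preliminary fused image from Mertens' classic exposure fusion.
    base = cv2.createMergeMertens().process(images)  # float32 in ~[0, 1]

    details, weights = [], []
    for img in images:
        f = img.astype(np.float32) / 255.0
        # Step 2: detail layer = image minus its edge-preserving smooth base.
        # The plain guided filter stands in for the paper's improved filter.
        smooth = cv2.ximgproc.guidedFilter(f, f, radius, eps)
        details.append(f - smooth)
        # Step 3: well-exposedness weight (Gaussian around mid-gray), so a
        # pixel counts more in a properly exposed frame than in an
        # under-/over-exposed one.
        weights.append(np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)))

    w = np.stack(weights)
    w /= w.sum(axis=0) + 1e-12
    overall_detail = (w * np.stack(details)).sum(axis=0)

    # Step 4: integrate the fused overall detail into the preliminary image.
    return np.clip(base + overall_detail, 0.0, 1.0)
```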
Keywords: multi-exposure image fusion; detail extraction; guided image filtering (GIF); weighted aggregation; halo artifacts
Overall detail-enhanced multi-exposure image fusion
Chen Bin1,2, Tan Xincheng1, Wu Shiqian1,2 (1. School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, China; 2. Institute of Robotic and Intelligent System, Wuhan University of Science and Technology, Wuhan 430081, China)
Abstract
Objective In real scenes, the dynamic range often far exceeds that of image capture and display devices, so captured images suffer from over-exposure or under-exposure. There are two common solutions. The first acquires a high dynamic range (HDR) image with HDR imaging devices and then compresses it with a tone-mapping algorithm into a low dynamic range (LDR) image for display on LDR devices. The second, called exposure fusion, fuses images captured at multiple exposure levels directly into an LDR image. Compared with the first, exposure fusion requires no costly HDR capture devices and has lower computational complexity. However, exposure fusion may introduce artifacts (e.g., halos, blurred edges, and loss of detail) into the final image. Method Two crucial tasks address these issues and improve the performance of exposure fusion. First, an improved guided image filter, referred to as the weighted aggregating guided image filter (WAGIF), is presented to extract fine details from the multi-exposure images. In the original guided image filter (GIF), the average aggregation treats all patches indiscriminately; for a pixel located on an edge, the filtering output is pulled toward the patch mean, far from the expected value, which blurs edges. A novel weighted aggregation strategy is therefore proposed to improve the edge-preserving performance of the GIF: the multi-window predictions of the pixel of interest are aggregated by weighting rather than by plain averaging, and the weights are assigned according to the mean square error (MSE) of the linear model in each window. Experimental comparison between the WAGIF and the original GIF demonstrates that the WAGIF achieves sharper edges and substantially reduces halo artifacts. Second, an overall WAGIF-based detail-enhanced exposure fusion algorithm is presented. Similar to the image decomposition used in single-image detail enhancement and tone mapping, each image of the input sequence is decomposed into a base layer and a WAGIF-based detail layer. All detail layers are then synthesized into a single overall detail map through a weighting function that assigns a pixel a larger weight in a well-exposed image than in an under- or over-exposed one. Moreover, detail extracted from a single image reflects only weak or invisible edges in its under- or over-exposed regions, so decomposing each input image independently tends to lose information there. A dedicated step therefore generates a single aggregation weight map from the relationships among all input images, and this weight map is applied to the decomposition of every image. Finally, the overall detail layer is integrated into an intermediate image fused by a classic exposure fusion algorithm to produce the final result. Result The proposed overall detail-enhanced exposure fusion algorithm is validated on 17 classical multi-exposure image sequences, and the results are compared with several fusion approaches, including those based on the weighted guided image filter (WGIF) and the gradient domain guided image filter (GDGIF).
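The core of the WAGIF is the aggregation step. The following sketch shows one way to realize MSE-weighted aggregation for a grayscale image, assuming the per-window linear coefficients of the original GIF; the normalization, the small stabilizing constants, and the function name wagif_gray are assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wagif_gray(guide, src, r=8, eps=1e-3):
    """Guided filtering with MSE-weighted (instead of average) aggregation."""
    guide = guide.astype(np.float64)
    src = src.astype(np.float64)
    size = 2 * r + 1

    # Per-window statistics, exactly as in the original GIF.
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_p = uniform_filter(src * src, size) - mean_p ** 2

    a = cov_Ip / (var_I + eps)  # per-window slope of the linear model
    b = mean_p - a * mean_I     # per-window intercept

    # Mean squared fitting error of the model a*I + b within each window.
    # Windows straddling an edge fit poorly (large MSE) and get small weight.
    mse = np.maximum(var_p - 2.0 * a * cov_Ip + a * a * var_I, 0.0)
    w = 1.0 / (mse + 1e-8)

    # Weighted aggregation over all windows covering each pixel, replacing
    # the plain box average of the original GIF.
    w_sum = uniform_filter(w, size)
    mean_a = uniform_filter(w * a, size) / w_sum
    mean_b = uniform_filter(w * b, size) / w_sum
    return mean_a * guide + mean_b
```

With such a filter, a detail layer is obtained by subtraction, e.g. detail = img - wagif_gray(img, img); because edge-straddling windows are down-weighted, strong edges stay in the base layer and the detail layer is less prone to halos and gradient reversal.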
The quantitative evaluation metrics are information entropy (IE), mutual information entropy (MIE), and average gradient (AG). On the "Madison" sequence, the proposed algorithm achieves an average increase of 0.19% in IE, 0.58% in MIE, and 13.29% in AG; on the "Memorial" sequence, it achieves an average improvement of 0.13% in IE, 1.06% in MIE, and 16.34% in AG. Qualitatively, the proposed algorithm also preserves edges better and produces superior fused images. Conclusion The proposed method preserves edges and details well in the fused images. Quantitative evaluation with IE, MIE, and AG across multiple algorithms further confirms its advantage.
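For reference, the metrics named above are commonly computed as follows; this is a sketch under the assumption of 8-bit grayscale inputs, and exact normalizations vary between papers.

```python
import numpy as np

def information_entropy(img):
    """IE: Shannon entropy of the 8-bit intensity histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """AG: mean magnitude of horizontal/vertical intensity differences."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def mutual_information(a, b):
    """MI between a source image and the fused image, from the joint
    256-bin histogram of their intensities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=256)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```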
Keywords
multi-exposure image fusion; detail extraction; guided image filtering (GIF); weighted aggregation; halo artifacts