
像素级多聚焦图像融合算法研究 (Research on Pixel-Level Multi-Focus Image Fusion Algorithms)

Pixel-level Multi-focus Image Fusion Algorithm

【Author】 孙巍 (Sun Wei)

【Supervisor】 王珂 (Wang Ke)

【Author Information】 Jilin University, Communication and Information Systems, 2008, PhD

【Abstract (Chinese)】 In the transform domain, two fusion algorithms based on the Q-Shift Dual-Tree Complex Wavelet Transform (Q-Shift DT-CWT) are proposed. Given the different characteristics of the low-frequency and high-frequency coefficients, Algorithm One fuses them with the neighborhood gradient maximum selection (NGMS) rule and the module value maximum selection (MVMS) rule, respectively. Building on Algorithm One, Algorithm Two fuses the high-frequency coefficients with the synthesis image module value maximum selection (SI-MVMS) rule. Both algorithms improve the accuracy of coefficient selection, and Algorithm Two produces the higher-quality fused image. In the space domain, a fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The algorithm uses the NSCT to extract detail information from the source images and derives a fusion decision map through the synthesis image absolute value maximum selection (SI-AVMS) rule to guide the selection of pixels from the source images. By exploiting the strong detail representation of the NSCT, it overcomes the weak detail representation of conventional space-domain fusion algorithms, and because no inverse transform is involved, the information in the source images is not degraded. For color multi-focus image fusion, a space-domain color fusion algorithm based on the NSCT is proposed: the fusion result of the luminance component, itself fused with the NSCT-based space-domain algorithm, guides the selection of pixels in all three components of the source color images. This avoids the color distortion and blurring that conventional fusion algorithms tend to produce.
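To make the coefficient-selection rules named above concrete, the following Python sketch shows how the NGMS rule (for low-frequency sub-bands) and the MVMS rule (for high-frequency sub-bands) could be applied, assuming the Q-Shift DT-CWT decomposition has already been computed by an external library and each sub-band is available as a NumPy array; the function names, the gradient-based activity measure, and the 3x3 neighborhood are illustrative assumptions, not details taken from the thesis.

import numpy as np
from scipy.ndimage import uniform_filter

def ngms_fuse(low_a, low_b, win=3):
    # NGMS: for slowly varying low-frequency coefficients, keep the coefficient
    # whose local (neighborhood) gradient energy is larger.
    def activity(c):
        gy, gx = np.gradient(c)
        return uniform_filter(gx ** 2 + gy ** 2, size=win)
    return np.where(activity(low_a) >= activity(low_b), low_a, low_b)

def mvms_fuse(high_a, high_b):
    # MVMS: for fast-varying high-frequency coefficients (complex-valued in the
    # DT-CWT), keep the coefficient with the larger modulus.
    return np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)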

【Abstract】 Multi-focus image fusion integrates the corresponding clear regions of registered source images into a single composite image, so the goal of a multi-focus image fusion algorithm is to extract the clear regions correctly. Multi-focus image fusion is usually performed at one of three processing levels: pixel level, feature level, and decision level. Pixel-level multi-focus image fusion is the subject of this thesis, and solutions to the problems of existing fusion algorithms are presented in both the transform domain and the space domain.

In the transform domain, the Discrete Wavelet Transform (DWT) suffers from two main disadvantages: lack of shift invariance and poor directional selectivity. The Q-Shift Dual-Tree Complex Wavelet Transform (Q-Shift DT-CWT), which is approximately shift invariant and has good directional selectivity, represents the detail features of images better. The registered images are therefore first decomposed with the Q-Shift DT-CWT, and then, because of their different characteristics, the low-frequency and high-frequency coefficients are fused with a neighborhood-based rule and a pixel-based rule, respectively. The low-frequency coefficients, which reflect the contour features of images, vary slowly, so the neighborhood gradient maximum selection (NGMS) scheme is used; the experimental results show that the accuracy of the NGMS scheme, defined as the ratio of the pixels selected from the clear regions to all selected pixels, is 81.89%-90.93%. The high-frequency coefficients, which reflect edge and detail features, vary quickly, so the module value maximum selection (MVMS) scheme is used in Algorithm One. Because the MVMS scheme sometimes damages the consistency of the high-frequency coefficients, a "synthesis image" is introduced in Algorithm Two, in which the high-frequency coefficients are fused with the synthesis image module value maximum selection (SI-MVMS) scheme and then refined by consistency verification. The experimental results show that Algorithm One avoids the artifacts, such as ringing, caused by incorrect pixel selection in wavelet-based fusion algorithms. Compared with references [8], [11], [35], and [38], the RMSE of the fused image Clock obtained with Algorithm One is reduced by 39.85%-87.81%, and the WFQI and EFQI of the fused image Disk are increased by 0.74%-8.48% and 1%-17.27%, respectively. Compared with Algorithm One, the RMSE of the fused image obtained with Algorithm Two is reduced by 10.15%, which shows that Algorithm Two preserves the contour and detail features of images more effectively.

In the space domain, conventional fusion algorithms lack detail-feature representation. The registered images are therefore processed with the nonsubsampled contourlet transform (NSCT), and the resulting high-frequency coefficients are used to decide which pixels are selected from the registered images. To overcome the one-sidedness of any single high-frequency sub-band image, the decision map is obtained by combining all the high-frequency sub-band images through the synthesis image absolute value maximum selection (SI-AVMS) scheme; the decision map is then refined by consistency verification, and the fused image is constructed according to the verified decision map.
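A minimal sketch of the SI-AVMS decision map and the pixel selection it guides is given below, assuming the NSCT high-frequency sub-bands of the two registered source images are already available as lists of equally sized NumPy arrays (the NSCT decomposition itself would come from a separate implementation); the function names and the 5x5 majority window used for consistency verification are illustrative assumptions.

import numpy as np
from scipy.ndimage import median_filter

def si_avms_decision_map(high_bands_a, high_bands_b, cv_size=5):
    # Build one "synthesis image" per source by accumulating the absolute values
    # of all high-frequency sub-bands, then mark each pixel with the source whose
    # synthesis image is larger (1 -> source A, 0 -> source B).
    syn_a = np.sum([np.abs(b) for b in high_bands_a], axis=0)
    syn_b = np.sum([np.abs(b) for b in high_bands_b], axis=0)
    decision = (syn_a >= syn_b).astype(np.uint8)
    # Consistency verification: a majority (median) filter removes isolated
    # misclassified pixels from the decision map.
    return median_filter(decision, size=cv_size)

def fuse_spatial(img_a, img_b, decision):
    # Pixels are taken directly from the registered source images, so no inverse
    # transform is needed and the original intensities are preserved.
    return np.where(decision.astype(bool), img_a, img_b)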
No inverse transform is needed in the proposed fusion algorithm, which avoids the loss of information in the registered images caused by the forward and inverse transforms of conventional transform-domain fusion algorithms. The proposed algorithm selects the pixels directly from the registered images, which preserves their information well, and the introduction of the NSCT resolves the inability of conventional space-domain fusion algorithms to represent detail features. The experimental results show that the space-domain multi-focus image fusion algorithm with the NSCT improves the quality of the fused images: compared with references [8], [11], [35], [38], and [72], the RMSE of the fused images Clock and Barbara is reduced by 53.06%-90.08% and 26.05%-67.6%, respectively. Moreover, the effects of the number of transform levels and the number of sub-band images on the quality of the fused images are studied.

For color multi-focus image fusion, solutions are put forward in three aspects. First, for the color space model: because the highly correlated R, G, and B components cannot represent the gray-level information well, the registered images are fused in IHS or YUV space. Second, for the components to be fused: since schemes that fuse the three components separately destroy the original proportions of the components and cause color distortion, only the luminance component in IHS or YUV space is processed. The proposed scheme selects the three components of a pixel simultaneously from the same registered image, which keeps the proportions of the three components within each pixel, avoids color distortion, and reduces the complexity of the fusion algorithm. Third, for the fusion scheme: the space-domain multi-focus image fusion algorithm with the NSCT is used to fuse the luminance components, which improves the accuracy of pixel selection. The experimental results show that the proposed algorithm, called the space-domain color multi-focus image fusion algorithm with the NSCT, extracts the pixels in the clear regions correctly and avoids color distortion.
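The color extension can be sketched in the same style, assuming the YUV route: the decision map is computed from the luminance channels only (for example with the NSCT-based scheme sketched above), and all three channels of each pixel are then copied together from the source image that the map selects, which keeps the channel proportions intact; rgb_to_luminance and the BT.601 weights are illustrative choices of mine, not details given in the abstract.

import numpy as np

def rgb_to_luminance(rgb):
    # Approximate luminance (Y) from an (H, W, 3) float RGB image, ITU-R BT.601 weights.
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def fuse_color(rgb_a, rgb_b, decision_from_luminance):
    # Copy whole RGB pixels from the source chosen by the luminance-based decision
    # map (1 -> source A, 0 -> source B), so the R:G:B proportions within each pixel
    # are never mixed and color distortion is avoided.
    mask = decision_from_luminance.astype(bool)[..., None]  # broadcast over channels
    return np.where(mask, rgb_a, rgb_b)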

  • 【Online Publication Contributor】 Jilin University
  • 【Online Publication Year/Issue】 2008, No. 11
  • 【CLC Number】 TP391.41
  • 【Cited By】 32
  • 【Downloads】 1111