
基于多尺度分解的多传感器图像融合算法研究

Research on Multi-Sensor Image Fusion Algorithm Based on Multiscale Decomposition

【Author】 叶传奇 (Ye Chuanqi)

【Supervisor】 王宝树 (Wang Baoshu)

【Author Information】 Xidian University, Computer Application Technology, 2009, PhD dissertation

【Abstract】 Image fusion is an important branch of information fusion and a key technique in image understanding and computer vision. It is the process of combining multiple images of the same scene into a single image that describes the scene more completely and accurately than any individual source image. The fused image provides more effective information for further processing such as image segmentation, object detection and recognition, and battle-damage assessment, and image fusion is now widely used in remote sensing, military applications, robotics, and medical imaging.

This dissertation studies multi-sensor image fusion algorithms based on multiscale decomposition. Because most existing fusion algorithms ignore the intrinsic characteristics of the source images, the dissertation analyses prior information such as the imaging mechanism of the sensors and the imaging characteristics of the source images, and, building on multiscale geometric analysis tools such as the redundant wavelet transform and the nonsubsampled contourlet transform (NSCT), proposes several fusion algorithms adapted to those characteristics. The main contributions are as follows:

1. To suppress the ringing artifacts caused by the shift variance of the orthogonal discrete wavelet transform, a grey-scale multifocus image fusion algorithm based on the redundant wavelet transform is proposed. Since a defocused optical system acts as a low-pass filter, in-focus and out-of-focus regions of the source images can be distinguished by their high-frequency detail. On this basis, a region vector norm and a local contrast measure are introduced in the redundant wavelet domain, and fusion rules based on the region vector norm (for the low-frequency coefficients) and on the local contrast (for the high-frequency coefficients) are formulated. The algorithm preserves the useful information of the source images, effectively removes ringing artifacts, and yields a fused image that is in focus throughout.

2. Exploiting the multiscale, multidirectional, and shift-invariant properties of the NSCT, an NSCT-based image fusion framework is proposed, together with two NSCT-based algorithms for fusing infrared and visible images that account for their respective imaging characteristics. In the window-based algorithm, the low-frequency subband coefficients are fused using local energy and local variance as activity measures, and the high-frequency directional subbands are fused using local directional contrast; the algorithm effectively combines the thermal-target information of the infrared image with the rich spectral information of the visible image. In the region-segmentation-based algorithm, a region-level fusion scheme is adopted: region energy ratio and region sharpness ratio measures are defined to characterise regional features and to guide the selection of the NSCT-domain fusion coefficients. Because correlated pixels participate in the fusion as a whole region, this algorithm achieves better fusion performance than pixel-based and window-based approaches.

3. After analysing the spectral distortion that arises in remote-sensing image fusion, a fusion algorithm for multispectral and panchromatic images based on a region correlation coefficient in the NSCT domain is proposed. Following the region-level fusion idea, a region correlation coefficient measure is defined; the source images are partitioned into regions of differing spatial correlation, and different fusion rules are applied according to the degree of correlation between the multispectral and panchromatic images. The algorithm strikes a good balance between spatial resolution and spectral fidelity: the fused multispectral image shows reduced spectral distortion and improved spatial resolution while retaining the salient features of the original multispectral image.

4. For the fusion of SAR and panchromatic images, a fusion algorithm based on the imaging characteristics of the SAR image is proposed. Using region information entropy and region mean ratio as a joint measure, the SAR image is divided into rough regions, smooth regions, and bright point-target regions, and a different fusion rule is applied to each type of region. The fused image incorporates SAR target information that is difficult to identify in the panchromatic image while preserving the spatial resolution of the panchromatic image.
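The abstract names several window- and region-level activity measures without giving formulas. The expressions below are the commonly used textbook forms of three of them, shown only as a reader aid; the dissertation's exact definitions (window weights, normalisation) may differ.

```latex
% Indicative, textbook-style forms of three measures named in the abstract;
% the dissertation's exact definitions may differ.
\[
E_W(x,y) = \sum_{(m,n)\in W} C(x+m,\,y+n)^2
\quad\text{(local energy of subband coefficients $C$ over window $W$)}
\]
\[
\rho(R_A,R_B) =
\frac{\sum_{(x,y)\in R}\bigl(A(x,y)-\mu_A\bigr)\bigl(B(x,y)-\mu_B\bigr)}
     {\sqrt{\sum_{(x,y)\in R}\bigl(A(x,y)-\mu_A\bigr)^2}\;
      \sqrt{\sum_{(x,y)\in R}\bigl(B(x,y)-\mu_B\bigr)^2}}
\quad\text{(region correlation coefficient)}
\]
\[
H(R) = -\sum_i p_i \log_2 p_i
\quad\text{(region information entropy, $p_i$ the normalised grey-level histogram of $R$)}
\]
```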
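To illustrate the window-based selection principle described in contributions 1 and 2, here is a minimal NumPy/SciPy sketch. It assumes the redundant-wavelet or NSCT decomposition has already been computed by an external toolbox and that the source images are registered; the function names and the use of local energy as the sole activity measure are illustrative simplifications, not the dissertation's exact rules.

```python
# Minimal sketch of a window-based coefficient-selection fusion rule.
# Assumes two registered source images have already been decomposed
# (e.g. by a redundant wavelet or NSCT toolbox) into matching subbands.
import numpy as np
from scipy.ndimage import uniform_filter


def local_energy(coeff, size=3):
    """Sum of squared coefficients over a size x size neighbourhood."""
    return uniform_filter(coeff * coeff, size=size) * (size * size)


def fuse_subband(c1, c2, size=3):
    """Choose, pixel by pixel, the coefficient whose neighbourhood is more
    active (measured here by local energy; the dissertation uses richer
    measures such as local variance and local directional contrast)."""
    e1, e2 = local_energy(c1, size), local_energy(c2, size)
    return np.where(e1 >= e2, c1, c2)


# Applied subband by subband and followed by the inverse transform, this
# yields the fused image.  Example with dummy 2-D coefficient arrays:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
    print(fuse_subband(a, b).shape)
```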

  • 【CLC Number】 TP391.41
  • 【Cited By】 67
  • 【Downloads】 2731