
Research on Multi-Sensor Image Fusion Methods at Pixel Level

【Author】 李郁峰 (Li Yufeng)

【Supervisor】 冯晓云 (Feng Xiaoyun)

【Author Information】 Southwest Jiaotong University, Control and Information Technology for Electrical Systems, 2013, Doctoral dissertation

【Abstract (translated from Chinese)】 In recent years, with the rapid development of image sensors and image processing technology, the practicality of image fusion has steadily increased, and its applications have expanded from defense to many other civilian fields. Multi-sensor and multi-spectral images are now used ever more widely in remote sensing, situational awareness, reconnaissance, all-weather surveillance, medical diagnosis, weapon systems, robotics, and other areas, so image fusion shows increasingly broad application prospects, underscoring the significance and urgency of research on it. Aimed at the fusion of infrared, visible-light, and other multi-sensor images in applications such as scene monitoring and target detection and recognition, this thesis studies pixel-level fusion methods that help improve scene understanding and support fast, accurate target inspection and recognition. Its main goals are new fusion methods that effectively enhance target features in the source images while producing good visual quality, and fast fusion algorithms that satisfy the real-time requirements of fusion systems. The main research contents are as follows:

1. The multi-scale transforms commonly used in image fusion are surveyed comprehensively and re-examined from the perspective of sparse signal representation, including their strengths and weaknesses for image fusion. Focusing on how the redundancy and shift invariance of multi-scale transforms affect fusion quality, sixteen fusion algorithms based on different types of multi-scale transforms are compared quantitatively, and points to note when designing multi-scale fusion algorithms are summarized.

2. Existing multi-scale fusion algorithms emphasize the fusion rules for the detail coefficients but usually combine the approximate coefficients by simple averaging or weighted averaging. Because the approximate coefficients carry the main energy distribution of the source images, such simple rules lower the brightness and contrast of targets in the fused image and allow the higher-intensity source image to suppress or submerge the target features and texture details of the other, ultimately degrading the visual quality and target detectability of the fused result. To address this, an approximate-coefficient fusion rule based on brightness remapping is proposed. Experiments show that by jointly considering the intensity and contrast of the source images' approximate coefficients, the rule effectively strengthens the fusion of target features and texture details from the weaker source image and markedly improves the dynamic range of the fused image and the intensity of target features.

3. Limited by the physical characteristics of sensors or affected by natural conditions, source images often show low contrast, a narrow gray-level range, and a blurred appearance, which degrades the quality of the fused image. To address this, a multi-scale top-hat transform is constructed by combining mathematical morphology with scale-space theory, and an enhancement fusion algorithm based on it is proposed. The algorithm uses the multi-scale top-hat transform to extract bright and dark detail features from the source images and flexibly fuses the features at each scale according to the application requirements. Experiments show that the algorithm enhances targets and details during fusion, so that the target-background contrast and texture details in the fused image surpass those in the source images, and fused images with different degrees of enhancement can be produced as needed.

4. To meet the needs of real-time systems, a fast mutual-modulation fusion algorithm combining weighting and multiplication is proposed (a code sketch, under stated assumptions, follows this abstract). Each of the two source images is amplified by a coefficient determined by the ratio of the corresponding pixel energies, an offset term derived from image statistics is added to each, and the two parts are multiplied and normalized to obtain the fused image. Experiments show that the algorithm combines the advantages of additive and multiplicative modulation; it is simple, fast, real-time capable, and parameter-adaptive, and constitutes a nonlinear mutual-modulation fusion process. Its fusion quality and efficiency exceed those of wavelet- and pyramid-based algorithms, and it is applicable to multi-sensor fusion tasks such as infrared and visible image fusion and medical image fusion.

5. The research on multi-sensor image colorization fusion over the past 15 years is reviewed and a general framework for colorization fusion algorithms is given. On this basis, a colorization fusion algorithm for low-light night-vision and infrared images based on the YCbCr color space is proposed. The algorithm uses the mutual-modulation fusion method to construct the Y component and builds the Cb and Cr components directly from the source images, quickly producing a pseudo-color image with rich colors and strong contrast; after applying color-transfer techniques, a false-color image with rich details, high target-background contrast, and a natural scene color distribution is obtained. The colorization process combines the pseudo-color and false-color stages and can satisfy different application requirements. Because the fast mutual-modulation fusion is used and the color components are constructed directly, the algorithm is efficient and parameter-adaptive and can meet real-time requirements.

The research on fusion algorithms in this thesis centers on two goals: target-feature enhancement and real-time performance. A multi-scale top-hat transform is constructed and applied to image fusion, achieving enhancement within the fusion process; the proposed fast mutual-modulation fusion algorithm suits applications with strict real-time requirements; and the fast colorization fusion algorithm that incorporates mutual-modulation fusion combines pseudo-color and false-color stages to satisfy different application requirements. These results have important theoretical and practical value for multi-sensor image fusion research and applications such as situational awareness, night-vision surveillance, and target detection and tracking.
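Point 4 above describes the mutual-modulation fusion only in outline; the exact gain and offset formulas are not given in the abstract. The Python sketch below is therefore an illustration under assumptions (squared pixel values as the "energy", global means as the offsets, min-max normalization at the end), not the thesis formulation:

```python
import numpy as np

def fmmf(img1, img2, eps=1e-6):
    """Illustrative mutual-modulation fusion of two float images on a common scale:
    each source is scaled by a gain derived from the pixel-energy ratio, shifted by
    an offset taken from image statistics, and the two modulated terms are multiplied
    and normalized. Gain and offset definitions are assumptions, not the thesis formulas."""
    e1, e2 = img1 ** 2, img2 ** 2                  # per-pixel "energy"
    g1 = e1 / (e2 + eps)                           # cross gains from the energy ratio
    g2 = e2 / (e1 + eps)
    b1, b2 = img1.mean(), img2.mean()              # offsets from global statistics
    fused = (g1 * img1 + b1) * (g2 * img2 + b2)    # product of the two modulated parts
    fused -= fused.min()                           # normalize the result to [0, 1]
    return fused / (fused.max() + eps)
```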

【Abstract】 With the rapid development of sensor and image processing technology in recent years, the practical applicability of image fusion has grown steadily, and its use has spread from defense applications to civilian purposes. In application systems such as remote sensing, situational awareness, intelligence gathering, all-weather surveillance, medical diagnostics, military equipment, and robotics, the widespread use of multi-sensor and multi-spectral images has increased the importance of image fusion, and the technology now shows even broader application prospects.

This thesis focuses on pixel-level fusion theory and algorithms for multi-sensor images, such as infrared and visible images, which are widely used in situational awareness, surveillance, and target detection and tracking. Drawing on progress in image analysis and image understanding, it investigates effective processing and analysis methods for multi-sensor image fusion at the pixel level. The main goals are to find fusion methods that enhance target features during the fusion process and produce fused images with good visual quality, and to meet the needs of real-time fusion systems. The main research work and results are as follows.

1. Multi-scale transforms commonly used in image fusion are reviewed and analyzed comprehensively, and their advantages and disadvantages are examined from the perspective of sparse signal representation. The shift dependency of various multi-scale transforms and its effect on fusion performance are then investigated quantitatively and qualitatively. Experiments combine eight popular multi-scale transforms, including pyramid, wavelet, and multi-scale geometric analysis methods, with two popular fusion rules; from the results, guidance for designing multi-scale fusion schemes is proposed.

2. Most fusion algorithms based on multi-scale transforms devote elaborate fusion rules to the detail coefficients but combine the approximate coefficients with simple rules such as the mean or a weighted average. Because the approximate coefficients represent the energy distribution of the source images in the spatial domain, a simple approximate-coefficient rule reduces the brightness and contrast of the fused image and lets the source image with higher intensity suppress or submerge the target characteristics and texture details of the other, degrading the visual quality and target detectability of the fused image. To solve this problem, the thesis presents an approximate-coefficient fusion rule based on brightness remapping, using the curvelet transform as the multi-scale transform (a sketch of one possible reading follows below). Because the rule takes the intensity and contrast characteristics of the source images into account, experiments show that it effectively strengthens target characteristics and texture details from the weaker source image and significantly improves the dynamic range and target-feature intensity of the fused image.
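The brightness-remapping rule for the approximate (low-pass) coefficients is characterized only qualitatively above. One plausible reading, sketched below in Python, is to remap each source's approximate band to a common range before a contrast-weighted combination; the target range and the standard-deviation weighting are illustrative assumptions, not the rule defined in the thesis:

```python
import numpy as np

def remap(approx, lo=0.0, hi=1.0):
    """Remap one source's approximate (low-pass) coefficients to [lo, hi]
    so a dimmer source is not swamped by a brighter one (illustrative only)."""
    a_min, a_max = approx.min(), approx.max()
    if a_max - a_min < 1e-12:
        return np.full_like(approx, 0.5 * (lo + hi))
    return lo + (hi - lo) * (approx - a_min) / (a_max - a_min)

def fuse_approx(a1, a2):
    """Contrast-weighted combination of the remapped approximate coefficients;
    the global standard deviation stands in for the contrast measure."""
    w1, w2 = a1.std() + 1e-12, a2.std() + 1e-12
    return (w1 * remap(a1) + w2 * remap(a2)) / (w1 + w2)
```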
3. Limited by sensor physical properties or affected by natural conditions, source images often show low contrast, a narrow intensity range, or a blurred appearance, which in turn reduces the quality of the fused image. To enhance the fused image during the fusion process, the thesis proposes a novel fusion algorithm based on a multi-scale top-hat transform. Multi-scale bright and dim salient features of the source images are extracted iteratively through top-hat transforms using structuring elements of the same shape and increasing sizes; these features are combined by a fusion rule, and the enhanced fused image is obtained by weighting the bright and dim features according to the application requirements (see the first sketch below). Experiments on infrared and visible images and other multi-sensor image pairs, compared against other fusion algorithms, show that the algorithm efficiently fuses and enhances the salient features of the source images at the same time, yielding better visual effects and better target detection and identification capability, and that it can produce fused results with different degrees of enhancement for different applications.

4. To meet the requirements of real-time fusion systems, the thesis proposes a fast mutual modulation fusion (FMMF) algorithm for multi-sensor images. The two source images are first amplified by factors derived from the ratio of the corresponding pixel energies; an offset term computed from the statistical parameters of the source images is then added to each; finally, the two parts are multiplied and normalized to obtain the fused image. The process combines addition and multiplication and is therefore a nonlinear combination. Experiments show that FMMF is simple and fast, and that its performance and efficiency are superior to pyramid- and wavelet-based fusion.

5. The thesis reviews the past 15 years of research on night-vision multi-sensor image coloration (rendering night-vision imagery in color) and presents a general coloration model. On this basis, a new coloration method using fast mutual modulation fusion (FMMF) and color transfer is designed for low-light and infrared image pairs, working in the YCbCr color space (see the second sketch below). First, the fused image produced by FMMF from the source pair is assigned to the Y channel; the Cb and Cr channels are then constructed using Toet's method, which extracts the common component of the source images; finally, a false-color image is obtained by applying color transfer to the resulting pseudo-color YCbCr image. Experiments show that the results contain more salient information, higher color contrast, and a more natural color appearance than those of other methods. Because FMMF is used and the color components are constructed directly, the coloration process is efficient and its parameters are adaptive, so the method meets real-time requirements.
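Point 3 describes the multi-scale top-hat scheme only in outline. The sketch below, assuming flat square structuring elements, a max rule over scales and sources, and an averaged base layer, shows the general shape of such an algorithm rather than the exact construction used in the thesis:

```python
import numpy as np
from scipy import ndimage as ndi

def multiscale_tophat_fusion(img1, img2, sizes=(3, 7, 11), kb=1.0, kd=1.0):
    """Illustrative multi-scale top-hat fusion: white/black top-hats with
    structuring elements of increasing size extract bright/dim features,
    which are merged by a max rule and weighted onto an averaged base layer."""
    bright, dim = [], []
    for img in (img1, img2):
        for s in sizes:
            se = np.ones((s, s))                                       # flat square SE, size grows with scale
            bright.append(img - ndi.grey_opening(img, footprint=se))   # white top-hat: bright details
            dim.append(ndi.grey_closing(img, footprint=se) - img)      # black top-hat: dim details
    base = 0.5 * (img1 + img2)                                         # simple averaged base layer
    fused = base + kb * np.maximum.reduce(bright) - kd * np.maximum.reduce(dim)
    return np.clip(fused, 0.0, 1.0)                                    # assumes inputs scaled to [0, 1]
```

The weights kb and kd play the role of the application-dependent weighting of bright and dim features mentioned above: raising kb emphasizes hot targets in the infrared source, raising kd emphasizes dark structures.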
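Point 5 specifies the channel assignments only loosely (Y from the FMMF result, Cb/Cr from the source pair, then color transfer). The sketch below uses a signed difference of the sources for the chroma channels and a Reinhard-style mean/standard-deviation transfer; both choices are assumptions, since the thesis uses Toet's common-component construction and its own color-transfer step:

```python
import numpy as np

def colorize_ycbcr(y_fused, low_light, infrared, reference_ycbcr=None):
    """Illustrative YCbCr coloration: the fused image (e.g. the FMMF output)
    drives luminance, the chroma channels come from the source pair, and an
    optional mean/std color transfer pulls the result toward a reference image."""
    cb = 0.5 + 0.5 * (infrared - low_light)          # assumed chroma construction
    cr = 0.5 + 0.5 * (low_light - infrared)
    img = np.stack([y_fused, cb, cr], axis=-1)       # pseudo-color YCbCr image
    if reference_ycbcr is not None:                  # Reinhard-style statistics transfer
        for c in range(3):
            src, ref = img[..., c], reference_ycbcr[..., c]
            img[..., c] = (src - src.mean()) / (src.std() + 1e-6) * ref.std() + ref.mean()
    return np.clip(img, 0.0, 1.0)
```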
The research in this thesis aims to enhance the fused image and to meet the needs of real-time fusion systems. It proposes a fusion algorithm based on the multi-scale top-hat transform that enhances target features during the fusion process; a fast mutual modulation fusion (FMMF) algorithm that can be used in real-time systems; and, building on a general coloration model, a new coloration method that combines FMMF with color transfer for low-light and infrared image pairs. These methods have significant theoretical and practical value in research and application areas such as situational awareness, all-weather surveillance, and target detection and tracking.
