
基于小波域局部特征的图像去噪与融合

Image Denoising and Fusion Based on Local Features in Wavelet Domain

【Author】 汤清信 (Tang Qingxin)

【Supervisor】 焦李成 (Jiao Licheng)

【Author Information】 Xidian University, Circuits and Systems, 2013, Ph.D.

【摘要 (Abstract)】 The goal of this thesis is to analyze and exploit the local features of images in the spatial and transform domains and to apply them to wavelet-domain image denoising and image fusion. The work addresses the following problems: the blurring of image details in wavelet-domain denoising caused by the misalignment of coefficients across scales; the inefficiency of the Bandlet transform's bottom-up brute-force search for the geometric regularity of an image; the bleaching effect introduced when a low-brightness optical image is enhanced with an infrared image; and the relationships among the various contrast (saliency measure) definitions in the wavelet domain and their influence on the performance of the corresponding fusion algorithms. The research covers the concepts, definitions, and algorithms proposed to solve these problems; the performance of the new methods is verified through the necessary simulation experiments and applications to real images, and is compared with and analyzed against other existing methods of the same kind. The main results of this thesis are:

1. The existing image similarity indices UIQI and SSIM do not take the local background luminance into account when computing contrast similarity. To address this, a comprehensive image quality measure is proposed. The measure emphasizes the joint consideration of subjective and objective evaluation, in particular the perceptual characteristics of the human visual system with respect to detail preservation and image contrast, and the resulting index agrees with human visual perception better than the original UIQI and SSIM indices.

2. The original Bandlet transform searches for regular image geometry with a bottom-up brute-force strategy that contains a large amount of useless search. By exploiting the relation between the total variation of an image, viewed as a two-dimensional function, and the integral along object boundaries, a top-down geometric search strategy is designed and used to improve the original Bandlet transform. The improved algorithm avoids the geometric search in several kinds of regions, such as homogeneous or constant-valued regions and zero-valued regions, and thus improves the time performance of the Bandlet transform.

3. When multiresolution analyses such as wavelets are used to remove image noise, a variety of models have been built on the inter-scale and intra-scale correlation of the wavelet coefficients; the most common are statistical models such as Markov chains, Markov trees, and Gaussian mixture distributions. Non-statistical models based on inter-scale point-wise prediction and intra-scale interpolation take a different route and do not depend on the statistical properties of the wavelet coefficients at all: with the positions of the (large) coefficients predicted across scales, SURE-based threshold denoising can be completed by solving a deterministic system of equations. However, this inter-scale prediction suffers from at least two problems. First, after successive downsampling, a large coefficient at a fine scale may disappear at a coarse scale. Second, because of the Gibbs effect, large coefficients at different scales do not necessarily correspond to each other: a large coefficient at a fine scale does not necessarily correspond to a large coefficient at the same position at a coarse scale, and vice versa. The direct consequence of this defect is the loss of weak texture details during denoising. Borrowing the idea of local geometric flow defined in the Bandlet transform, this thesis predicts directional geometric flow across scales in the wavelet domain and thereby solves the problem of weak texture details being attenuated and blurred after denoising. The key point is that although the coefficient at an individual pixel may vanish at a coarse scale because of downsampling, or lose its positional correspondence between scales, the image geometric element to which the pixel is attached still exists, and its position and inter-scale correspondence do not change with scale. In other words, this work uses the correspondence between scale-invariant directional geometric flow features to solve the problem of blurred textures and other details after denoising.

4. Exploiting the local characteristics of the pixel intensity (histogram) distribution of infrared images, the differences among dark pixels of the histogram are stretched while the differences among bright pixels are compressed, and the transformed infrared image is then used to enhance the corresponding optical image. This removes the bleaching effect from the enhanced and fused images, so that the fused image keeps the optical-domain characteristics that the human visual system perceives easily while avoiding the bleaching effect that confuses target features and their boundaries. The key to constructing this algorithm is the observation that the pixels producing the bleaching effect are related to the dark pixels generated by low-temperature targets in the infrared image.

5. Selecting the transform coefficients of the source images as the coefficients of the fused image according to a contrast (saliency measure) value has been a hot topic in multiresolution transform-domain image fusion in recent years. However, among the existing contrast definitions, some do not give sufficient consideration to the local background luminance of image features, some do not discuss the influence of the size of the regional window used when computing the contrast, and some completely ignore the fact that fusing the low-pass approximation coefficients of the multiresolution analysis also requires a contrast factor; the results are fused images with reduced brightness, dominance of the infrared information, loss of texture and other details, and blurred edges. To solve these problems, and to discuss and analyze in depth the contrasts defined in multiresolution transform domains for image fusion, we propose a contrast definition in the shift-invariant wavelet domain based on the statistics of the transform coefficients within a regional window, and use it to construct two new image fusion schemes so that image features with stronger contrast in the source images enter the fused image. Experimental results confirm the excellent performance and stability of the images fused by the proposed schemes.

【Abstract】 This thesis is aimed at the analysis and development of local image features in the spatial and transform domains and their application to image denoising and image fusion with wavelet transforms. The work addresses several problems: the detail blurring in denoised images caused by coefficient alignment across wavelet scales; the inefficiency of the bottom-up brute-force search for regular image geometry in the Bandlet transform; the bleaching effect seen when a low-light visual image is enhanced by an exponential function of a registered infrared image; and the relationships among the available definitions of contrast in the wavelet domain and their influence on the performance of image fusion. Solutions to these problems are presented, including concepts, definitions, and the corresponding algorithms. Experimental results are given to verify the performance of the proposed algorithms and to compare them with other related methods. The main contributions can be summarized as follows:

1. As image quality indexes, both UIQI and SSIM ignore the local background lightness. This leads to deficiencies when the two indexes are used as objective quality measures, because they deviate from the human visual system's perception of real contrast. To fix this, a new universal image quality index is proposed, with emphasis on the agreement between subjective and objective evaluation and on the sensitivity of the human visual system to contrast changes, i.e., the perception of details.

2. In the original second-generation Bandlet transform, a bottom-up brute-force search for regular image geometries is carried out, which leads to unnecessary computation. In light of the relation between the total variation and the length of object borders, a novel top-down search strategy is devised to avoid unnecessary image partitioning and geometry searching. The key idea lies in the total variation of an area: if the total variation is zero, there is no geometry at all, so no segmentation or geometry search is needed, and for homogeneous areas no benefit is gained from further partitioning. Since wavelet coefficients are a sparse representation of images, such homogeneous and zero areas are guaranteed to exist, so the time complexity can be decreased.

3. Several statistical models are used when noisy images are denoised with wavelet methods; Markov chains, Markov trees, and Gaussian mixture distributions are typical stochastic models of the coefficient relations across neighbouring scales and within the same scale. By contrast, an interesting deterministic method can predict coefficients across scales without any reference to the statistical distribution of the wavelet coefficients: a threshold function based on the SURE principle is inserted into a linear system, and denoising is completed simply by solving that system. However, two problems exist. First, a coefficient with a large value at a fine scale may disappear at a coarse scale because of sub-sampling. Second, a large coefficient at a coarse scale may not correspond to a large coefficient at the same position at a fine scale, and vice versa. In either case, the pixel-wise prediction blurs image features after thresholding and denoising. Inspired by the directional geometric flow used in the Bandlet transform, we propose to predict coefficients across scales along the directional geometric flow, which is assumed to be scale-invariant. This flow-wise prediction restores the missing coefficients and the blurred texture details by interpolation along geometry directions.

4. An intensity transformation function of infrared images is presented and used for context enhancement of visual images, upon which a new image fusion method in the shift-invariant wavelet domain is developed. The function behaves like a sigmoid: it shifts and expands the range of the dark pixels of the infrared image while compressing the bright ones. These adjustments, made according to the local histogram characteristics, avoid the artificial bright pixels introduced when the visual image is later enhanced, as well as the bleaching effect in the final fused image, both of which are due to the exponential mapping of very dark pixels of the infrared image. The key observation is that the bleaching effect originates from the very dark infrared pixels passed through the exponential function.

5. Fusing images by selecting coefficients according to a contrast (saliency measure) within a multiresolution analysis is an active research area. However, most known definitions of contrast (saliency measure) ignore the importance of local background lightness, or compute the contrast without considering a sliding window of proper size. In fact, even the approximation coefficients need a contrast-based rule to obtain better fusion results than simple averaging of the source coefficients. These shortcomings usually lead to decreased lightness, dominance of the infrared information, and loss of details in the fused images. After comparison and analysis of the available contrast (saliency measure) definitions, a universal contrast is devised and used to develop new fusion schemes that allow features with sharper contrast in the source images to enter the fused image. Experimental results verify the fusion performance and its stability.
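
To make the ideas above concrete, the following minimal Python sketches illustrate three of the techniques summarized in the abstract. They are illustrative only: the function names, window sizes, and constants are assumptions of this page, not formulas taken from the thesis.

A luminance-aware quality index (contribution 1): an SSIM/UIQI-style measure in which the contrast-similarity term is computed from a Weber-style local contrast (local deviation relative to local background luminance), so the local background lightness is no longer ignored.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminance_aware_index(x, y, win=8):
    """SSIM/UIQI-style index whose contrast term uses Weber-style local
    contrast (sigma / mu), so local background luminance is taken into
    account. Constants and the 8-bit range are illustrative choices."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    C1 = (0.01 * 255) ** 2      # luminance stabiliser for 8-bit images
    C2 = 0.03 ** 2              # stabiliser for the Weber-contrast term
    C3 = (0.03 * 255) ** 2 / 2  # stabiliser for the structure term
    mu_x, mu_y = uniform_filter(x, win), uniform_filter(y, win)
    var_x = np.maximum(uniform_filter(x * x, win) - mu_x ** 2, 0.0)
    var_y = np.maximum(uniform_filter(y * y, win) - mu_y ** 2, 0.0)
    cov = uniform_filter(x * y, win) - mu_x * mu_y
    sd_x, sd_y = np.sqrt(var_x), np.sqrt(var_y)
    # Weber-style contrast: local deviation relative to local background.
    cx, cy = sd_x / (mu_x + 1.0), sd_y / (mu_y + 1.0)
    luminance = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    contrast = (2 * cx * cy + C2) / (cx ** 2 + cy ** 2 + C2)
    structure = (cov + C3) / (sd_x * sd_y + C3)
    return float(np.mean(luminance * contrast * structure))
```

A sigmoid-like remapping of the infrared histogram (contribution 4): differences among dark (low-temperature) pixels are stretched and differences among bright pixels are compressed before the infrared image is used to enhance the visual image, which is where the thesis locates the source of the bleaching effect. The slope and turning-point parameters are placeholders; the subsequent exponential enhancement and fusion steps are not reproduced here.

```python
import numpy as np

def stretch_dark_ir(ir, slope=10.0, turn=0.35):
    """Sigmoid-like remapping of an infrared image normalised to [0, 1]:
    the dark end of the histogram is expanded and the bright end is
    compressed. slope and turn are illustrative parameters."""
    ir = np.clip(np.asarray(ir, dtype=np.float64), 0.0, 1.0)
    s = 1.0 / (1.0 + np.exp(-slope * (ir - turn)))
    s_min = 1.0 / (1.0 + np.exp(slope * turn))
    s_max = 1.0 / (1.0 + np.exp(-slope * (1.0 - turn)))
    return (s - s_min) / (s_max - s_min)   # rescale back to [0, 1]
```

Contrast-based coefficient selection in the shift-invariant wavelet domain (contribution 5): a single-level stationary wavelet fusion in which detail coefficients are chosen per pixel from the source with the larger region-window contrast, and the approximation band is also selected rather than averaged. The contrast used here (local deviation of the detail band over the local mean of the approximation band) is one plausible instantiation, not the thesis's exact definition; it assumes PyWavelets and even image dimensions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def window_contrast(detail, approx, win=7, eps=1e-6):
    """Region-window contrast: local deviation of a detail band relative
    to the local background given by the approximation band."""
    mean = uniform_filter(detail, win)
    var = np.maximum(uniform_filter(detail * detail, win) - mean ** 2, 0.0)
    background = uniform_filter(np.abs(approx), win)
    return np.sqrt(var) / (background + eps)

def fuse_swt(img_a, img_b, wavelet="db2", win=7):
    """Single-level shift-invariant wavelet fusion by contrast-based
    coefficient selection (image sides must be even for swt2)."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    cA_a, (cH_a, cV_a, cD_a) = pywt.swt2(a, wavelet, level=1)[0]
    cA_b, (cH_b, cV_b, cD_b) = pywt.swt2(b, wavelet, level=1)[0]
    fused = []
    for d_a, d_b in zip((cH_a, cV_a, cD_a), (cH_b, cV_b, cD_b)):
        pick_a = window_contrast(d_a, cA_a, win) >= window_contrast(d_b, cA_b, win)
        fused.append(np.where(pick_a, d_a, d_b))
    # The approximation band is selected too (here by local energy) instead
    # of being averaged, in the spirit of contribution 5.
    pick_a = uniform_filter(np.abs(cA_a), win) >= uniform_filter(np.abs(cA_b), win)
    cA_f = np.where(pick_a, cA_a, cA_b)
    return pywt.iswt2([(cA_f, tuple(fused))], wavelet)
```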
