
夜间视频增强的关键技术研究

Research on Key Techniques of Nighttime Video Enhancement

【Author】 饶云波 (Rao Yunbo)

【Supervisor】 陈雷霆 (Chen Leiting)

【Author Information】 电子科技大学 (University of Electronic Science and Technology of China), Computer Application Technology, Ph.D., 2012

【Abstract】 Video information is increasingly used to recognize and judge objects and to solve practical problems. Because of weather conditions, poor illumination, and the capture devices used, nighttime surveillance video is often unclear or even severely blurred, which hinders monitoring and cannot meet application requirements. To address these problems, this dissertation studies nighttime surveillance video at two levels, video-based self-enhancement and frame-based fusion enhancement, together with the technical issues that arise during fusion. It first analyzes existing video enhancement techniques and proposes a unified framework for video enhancement algorithms; it then proposes several nighttime video enhancement algorithms; finally, it resolves the camera motion problem that occurs during video enhancement. The main contributions of this dissertation are as follows:

(1) The techniques involved in video enhancement are surveyed and existing video enhancement algorithms are analyzed, leading to a classification into ① self-enhancement of nighttime video and ② frame-fusion-based video enhancement. On the basis of the advantages and disadvantages of these algorithms, evaluation approaches for video enhancement algorithms are proposed. Building on the analysis of fusion-based enhancement, a unified model for nighttime video fusion enhancement is proposed together with its basic algorithms.

(2) To overcome the shortcomings of existing frame-fusion-based video enhancement techniques, a frame-fusion-based nighttime video enhancement algorithm is proposed that fuses the daytime background illumination into the illumination of the nighttime video frames. The main contributions are: an additive enhancement term is used to effectively enhance the nighttime background and moving objects, remedying the deficiencies of existing algorithms; and a Gaussian low-pass filter is designed to resolve the mismatch between moving-object regions and their boundaries after enhancement.

(3) To address the image confusion and the inconsistent illumination ratios inside moving-object regions that arise during nighttime video enhancement, a nighttime video enhancement algorithm based on frame illumination compensation is proposed. The main contributions are: following the ratio between the daytime background illumination and the nighttime frame illumination, an algorithm is proposed that compensates the nighttime illumination with the bright background to enhance the nighttime video; and to eliminate the uneven ratios inside nighttime moving-object regions, a motion-region ratio-average method is proposed.

(4) Traditional enhancement operates on grayscale images; extending grayscale enhancement algorithms directly to color video causes color disharmony, destroys the natural color balance, and makes the enhanced images look unnatural in tone. To address this problem, this dissertation proposes a contrast enhancement algorithm for nighttime video based on a genetic algorithm (GA). The proposed algorithm operates on the luminance layer of the video frames and thus avoids the color-disharmony problem.

(5) Exploiting the shift invariance and noise-suppression properties of the nonsubsampled contourlet transform (NSCT), an NSCT-fusion-based nighttime video enhancement algorithm is proposed. By fusing the daytime background illumination into the nighttime frame illumination, two key problems are addressed: ① to enhance nighttime video, an NSCT-based algorithm is proposed that fuses the daytime illumination background and the nighttime video frames of the same scene; ② to improve the clarity of moving objects in the enhanced nighttime video, a nighttime video enhancement algorithm is proposed that effectively restores the colors of nighttime frames and makes the enhanced moving objects clearer.

(6) To effectively enhance dark nighttime video, high-quality daytime background information of the same scene is often used to enhance the nighttime frames. However, because of camera motion, the daytime background scene is often not exactly the same as the nighttime video scene, so moving objects in the enhanced result become inconsistent with the background. To address this problem, global motion estimation (GME) is introduced to resolve the scene mismatch between daytime and nighttime, i.e., the camera-motion problem. In addition, to remedy the deficiencies of traditional nighttime video enhancement algorithms, a nighttime video enhancement algorithm is proposed that effectively recovers the inconsistencies between moving objects and the background under differing scenes and makes the enhanced moving objects clearer.
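The abstract does not reproduce the formulas behind contributions (2) and (3), but the frame-fusion idea can be illustrated with a minimal sketch. The version below assumes a simple luminance-ratio model: background pixels take the daytime illumination, the moving object is lifted by the average day/night ratio over its region, and a Gaussian low-pass mask softens the object/background boundary. All function names, parameters (`sigma`, `eps`), and the YCrCb decomposition are illustrative assumptions, not the dissertation's actual implementation.

```python
import cv2
import numpy as np

def fuse_illumination(night_bgr, day_bg_bgr, night_bg_bgr, fg_mask, sigma=15.0, eps=1.0):
    """Toy ratio-based illumination fusion (illustrative, not the thesis's exact method).

    night_bgr    : nighttime frame to enhance (uint8, H x W x 3)
    day_bg_bgr   : daytime background of the same scene (uint8, H x W x 3)
    night_bg_bgr : nighttime background model (uint8, H x W x 3)
    fg_mask      : binary mask of moving objects in the nighttime frame (uint8, 0/255)
    """
    # Work on the luminance channel only, leaving chrominance untouched.
    night_ycc = cv2.cvtColor(night_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    day_y = cv2.cvtColor(day_bg_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0].astype(np.float32)
    night_bg_y = cv2.cvtColor(night_bg_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0].astype(np.float32)
    night_y = night_ycc[:, :, 0]

    # Per-pixel daytime/nighttime illumination ratio (eps avoids division by zero).
    ratio = (day_y + eps) / (night_bg_y + eps)

    fg = fg_mask > 0
    # Lift the moving object uniformly by the average ratio over its region,
    # a stand-in for the "motion region ratio average" idea in contribution (3).
    fg_ratio = ratio[fg].mean() if np.any(fg) else 1.0

    # In this toy version, background pixels simply take the daytime illumination.
    enhanced_bg_y = day_y
    enhanced_fg_y = night_y * fg_ratio

    # A Gaussian low-pass on the blending mask softens the object/background boundary,
    # echoing the low-pass filtering in contribution (2).
    soft = cv2.GaussianBlur(fg.astype(np.float32), (0, 0), sigma)
    night_ycc[:, :, 0] = np.clip(soft * enhanced_fg_y + (1.0 - soft) * enhanced_bg_y, 0, 255)

    return cv2.cvtColor(night_ycc.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```

The dissertation additionally handles light inversion, ghost patterns, and imperfect foreground extraction; this sketch deliberately ignores those refinements.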

【Abstract】 Video information is increasingly used to recognize and identify objects in daily life and to solve practical application problems. However, captured nighttime videos are often too dark or unclear for monitoring purposes because of extreme weather, poor lighting conditions, and the relatively low-cost cameras used, so they cannot satisfy surveillance applications. To address these problems, we study nighttime video enhancement from two perspectives, self-enhancement and illumination-based frame fusion, together with the techniques related to fusion enhancement. This dissertation first analyzes existing video enhancement techniques and proposes a general framework for nighttime video enhancement; it then proposes several nighttime video enhancement algorithms; finally, it proposes a GME algorithm to resolve the camera motion problem that exists in the enhancement pipeline. The main contributions of this dissertation are summarized as follows:

(1) We present an overview of video enhancement processing and the algorithms used in these applications. Existing video enhancement techniques are classified into two categories: ① self-enhancement and ② illumination-based frame fusion enhancement. Based on a discussion of the advantages and disadvantages of these algorithms, evaluation approaches for nighttime video enhancement algorithms are proposed. Building on the analysis of illumination-based enhancement, a general framework for nighttime video enhancement is proposed and its component techniques are analyzed.

(2) We analyze several problems of existing nighttime video enhancement techniques and propose an illumination-fusion-based enhancement algorithm for nighttime video surveillance, which fuses daytime background frames with nighttime video. The main contributions are as follows: the algorithm uses an additive enhancement term together with foreground object extraction to enhance nighttime videos and moving objects, remedying the shortcomings of existing algorithms; and, to avoid light-inversion and sensitivity problems and to reduce ghost patterns introduced by illumination-ratio variations, a constrained low-pass filter is applied in the enhancement process.

(3) We discuss several problems of existing nighttime video enhancement techniques and propose a novel and effective enhancement algorithm for video surveillance based on illumination compensation, which fuses high-quality daytime backgrounds with low-quality nighttime video frames. To further improve the perceptual quality of moving objects, an algorithm based on object-region ratio averaging is also proposed.

(4) When traditional intensity-based image enhancement is applied directly to color video, the enhanced colors no longer harmonize with the original video and the natural color balance is destroyed. To address this problem, we propose an efficient contrast enhancement algorithm based on a genetic algorithm (GA). The proposed algorithm operates on the illumination (luminance) layer and thereby avoids the color-harmony problem.

(5) Because the nonsubsampled contourlet transform (NSCT) is shift-invariant and can suppress noise to a certain extent, we propose an NSCT-based nighttime video enhancement algorithm that fuses the daytime background illumination into the nighttime frame illumination. This work focuses on two problems: ① the proposed NSCT-based algorithm fuses the daytime background and nighttime video frames of the same scene; ② building on this analysis, to further improve the perceptual quality of moving objects, we propose an improved enhancement framework that can efficiently recover unreasonable results caused by imperfect moving-object extraction.

(6) To enhance nighttime video, external daytime or high-quality images of the same scene are usually used; however, the surveillance camera often undergoes small motions that cause scene differences between the daytime and nighttime videos. In these cases, previous methods may lose static illumination and produce unreasonable results. We therefore propose a global-motion-estimation-based scheme to address the scene differences between daytime and nighttime videos, and we further propose an improved nighttime video enhancement framework that can efficiently recover unreasonable results caused by such scene differences.
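Contribution (6) aligns the daytime background with the nighttime scene via global motion estimation before fusion. The abstract does not specify the motion model or matching method, so the sketch below assumes a feature-based homography (ORB features, brute-force Hamming matching, RANSAC); these are illustrative choices, and reliably matching day and night imagery may require more robust features in practice.

```python
import cv2
import numpy as np

def align_day_to_night(day_bg_bgr, night_bgr, ransac_thresh=3.0):
    """Illustrative global motion estimation: register the daytime background to the
    nighttime frame with a homography (ORB features + RANSAC)."""
    day_gray = cv2.cvtColor(day_bg_bgr, cv2.COLOR_BGR2GRAY)
    night_gray = cv2.cvtColor(night_bgr, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(2000)
    kp_day, des_day = orb.detectAndCompute(day_gray, None)
    kp_night, des_night = orb.detectAndCompute(night_gray, None)
    if des_day is None or des_night is None:
        return day_bg_bgr  # not enough texture to estimate motion; use the input as-is

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_day, des_night), key=lambda m: m.distance)[:500]
    if len(matches) < 4:
        return day_bg_bgr  # a homography needs at least four correspondences

    src = np.float32([kp_day[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_night[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if H is None:
        return day_bg_bgr

    h, w = night_bgr.shape[:2]
    # Warp the daytime background into the nighttime frame's coordinate system,
    # so the subsequent illumination fusion sees a consistent scene.
    return cv2.warpPerspective(day_bg_bgr, H, (w, h))
```

The warped background can then replace the unaligned one in a fusion step such as the ratio-based sketch earlier in this record.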

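Contribution (4) casts contrast enhancement as a search on the luminance layer solved by a genetic algorithm. The abstract gives neither the chromosome encoding nor the fitness function, so this toy sketch assumes a single gamma-like parameter per frame and uses the entropy of the enhanced luminance as fitness; the population size, mutation rate, and fitness choice are all assumptions made for illustration.

```python
import cv2
import numpy as np

def luminance_entropy(y):
    """Shannon entropy of an 8-bit luminance image, used here as a toy fitness."""
    hist = np.bincount(y.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ga_contrast_enhance(frame_bgr, pop_size=20, generations=30, mut_rate=0.3, seed=0):
    """Toy GA: search a gamma parameter on the luminance channel (illustrative only)."""
    rng = np.random.default_rng(seed)
    ycc = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    y = ycc[:, :, 0].astype(np.float32) / 255.0

    def apply_gamma(g):
        return np.clip((y ** g) * 255.0, 0, 255).astype(np.uint8)

    def fitness(g):
        return luminance_entropy(apply_gamma(g))

    pop = rng.uniform(0.2, 1.0, size=pop_size)           # gamma < 1 brightens dark frames
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(parents, size=2, replace=True)
            child = 0.5 * (a + b)                         # arithmetic crossover
            if rng.random() < mut_rate:
                child += rng.normal(0.0, 0.05)            # Gaussian mutation
            children.append(float(np.clip(child, 0.2, 1.0)))
        pop = np.concatenate([parents, np.array(children)])

    best = max(pop, key=fitness)
    ycc[:, :, 0] = apply_gamma(best)                      # only luminance is modified
    return cv2.cvtColor(ycc, cv2.COLOR_YCrCb2BGR), float(best)
```

Because only the luminance channel is modified, the chrominance channels, and hence the natural color balance, are left untouched, which is the property the contribution emphasizes.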