
Research on Adaptive Beautification and Rendering of Facial Images

Adaptive Facial Image Beautification and Rendering

【Author】 Liang Lingyu (梁凌宇)

【Supervisor】 Jin Lianwen (金连文)

【Author Information】 South China University of Technology, Information and Communication Engineering, 2014, Ph.D.

【摘要 (Chinese Abstract, translated)】 Facial image beautification and rendering is a research hotspot in the emerging field of computational photography. It is concerned not only with improving image quality but, more importantly, with manipulating specific content or attributes of facial images, such as enhancing the brightness, smoothness, or color of facial skin. Its goal is to produce images that conform to human visual perception and possess attractive visual effects, thereby extending the capabilities of traditional photography and image-processing systems. It has broad application prospects both in daily life, such as photography and digital entertainment, and in professional fields, such as advertisement design and film production.

At present, performing facial beautification and rendering with existing tools generally requires tedious manual operations, offers limited convenience and efficiency, and is constrained by the user's visual perception and professional skills. Moreover, because tasks and goals vary, beautification and rendering involve constructing and using many types of models, yet there is still no mature theory that can uniformly describe and analyze the relationships and properties among these models. In addition, facial images exhibit highly complex appearance variations caused by factors such as illumination, pose, and background.

To address these problems, this thesis applies state-of-the-art theories and methods from computer vision and image processing, together with analytical tools such as statistical learning, partial differential equations, variational theory, and numerical analysis, and draws on findings from cognitive psychology, social psychology, and art. It analyzes the edge-preserving smoothing and edit-propagation models relevant to facial image beautification and rendering and, by introducing the adaptive idea from control theory, constructs an adaptive edge-preserving energy minimization model that automatically adjusts its own properties according to the characteristics of the image and the task, improving the efficiency, accuracy, error tolerance, and stability of the technical framework. On this theoretical and technical basis, the thesis systematically studies facial skin beautification, lighting transfer, and ink-painting-style rendering within facial image beautification, photorealistic rendering, and non-photorealistic rendering, addressing the above problems with good results. This work lays a solid foundation for further research on image understanding and more complex image editing and rendering, and provides technical support for building more intelligent image post-processing systems and tools. The main contributions are as follows.

First, an adaptive edge-preserving energy minimization model is proposed. It provides theoretical support for adaptive edge-preserving smoothing and edit propagation, and a technical basis for building adaptive facial beautification and rendering systems. By surveying representative image-processing models related to this research and applying tools such as nonparametric point estimation, the calculus of variations, and nonlinear filtering, the thesis analyzes the structure, input features, output properties, and parameter settings of these models in depth. To integrate their capabilities, a general edge-preserving energy minimization model (the "general edge-preserving model") is constructed; it subsumes the related models, provides the basic functions required by facial beautification and rendering, and offers a theoretical basis for deriving new models with edge-preserving smoothing and edit-propagation capabilities. On this basis, by constructing adaptive data-term weights, model parameters, and guided feature spaces, an adaptive edge-preserving energy minimization model (the "adaptive edge-preserving model") is built. This model not only has better stability, error tolerance, extensibility, and flexibility than the general model, but is also easier to operate and adaptive, producing more effective edge-preserving smoothing and edit propagation. Experimental results demonstrate the effectiveness and practicality of its technical framework.

Second, for facial skin beautification, a new image-editing tool called the "region-aware mask" is proposed within the adaptive edge-preserving framework. The region-aware mask automatically selects skin regions and sets non-uniform local editing strengths, accurately fits complex region boundaries, and produces natural region transitions. On this basis, an adaptive facial skin beautification framework is constructed, and a data- and knowledge-driven optimization method for beautification parameters is proposed for layer enhancement, automatically setting the combined beautification parameters according to the average face and psychological priors. By coordinating and integrating the combined parameters with the lighting, smoothness, and color masks, the framework automatically beautifies three key skin attributes (lighting, smoothness, and color) in a single unified framework, greatly improving the effectiveness and applicability of facial skin beautification. Experiments show that the proposed models and framework handle facial images with varying illumination, expression, gender, background, age, pose, and race, and achieve results comparable to or better than commercial systems such as PicTreat, Portrait+, and Portraiture.

Third, for facial lighting transfer, an adaptive edge-preserving smoothing model with non-uniform model parameters is constructed within the same framework, enabling lighting-template generation and achromatic lighting transfer with non-uniform properties inside the facial region. To further handle lighting transfer with complex backgrounds, an edit-propagation model with adaptive propagation parameters is constructed, which smoothly and naturally diffuses the lighting information inside the face into the background region and can simultaneously transfer brightness, shadow, and color information, generating achromatic and chromatic lighting templates and the corresponding transfer effects under complex backgrounds. This new editing tool generates lighting templates and performs lighting transfer directly from facial images, without dedicated equipment; it also supports chromatic lighting transfer against complex backgrounds, extending the applicability of image-based photorealistic face relighting. Using Retinex theory and quotient-image theory, together with mathematical derivation, the thesis establishes the feasibility of generating a template from a single reference facial image and proposes a lighting-transfer framework based on adaptive lighting templates derived from a single reference image. Experiments show that the adaptive lighting template produces good transfer results on real photographs, grayscale images, non-photorealistic images, and hand-drawn images with different appearance characteristics, effectively improving and extending lighting-transfer techniques in both rendering quality and applicability.

Fourth, for ink-painting-style rendering of facial images, the thesis focuses on simulating ink diffusion and generating different ink-painting styles. A new image-based ink-diffusion method is proposed: by setting the model features, model parameters, and guided feature space, it adaptively achieves ink-painting effects with different degrees of abstraction, diffusion scopes, and diffusion patterns. For style generation, a new ink-painting rendering framework combines the degree of image abstraction, the ink-diffusion patterns, and the color and texture of the rice-paper background, producing ink-painting renderings of different styles as well as distinctive non-photorealistic facial renderings. Experiments show that the method produces good ink-painting effects on various objects and faces, and offers a distinctiveness compared with other non-photorealistic rendering methods.

【Abstract】 Facial image beautification and rendering are two rapidly developing computational photography techniques. They involve manipulating the attributes or content of an image (such as enhancing facial skin lighting, smoothness, and color), whereas classic image-processing techniques aim to enhance overall image quality. With image-based manipulation techniques, a novel image is synthesized from samples captured in the real world rather than by recreating the entire physical world, which can enhance or extend the capabilities of digital photography. The development of facial image beautification and rendering has led to many useful applications in daily life (such as post-production of photography and entertainment) and in industry (such as advertisement and movie production). However, existing methods of facial beautification and rendering may require tedious and time-consuming hand-crafted operations. Furthermore, good visual effects are hard to produce by hand because of the limitations of human visual perception and skill. It is therefore attractive, but challenging, to construct an automatic system for facial image beautification and rendering: variations of facial images are caused by many factors, such as illumination, viewpoint, and background; the tasks involve assorted mathematical models, yet there is no mature unified framework to analyze these models effectively; and to produce an image in a natural manner, the principles of human visual perception must also be taken into consideration during system construction. This thesis develops an adaptive edge-preserving energy minimization model that automatically adjusts its properties according to the input images and the manipulation tasks.
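To make the idea concrete, an edge-preserving energy of this family combines a data-fidelity term with an edge-weighted smoothness term. The following 1-D weighted-least-squares sketch is illustrative only: the thesis's AEEM model adapts its fidelity weights, parameters, and guided features, whereas the function name, fixed weights, and parameter values below are assumptions for demonstration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def wls_smooth_1d(g, lam=1.0, alpha=2.0, eps=1e-4):
    """Edge-preserving smoothing of a 1-D signal g by minimizing
        sum_i (u_i - g_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2,
    with w_i = 1 / (|g_{i+1} - g_i|^alpha + eps): the smoothness weight
    collapses across strong edges, so edges survive while flat regions
    are smoothed."""
    n = len(g)
    grad = np.abs(np.diff(g))
    w = 1.0 / (grad**alpha + eps)          # n-1 smoothness weights
    # Normal equations (I + lam * D^T W D) u = g, D = forward difference.
    main = np.ones(n)
    main[:-1] += lam * w                   # row i gets w_i
    main[1:] += lam * w                    # row i gets w_{i-1}
    off = -lam * w                         # coupling between i and i+1
    A = diags([off, main, off], [-1, 0, 1], format="csc")
    return spsolve(A, g)

# A noisy step: smoothing flattens the noise but keeps the jump.
rng = np.random.default_rng(0)
g = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u = wls_smooth_1d(g)
```

In this family, making `lam`, the weights `w`, or the feature space that drives them vary with the input is exactly the kind of adaptation the thesis's model formalizes.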
Using this model, we can analyze and construct novel edge-preserving smoothing and edit-propagation models under a unified framework, and develop an automatic image-manipulation system with good reliability, accuracy, error tolerance, and stability. Based on the adaptive edge-preserving energy minimization model, we explore the specific problems of facial skin beautification, face relighting, and ink-painting rendering. The contributions of the thesis are as follows.

First, we develop a general adaptive edge-preserving energy minimization framework to improve the performance of edge-preserving smoothing and edit-propagation methods and to achieve adaptive facial image beautification and rendering. A general edge-preserving energy minimization (GEEM) model is presented to explain the connections and properties of bilateral filtering, anisotropic diffusion, and the weighted least squares filter, using nonparametric point estimation and the calculus of variations. To overcome the shortcomings of the GEEM model, an adaptive edge-preserving energy minimization (AEEM) model is proposed, which has an adaptive fidelity term, adaptive model parameters, and a high-dimensional guided feature space. The AEEM model can derive novel models with better edge-preserving smoothing or edit-propagation effects, which further improve the performance of the specific automatic systems for facial skin beautification, face relighting, and ink-painting rendering.

Second, we propose a novel image-editing tool called the adaptive region-aware mask and construct a unified framework for facial skin beautification, which can enhance skin lighting, smoothness, and color automatically. The region-aware mask is generated from AEEM, integrating facial structure and appearance features, adaptive model parameters, and a guided feature space constructed from lighting and color features.
Using the region-aware mask, we can automatically select the skin regions to edit and perform inhomogeneous local adjustments with high precision, especially for regions with complex boundaries. The proposed skin-beautification framework contains three major steps: image layer decomposition, region-aware mask generation, and image layer manipulation. Under this framework, a user can perform facial beautification simply by adjusting the skin parameters. Furthermore, the combinations of parameters can be optimized automatically, based on the average-face assumption and related psychological knowledge. We performed both qualitative and quantitative evaluations of our method using faces of different genders, races, ages, poses, and backgrounds from various databases. The experimental results demonstrate that our technique is superior to previous methods and comparable to commercial systems such as PicTreat, Portrait+, and Portraiture.

Third, we present a novel automatic lighting-template generation method to relight faces with complex backgrounds. Based on the principles of Retinex theory and the quotient image, a face-relighting framework using a single reference image is presented, in which the lighting template is the key component. Face relighting within the skin region is performed using a lighting template generated by an adaptive edge-preserving smoothing model derived from AEEM with an adaptive smoothness parameter. To address relighting against a complex background, the lighting within the skin region is diffused into the background in a smooth manner using an edit-propagation model derived from AEEM with an adaptive propagation parameter.

Fourth, we propose an image-based ink-painting rendering framework with a novel ink-diffusion simulation method, which can mimic diverse ink-painting styles. We construct a specific edit-propagation model derived from AEEM with edge detectors and a guided feature space to simulate ink diffusion.
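The thesis's ink diffusion is driven by an edit-propagation model derived from AEEM; as a rough stand-in for that idea, the sketch below spreads an ink layer by iterative diffusion whose conductance (Perona–Malik style) collapses across strong edges of a guidance image, so ink bleeds inside a region but is blocked at contours. The function name, conductance formula, and parameter values are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def ink_diffuse(ink, guide, iters=60, k=0.1, dt=0.2):
    """Spread the ink layer by explicit diffusion steps; the conductance
    c is near zero across strong gradients of the guidance image, so
    those pixels barely update and act as barriers to the bleeding ink."""
    u = ink.astype(float).copy()
    gy, gx = np.gradient(guide.astype(float))
    c = np.exp(-(gx**2 + gy**2) / k**2)        # Perona-Malik conductance
    for _ in range(iters):
        # 4-neighbour Laplacian (np.roll wraps; keep strokes off borders).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * c * lap
    return np.clip(u, 0.0, 1.0)

# Toy example: an ink blob left of a strong vertical contour in the guide.
ink = np.zeros((64, 64)); ink[28:36, 16:24] = 1.0
guide = np.zeros((64, 64)); guide[:, 32:] = 1.0
out = ink_diffuse(ink, guide)
```

This explicit scheme is stable for `dt <= 0.25` on a 4-neighbour stencil; changing `iters`, `k`, or the guidance features changes the diffusion scope and pattern, which is the knob the thesis exposes through its model features and guided feature space.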
Different ink-diffusion effects with different degrees of abstraction, diffusion scopes, and diffusion patterns are obtained by adjusting the model features, parameters, and guided features. The proposed ink-painting rendering framework, which consists of line-feature extraction, adaptive ink diffusion, and absorbent-paper background simulation, can generate distinctive ink-painting styles through different combinations of image abstraction, ink-diffusion patterns, and absorbent-paper backgrounds.
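As background for the relighting contribution above: under the Retinex assumption an image factors as I = R · L (reflectance times lighting), so dividing a reference face by a neutrally lit version of the same face cancels R and leaves a lighting ratio that can be transferred multiplicatively. The sketch below shows only this bare quotient-image principle on synthetic data; the thesis's adaptive, edge-preserving template generation and background propagation are omitted, and all names and constants here are illustrative.

```python
import numpy as np

def lighting_template(ref_lit, ref_neutral, eps=1e-3):
    """Quotient-image style template: (R*L_target) / (R*L_neutral)
    cancels the shared reflectance R, leaving the lighting ratio.
    eps guards against division by near-zero pixels."""
    return (ref_lit + eps) / (ref_neutral + eps)

def relight(face, template):
    """Transfer lighting by multiplying the face with the template."""
    return np.clip(face * template, 0.0, 1.0)

# Synthetic sketch: one "reflectance" map under two lighting fields.
h, w = 64, 64
y = np.linspace(0, 1, h)[:, None]
R = 0.4 + 0.2 * np.tile(np.linspace(0, 1, w), (h, 1))   # reflectance
L_neutral = np.full((h, w), 0.8)                        # flat lighting
L_side = 0.3 + 0.6 * y                                  # dark-to-bright ramp
ref_neutral = R * L_neutral
ref_lit = R * L_side
T = lighting_template(ref_lit, ref_neutral)
out = relight(ref_neutral, T)   # re-lit face, approximately R * L_side
```

On real images the raw quotient is noisy and leaks facial detail, which is why the thesis smooths the template with its adaptive edge-preserving model before transfer.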
