
Research on Realistic 3D Facial Animation Synchronized with Chinese Speech

Chinese Speech Synchronized Realistic 3D Facial Animation

【Author】 Zhou Wei

【Advisor】 Wang Zengfu

【Author Information】 University of Science and Technology of China, Pattern Recognition and Intelligent Systems, 2008, PhD

【Abstract (Chinese)】 Realistic speech-synchronized facial animation is a hot topic in computer graphics, with numerous applications in human-computer interaction, entertainment, film and television production, and virtual reality. Over the past thirty years the field has made great progress, yet many problems remain open. Among them, obtaining highly realistic speech-synchronized facial animation is a challenging subject that involves the kinematic and dynamic modeling and representation of individualized faces, the modeling and representation of the co-articulation mechanism, and the subjective and objective evaluation of speech-driven facial animation. This thesis studies speech-driven facial animation from the following aspects.

First, building on Waters' muscle model, a new lip muscle model is proposed. Waters' model is too simple to model complex facial motions effectively; drawing on results from facial anatomy, the proposed model describes lip motion precisely. It decomposes the global lip movement into several sub-movements and represents the global movement as a linear combination of these sub-movements. To synthesize a talking face, feature points are first marked on the lips and a set of lip-motion parameters is extracted from them; a lip motion model is then built on this basis. Various mouth shapes are synthesized using this lip motion model together with the associated linear muscle model. Experiments show that the model has low computational cost, is practical, and can synthesize mouth-shape animation with a reasonable degree of realism.

Second, based on the Chinese Mandarin triphone model and prior work on co-articulation, a context-dependent visual-speech co-articulation model is proposed. It combines rule-based and learning-based methods, exploiting the advantages of both to obtain realistic speech animation, and focuses on the visual effects of Mandarin co-articulation. To obtain the key synthesized mouth shapes, a rule set for visual-speech co-articulation is constructed; the viseme weight of each phone is computed from the quantized rule set, and a sequence of mouth shapes corresponding to the phones is synthesized on this basis. A learning-based method then selects the transition mouth shapes between two phones from all possible candidates, yielding realistic speech animation.

In addition, a new speech-rate-dependent lip movement model is proposed. In continuous speech, speech rate strongly affects both the velocity and the amplitude of lip movement. Studies show that, for an increased rate, some speakers reduce lip-movement amplitude while keeping velocity roughly constant, others increase velocity while keeping amplitude constant, and still others adjust both parameters; that is, different speakers adopt different lip-movement strategies at different speech rates. Based on this background, a new speech-rate-dependent lip movement model with high naturalness and individuality is proposed. The lip muscle region is treated as an independent viscoelastic system, and the quantitative relationship between skin-muscle tissue, speech rate, and muscle contraction force is derived from observed data relating EMG signals to speech rate and muscle force. A Mandarin facial animation system is built on this lip movement model.

Finally, to evaluate the quality of the synthesized speech-synchronized facial animation system, a systematic evaluation method for visual Chinese speech animation is proposed. It consists of two tests: an acceptability test and an intelligibility test. The acceptability test uses the Diagnostic Acceptability Measure, augmented with an objective evaluation component. For intelligibility, a new Visual Chinese Modified Rhyme Test is proposed, in which "punishment" and "forgiveness" factors are introduced to simulate human perception of a talking face. Combining the two tests yields an overall evaluation of the proposed 3D speech animation system. On the basis of the above work, a Chinese 3D speech-animation demonstration system was designed and implemented; given input speech and a person-specific 3D face model, it generates realistic, personalized talking-face animation.

【Abstract】 Realistic synchronized speech facial animation is a hot topic in computer graphics, with many applications in human-computer interfaces, entertainment, film and television production, and virtual reality. Great progress has been made in speech animation over the past thirty years, yet many problems remain. Obtaining realistic, synchronized speech-driven facial animation is a challenging subject that involves the kinematic and dynamic modeling and representation of individualized faces, the mechanism of co-articulation, and the acoustic and perceptual evaluation of the resulting animation. This thesis studies synchronized speech facial animation from the following aspects. First, a novel lip muscle model based on Waters' muscle model is proposed. Muscle modeling is a simple and useful approach to facial animation, but an overly simple model such as Waters' cannot describe complicated facial movements naturally, so the proposed model refines the description of the complex lip-muscle movements that Waters' model handles inaccurately. Following facial anatomy, the global lip movement is decomposed into a few sub-movements, which serve as the basic units for describing the global movement; the lip movement is then reconstructed as a linear combination of these sub-movements. To model a talking face, several feature points are marked on the lips to obtain a group of lip parameters, and all kinds of lip shapes are synthesized using the proposed lip muscle model together with the adjacent linear muscle model.
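The linear-combination idea above can be sketched in a few lines: each sub-movement is a displacement field over the lip feature points, and the global lip shape is the neutral shape plus a weighted sum of those fields. The basis vectors, feature-point count, and weights below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Hypothetical basis: each row is one sub-movement's displacement field
# over 3 lip feature points in 3D, flattened to a length-9 vector.
NUM_POINTS = 3
neutral = np.zeros(NUM_POINTS * 3)              # neutral lip shape
sub_movements = np.array([
    [0, -1, 0,  0, -1, 0,  0, -1, 0],           # e.g. lower-lip depression
    [1,  0, 0, -1,  0, 0,  0,  0, 0],           # e.g. mouth-corner stretch
    [0,  0, 1,  0,  0, 1,  0,  0, 1],           # e.g. lip protrusion
], dtype=float)

def synthesize_lip_shape(weights):
    """Global lip shape = neutral + weighted linear combination
    of the sub-movement displacement fields."""
    w = np.asarray(weights, dtype=float)
    return neutral + w @ sub_movements

# One mouth shape: mostly protrusion with some lower-lip depression.
shape = synthesize_lip_shape([0.5, 0.2, 0.8])
```

Because the combination is linear, fitting the weights to the marked feature points reduces to an ordinary least-squares problem.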
Experimental results show that the proposed model is practical, given its low computational cost and its ability to produce a wide range of realistic synthesized lip shapes. Second, building on previous research on the Chinese Mandarin triphone model and co-articulation, a context-dependent visual-speech co-articulation model is proposed. The approach combines the advantages of rule-based and learning-based methods to obtain realistic speech animation, and focuses on the visual effect of Mandarin co-articulation. To obtain the key synthesized lip shapes in continuous speech, a rule set for visual-speech co-articulation is constructed, and each phone's viseme weights are computed from the quantized rule set. A sequence of lip shapes corresponding to the phones is synthesized with the muscle-based facial model, and a learning-based approach then selects the optimal transition lip shapes between two phones from all possible candidates. Third, a novel lip movement model related to speech rate is proposed. In continuous speech, speech rate strongly affects the velocity and amplitude of lip movement, and different people adopt different lip-movement strategies at different rates: for an increased rate, some speakers decrease amplitude while maintaining velocity, others increase velocity while maintaining amplitude, and still others adjust both parameters. Accordingly, a novel speech-rate-related lip movement model with a high degree of individuality and naturalness is proposed. Previous research shows a close relation between the EMG signal and speech rate, and between the EMG signal and muscle force; moreover, the region covering the lip muscles can be treated as an independent viscoelastic system.
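The rule-set step of the co-articulation scheme — per-phone viseme weights drawn from a quantized rule set, with transition shapes blended between adjacent phones — might be sketched as follows. The rule-set values, viseme targets, and the `transition` helper are illustrative placeholders, not the thesis's actual rules.

```python
import numpy as np

# Hypothetical quantized rule set: the articulatory dominance of each
# phone in continuous speech (values are illustrative only).
RULE_SET = {"b": 0.9, "a": 1.0, "u": 0.8}

# Hypothetical viseme targets: lip parameters (opening, width, protrusion).
VISEMES = {
    "b": np.array([0.0, 0.5, 0.1]),
    "a": np.array([1.0, 0.6, 0.0]),
    "u": np.array([0.3, 0.2, 0.9]),
}

def transition(prev, nxt, t):
    """Blend two phones' viseme targets at normalized time t in [0, 1],
    weighting each target by its rule-set dominance."""
    wp = (1.0 - t) * RULE_SET[prev]
    wn = t * RULE_SET[nxt]
    return (wp * VISEMES[prev] + wn * VISEMES[nxt]) / (wp + wn)

# Mid-transition lip parameters between /b/ and /a/.
mid = transition("b", "a", 0.5)
```

In the thesis, the learning-based stage would then pick the best of several candidate transitions; this sketch shows only the dominance-weighted interpolation that produces each candidate.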
The model is therefore constructed from research results on the viscoelasticity of skin-muscle tissue and the quantitative relationship between lip muscle force and speech rate. To demonstrate its validity, the model has been applied to our Chinese speech animation system. Finally, to evaluate the quality of the synthesized speech animation system, a systematic evaluation approach for visual Chinese speech animation is proposed. The approach consists of two main tests: an acceptability test and an intelligibility test. The acceptability test uses the Diagnostic Acceptability Measure, augmented with an objective evaluation component. For intelligibility, a novel approach called the Visual Chinese Modified Rhyme Test is proposed; it is based on the earlier Chinese Modified Rhyme Test for synthesized-speech evaluation and focuses on Chinese speech animation, introducing "punishment" and "forgiveness" factors to simulate human perception. Combining the two tests yields the overall evaluation of the 3D speech animation system. Building on the above research, a Chinese Synchronized Speech Animation demonstration system is constructed, in which a natural and realistic talking head is synthesized.
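The viscoelastic view of the lip-muscle region can be illustrated with a simple spring-damper (Kelvin-Voigt-style) point model, m·x'' = F − k·x − c·x', where the driving muscle force F scales with speech rate. All constants and the rate-to-force scaling below are illustrative assumptions, not parameters from the thesis.

```python
# Treat one lip point in a viscoelastic (spring-damper) muscle region:
#   m * x'' = rate_gain * force - k * x - c * v
# where rate_gain models how speech rate scales the muscle force.
def simulate_lip(force, rate_gain, steps=200, dt=0.001,
                 m=0.01, k=50.0, c=0.5):
    x, v = 0.0, 0.0                 # displacement and velocity
    trajectory = []
    for _ in range(steps):
        a = (rate_gain * force - k * x - c * v) / m  # Newton's 2nd law
        v += a * dt                 # semi-implicit Euler: velocity first,
        x += v * dt                 # then position, for numerical stability
        trajectory.append(x)
    return trajectory

slow = simulate_lip(force=1.0, rate_gain=1.0)
fast = simulate_lip(force=1.0, rate_gain=1.5)   # higher rate, larger force
```

Because the system is linear, scaling the force scales the whole displacement trajectory proportionally; speaker-specific strategies (amplitude- vs. velocity-dominant) would enter through how `rate_gain` and the viscoelastic constants are fit per speaker.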

  • 【CLC Number】 TP391.41
  • 【Cited By】 9
  • 【Downloads】 450