
基于协同交互的表情识别和情感体验建模方法研究

Research on Facial Expression Recognition and Affective Experience Modeling Based on Cooperative Interaction

【Author】 徐超 (Xu Chao)

【Supervisor】 冯志勇 (Feng Zhiyong)

【Degree Information】 Tianjin University, Computer Application Technology, 2010, Doctoral dissertation

【Abstract (translated from the Chinese)】 With the development of intelligent computing, people have become accustomed to solving problems in everyday life through human-computer interaction. If a computational agent can understand and grasp a person's internal affective experience, it can improve the rationality and cooperativeness of human-computer interaction and achieve vivid, seamless interaction. Given the complexity of affective computing and of studying it in realistic settings, this dissertation takes real-time analysis of facial expressions as its foundation and investigates the linking mechanism between an observed subject's external facial expression and internal affective experience. The main results are as follows:

A tensor-based coordinate space transformation method is proposed that maps facial feature vectors from pixel space to parameter space, unifying vector dimensions, reducing the complexity of data processing, and improving the efficiency and accuracy of real-time analysis. The soundness of the tensor space transformation is proved theoretically, showing that the geometric and physical characteristics of the facial feature vectors are preserved during the transformation. Experiments confirm that the method effectively improves the precision of facial expression recognition and the efficiency of real-time analysis.

From the perspective of cognitive intelligence, an organizational environment and person-independent (individual) facial expression analysis models are defined. Because the participants in the organizational environment are bound by organizational norms, the operability and credibility of the analysis are enhanced. Each individual model performs facial expression recognition for a particular subject in the organizational environment and refines its own analytical ability by iteratively passing the subject's facial expression cluster structure through cooperative interaction between models. The recognition algorithm improved by cooperative interaction converges faster and yields a globally optimized clustering distribution of facial expressions.

On the basis of the individual models, a Bayesian network of facial expressions is built to analyze the causes of the participants' facial expressions and to perform predictive inference. Cooperative interaction is introduced to improve the structure learning algorithm of the facial expression network, raising the quality of the network topology. Within the constructed network, the environmental factors that influence a subject's expression can be analyzed in real time, and the facial expression state the subject is most likely to express at the next moment can be predicted. This lays the groundwork for analyzing affective experience through facial expressions.

Building on the above, an analysis model is designed for studying the consistency distribution of affective experience through facial expressions. The model is used to discover whether a subject conceals affective features that should have been expressed, i.e., whether the facial expression matches the internal affective experience. Facial expression features and facial expression states are defined as affective-characteristic evidence and, combined with a cooperative-reliance evaluation algorithm, the model performs a coherence consistency analysis of the affective experience and produces the subject's affective experience distribution map. This distribution map is the most "personalized" result of facial expression analysis and affective-computing cognition.

In summary, this work explores in depth the relation between humans' external facial expression and internal affective experience, applies facial-expression-based affective computing to human-computer interaction, improves the "affective" cognitive ability of intelligent computers, and lays a foundation for harmonious human-computer interaction.
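The first contribution above maps facial feature vectors from pixel space to a fixed-dimension parameter space while preserving geometric properties. The dissertation's own formulation (which also uses differential operators of the metric tensor) is not reproduced in this record, so the following is only a minimal sketch of the general idea, assuming flattened landmark coordinates, a hypothetical projection basis `B`, and the induced metric tensor `G = BᵀB`; the function names are illustrative, not taken from the thesis.

```python
# Minimal sketch: pixel-space landmarks -> fixed-dimension parameter vector,
# with distances measured through the induced metric tensor G = B^T B so that
# inner products of shapes lying in span(B) are preserved by the transform.
# B, to_parameter_space, parameter_distance are hypothetical names.
import numpy as np

def to_parameter_space(landmarks_px: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Map an (N, 2) pixel-space landmark array to a k-dim parameter vector."""
    x = landmarks_px.reshape(-1)          # flatten to a 2N-dim pixel-space vector
    G = B.T @ B                           # induced metric tensor in parameter space
    return np.linalg.solve(G, B.T @ x)    # least-squares coordinates p = G^{-1} B^T x

def parameter_distance(p1: np.ndarray, p2: np.ndarray, B: np.ndarray) -> float:
    """Distance under the metric tensor; equals the pixel-space distance
    for shapes contained in span(B)."""
    G = B.T @ B
    d = p1 - p2
    return float(np.sqrt(d @ G @ d))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((2 * 68, 8))              # e.g. 68 landmarks -> 8 parameters
    a = (B @ rng.standard_normal(8)).reshape(68, 2)
    b = (B @ rng.standard_normal(8)).reshape(68, 2)
    pa, pb = to_parameter_space(a, B), to_parameter_space(b, B)
    # For shapes in span(B), the metric-tensor distance matches the pixel-space distance.
    assert np.isclose(parameter_distance(pa, pb, B),
                      np.linalg.norm(a.reshape(-1) - b.reshape(-1)))
```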
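The second contribution describes person-independent models that exchange their facial expression cluster structure through cooperative interaction. As a rough stand-in under assumed simplifications (local k-means per model, centroid averaging as the exchange step, and cluster indices assumed to correspond across models, which a full treatment would have to match explicitly), a sketch might look like this:

```python
# Simplified, hypothetical stand-in for cooperative interaction between models:
# each person-independent model clusters its own subject's parameter-space
# features, then the models exchange centroids and blend them, so the cluster
# structure is propagated iteratively between models.
import numpy as np

def local_kmeans_step(X, centroids):
    """One assignment + update step of k-means on one subject's data."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    new = centroids.copy()
    for k in range(len(centroids)):
        if np.any(labels == k):
            new[k] = X[labels == k].mean(axis=0)
    return new, labels

def cooperative_clustering(subject_data, k, rounds=10, blend=0.5, seed=0):
    """subject_data: list of (n_i, d) arrays, one per person-independent model."""
    rng = np.random.default_rng(seed)
    models = [X[rng.choice(len(X), size=k, replace=False)] for X in subject_data]
    for _ in range(rounds):
        models = [local_kmeans_step(X, C)[0] for X, C in zip(subject_data, models)]
        shared = np.mean(models, axis=0)                 # cooperative exchange of cluster structure
        models = [(1 - blend) * C + blend * shared for C in models]
    return models

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = [rng.standard_normal((50, 8)) + i for i in range(3)]   # 3 subjects, 8-dim features
    centroids = cooperative_clustering(data, k=4)
    print([C.shape for C in centroids])
```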

【Abstract】 With the rapid development of intelligent computing, humans have become accustomed to solving problems through Human-Computer Interaction (HCI). Understanding and mastering humans' internal affective experience is an important issue, because it can improve the rationality and collaboration of HCI and achieve vivid, seamless interaction. Considering the complexity of affective computing, we study the inner mechanism linking external facial expression and internal affective experience on the basis of real-time facial expression analysis, and obtain the following results:

First, a novel approach to facial expression analysis in a parameter space defined by a metric tensor is presented, together with a summary of related methodologies. Based on the metric tensor and its differential operators, facial features are transformed from the image pixel space to the parameter space into a unified formalization, with their geometric and physical characteristics preserved. Three groups of experiments, on standard facial expression databases and in real-time analysis, evaluate the effect of the parameter space. The results suggest that the parameter space lowers the data-formalization requirements, improves the precision of facial expression recognition, and supports distinctive real-time facial expression analysis.

Second, from the perspective of cognitive intelligence, an organizational environment and person-independent models for facial expression analysis are proposed. Since the participants are bound by organizational norms, the operability and reliability of the analysis are enhanced. Each person-independent model is responsible for one participant in the organizational environment and improves its analytical ability through the models' cooperative interactions and the iterative transmission of the facial expression cluster structure. With cooperative interaction, the person-independent models converge faster and yield a globally optimal clustering distribution of facial expressions.

Third, based on the person-independent models and Bayesian networks, facial expression networks are proposed to analyze environmental index factors and perform predictive inference. Cooperative interaction is also applied to the structure learning of the facial expression networks, and the new algorithm improves the quality of the network topology. Within the facial expression networks, the main environmental causes of a facial expression can be analyzed, and the model can predict which facial expression is most likely to be expressed next. This framework allows affective experience to be modeled on the basis of facial expressions.

Finally, building on the above studies, we design an affective experience model to analyze the coherence distribution. The model determines whether a participant hides his or her emotions, i.e., whether the facial expression and the affective experience are internally consistent. Since the person-independent model can readily collect facial features and effectively perform facial expression recognition and prediction, facial features and facial states are defined as affective-characteristic evidence. Combined with a cooperative-reliance evaluation algorithm, the model analyzes the affective experience and represents the coherence distribution diagram in space. This distribution is regarded as the most individualized capability for facial expression analysis and affective computing.

In summary, this dissertation studies how to analyze internal affective experience through external facial expression, on which basis affective computing can be improved and refined, and provides a foundation for harmonious HCI.
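The facial expression network described above predicts the most likely next expression state given environmental factors. The dissertation's actual network structure is not given in this record, so the sketch below collapses it to a single conditional table P(next | current, environment) estimated with Laplace smoothing; the state and factor names are invented for illustration only.

```python
# Hypothetical sketch of the prediction step: a discrete conditional table
# P(next_expression | current_expression, environment_factor) fit from counts,
# then an argmax query for the most likely next facial expression state.
from collections import Counter
from itertools import product

STATES = ["neutral", "happy", "surprise", "sad", "angry"]   # assumed labels
ENV = ["quiet", "task_pressure", "social"]                  # assumed factors

def fit_cpt(transitions, alpha=1.0):
    """transitions: iterable of (current_state, env_factor, next_state)."""
    counts = Counter(transitions)
    cpt = {}
    for cur, env in product(STATES, ENV):
        row = [counts[(cur, env, nxt)] + alpha for nxt in STATES]   # Laplace smoothing
        total = sum(row)
        cpt[(cur, env)] = {nxt: c / total for nxt, c in zip(STATES, row)}
    return cpt

def predict_next(cpt, current, env):
    """Return the most likely next state and its conditional distribution."""
    dist = cpt[(current, env)]
    return max(dist, key=dist.get), dist

if __name__ == "__main__":
    demo = [("neutral", "social", "happy"), ("neutral", "social", "happy"),
            ("neutral", "task_pressure", "neutral"), ("happy", "social", "happy")]
    cpt = fit_cpt(demo)
    state, dist = predict_next(cpt, "neutral", "social")
    print(state, round(dist[state], 3))
```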
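The final contribution compares expressed facial evidence with the expected affective state to produce a coherence (consistency) distribution. A minimal sketch of that comparison, assuming per-frame probability distributions from a recognizer and from the expression network, and using the Bhattacharyya coefficient as an assumed stand-in for the thesis's cooperative-reliance evaluation:

```python
# Hypothetical sketch: per frame, compare the recognized expression distribution
# (evidence from facial features) with the distribution expected from the
# expression network; low coherence marks frames where the expressed face and
# the expected affective state may disagree. The threshold is an assumption.
import numpy as np

def coherence_series(observed: np.ndarray, expected: np.ndarray) -> np.ndarray:
    """observed, expected: (T, S) arrays of per-frame distributions over S states."""
    return np.sqrt(observed * expected).sum(axis=1)     # Bhattacharyya coefficient in [0, 1]

def flag_inconsistent_frames(observed, expected, threshold=0.6):
    scores = coherence_series(observed, expected)
    return scores, np.where(scores < threshold)[0]

if __name__ == "__main__":
    obs = np.array([[0.70, 0.20, 0.10], [0.05, 0.90, 0.05], [0.90, 0.05, 0.05]])
    exp = np.array([[0.60, 0.30, 0.10], [0.80, 0.10, 0.10], [0.80, 0.10, 0.10]])
    scores, flags = flag_inconsistent_frames(obs, exp)
    print(np.round(scores, 3), flags)   # the second frame is flagged as potentially masked
```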

  • 【Online Publication Contributor】 Tianjin University
  • 【Online Publication Year and Issue】 2010, Issue 10