仿人头像机器人人工情感建模与实现的研究

Research on Modelling and Realization of Artificial Emotion for Humanoid Head Portrait Robot

【Author】 孟庆梅 (Meng Qingmei)

【Supervisors】 曲建俊 (Qu Jianjun); 吴伟国 (Wu Weiguo)

【Author Information】 Harbin Institute of Technology, Mechanical Design and Theory, 2009, Doctoral dissertation

【Abstract】 With the continual expansion of humanoid robot applications, the demand for humanoid robots that possess "emotion" keeps growing. The humanoid head portrait robot is an important direction within humanoid robotics for realizing human-robot emotional interaction. Emotion can improve a robot's ease of use and believability, and can give the user feedback about the robot's internal state, goals, and intentions. Abroad, research in this direction is still being explored and refined; in China it has concentrated mainly on the theory of emotion models, and few results on applying emotion models to robots have been reported. This work uses techniques related to humanoid head portrait robots to build a software and hardware research platform, and studies the modeling and realization of artificial emotion for a humanoid head portrait robot through theory, simulation, and experiment. The research lays a foundation for the future coexistence of humans and robots and for communication between them.

Starting from the cognitive perspective of emotional psychology, an artificial emotion model is established for the humanoid head portrait robot, and artificial emotion is realized through a mathematical model of emotional expression. In the model, a coefficient matrix is determined to reflect the degree to which personality influences emotion, and artificial emotion is described in terms of emotions and emotional transitions. Emotional interaction is modeled with a hierarchical extended finite state machine (EFSM) built on finite state machine theory. The behavior of the interaction model is mainly carried out by the bottom-layer state machine, and a set of state parameter variables is defined to provide the basis for robot behavior control. Within the model, the emotional transitions of the emotion finite state machine are studied through its state transition table and state transition matrix; based on the relationship between the finite state machine and a Markov chain, the stationary probability distribution of the emotion FSM model is calculated; and the emotional behavior of the state machine is expressed by transition paths in the state transition graph. Emotion simulations were carried out on the basis of the artificial emotion model; they verified the correctness of the model and provided a basis for the experimental study.

For the external stimuli that drive emotion, the extraction and recognition of visual and auditory signals are studied. For visual signals, feature extraction and recognition focus on human facial expressions: facial expression features are extracted with integral projection and knowledge-based methods, and expressions are recognized with a fuzzy neural network. In building the fuzzy neural network, fuzzy clustering is used to adjust the center parameters and the least squares method is used to adjust the connection weights. Speech extraction and recognition use a continuous Gaussian-mixture hidden Markov model: endpoint detection of the speech signal is realized with short-time average energy and the short-time zero-crossing rate, 12th-order Mel cepstral coefficients and short-time energy serve as the static features for speech recognition, and their first-order differences serve as the dynamic features.

On the basis of the humanoid head portrait robot "H&Frobot-II", a humanoid head portrait robot "H&Frobot-III" with visual and auditory functions was built. The robot consists of three parts: the robot body, the motion control system, and the perception sensor system. The robot body realizes the motion of the eyeballs, eyelids, jaw, and other components; based on the Facial Action Coding System (FACS), the robot's facial expressions and dynamic mouth shapes are produced by controlling the motion of feature points on its flexible skin. A CCD sensor and a speech-recognition microcontroller give the robot its visual and auditory functions.

Human-robot emotional interaction experiments were then carried out. In the emotion reproduction experiments, the robot's six basic facial expressions (normal, smile, surprise, disgust, sadness, and anger) and its dynamic mouth shapes were obtained, verifying the robot's ability to express emotion. On this basis, human-robot emotional interaction experiments were conducted: through exchanges of speech and facial expressions with people, the robot produced corresponding responses. The results demonstrate the robot's capability for human-robot emotional interaction and verify the correctness and effectiveness of the artificial emotion model. Overall, the research on modeling and realizing artificial emotion for the humanoid head portrait robot shows that the artificial emotion model can satisfy the needs of human-robot interaction well, and it provides a reference for further research on and practical application of the "human-like" characteristics of robots.

This work was supported jointly by the National "863 Program" project "Integrated Design and Basic Technology Verification of a Humanoid Full-Body Robot System with Expression Intelligence" (2006AA04Z201) and the Harbin Institute of Technology interdisciplinary research fund project "Research on a Humanoid Head Portrait Robot with Six Facial Expressions and Vision and Its Behavior" (HIT.DM.2002.0.6).
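
For reference, the stationary probability distribution mentioned above is the standard one for a finite Markov chain; the relations below are not taken from the thesis itself but are the defining equations such a calculation relies on. If the emotion finite state machine is viewed as a homogeneous Markov chain over n emotional states with transition matrix P (entry P_{ij} being the probability of moving from state i to state j, each row summing to 1), the stationary distribution \pi = (\pi_1, ..., \pi_n) satisfies

    \pi P = \pi,    \sum_{j=1}^{n} \pi_j = 1,    \pi_j \ge 0,

that is, \pi is the left eigenvector of P for eigenvalue 1, normalised to sum to one. Each \pi_j can then be read as the long-run fraction of time the state machine spends in emotional state j, giving a long-run summary of the model's emotional behavior.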

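The fuzzy-neural-network training scheme described above is a two-stage fit: the hidden-layer centers come from a clustering step, and the output connection weights are then obtained by least squares. The Python sketch below illustrates only that two-stage idea with a simple Gaussian-membership network; it is an illustrative stand-in (the function names, the single width parameter sigma, and the ridge term are assumptions), not the thesis's fuzzy neural network.

    import numpy as np

    def fit_output_weights(X, Y, centers, sigma=1.0, reg=1e-6):
        """Given hidden-layer centers (e.g. from a fuzzy clustering step), solve for
        the output connection weights by ridge-regularised least squares."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (N, K) squared distances
        H = np.exp(-d2 / (2.0 * sigma ** 2))                            # Gaussian membership activations
        W = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)
        return W

    def predict(X, centers, W, sigma=1.0):
        """Forward pass: membership layer followed by the linear output layer."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        H = np.exp(-d2 / (2.0 * sigma ** 2))
        return H @ W                                                    # class scores per sample

The design point this sketch tries to capture is that only the output weights need a closed-form least-squares solve once the centers are fixed, which keeps training simple compared with fitting all parameters jointly.
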
【Abstract (English)】 With the development of humanoid robot research and applications, the demand for humanoid robots with emotion keeps growing. The humanoid head robot is an important direction for realizing human-robot emotional interaction in robotics. Emotion can improve the autonomy and flexibility of a robot and offer the user feedback such as internal state, goals, and intentions. At present, research on artificial emotion models abroad is still at a developing stage; in China, research has focused on the theory of artificial emotion models, and there are few publications on applying such models. In the present study, the software and hardware platform of a humanoid head portrait robot was developed with related techniques of personal robots. The modeling and realization of artificial emotion for the humanoid head portrait robot was investigated in depth through theory, simulation, and experiment. This research lays a foundation for realizing human-machine interaction and the coexistence of humans and robots.

From the cognitive viewpoint of psychology, an artificial emotion model was proposed for the humanoid head portrait robot. A mathematical model of emotional expression was built to describe emotional character. In the model, a coefficient matrix was studied to reflect the effect of personality on emotion, and emotions and emotional transitions were used to describe artificial emotion. Based on finite state machine (FSM) theory, a hierarchical extended finite state machine (EFSM) emotional interaction model was built. The main work of the model is done in the bottom-layer EFSM, in which the variable set V is defined to provide the data for the robot's behavior. In the study of the FSM, the transition table and transition matrix were used to show the emotional transitions. Based on the relation between the FSM and a Markov chain, the stationary probability distribution of the FSM was calculated. The emotional behavior of the FSM was obtained from the transition routes in the state transition graph. A simulation model was built on the basis of the artificial emotion model; the theory was verified by simulation, which also served as the basis for the experimental study.

For the external stimuli of emotion, the extraction and recognition of visual and speech signals were studied in the humanoid head robot system. For visual signals, projection and knowledge-based methods were used to extract expression features from gray-level images, and a fuzzy neural network (FNN) was used to recognize facial expressions. In the structure of the fuzzy neural network, a fuzzy clustering method was adopted to adjust the center parameters, and the least squares method was used to adjust the connection weights. For speech signals, a continuous hidden Markov model (CHMM) was used to recognize spoken words, and short-time energy and the short-time average zero-crossing rate were adopted to detect the endpoints of the speech signal. Twelfth-order MFCCs and short-time energy were defined as the static feature parameters, and their first-order differences were defined as the dynamic feature parameters.

The humanoid head portrait robot "H&Frobot-III" with audio-visual functions was developed based on the humanoid head portrait robot "H&Frobot-II". The robot includes the robot body, the control system, and the sensor system. The robot body realizes the movement of the eyeballs, eyelids, and lips. The facial expressions and basic lip shapes of the robot were realized by controlling the movement of feature points on the flexible skin, based on the Facial Action Coding System (FACS). In the sensor system, a CCD camera and an SPCE061A chip were used to realize the robot's visual and auditory functions.

Experiments on emotional interaction between human and robot were carried out. Six basic facial expressions (normal, smile, surprise, disgust, sad, angry) and dynamic lip shapes were obtained in the emotion representation experiments, which verified that the robot can express emotion through dialogue and facial expression. On this basis, emotional interaction experiments were conducted, in which the robot responded to people's voices and facial expressions. The experimental results show that the robot can realize human-machine interaction and demonstrate the validity of the artificial emotion model. The theoretical and experimental results in this thesis indicate that the artificial emotion model can satisfy the needs of human-robot interaction very well.

This study is part of the research project "Integration Design and Basic Technology Verification in the Full Body Robot System with Expression Intelligence" (grant number 2006AA04Z201), supported by the National High-Tech Research and Development Program of China, and is also part of the Interdisciplinary Foundation of HIT project "Research on Humanoid Robot with Six Facial Expressions and Vision" (grant number HIT.DM.2002.0.6).
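
The speech front end described above (endpoint detection from short-time energy and the zero-crossing rate, followed by static features plus their first-order differences) can be illustrated with a minimal Python sketch. This is not the thesis's implementation: the frame length, hop size, threshold rule, and function names are illustrative assumptions, and the MFCC computation itself is left to a library.

    import numpy as np

    def short_time_features(x, frame_len=256, hop=128):
        """Short-time energy and zero-crossing rate for each frame of the signal."""
        x = np.asarray(x, dtype=float)
        frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
        energy = np.array([np.sum(f ** 2) for f in frames])
        zcr = np.array([np.mean(f[:-1] * f[1:] < 0) for f in frames])  # fraction of sign changes
        return energy, zcr

    def detect_endpoints(x, energy_ratio=0.1, zcr_ratio=1.5):
        """Crude endpoint detection: a frame counts as speech when its energy is well
        above the average, or its zero-crossing rate is (to catch unvoiced consonants).
        Returns the first and last speech frame indices, or None if nothing is found."""
        energy, zcr = short_time_features(x)
        if energy.size == 0:
            return None
        speech = (energy > energy_ratio * energy.mean()) | (zcr > zcr_ratio * zcr.mean())
        idx = np.flatnonzero(speech)
        return (int(idx[0]), int(idx[-1])) if idx.size else None

    def add_deltas(static):
        """Append first-order differences of the static features (e.g. MFCCs and
        short-time energy) as the dynamic feature parameters."""
        delta = np.diff(static, axis=0, prepend=static[:1])
        return np.hstack([static, delta])

In the classical dual-threshold scheme, the energy threshold locates the voiced core of an utterance and the zero-crossing-rate threshold extends its boundaries to include unvoiced consonants; the single-pass rule above only approximates that behaviour.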
