
Research on Key Techniques of Intelligent Human Animation (智能人体动画若干关键技术研究)

Intelligent Techniques for Character Animation

【Author】 肖俊 (Xiao Jun)

【Supervisor】 庄越挺 (Zhuang Yueting)

【Author Information】 Zhejiang University, Computer Science and Technology, 2007, PhD

【Abstract (Chinese)】 With the application of optical motion capture equipment, realistic 3D human motion data can now be acquired efficiently and quickly, and such data have been successfully applied in many fields including computer animation, video games, film special effects, sports simulation and virtual reality. In recent years, fairly large commercial and research-oriented human motion capture databases have appeared. As motion data acquisition has matured, the focus of computer animation researchers has gradually shifted to how to analyze, manage and reuse existing 3D human motion capture data, and to developing new techniques for automatic and intelligent creation of 3D human animation. The work of this thesis explores automatic and intelligent methods for analyzing, processing and reusing 3D human motion capture data, and on this basis studies intelligent techniques for 3D human animation creation. The thesis proposes techniques for human motion data segmentation, motion data abstraction, keyframe-based 3D motion data retrieval, automatic editing and generation of stylized human motion data, a human motion synthesis engine based on motion capture data sets and motion scripts, and automatic human animation generation based on motion planning.

To separate the different types of motion contained in a long motion capture sequence, the human motion data, whose structure is complex in the original space (in the original high-dimensional space, the motion samples are distributed in a complicated way and form a twisted or even folded high-dimensional manifold), are first projected onto a structurally simpler low-dimensional manifold by a nonlinear manifold dimensionality reduction technique, and a clustering algorithm is then applied to segment them automatically. A sketch of this reduce-then-cluster pipeline is given below.

To represent the original motion data sequence abstractly, a bone-angle octuple feature representation is proposed, and an improved layered curve simplification algorithm is then used to extract keyframes. Experimental results show that this algorithm can not only compress and summarize the original motion data sequence, but also keep the keyframe collections of similar motion sequences consistent with each other, and it has further applications in motion data compression, keyframe-based motion retrieval and synthesis.

For the management of 3D human motion data, a keyframe-based 3D motion retrieval technique is proposed, which compares the similarity of motion sequences by constructing a distance matrix between their keyframe collections, enabling content-based retrieval of 3D human motion data. The algorithm is simple to compute, efficient, and does not require a precomputed index structure, which makes it suitable for incremental 3D human motion databases.

For the reuse of 3D human motion data, an automatic framework is proposed for real-time, quantitative generation and editing of stylized human motion data. Principal Component Analysis (PCA) is used to map human motion data into a subspace, reducing computational complexity while preserving the characteristics of the original data as much as possible. Stylized motion generation and editing algorithms are applied in the PCA subspace to produce new stylized human motion data. To address the fact that real human motions often blend multiple styles, a novel method is also proposed for generating and editing multi-style human motion data.

To synthesize 3D human animation sequences efficiently and quickly from user instructions, the thesis proposes a realistic 3D human motion synthesis engine based on a motion capture data set. By defining motion scripts in a standard XML format, the engine can fetch the corresponding motion clips from the motion capture database and synthesize the final 3D human motion sequence. Within this flexible engine framework, users can provide their own specific motion data sets and define the associated motion element tables. Experiments show that the engine architecture can be used in applications such as computer games, animation production systems, sports simulation and virtual reality.

To generate character animation in a specific virtual scene, an automatic human animation generation framework based on motion planning is proposed. Given a virtual scene, the animator manually specifies the start and end points of the character's movement; the system then plans the motion path and selects behaviors for the character automatically or interactively, saves the result as a motion script, and, based on the script, extracts the corresponding behavior clips from the existing motion capture database and synthesizes the final animation sequence.

Finally, Chapter 9 summarizes the research work of this thesis and discusses future directions.
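To make the segmentation step concrete, the following minimal sketch shows the general reduce-then-cluster pipeline described above in Python. Isomap and k-means are stand-ins chosen here for illustration; the abstract does not commit to these specific algorithms, and the layout of `frames` (one flattened joint-angle vector per frame) is an assumption.

```python
import numpy as np
from sklearn.manifold import Isomap   # non-linear dimensionality reduction (illustrative choice)
from sklearn.cluster import KMeans    # clustering on the low-dimensional manifold (illustrative choice)

def segment_motion(frames: np.ndarray, n_dims: int = 3, n_actions: int = 4):
    """Split a long MoCap sequence into primitive actions.

    frames: (n_frames, n_features) array, e.g. flattened joint angles per frame.
    Returns (start, end, label) triples; contiguous runs of one label form a segment.
    """
    # Project the high-dimensional, folded pose manifold onto a simpler low-dimensional one.
    embedded = Isomap(n_neighbors=10, n_components=n_dims).fit_transform(frames)

    # Cluster poses in the embedded space; each cluster stands for one motion type.
    labels = KMeans(n_clusters=n_actions, n_init=10).fit_predict(embedded)

    # Convert per-frame labels into (start, end, label) segments.
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, int(labels[start])))
            start = i
    return segments
```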

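The keyframe-extraction idea, which treats the bone-angle feature sequence as a curve over time and simplifies it, can likewise be sketched with a recursive, Douglas-Peucker-style simplification. This only approximates the layered curve simplification algorithm proposed in the thesis; the feature layout and the `tol` threshold are assumptions.

```python
import numpy as np

def extract_keyframes(features: np.ndarray, tol: float = 0.2):
    """Keyframe extraction by recursive curve simplification (Douglas-Peucker style).

    features: (n_frames, d) bone-angle feature curve sampled over time.
    Returns sorted frame indices kept as keyframes.
    """
    keep = {0, len(features) - 1}

    def simplify(lo: int, hi: int):
        if hi - lo < 2:
            return
        # Deviation of each interior frame from the linear interpolation between
        # the segment's endpoint poses, measured in the bone-angle feature space.
        t = np.linspace(0.0, 1.0, hi - lo + 1)[1:-1, None]
        chord = features[lo] + t * (features[hi] - features[lo])
        dev = np.linalg.norm(features[lo + 1:hi] - chord, axis=1)
        worst = int(np.argmax(dev))
        if dev[worst] > tol:
            mid = lo + 1 + worst
            keep.add(mid)          # the most poorly approximated frame becomes a keyframe
            simplify(lo, mid)      # refine the two halves recursively ("layered" refinement)
            simplify(mid, hi)

    simplify(0, len(features) - 1)
    return sorted(keep)
```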
【Abstract】 Due to the popularity of optical motion capture systems, more and more realistic human motion data can be acquired easily. In recent years, large and highly detailed human motion databases have become commercially available and are widely used in various applications such as video games, animation films, sports simulation and virtual reality. Therefore, many researchers have focused on how to edit, manipulate and reuse the existing motion data, and on developing new techniques for producing human animation automatically and intelligently.

The work of this thesis is to explore automatic and intelligent methods for analyzing, managing and reusing motion capture data, and to develop automatic and intelligent approaches for producing 3D human animations efficiently. The thesis presents the following algorithms: automatic motion segmentation, keyframe extraction from human motion sequences, motion retrieval based on keyframes, automatic synthesis and editing of motion styles, a script engine for realistic human movement generation based on MoCap data, and automatic human movement generation based on motion planning.

A novel method is proposed to extract primitive actions from long MoCap sequences efficiently. Original motion sequences lie on a high-dimensional manifold which is highly folded and twisted, so it is difficult to cluster similar poses together to form distinct primitive actions. We use a non-linear dimensionality reduction technique to map original motion sequences onto a low-dimensional manifold, and then apply clustering techniques to separate the primitive actions.

We propose a keyframe extraction method based on a novel layered curve simplification algorithm for motion capture data. Bone angles are employed as motion features, and keyframe candidates are selected based on them. The layered curve simplification algorithm then refines those candidates to obtain the keyframe collection. The experiments demonstrate that our method can not only compress and summarize the motion capture data efficiently, but also keep the keyframe collections of similar human motion sequences consistent, which is of great benefit to further motion data retrieval or editing.

Chapter 5 introduces a novel motion retrieval approach based on keyframes. When a motion retrieval command is issued, a distance matrix is constructed between the keyframe set of the query example and that of a motion from the database, and the similarity between them is calculated from this distance matrix. Compared to most existing content-based motion retrieval approaches, our method achieves better time efficiency without depending on a precomputed indexing structure or preset parameters, which makes it well suited to incremental motion databases.

We propose a framework for automatic, real-time and quantitative synthesis and editing of human motion styles. In this framework, Principal Component Analysis (PCA) is used to map original styled human motions into subspaces, which reduces computational complexity while preserving the intrinsic properties of the original data. Synthesis and editing methods are applied in such subspaces, and motions with new styles are then reconstructed. As realistic human motions may have multiple styles, we also present a novel method to synthesize and edit motions with multiple styles.

Chapter 7 proposes a script engine framework for realistic human movement generation based on a well-organized MoCap database. Users can create or edit motion scripts which describe the movement type, order, and details. The script sequence is decomposed into sequential commands which are used to retrieve proper motion clips from the MoCap database and generate the final movement sequence. Furthermore, users can define their own motion element tables and scripts in this flexible script engine framework according to various MoCap data sets. The experimental results show that this script engine framework achieves good performance and can serve as a human motion engine in various applications, such as computer games, animation production, sports simulation and virtual reality.

Chapter 8 proposes a framework to plan the movements of characters and generate navigation animations in virtual environments. Given a virtual environment, a visual user interface allows animators to interactively generate motion scripts describing the characters' movements in the scene; these scripts are then used to retrieve motion clips from the MoCap database and generate navigation animations automatically. The framework also provides a flexible mechanism for animators to obtain varied resulting animations through a configurable table of motion bias coefficients and the interactive visual user interface.

In Chapter 9, we conclude the thesis and discuss future work.
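As a rough illustration of the keyframe-based retrieval of Chapter 5, the sketch below builds the distance matrix between two keyframe sets and reduces it to a single similarity score. The abstract only states that similarity is computed from the distance matrix; the dynamic-time-warping alignment and the normalization used here are assumptions made for the example.

```python
import numpy as np

def keyframe_similarity(query_keys: np.ndarray, cand_keys: np.ndarray) -> float:
    """Similarity between two motions computed from their keyframe sets.

    query_keys: (m, d) pose features of the query's keyframes.
    cand_keys:  (n, d) pose features of a database motion's keyframes.
    """
    # Distance matrix between every pair of keyframes.
    dist = np.linalg.norm(query_keys[:, None, :] - cand_keys[None, :, :], axis=-1)

    # Dynamic time warping over the (ordered) keyframe sequences: accumulate the
    # cheapest monotone alignment cost through the distance matrix.
    m, n = dist.shape
    acc = np.full((m + 1, n + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            acc[i, j] = dist[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])

    # Normalize by path length and map the cost to a similarity in (0, 1].
    return 1.0 / (1.0 + acc[m, n] / (m + n))

def retrieve(query_keys: np.ndarray, database: dict):
    """Rank database motions (name -> keyframe array) by similarity to the query."""
    scores = {name: keyframe_similarity(query_keys, keys) for name, keys in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```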

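The style synthesis and editing framework of Chapter 6 can likewise be sketched as a project-edit-reconstruct pipeline in a PCA subspace. The linear blend between a neutral and a styled example motion, and the helper name `blend_styles_in_pca`, are illustrative assumptions; the thesis's quantitative style model is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

def blend_styles_in_pca(neutral: np.ndarray, styled: np.ndarray,
                        weight: float, n_components: int = 8) -> np.ndarray:
    """Generate a new motion by blending two example motions in a PCA subspace.

    neutral, styled: (n_frames, d) motions of equal length on the same skeleton,
                     e.g. a neutral walk and a stylized (tired, happy, ...) walk.
    weight: 0.0 -> neutral, 1.0 -> styled, >1.0 exaggerates the style.
    """
    # Learn the subspace from the pooled frames of both examples, so that most of
    # the pose variation is captured by a few components.
    pca = PCA(n_components=n_components)
    pca.fit(np.vstack([neutral, styled]))

    # Edit in the low-dimensional subspace: per-frame linear interpolation or
    # extrapolation between the two motions' subspace coordinates.
    z_neutral = pca.transform(neutral)
    z_styled = pca.transform(styled)
    z_new = (1.0 - weight) * z_neutral + weight * z_styled

    # Reconstruct full-dimensional poses from the edited subspace trajectory.
    return pca.inverse_transform(z_new)
```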
  • 【Online Publication Contributor】 Zhejiang University
  • 【Online Publication Issue】 2007, Issue 06
  • 【CLC Number】 TP391.41
  • 【Cited By】 35
  • 【Downloads】 1085