
Research on Key-Frame-Based Methods for Describing Video Content

Description of Video Content Based on Key Frame

【Author】 张丽坤

【Supervisor】 孙建德

【Author Information】 Shandong University, Communication and Information Systems, 2013, Master's degree

【Abstract (translated)】 With the rapid development of multimedia technology, how to describe video content has become a prominent research topic. Description methods that incorporate the human visual system (HVS) are attracting increasing attention from researchers and are applied in video retrieval, intelligent surveillance, video compression, video copy detection, and related fields. With the wide deployment of intelligent surveillance systems, the use of video content description for event detection in surveillance video has become especially important. Accurately describing video content, and applying such descriptions to event detection, is therefore an active research topic both in China and abroad.

Building on a spatial-temporal attention model that is consistent with the human visual attention mechanism, this thesis applies the visual attention shift mechanism to propose a key-frame-based method for describing video content, applies the method to event detection in intelligent surveillance video, and additionally studies face detection and tracking algorithms. The thesis first reviews the basic methods and the state of the art of video content description; it then introduces the fundamentals of visual attention models and details the construction of a new spatial-temporal attention model. On this basis, key frames that characterize the video content are extracted according to the visual attention shift mechanism, and the key-frame-based description method is applied to event detection. Finally, face detection and tracking algorithms are studied, so that face information can later be used as a higher-level semantic feature in constructing the attention model.

The main innovations and contributions of this thesis are as follows:

(1) A new spatial-temporal attention model is constructed. Building on the laboratory's earlier results, the model adds the temporal information of the video and fuses the temporal and spatial attention models with a temporally dominated weight, yielding a spatial-temporal attention model consistent with the human visual attention mechanism.

(2) An event detection algorithm based on visual attention shift is proposed. Starting from the characteristics of human visual attention, the algorithm takes shifts of visual attention as the basis for event detection. Attended regions are extracted from each video frame with the spatial-temporal attention model, and shifts of the viewer's attention point are determined from changes in the most-attended region across consecutive frames, forming a visual attention rhythm. Key frames are selected according to the intensity of changes in this rhythm; they mark the moments when events occur and trigger alerts for attended events. The attended region in a key frame, chosen in accordance with the human visual attention mechanism, is then taken as the target and tracked with a meanshift-based algorithm, which locates it in the preceding and subsequent frames; abandoned and removed objects are marked and highlighted, helping to suppress dangerous situations in time.

(3) A face detection and tracking algorithm based on AdaBoost and CAMSHIFT is proposed. Building on existing face detection and tracking algorithms, it improves the tracking stage by using an accumulated histogram as the tracking evidence and by continually adjusting the position and size of the search window during tracking. This effectively resolves the problems that arise when skin color is similar to the background color, or when the extent of the face region changes with distance.
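The temporally dominated fusion of the two attention models described in contribution (1) can be sketched as follows. This is an illustrative NumPy sketch, not the thesis's exact formula: the function name `fuse_attention`, the normalization step, and the specific rule that the temporal weight grows with the frame's mean temporal saliency are all assumptions made here for demonstration.

```python
import numpy as np

def fuse_attention(spatial, temporal, base_weight=0.5):
    """Fuse per-pixel spatial and temporal saliency maps into one
    spatial-temporal map, weighting toward the temporal map.

    Hypothetical weighting rule: at least `base_weight` of the mix goes
    to the temporal map, and more when motion saliency is strong overall.
    """
    def norm(m):
        # Normalize a map to [0, 1] so the two maps are comparable.
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    s, t = norm(spatial), norm(temporal)
    # Temporally dominated weight (an illustrative choice).
    w_t = base_weight + (1.0 - base_weight) * t.mean()
    return w_t * t + (1.0 - w_t) * s
```

With `base_weight=0.5` the temporal map always contributes at least half of the fused saliency, matching the abstract's statement that the fusion weight is dominated by temporal attention.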

【Abstract】 With the rapid development of multimedia technology, the description of video content has become a hot research topic. Nowadays, applying the Human Visual System (HVS) to describing video content attracts more and more research interest, and it is widely applied in video retrieval, intelligent surveillance, video compression, video copy detection, and so on. Meanwhile, intelligent surveillance systems also have urgent requirements for the description of video content, especially for event detection in surveillance videos. Therefore, how to exactly describe the events in surveillance video is one of the highlights of these related fields.

In this paper, we propose a method for describing video content based on key frames, in which we use a visual attention shift mechanism built on a spatial-temporal visual attention model that meets the HVS. It is used for event detection in surveillance video. At the same time, we carry out research on face detection and face tracking. We first introduce the state of the art of video content description and then introduce the theory of visual attention models. In the following, we emphasize the spatial-temporal visual attention model we propose. Based on this model, we extract key frames according to the human visual attention mechanism to describe the video content, and apply them to event detection in surveillance videos. Finally, we research face detection and face tracking so that faces can be used as a high-level feature to improve our spatial-temporal visual attention model in future research.

The main innovations and contributions of this paper are as follows:

(1) We form a new spatial-temporal visual attention model. The model adds the temporal information of the video to our lab's earlier results. The temporal and spatial visual attention models are fused by a weight determined by the temporal attention model, forming a final spatial-temporal visual attention model that meets the human visual attention mechanism.

(2) We propose a visual attention shift-based event detection algorithm for intelligent surveillance, in which the temporal and spatial visual attention regions are detected to obtain the visual saliency map, and the visual attention rhythm is then derived temporally from the visual saliency map. According to the visual attention rhythm, key frames are selected to label the occurrence of events. At the same time, the objects likely to be attended in the key frames are extracted and tracked in the preceding and subsequent frames.

(3) A face detection and tracking algorithm based on AdaBoost and CAMSHIFT is proposed. The algorithm improves face tracking by using an accumulated histogram as the evidence for tracking and by constantly changing the size and position of the target window. This resolves the problem that faces are easily lost by the tracker when their color is similar to the background color.
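The attention rhythm and key-frame selection in contribution (2) can be sketched as follows. This is a minimal NumPy illustration under assumptions not stated in the abstract: the "most-attended region" is approximated by the centroid of pixels above half the peak saliency, and the threshold rule (mean plus `k` standard deviations of the rhythm) is a hypothetical stand-in for the thesis's "intensity of change" criterion.

```python
import numpy as np

def most_attended_centroid(saliency):
    """Centroid of pixels above half the peak saliency -- an
    illustrative proxy for the most-attended region of a frame."""
    ys, xs = np.nonzero(saliency > 0.5 * saliency.max())
    return np.array([ys.mean(), xs.mean()])

def attention_rhythm(saliency_maps):
    """Per-frame attention-shift magnitude: the distance the
    most-attended centroid moves between consecutive frames."""
    cents = [most_attended_centroid(s) for s in saliency_maps]
    return [float(np.linalg.norm(b - a)) for a, b in zip(cents, cents[1:])]

def select_key_frames(saliency_maps, k=1.0):
    """Mark frame i+1 as a key frame when the shift from frame i
    exceeds mean + k*std of the rhythm (hypothetical threshold)."""
    rhythm = attention_rhythm(saliency_maps)
    mu, sd = np.mean(rhythm), np.std(rhythm)
    return [i + 1 for i, r in enumerate(rhythm) if r > mu + k * sd]
```

A frame where the most-attended region jumps abruptly produces a spike in the rhythm and is selected as a key frame marking a candidate event, which matches the abstract's use of the rhythm to "label the occurrence of events."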
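The accumulated-histogram tracking idea in contribution (3) can be sketched without OpenCV as a single CAMSHIFT-style step: back-project a hue histogram onto the frame, shift the window to the probability centroid, and blend the window's new histogram into the accumulated model. Everything here is an assumption for illustration only — the exponential-average accumulation with rate `alpha`, the 16-bin hue model, and the fixed window size (real CAMSHIFT also adapts the window size from the zeroth moment, which is omitted for brevity) are not the thesis's exact design.

```python
import numpy as np

def hue_histogram(hue_patch, bins=16):
    """Normalized hue histogram of a patch (hue in [0, 180),
    following the OpenCV hue convention)."""
    hist, _ = np.histogram(hue_patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def track_step(hue_frame, window, acc_hist, alpha=0.1, bins=16):
    """One CAMSHIFT-style step driven by an accumulated histogram.

    window = (row, col, height, width). The accumulated histogram is an
    exponential average of per-frame target histograms, which keeps the
    model stable when the face briefly resembles the background
    (an illustrative stand-in for the thesis's accumulated histogram).
    """
    # Back-project: probability of each pixel's hue under the model.
    bin_idx = np.clip((hue_frame.astype(int) * bins) // 180, 0, bins - 1)
    prob = acc_hist[bin_idx]

    r, c, h, w = window
    roi = prob[r:r + h, c:c + w]
    mass = roi.sum()
    if mass > 0:
        ys, xs = np.mgrid[0:h, 0:w]
        # Shift the window toward the probability centroid (mean shift).
        r = int(round(r + (ys * roi).sum() / mass - (h - 1) / 2))
        c = int(round(c + (xs * roi).sum() / mass - (w - 1) / 2))
    # Blend the new window contents into the accumulated histogram.
    new_hist = hue_histogram(hue_frame[r:r + h, c:c + w], bins)
    acc_hist = (1 - alpha) * acc_hist + alpha * new_hist
    return (r, c, h, w), acc_hist
```

In a full pipeline the initial window and histogram would come from an AdaBoost face detector (e.g. a Haar cascade), and this step would be iterated per frame until the window converges, as in the thesis's detection-then-tracking design.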

  • 【Online Publication Submitter】 Shandong University
  • 【Online Publication Year/Issue】 2013, Issue 11