
Research on Linear Feature Extraction and Its Application in Face Recognition
(线性特征抽取研究及其在人脸识别中的应用)

【Author】 Yan Hui (严慧)

【Supervisor】 Yang Jingyu (杨静宇)

【Author Information】 Nanjing University of Science and Technology, Computer Application Technology, 2011, Ph.D.

【Abstract】 Face recognition is an active research topic in biometrics; its task is to identify individuals from the discriminative information contained in face images. Using algebraic and statistical methods as tools, and building on manifold learning and subspace learning, this dissertation proposes several new linear feature extraction methods, applies them to face recognition, and compares them with current mainstream approaches to verify their effectiveness. The main contributions are as follows:

1. A two-dimensional principal component analysis (2DPCA) with minimized correlation. 2DPCA constructs the scatter matrix directly from the 2D image matrices; since 2003 it has been widely applied and has inspired a family of "two-dimensional" methods. The dissertation shows that the components of the feature vectors produced by 2DPCA are mutually correlated, gives a mathematical expression for this correlation, and modifies the 2DPCA objective function so that the total scatter of the projected samples is maximized while the correlation among the feature components is minimized.

2. Tensor-face models solved by matrix factorization. Tensor-face methods address face recognition under varying external factors (illumination, viewpoint, expression, etc.), but their objective functions are complicated to solve and contain many unknown parameters. The proposed matrix-factorization solution has three advantages: it involves fewer unknown parameters, the parameters can be chosen from a wider range, and experiments on face databases show that the resulting closed-form solution is more stable and effective; its computational complexity in the training phase is lower than that of previous tensor-face methods; and it extends the applicability of the tensor-face model, for example to image reconstruction.

3. Non-negative 2DPCA, which applies the idea of non-negative matrix factorization (NMF) to two-dimensional feature extraction. 2DPCA is a holistic ("whole-face") method that preserves the topology of facial components, whereas NMF is a local-feature method that classifies by extracting local information. By combining the advantages of the two, the proposed non-negative 2DPCA strengthens the discriminating ability that conventional NMF lacks; moreover, because the image matrices need not be converted into vectors before factorization, the image structure of the data is preserved.

4. An unsupervised difference discriminant feature extraction method that considers both the locality and the non-locality of the data distribution. Locality preserving projections (LPP) considers only the locality of the projected data and ignores the non-locality. To address this, the proposed method introduces a nonlocal scatter matrix and finds the optimal transformation by maximizing the difference between the nonlocal and local scatters; it has been successfully applied to face recognition. The method uses both local and nonlocal information to reveal the nonlinear structure hidden in the high-dimensional image space, adopts the difference form of the criterion to avoid the small-sample-size problem, and modifies the adjacency matrix of LPP so that the neighborhood relations between samples are described more accurately.

5. A discriminant feature extraction method based on center distance, which takes the intrinsic manifold structure of the data into account and adopts a new image distance measure together with a new idea for extracting discriminant features. This supervised linear method first computes the center distance between samples and, based on this distance, finds K nearest neighbors within the same class and across different classes; it then seeks projection axes such that, after dimensionality reduction, within-class neighbors are pulled as close together as possible while between-class neighbors are pushed as far apart as possible.

Brief illustrative sketches of the standard building blocks mentioned above (2DPCA, HOSVD-based tensor factorization, NMF, and graph-based scatter criteria) are given below.
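
Contributions 1 and 3 both build on standard 2DPCA, which forms an image scatter matrix directly from the image matrices and projects each image onto its leading eigenvectors. The sketch below shows only this baseline, not the correlation-minimizing or non-negative variants proposed in the dissertation; the function name and array shapes are illustrative.

```python
import numpy as np

def two_d_pca(images, d):
    """Baseline 2DPCA: build the image scatter matrix directly from the
    m x n image matrices and keep its top-d eigenvectors.

    images : array of shape (M, m, n) -- M training images
    d      : number of projection axes to keep
    Returns the n x d projection matrix X; features are Y_i = A_i @ X.
    """
    mean_image = images.mean(axis=0)            # m x n average face
    centered = images - mean_image              # broadcast over the M images
    # G = (1/M) * sum_i (A_i - mean)^T (A_i - mean), an n x n matrix
    G = np.einsum('kij,kil->jl', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)        # ascending eigenvalues
    return eigvecs[:, ::-1][:, :d]              # top-d eigenvectors as columns

# Example: 200 random 32x32 "images", keep 8 projection axes
A = np.random.rand(200, 32, 32)
X = two_d_pca(A, d=8)
features = A @ X        # each feature matrix is 32 x 8
```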
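
Contribution 2 refers to tensor-face models, which organize the training images into a data tensor (for example, people × pixels × illumination) and factorize it. The sketch below shows only the classical higher-order SVD (HOSVD) that such models build on, applied to a small random tensor; the dissertation's specific matrix-factorization solution and its closed form are not reproduced here.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: T ~= core x_1 U[0] x_2 U[1] x_3 U[2],
    with one orthogonal factor matrix per mode."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        # mode-n product of the current core with Un^T
        core = np.moveaxis(np.tensordot(Un.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, U

# Example: a (people x pixels x lighting) data tensor
T = np.random.rand(5, 64, 3)
core, U = hosvd(T)
# Reconstruct by applying the factors back mode by mode
R = core
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
print(np.allclose(R, T))   # True up to numerical error
```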
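
Contribution 3 combines 2DPCA with non-negative matrix factorization. The sketch below implements only classical NMF with Lee-Seung multiplicative updates on vectorized images; the proposed non-negative 2DPCA, which factorizes the image matrices without vectorization, is not reproduced here, and the function and variable names are illustrative.

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Classical NMF: a non-negative matrix V (p x M) is approximated by W @ H
    with W >= 0 (p x r) and H >= 0 (r x M). Columns of W act as localized
    basis images ("parts"); columns of H hold the encodings."""
    p, M = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((p, r)) + eps
    H = rng.random((r, M)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update of encodings
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update of basis images
    return W, H

# Example: 50 vectorized 32x32 face images as columns of V
V = np.random.rand(32 * 32, 50)
W, H = nmf(V, r=16)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```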
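
Contribution 4 maximizes the difference between a nonlocal and a local scatter matrix built from an LPP-style adjacency graph. The sketch below shows one generic way to set up such a criterion, assuming a k-NN heat-kernel adjacency and treating every non-neighbor pair as "nonlocal"; it is not the dissertation's exact formulation (in particular, the modified adjacency matrix is not reproduced).

```python
import numpy as np

def difference_embedding(X, k=5, t=1.0, d=10):
    """Unsupervised scatter-difference embedding: build a k-NN heat-kernel
    adjacency, form a local scatter from neighbor pairs and a nonlocal scatter
    from non-neighbor pairs, then maximize w^T (S_N - S_L) w.

    X : data matrix of shape (M, D), one vectorized image per row."""
    M, D = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
    # symmetric k-NN adjacency with heat-kernel weights
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]
    A = np.zeros((M, M))
    rows = np.repeat(np.arange(M), k)
    A[rows, idx.ravel()] = np.exp(-sq[rows, idx.ravel()] / t)
    A = np.maximum(A, A.T)
    # local and nonlocal scatter matrices from pairwise differences
    diffs = X[:, None, :] - X[None, :, :]                   # (M, M, D)
    S_L = np.einsum('ij,ijk,ijl->kl', A, diffs, diffs)
    S_N = np.einsum('ij,ijk,ijl->kl', (A == 0).astype(float), diffs, diffs)
    # leading eigenvectors of the symmetric scatter difference
    vals, vecs = np.linalg.eigh(S_N - S_L)
    return vecs[:, ::-1][:, :d]                              # D x d projection matrix

W = difference_embedding(np.random.rand(100, 64), k=5, t=1.0, d=10)
```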
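
Contribution 5 is supervised: it finds within-class and between-class neighbors and then pulls the former together while pushing the latter apart. The sketch below follows that general recipe, but plain Euclidean distance stands in for the thesis's center distance and a scatter-difference criterion stands in for its exact objective; every name and choice here is an illustrative assumption.

```python
import numpy as np

def supervised_neighbor_embedding(X, y, k=3, d=10):
    """Find k within-class and k between-class neighbors per sample
    (Euclidean distance as a stand-in metric), then maximize the
    between-neighbor scatter minus the within-neighbor scatter.

    X : (M, D) data matrix, y : (M,) integer class labels."""
    M, D = X.shape
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    same = (y[:, None] == y[None, :])
    A_w = np.zeros((M, M))      # within-class neighbor graph
    A_b = np.zeros((M, M))      # between-class neighbor graph
    for i in range(M):
        within = np.where(same[i] & (np.arange(M) != i))[0]
        between = np.where(~same[i])[0]
        A_w[i, within[np.argsort(sq[i, within])[:k]]] = 1.0
        A_b[i, between[np.argsort(sq[i, between])[:k]]] = 1.0
    A_w = np.maximum(A_w, A_w.T)
    A_b = np.maximum(A_b, A_b.T)
    diffs = X[:, None, :] - X[None, :, :]
    S_w = np.einsum('ij,ijk,ijl->kl', A_w, diffs, diffs)
    S_b = np.einsum('ij,ijk,ijl->kl', A_b, diffs, diffs)
    vals, vecs = np.linalg.eigh(S_b - S_w)
    return vecs[:, ::-1][:, :d]

y = np.repeat(np.arange(10), 10)     # 10 classes, 10 samples each
W = supervised_neighbor_embedding(np.random.rand(100, 64), y, k=3, d=10)
```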
