
Research on Radar High-Resolution Range Profile Target Recognition Techniques Based on Kernel Methods

Radar HRRP Target Recognition Based on Kernel Methods

【Author】 陈渤 (Bo Chen)

【Supervisor】 保铮 (Zheng Bao)

【Author Information】 Xidian University, Signal and Information Processing, 2008, Ph.D.

【Abstract (translated from Chinese)】 A radar high-resolution range profile (HRRP) reflects the distribution of target scattering centers along the radar line of sight and therefore carries important target structure information. Compared with recognition techniques based on radar target images (including SAR and ISAR images), HRRP-based recognition does not require a rotation angle between the target and the radar platform; HRRPs are thus easier to acquire and applicable to more types of radar, so HRRP-based radar target recognition has attracted increasingly wide attention and research. Over the past decade or so, kernel methods have been successfully applied to a variety of problems in machine learning. A kernel-based algorithm is, in effect, a nonlinear version of a linear algorithm: a nonlinear function first maps the input vectors into a high-dimensional space, and pattern-analysis algorithms can then be applied in that space using only kernel-function evaluations of pairwise inner products, which both improves performance and saves a great deal of computation. In HRRP recognition, target maneuvering induces complex nonlinear relations among targets. Accordingly, within the research tasks of the 10th Five-Year national defense pre-research project "Target Recognition Technology" (No. 413070501) and the 11th Five-Year national defense pre-research project "Radar HRRP Feature Extraction and Recognition" (No. 51307060601), this dissertation studies HRRP target recognition in depth from three aspects: kernel-based feature extraction, classifier design, and kernel optimization algorithms; it also investigates target aspect-sector partitioning and feature selection for range profiles. The main content is summarized as follows.

1. The research background of kernel methods is briefly reviewed and the basic concepts and properties of kernel functions are introduced. To demonstrate the advantage of kernel methods over linear methods, kernel principal component analysis (Kernel PCA) is improved with respect to the target-aspect sensitivity, time-shift sensitivity, and speckle effect of radar HRRPs and applied to radar automatic target recognition.

2. Linear discriminant analysis (LDA) is a linear dimensionality-reduction method widely used in pattern recognition, but it places strong restrictions on the data, e.g., that every class follows a multivariate normal distribution with a common covariance matrix but different mean vectors, and that each class forms a single cluster. To overcome these limitations, subclass discriminant analysis (SDA) was recently proposed. We propose a kernel-based subclass discriminant analysis (Kernel SDA, abbreviated KSDA), and give a new formulation of SDA that avoids complicated and unintuitive derivations in the feature space.

3. A simple and effective method is proposed to accelerate the decision procedure of the support vector machine (SVM) by reducing the number of support vectors appearing in the decision function. Although SVMs have been successfully applied to classification and regression, each test sample must be combined with every support vector through the kernel function at the decision stage, so SVMs are relatively slow at test time compared with other methods of similar performance. In practice, the embedded data lie in a subspace of the high-dimensional kernel space, so a set of basis vectors, usually fewer in number than the support vectors, can be found to represent all of them.

4. Kernel-function optimization is studied in three parts. (1) A kernel optimization algorithm for radar one-dimensional HRRPs is proposed, based on fusing an l1-distance Gaussian kernel with an l2-distance Gaussian kernel; combining the different characteristics of the two kernels not only optimizes the kernel function but also suppresses the speckle effect of HRRPs. (2) It is widely accepted that whether the kernel "matches" the data distribution governs the performance of kernel-based methods. Ideally, the data are linearly separable in the feature space defined by the desired kernel, so the Fisher linear-separability criterion can serve as a kernel optimization rule. In many applications, however, the data remain linearly inseparable even after the kernel transformation, e.g., data with multimodal structure; a nonlinear classifier then performs better, and the Fisher criterion is no longer the best choice of optimization rule. Motivated by this, we propose a new kernel optimization algorithm based on a localized kernel Fisher criterion, which maximizes local class separability in the feature space so as to enlarge the local between-class distances and thereby improve the classification performance of nonlinear classifiers in the kernel-induced feature space. (3) Moreover, the Fisher criterion is optimal only under the assumption that every class is a multivariate normal distribution with a common covariance matrix but different means and that each class forms a single cluster; under this restriction it is clearly not the best kernel optimization rule in some applications. Many improved discriminant criteria have recently been proposed to address this. To apply these criteria to kernel optimization, we propose, on the basis of a data-dependent kernel form, a unified kernel optimization framework that can use any discriminant criterion expressible in pairwise form as its cost function. Within this framework, switching to a different discriminant criterion requires only changing the corresponding affinity matrix, without any complicated derivations in the feature space.

5. The nonlinear structure of radar HRRPs is first studied from the viewpoint of manifold geometry. Then, to address target-aspect sensitivity, and under the assumption that HRRPs lie on a low-dimensional manifold, a method for adaptively partitioning aspect sectors is proposed using the curvature of the HRRP manifold.

6. A new feature selection method is proposed. Feature selection is a difficult combinatorial task in machine learning and of high practical value. We propose a large-margin feature weighting algorithm for the k-nearest-neighbor classifier. The algorithm learns the feature weights by minimizing a cost function that separates samples of different classes by a large margin, pulls samples of the same class closer together, and uses as few features as possible; the resulting optimization problem can be solved efficiently by linear programming.
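The adaptive aspect-sector partitioning described in point 5 above hinges on estimating the curvature of the HRRP manifold along the aspect dimension. The following is a minimal pure-Python sketch of one possible discrete curvature estimate; the turning-angle definition, the threshold rule, and the function names are illustrative assumptions, not the thesis's exact formulation:

```python
import math

def turning_angle(p_prev, p, p_next):
    """Discrete curvature proxy: the angle between successive difference
    vectors of aspect-ordered profiles. A large angle indicates the
    manifold bends sharply, suggesting an aspect-sector boundary."""
    u = [b - a for a, b in zip(p_prev, p)]
    v = [b - a for a, b in zip(p, p_next)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp for numerical safety
    return math.acos(c)

def segment_by_curvature(profiles, threshold):
    """Place a sector boundary wherever the local turning angle
    exceeds the threshold (adaptive rather than uniform segmentation)."""
    return [i for i in range(1, len(profiles) - 1)
            if turning_angle(profiles[i - 1], profiles[i], profiles[i + 1]) > threshold]
```

On a toy aspect sequence that runs straight and then turns sharply, the single high-curvature index is returned as the sector boundary.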

【Abstract】 A target high-resolution range profile (HRRP) represents the projection of the complex returned echoes from the target scattering centers onto the radar line of sight (LOS) and contains informative target structure signatures. Furthermore, unlike target recognition using radar target images (including SAR and ISAR images), HRRP target recognition does not require a rotation angle between the target and the radar; target HRRPs are therefore easier to obtain and the technique is applicable to more types of radar. Consequently, radar HRRP target recognition has received intensive attention from the radar automatic target recognition (RATR) community. Kernel methods have been successfully applied to various problems in the machine learning community. They are algorithms that, by replacing the inner product with an appropriate positive-definite function (the kernel function), implicitly perform a nonlinear mapping of the input data to a high-dimensional feature space. Their attractiveness stems from their elegant treatment of nonlinear problems and their efficiency in high-dimensional problems. Because targets are non-cooperative and maneuvering, different targets are in general not linearly separable. Therefore, from three aspects, i.e., kernel feature extraction, classifier design, and kernel optimization, this dissertation presents our research on HRRP target recognition, supported by the Advanced Defense Research Programs of China (No. 413070501 and No. 51307060601) and the National Science Foundation of China (No. 60302009). The main content of this dissertation is summarized as follows.

1. The first part briefly reviews the background of kernel methods and introduces their fundamental theories and characteristics.
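The kernel machinery sketched above reduces every algorithm to pairwise kernel evaluations, i.e., to the Gram matrix of the data. A minimal sketch with the Gaussian kernel (toy data; the helper names are illustrative, not from the thesis):

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel: an implicit inner product in a
    high-dimensional feature space, computed without ever mapping there."""
    d2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def gram_matrix(X, sigma=1.0):
    """Gram matrix K[i][j] = k(x_i, x_j); kernel algorithms such as
    Kernel PCA or SVM operate on this matrix alone."""
    return [[gaussian_kernel(a, b, sigma) for b in X] for a in X]

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = gram_matrix(X)
```

The matrix is symmetric with unit diagonal, as any Gaussian Gram matrix must be.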
Moreover, in order to demonstrate the better performance of kernel methods over linear methods, an improved kernel principal component analysis (Kernel PCA) is proposed to deal with the target-aspect, time-shift, and amplitude-scale sensitivity of HRRP samples.

2. LDA is a popular method for linear dimensionality reduction, which maximizes between-class scatter and minimizes within-class scatter. However, LDA is optimal only when all classes are generated from underlying multivariate normal distributions with a common covariance matrix but different means, and each class is expressed by a single cluster. To overcome these limitations, subclass discriminant analysis (SDA) was recently proposed. We develop SDA into Kernel SDA (KSDA) in the feature space, which can yield a better subspace for the classification task, since a nonlinear clustering technique can find the underlying subclasses more exactly in the feature space and nonlinear LDA can provide a nonlinear discriminant hyperplane. Furthermore, a reformulation of SDA is given to avoid complicated derivations in the feature space.

3. The third part focuses on simply and efficiently reducing the number of support vectors (SVs) in the decision function of the support vector machine (SVM) to speed up the SVM decision procedure. SVM is currently considerably slower in the test phase than other approaches with similar generalization performance, which restricts its application to real-time tasks. Because in practice the embedded data lie in a subspace of the kernel-induced high-dimensional space, we can search for a set of basis vectors (BVs) to express all the SVs approximately, whose number is usually smaller than that of the SVs.

4. The fourth part addresses the optimization of kernel functions. The main work concerns the following three aspects: (1) we develop a kernel optimization method based on a fusion kernel for the high-resolution range profile (HRRP).
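The speed-up in part 3 targets the SVM decision function, whose test-time cost is one kernel evaluation per support vector. A sketch of that function, with the reduced-set idea indicated in the comments (the support vectors, coefficients, and kernel width below are toy assumptions):

```python
import math

def rbf(x, z, sigma=1.0):
    """Gaussian kernel used inside the decision function."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, z)) / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, alphas, labels, b=0.0):
    """f(x) = sum_i alpha_i * y_i * k(sv_i, x) + b.
    The sum has one term per support vector, so replacing the SVs with a
    smaller basis-vector set that spans them in feature space (the
    thesis's approach) shortens this sum while changing f only slightly."""
    return sum(a * y * rbf(sv, x)
               for a, y, sv in zip(alphas, labels, support_vectors)) + b

svs = [[0.0, 0.0], [2.0, 0.0]]
f = svm_decision([0.0, 0.0], svs, alphas=[1.0, 1.0], labels=[+1, -1])
```

Here the test point coincides with the positive support vector, so the decision value is positive.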
By fusing the l1-norm and l2-norm Gaussian kernels, our method combines their different characteristics so that not only is the kernel function optimized but the speckle fluctuations of HRRPs are also restrained. (2) Ideally, the data would be linearly separable in the kernel-induced feature space, so the Fisher linear discriminant criterion can be used as a cost function to optimize the kernel. In many applications, however, the data may not be linearly separable even after the kernel transformation, e.g., the data may have a multimodal structure; in this case a nonlinear classifier is preferred, and the Fisher criterion is clearly not a suitable choice of kernel optimization rule. Motivated by this issue, we propose a localized kernel Fisher criterion, instead of the traditional Fisher criterion, as the kernel optimization rule to increase the local margins between embedded classes in the kernel-induced feature space, which yields better classification performance for kernel-based classifiers. (3) The Fisher criterion is optimal only when all classes are generated from underlying multivariate normal distributions with a common covariance matrix but different means, and each class is expressed by a single cluster. Owing to these assumptions, the Fisher criterion is clearly not a suitable kernel optimization rule in some applications. Many improved discriminant criteria (DC) have recently been developed to solve this problem. To apply these criteria to kernel optimization, we propose, based on a data-dependent kernel function, a unified kernel optimization framework that can use any discriminant criterion formulated in a pairwise manner as its objective function.
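The fusion kernel of aspect (1) can be written as a convex combination of an l1-distance and an l2-distance Gaussian kernel; since both components are positive definite, so is the combination. A minimal sketch (the fixed weight mu is illustrative; the thesis optimizes the combination rather than fixing it):

```python
import math

def gauss_l1(x, y, sigma=1.0):
    """Gaussian kernel on the l1 (modulus-1) distance; less sensitive to
    the large per-cell amplitude spikes behind HRRP speckle fluctuation."""
    return math.exp(-sum(abs(a - b) for a, b in zip(x, y)) / (2.0 * sigma ** 2))

def gauss_l2(x, y, sigma=1.0):
    """Standard Gaussian kernel on the squared l2 (modulus-2) distance."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2.0 * sigma ** 2))

def fused_kernel(x, y, mu=0.5, sigma=1.0):
    """Convex combination of two positive-definite kernels is itself a
    valid kernel; mu trades off the two distance behaviors."""
    return mu * gauss_l1(x, y, sigma) + (1.0 - mu) * gauss_l2(x, y, sigma)
```

For identical inputs both components equal one, so the fused kernel does too; for distant inputs it lies between the two component values.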
Under this framework, to employ a different discriminant criterion one only needs to change the corresponding affinity matrix, without resorting to any complex derivations in the feature space.

5. In the fifth part, the manifold geometry of radar HRRPs is first explored. Then, according to the target-aspect sensitivity, a method of adaptively segmenting the aspect sectors is proposed for HRRP recognition by evaluating the curvature of the HRRP manifold.

6. Finally, we propose a novel feature selection algorithm. Feature selection is a difficult combinatorial task in machine learning and of high practical relevance. We present a large-margin feature weighting method for the k-nearest-neighbor (kNN) classifier. The method learns the feature weighting factors by minimizing a cost function that aims to separate points in different classes by a large margin, pull points from the same class closer together, and use as few features as possible. The resulting optimization problem can be solved efficiently by linear programming.
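The cost function described in part 6 can be sketched directly: penalize same-class weighted distances, hinge-penalize different-class pairs that fall inside the margin, and add an l1 term for sparsity. The pairings, weights, and toy data below are illustrative; the thesis solves the full problem by linear programming rather than by evaluating candidate weights:

```python
def wdist(w, x, y):
    """Weighted l1 distance under nonnegative per-feature weights w."""
    return sum(wi * abs(a - b) for wi, a, b in zip(w, x, y))

def cost(w, same_pairs, diff_pairs, margin=1.0, lam=0.1):
    """Large-margin feature-weighting objective:
    pull same-class pairs close, push different-class pairs beyond a
    margin (hinge loss), and keep the weight vector sparse (l1)."""
    pull = sum(wdist(w, x, y) for x, y in same_pairs)
    push = sum(max(0.0, margin - wdist(w, x, y)) for x, y in diff_pairs)
    sparsity = lam * sum(w)
    return pull + push + sparsity

# toy setup: feature 0 separates the classes, feature 1 is noise
same_pairs = [([0.0, 0.0], [0.0, 5.0])]
diff_pairs = [([0.0, 0.0], [3.0, 0.0])]
```

Weighting the discriminative feature (`w = [1, 0]`) incurs no pull or hinge loss, while weighting the noise feature (`w = [0, 1]`) is heavily penalized, which is exactly the preference the LP formulation encodes.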
