
Study of Rejection Algorithm and Feature Extraction Technique on Radar HRRP Target Recognition

【Author】 Chai Jing (柴晶)

【Supervisor】 Bao Zheng (保铮)

【Author Information】 Xidian University, Signal and Information Processing, 2010, Ph.D.

【Abstract】 A radar high-resolution range profile (HRRP) is the vector sum of the projections of the complex returned echoes from target scattering centers onto the radar line of sight (LOS), and it reflects the geometric structure of the scattering target along the LOS. Compared with synthetic aperture radar (SAR) and inverse synthetic aperture radar (ISAR) images, HRRPs are easier to acquire and require far less storage, and they have therefore attracted wide attention in the radar automatic target recognition (RATR) community. Motivated by the engineering background of HRRP recognition, this dissertation presents our research on the theory and techniques of RATR, focusing on two main aspects, namely out-of-library (outlier) target rejection and feature extraction; the work was supported by advanced defense research programs and the National Natural Science Foundation of China. The dissertation consists of five sections: Section 1 is the introduction, Section 2 addresses the outlier target rejection problem, and Sections 3, 4, and 5 address the feature extraction problem.

1. Section 1 first analyzes the physical properties of HRRPs. It then introduces the background of the outlier target rejection problem from an engineering perspective, analyzes how this problem differs from conventional pattern recognition problems, and summarizes the main difficulties in solving it.

2. Section 2 proposes a method for artificially generating outlier training samples, which provides the data needed for the subsequent classifier design. To remedy the overly simple kernel form of support vector domain description (SVDD), we extend SVDD from a single kernel to a linear combination of multiple kernels and, depending on the degrees of freedom allowed for the combination coefficients, obtain two extended versions, Multikernel-SVDD1 and Multikernel-SVDD2. SVDD, Multikernel-SVDD1, and Multikernel-SVDD2 can be solved to global optimality via quadratic programming (QP), second-order cone programming (SOCP), and semidefinite programming (SDP), respectively. Experimental results show that: (1) owing to the more flexible kernel form, Multikernel-SVDD1 and Multikernel-SVDD2 achieve better rejection performance than SVDD; (2) owing to the greater freedom in the combination coefficients of the multiple kernel matrices, Multikernel-SVDD2 achieves better rejection performance than Multikernel-SVDD1. SVDD, Multikernel-SVDD1, and Multikernel-SVDD2 all seek hypersphere boundaries in high-dimensional kernel spaces and differ only in the kernel space in which they operate. In contrast to such hypersphere boundaries, this section also proposes three algorithms based on neighboring boundaries, namely the nearest neighbor (NN) classifier, the average K-nearest-neighbor (A-KNN) classifier, and the weighted K-nearest-neighbor (W-KNN) classifier, to handle the rejection problem. The experimental results show that, for radar HRRP outlier rejection, neighboring boundaries outperform the hypersphere boundary. Comparing the three neighboring algorithms, we find that W-KNN outperforms both NN and A-KNN, probably because W-KNN exploits more information while preserving strong local learning ability.

3. Section 3 proposes a large margin nearest local mean (LMNLM) algorithm. LMNLM maps the original Euclidean distance space to a Mahalanobis distance space through a linear transformation and introduces large classification margins into the nearest local mean (NLM) classifier in the projected space, with the expectation that the generalization ability of the NLM classifier will be improved. By performing eigenvalue decomposition on the learned Mahalanobis matrix, the projection matrix can be recovered, which realizes feature extraction for HRRP data. LMNLM can be formulated as an SDP problem, whose convexity guarantees a globally optimal solution. Experimental results show that LMNLM reduces the data dimensionality and enhances the separability of the data at the same time, which makes it especially suitable for HRRP data that are multimodally distributed and corrupted by noise and redundant components.

4. Linear discriminant analysis (LDA) is a representative feature extraction algorithm based on a global criterion and is widely used in pattern recognition. Because global criteria are ill-suited to multimodally distributed data, researchers have proposed local-criterion algorithms such as marginal Fisher analysis (MFA) and local discriminant embedding (LDE) for the feature extraction and classification of such data. In Section 4, we analyze these algorithms in terms of robustness and flexibility and conclude that global algorithms are more robust but less flexible, whereas local algorithms are less robust but more flexible. Based on an analysis of how the sampling density of the training data affects recognition, we propose combinatorial discriminant analysis (CDA) to trade off robustness against flexibility, and apply it successfully to radar HRRP target recognition.

5. Section 5 analyzes four drawbacks of LDA: (1) samples from the same class are required to be Gaussian distributed; (2) the number of available projection vectors is limited; (3) different difference vectors are treated equally when the scatter matrices are constructed, so their different effects on recognition are not reflected; (4) the effect of the norms of the projection vectors on recognition is ignored. To address these drawbacks, we first propose a new feature extraction algorithm, local mean discriminant analysis (LMDA), to remedy the first three, and then propose a generalized re-weighting (GRW) framework to remedy the fourth. LMDA and GRW can be solved by generalized eigenvalue decomposition and linear programming (LP), respectively, and their combination greatly enhances the separability of the data. Experiments on synthetic data, benchmark data, and radar HRRP data demonstrate their effectiveness in improving classification accuracy.
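The hypersphere rejection boundary described in Section 2 can be illustrated with a minimal sketch of the standard SVDD dual QP, here solved with cvxpy on a fixed linear combination of Gaussian kernels. The kernel choice, the fixed weights mu, the parameter names, and the helper functions are assumptions for illustration only; the Multikernel-SVDD1/2 algorithms of the dissertation additionally optimize the combination coefficients via SOCP/SDP, which is not reproduced here.

```python
import numpy as np
import cvxpy as cp

def rbf_kernel(A, B, gamma):
    # Gaussian kernel matrix between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def combined_kernel(A, B, gammas, mu):
    # Fixed linear combination of Gaussian kernels (weights mu are given,
    # not optimized as in Multikernel-SVDD1/2).
    return sum(m * rbf_kernel(A, B, g) for m, g in zip(mu, gammas))

def fit_svdd(X, C=0.1, gammas=(0.5, 2.0), mu=(0.5, 0.5)):
    # SVDD dual QP:  max  sum_i a_i K_ii - a' K a
    #                s.t. 0 <= a_i <= C,  sum_i a_i = 1   (requires C >= 1/n)
    n = X.shape[0]
    K = combined_kernel(X, X, gammas, mu) + 1e-8 * np.eye(n)  # PSD safeguard
    a = cp.Variable(n)
    obj = cp.Maximize(cp.sum(cp.multiply(np.diag(K), a)) - cp.quad_form(a, K))
    cp.Problem(obj, [a >= 0, a <= C, cp.sum(a) == 1]).solve()
    alpha = np.clip(a.value, 0.0, C)
    aKa = float(alpha @ K @ alpha)
    # Squared radius: kernel distance from a boundary support vector to the center.
    edge = np.where((alpha > 1e-6) & (alpha < C - 1e-6))[0]
    sv = edge[0] if edge.size else int(np.argmax(alpha))
    r2 = K[sv, sv] - 2.0 * K[sv] @ alpha + aKa
    return dict(X=X, alpha=alpha, aKa=aKa, r2=r2, gammas=gammas, mu=mu)

def svdd_reject(z, model):
    # Declare z an out-of-library target if it falls outside the hypersphere.
    kz = combined_kernel(z[None, :], model["X"], model["gammas"], model["mu"])[0]
    kzz = sum(model["mu"])  # k(z, z) = 1 for each Gaussian kernel
    d2 = kzz - 2.0 * kz @ model["alpha"] + model["aKa"]
    return d2 > model["r2"]
```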
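The neighboring-boundary alternative of Section 2 (NN, A-KNN, W-KNN) amounts to thresholding a distance-to-nearest-neighbors statistic. The sketch below is a plain NumPy illustration under assumed choices: the inverse-rank weighting and the threshold (e.g., a quantile of in-library training statistics) are hypothetical and not the dissertation's exact W-KNN scheme.

```python
import numpy as np

def knn_rejection_statistic(z, library, k=5, weights=None):
    # Weighted average distance from the test profile z to its k nearest
    # in-library training profiles (smaller means more "in-library").
    d = np.sort(np.linalg.norm(library - z, axis=1))[:k]
    if weights is None:
        weights = 1.0 / np.arange(1, k + 1)   # hypothetical: nearer neighbors count more
    weights = weights / weights.sum()
    return float(weights @ d)

def knn_reject(z, library, threshold, k=5, weights=None):
    # NN corresponds to k = 1, A-KNN to uniform weights, W-KNN to non-uniform weights.
    return knn_rejection_statistic(z, library, k, weights) > threshold
```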
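For Section 3, two building blocks can be sketched without the LMNLM optimization itself: the nearest local mean (NLM) decision rule whose margins LMNLM enlarges, and the recovery of a projection matrix from a learned positive semidefinite Mahalanobis matrix by eigenvalue decomposition. The SDP that learns the matrix M is not shown; function and parameter names are illustrative.

```python
import numpy as np

def nearest_local_mean_predict(z, X, y, k=3):
    # NLM classifier: for each class, average the k training samples closest to z
    # and assign z to the class whose local mean is nearest.
    labels = np.unique(y)
    dists = []
    for c in labels:
        Xc = X[y == c]
        idx = np.argsort(np.linalg.norm(Xc - z, axis=1))[:k]
        dists.append(np.linalg.norm(Xc[idx].mean(axis=0) - z))
    return labels[int(np.argmin(dists))]

def projection_from_mahalanobis(M, d):
    # Recover a d x D projection L from a PSD Mahalanobis matrix M so that
    # ||L(x1 - x2)||^2 approximates (x1 - x2)' M (x1 - x2).
    w, U = np.linalg.eigh(M)            # ascending eigenvalues
    w, U = w[::-1], U[:, ::-1]          # reorder to descending
    w = np.clip(w[:d], 0.0, None)
    return (U[:, :d] * np.sqrt(w)).T

# Usage (assuming M has been learned by the LMNLM SDP, which is not shown):
#   L = projection_from_mahalanobis(M, d); features = X @ L.T
```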
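Sections 4 and 5 both start from classical LDA, the global-criterion baseline whose limitations motivate CDA, LMDA, and GRW. The sketch below is textbook LDA solved as a generalized eigenvalue problem with SciPy; the small regularization of the within-class scatter is an added assumption, and CDA/LMDA/GRW themselves are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, d):
    # Classical LDA: maximize between-class scatter against within-class scatter
    # via the generalized eigenproblem  Sb v = lambda Sw v.
    labels = np.unique(y)
    mean = X.mean(axis=0)
    D = X.shape[1]
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for c in labels:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    Sw += 1e-6 * np.eye(D)                       # regularize a possibly singular Sw
    w, V = eigh(Sb, Sw)
    V = V[:, np.argsort(w)[::-1]]
    # At most (number of classes - 1) discriminative directions exist: drawback (2) above.
    return V[:, :min(d, len(labels) - 1)]

# Usage: W = lda_projection(X, y, d); features = X @ W
```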
