
Study on Predistortion Technique for Nonlinear Power Amplifier with Memory
(记忆功率放大器预失真技术研究)

【Author】 Chen Kaiya (陈凯亚)

【Supervisor】 Liao Cheng (廖成)

【Author Information】 Southwest Jiaotong University, Electromagnetic Field and Microwave Technology, 2008, Ph.D. dissertation

【摘要 (Abstract)】 This dissertation studies predistortion linearization techniques for power amplifiers with memory nonlinearity in communication systems. Neural networks and support vector machines are used for black-box modeling of the predistorter, thereby linearizing the memory-nonlinear power amplifier. The development trends of modern communication are first reviewed, showing that research on linearizing power amplifiers characterized by memory nonlinearity both answers a practical need and accords with the direction in which modern communication is developing. For memory-nonlinear power amplifiers with a Wiener structure, a separation predistortion method is proposed: the dynamic block and the nonlinear block of the amplifier are inverted separately, and the two inverses are cascaded into a Hammerstein predistorter that linearizes the amplifier. By turning the identification of one complex system into the identification of simple systems, the method improves both the speed and the accuracy of the algorithm; making full use of existing efficient identification algorithms for the simple systems improves the separation method further. The neural network is trained with a direct learning structure; compared with the traditional indirect learning structure, the statistics of the training inputs are closer to those of the signals encountered in practice, so the resulting neural-network predistorter generalizes better, as the simulations confirm. Support vector machines are applied to black-box modeling of the predistorter for the first time. The characteristics of local, global, and combined kernel functions are analyzed, and the performance of the SVM predistorter under different kernels is compared by simulation; the results show that a combined kernel, possessing both interpolation and extrapolation ability, is better suited to predistorter modeling, and comparison with the neural-network predistorter shows that the SVM predistorter performs better. In Keerthi's improved SMO regression algorithm, the bias is taken as the midpoint of the upper and lower thresholds, so its accuracy depends on whether those thresholds satisfy the optimality conditions when optimization ends. The reasons why they may fail to do so are analyzed; starting from the primal regression problem, an optimization problem for the bias is derived, and by analyzing the range over which the bias can vary, the problem is proved to be the minimization of a one-dimensional convex function, which is solved with the golden-section algorithm, further improving the SVM predistorter. For the regression problem, a tube-compression model for the SVM is proposed: the regression function obtained under a large ε-insensitive zone predicts the support vectors under a small ε-insensitive zone, and the samples corresponding to those support vectors are then used as the training set, reducing the problem size and speeding up SVM predistorter modeling. Finally, the work on predistortion for memory-nonlinear power amplifiers is summarized, and directions and priorities for further research are outlined.
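The separation idea described above can be illustrated with a minimal numerical sketch, not the dissertation's actual algorithm: a Wiener amplifier is modeled as an assumed minimum-phase FIR filter h followed by a static saturation nonlinearity (tanh is a stand-in here), and its exact predistorter is the Hammerstein cascade of the inverse nonlinearity followed by the inverse filter. All coefficients and signals below are hypothetical.

```python
import numpy as np
from scipy.signal import lfilter

h = np.array([1.0, 0.3])   # assumed minimum-phase FIR memory branch of the Wiener PA
f = np.tanh                # assumed memoryless saturation nonlinearity
f_inv = np.arctanh         # its closed-form inverse

def wiener_pa(x):
    # Wiener structure: linear dynamics first, then static nonlinearity
    return f(lfilter(h, [1.0], x))

def hammerstein_predistorter(d):
    # Hammerstein structure: inverse nonlinearity first, then the
    # all-pole inverse filter 1/H(z) (stable because h is minimum phase)
    return lfilter([1.0], h, f_inv(d))

x = 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(200))   # desired (linear) output
y = wiener_pa(hammerstein_predistorter(x))            # predistorter + PA cascade
err = np.max(np.abs(y - x))                           # ~0 up to rounding error
```

Because the FIR filter and its all-pole inverse cancel exactly in finite precision, the cascade reproduces the desired signal to machine accuracy, which is the sense in which the Hammerstein predistorter is the exact inverse of the Wiener amplifier.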

【Abstract】 The predistortion linearization technique is investigated for nonlinear power amplifiers with memory in communication systems. Neural networks and SVMs (Support Vector Machines) are adopted to model the predistorter as a black box and thereby linearize the amplifier. We first summarize the development trends of modern communication systems and argue that studying predistortion for nonlinear amplifiers with memory both meets a practical need and follows the direction in which modern communication is developing. A separation predistortion method is proposed for power amplifiers with a Wiener structure. The method identifies the inverses of the memory subsystem and the nonlinear subsystem separately, and then cascades them into a Hammerstein system, which is the exact predistorter for the Wiener amplifier. Because it converts the identification of one complex system into the identification of simple systems, the method both speeds up parameter identification and yields a more precise predistorter model; if more efficient algorithms for the two simple systems are introduced, the separation method performs better still. The direct learning structure is adopted to train the neural network. Because the training samples are statistically closer to the actual input signals, the generalization capability of a neural-network predistorter trained with the direct learning structure is better, which is verified by simulation. To the best of our knowledge, SVMs are used here for the first time to model the predistorter of a nonlinear amplifier with memory. The characteristics of local, global, and combined kernel functions are analyzed, and the predistorter's performance under different kernels is studied by simulation; the results show that the combined kernel is the best choice for modeling the predistorter.
In addition, simulation results show that the SVM predistorter is more robust than the neural-network predistorter. The modified SMO algorithm proposed by Keerthi runs significantly faster than the original SMO by introducing two threshold parameters; the final bias is computed by averaging the two thresholds, so its accuracy suffers if the thresholds fail to satisfy the optimality conditions. The reasons for violating the optimality conditions are analyzed, and an algorithm for finding the bias is derived from the primal regression problem. By analyzing the range over which the bias can vary, the problem is shown to be the minimization of a one-dimensional convex function, which is solved with the golden-section algorithm; this improves the ability of the SVM predistorter to linearize the nonlinear amplifier with memory. A novel tube-compression model is proposed for the SVM regression problem: the regression function learned under a larger ε-insensitive zone predicts the support vectors under a smaller ε, and the samples corresponding to those support vectors are extracted as the new training set, so the problem size is reduced and training efficiency is improved. The last part of this dissertation summarizes our work on the predistortion technique for nonlinear power amplifiers with memory and outlines the directions and key points of future research.
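The bias-computation step described above reduces to minimizing a one-dimensional convex function by golden-section search. The sketch below illustrates that search on a hypothetical stand-in objective (the sum of ε-insensitive losses over residuals shifted by a candidate bias b, which is convex and piecewise linear in b); the residual data, ε value, and search bracket are all assumptions for illustration, not values from the dissertation.

```python
import numpy as np

def eps_insensitive_objective(b, residuals, eps=0.1):
    # Sum of eps-insensitive losses of (residual - b): convex in b,
    # minimized where the bias centers the eps-tube on the residuals.
    return np.sum(np.maximum(np.abs(residuals - b) - eps, 0.0))

def golden_section_min(fun, lo, hi, tol=1e-8):
    # Standard golden-section search for the minimizer of a unimodal
    # (here convex) function on the bracket [lo, hi].
    phi = (np.sqrt(5.0) - 1.0) / 2.0          # inverse golden ratio
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if fun(c) < fun(d):
            b, d = d, c                        # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d                        # minimum lies in [c, b]
            d = a + phi * (b - a)
    return 0.5 * (a + b)

rng = np.random.default_rng(0)
residuals = 0.7 + 0.05 * rng.standard_normal(50)   # synthetic residuals near 0.7
b_star = golden_section_min(
    lambda b: eps_insensitive_objective(b, residuals), -2.0, 2.0)
# b_star lands near 0.7, the center of the residual cluster
```

Each iteration shrinks the bracket by the golden ratio, so the method needs only one new function evaluation per step and no derivatives, which suits the piecewise-linear objective here.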
