
选择性神经网络集成算法研究 (Research on Algorithms for Selective Ensemble of Neural Networks)

Investigations upon the Algorithms for Selective Ensemble of Neural Networks

【Author】 Fu Qiang (傅强)

【Supervisor】 Hu Shangxu (胡上序)

【Author Information】 Zhejiang University, Control Theory and Engineering, 2007, Ph.D.

【摘要 (Abstract)】 Ensemble learning has become one of the hot topics in machine learning in recent years, and selective ensemble methods, with their advantages in adaptability, generalizability, and combinability, have become an important direction within it. Taking neural network ensembles as its research object, this dissertation draws on theories and methods from related fields such as information theory and computing science to study selective ensemble algorithms in depth. Several high-performance selective ensemble methods are proposed, and their mechanisms, performance, parameter selection, and diversity are examined thoroughly. The main work is as follows.

First, selective neural network ensemble algorithms based on global optimization strategies are further studied. Two high-performance global optimization algorithms, Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), are introduced for constructing neural network ensembles, yielding a selective ensemble method based on discrete Binary PSO (BPSO) and another based on ACO. The BPSO-based method maps each candidate ensemble to a position in an n-dimensional discrete 0-1 space, turning the selective ensemble problem into a particle swarm search for the optimal position in that space. The ACO-based method uses pheromone to reflect the accuracy of individual networks and diversity heuristic information to reflect their diversity, effectively improving search efficiency and predictive accuracy.

Second, selective neural network ensemble algorithms based on a clustering-selection strategy are further studied. To overcome the strict data-distribution requirements of traditional k-means clustering, a selective ensemble algorithm based on Spectral Clustering (SC) is proposed. The algorithm measures the diversity of individual networks with mutual information, clusters the networks by similarity, and selects one representative from each cluster to build the ensemble. Spectral clustering maps all individual networks into a low-dimensional spectral space, ensuring clustering accuracy and thereby improving the performance of the ensemble obtained by clustering selection.

Third, the idea of an "ensemble of ensembles" is proposed: a neural network ensemble serves as one individual of a generalized ensemble, and the diversity of the member ensembles is tuned by adjusting the weighted combination coefficients of the individual networks, thereby improving overall performance. Two algorithms based on this idea are proposed: EoE-MIL, an ensemble of neural network ensembles based on minimum information loss, and EoE-AI, one based on maximum independence. EoE-MIL combines the individual networks linearly using the eigenvectors corresponding to the principal eigenvalues of the covariance matrix, so that information loss during construction is minimized; the linear independence of the eigenvectors guarantees the diversity of the member ensembles. EoE-AI uses the Kullback-Leibler information distance as a measure of statistical independence between networks and constructs the ensemble of ensembles so that each individual (itself a neural network ensemble) is maximally independent. Both algorithms improve predictive performance while offering some ability to select a model according to the problem.

In addition, the dissertation discusses diversity in neural network ensembles. Future work includes theoretical study of selective ensembles, new efficient algorithms, and the extension of selective methods to new application domains. As ensemble learning theory matures and new methods emerge, the ideas and methods of selective ensemble will play a greater role in a wider range of fields.
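The ACO-based selection described above can be illustrated with a minimal Python sketch: artificial ants build candidate sub-ensembles, guided by pheromone (reinforced by accurate subsets) and a diversity heuristic (disagreement with networks already chosen). Everything here is an assumption for illustration: the predictions are synthetic, and the names (`preds`, `ensemble_error`) and parameter values are hypothetical, not the dissertation's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the validation-set predictions of 8 trained networks.
n_nets, n_val = 8, 50
y_true = rng.normal(size=n_val)
preds = y_true + rng.normal(scale=0.3, size=(n_nets, n_val))

def ensemble_error(subset):
    """Validation MSE of the simple-average ensemble over `subset`."""
    return float(np.mean((preds[list(subset)].mean(axis=0) - y_true) ** 2))

# Pheromone reflects individual accuracy; the heuristic rewards
# disagreement with the networks already chosen (a diversity signal).
tau = 1.0 / np.array([ensemble_error([i]) for i in range(n_nets)])
best_subset, best_err = None, np.inf
for ant in range(20):                    # each ant builds one candidate subset
    subset = [int(rng.integers(n_nets))]
    while len(subset) < 4:
        rest = [i for i in range(n_nets) if i not in subset]
        eta = np.array([np.mean([np.mean((preds[i] - preds[j]) ** 2)
                                 for j in subset]) for i in rest])
        score = tau[rest] * (eta + 1e-12)
        subset.append(rest[rng.choice(len(rest), p=score / score.sum())])
    err = ensemble_error(subset)
    tau[subset] += 1.0 / err             # reinforce components of good subsets
    if err < best_err:
        best_subset, best_err = sorted(int(i) for i in subset), err

print(best_subset, best_err)
```

The pheromone-times-heuristic selection probability mirrors the standard ACO transition rule; a full implementation would also evaporate pheromone between iterations.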

【Abstract】 Ensemble learning has become a hot topic in the field of machine learning, and selective ensemble is attracting more and more attention owing to its adaptability, generalizability, and combinability across many learning machines. This dissertation investigates selective neural network ensembles by means of the theories and methods of related fields, such as information theory and computing science. Several approaches are proposed to construct high-performance selective neural network ensembles, and their working mechanisms and parameter choices are discussed in detail. The major contributions of this dissertation are as follows.

Firstly, a category of ensemble methods based on the strategy of global optimization is carefully explored. Two powerful optimization tools, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO), are employed to construct selective ensembles, yielding a selective optimum neural network ensemble based on ACO and one based on discrete Binary PSO (BPSO). In the BPSO-based approach, each candidate ensemble corresponds to a position in an n-dimensional 0-1 space, and the optimum ensemble is found by particle swarm optimization in this discrete binary space. In the ACO-based approach, the pheromone reflects the accuracy of the individual networks while the diversity heuristic information indicates their diversity. Both approaches show excellent predictive ability.

Secondly, a clustering-based selective algorithm for constructing neural network ensembles is investigated, in which neural networks are clustered according to similarity and the most accurate individual network from each cluster is selected to make up the ensemble. Traditional k-means clustering is of limited use here because of its strict requirements on the data distribution; Spectral Clustering (SC), by contrast, makes no such assumption about the global structure of the data.
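The BPSO search over the discrete 0-1 space can be sketched as follows, assuming synthetic network predictions. This follows the standard binary PSO scheme (sigmoid of the velocity as the probability of a bit being 1); the fitness function, swarm parameters, and variable names are illustrative assumptions, not the dissertation's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the validation-set predictions of 10 trained networks.
n_nets, n_val = 10, 60
y_true = rng.normal(size=n_val)
preds = y_true + rng.normal(scale=0.4, size=(n_nets, n_val))

def fitness(bits):
    """Negative validation MSE of the averaged ensemble selected by `bits`."""
    if not bits.any():
        return -np.inf                   # forbid the empty ensemble
    return -np.mean((preds[bits.astype(bool)].mean(axis=0) - y_true) ** 2)

# Each particle position is a 0-1 vector: bit i = 1 keeps network i.
n_particles, w, c1, c2 = 15, 0.7, 1.5, 1.5
x = rng.integers(0, 2, size=(n_particles, n_nets)).astype(float)
v = rng.normal(size=(n_particles, n_nets))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
g = pbest[np.argmax(pbest_f)].copy()
for _ in range(30):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    # Binary PSO update: sigmoid(v) is the probability of setting each bit.
    x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(float)
    f = np.array([fitness(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmax(pbest_f)].copy()

selected = np.flatnonzero(g)             # indices of the chosen networks
print(selected)
```

The global-best bit vector `g` directly encodes the selected sub-ensemble, which is what makes the binary encoding a natural fit for selective ensemble construction.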
A selective optimum neural network ensemble based on spectral clustering is therefore proposed to improve the predictive accuracy of the selective ensemble, in which mutual information is used to measure the diversity of the neural networks and the grouping relationships among data points are preserved as much as possible in a lower-dimensional representation.

Thirdly, an idea named "ensemble of ensembles (EoE)" is proposed. Unlike an ordinary neural network ensemble, an ensemble of neural network ensembles is a two-layered architecture that employs weighted neural network ensembles as its individuals. Its advantage is that individual diversity can be manipulated by adjusting the weights of the weighted ensembles rather than the architecture or function of the neural networks. Two approaches based on the EoE idea, EoE-MIL (Ensemble of neural network Ensembles based on Minimum Information Loss) and EoE-AI (Ensemble of neural network Ensembles based on mAximum Independence), are designed and implemented. In the EoE-MIL approach, the neural networks are combined with weights given by the eigenvectors of the principal eigenvalues of the covariance matrix to construct the individuals of the EoE, according to the minimum-information-loss principle; the diversity among the individuals of the EoE is guaranteed by the linear independence of the eigenvectors. In the EoE-AI approach, the Kullback-Leibler information distance is used to measure the statistical independence of the individuals of the EoE, and weighted combinations of the neural networks given by the eigenvectors of the principal eigenvalues of the correlation matrix likewise become the individuals of the EoE, according to the maximum-independence principle.
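The eigenvector weighting behind EoE-MIL can be sketched with a PCA-style computation: the principal eigenvectors of the covariance of the network outputs give combination weights that retain the most variance (a minimum-information-loss argument), and their orthogonality keeps the resulting EoE individuals diverse. The centred synthetic outputs and the choice of `k` below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the centred validation outputs of 6 trained networks.
n_nets, n_val = 6, 100
outputs = rng.normal(size=(n_nets, n_val))
outputs -= outputs.mean(axis=1, keepdims=True)

# Covariance matrix of the network outputs (n_nets x n_nets).
cov = outputs @ outputs.T / n_val
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
k = 3                                         # keep the top-k combinations
weights = eigvecs[:, order[:k]].T             # each row weights all networks

# Each weighted combination is one "individual ensemble" of the EoE;
# eigenvector orthogonality makes the individuals mutually uncorrelated.
individuals = weights @ outputs               # shape (k, n_val)
gram = individuals @ individuals.T
print(np.allclose(gram, np.diag(np.diag(gram)), atol=1e-6))
```

Because the rows of `weights` are eigenvectors of the covariance, the Gram matrix of the combined outputs is diagonal: the EoE individuals carry non-overlapping components of the networks' variance, which is exactly the diversity guarantee the text describes.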
Both approaches show the ability to reduce predictive error as well as the ability to select a model suited to a particular problem. Furthermore, the diversity of neural network ensembles is investigated. Future work should include a more general and deeper theoretical study, the exploration of new and powerful constructing algorithms, and the expansion of the applications to a wider scope.
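For regression ensembles like those discussed here, diversity is often quantified as the mean pairwise disagreement between member predictions. The following sketch shows one such (assumed, commonly used) measure; it is not necessarily the specific diversity definition analysed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic predictions of 5 networks on a 40-sample validation set.
preds = rng.normal(size=(5, 40))

def pairwise_diversity(p):
    """Mean squared difference between the predictions of each pair of
    networks - zero iff all members predict identically."""
    n = len(p)
    total = sum(np.mean((p[i] - p[j]) ** 2)
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

d = pairwise_diversity(preds)
print(d)
```

A selective algorithm can trade this quantity off against individual accuracy when deciding which networks to keep.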

  • 【Online Publisher】 Zhejiang University
  • 【Online Publication Issue】 2008, No. 09