基于精英集选择与扩展策略的多目标智能算法研究

Research on Multiobjective Intelligent Algorithm Based on Elite Set Selection and Expansion Strategy

【Author】 Zhao Sen

【Supervisor】 Hao Zhifeng

【Author Information】 South China University of Technology, Computer Application Technology, 2013, PhD

【Abstract】 Optimization problems are central to scientific research and engineering design, and most real-world optimization problems involve multiple objectives that conflict with and constrain one another: improving one sub-objective inevitably degrades at least one other. Unlike a single-objective optimization problem, a multi-objective optimization problem (MOP) has no absolute or unique optimal solution; instead there is a set of optimal solutions, whose elements are called Pareto optimal solutions and are mutually incomparable with respect to all objectives. Intelligent optimization algorithms are stochastic search methods built by simulating natural phenomena or processes; they are highly parallel, self-learning, self-organizing, and self-adaptive, and they provide a new way to solve complex optimization problems. Among them, genetic algorithms, differential evolution, and particle swarm optimization carry out a global search by evolving a population of candidate solutions, which makes them well suited to multi-objective optimization and has attracted wide attention from researchers at home and abroad. This dissertation studies the solution of multi-objective optimization problems with these three classes of intelligent optimization algorithms; its central question is how to combine intelligent optimization algorithms, evolutionary strategies, and multi-objective optimization techniques effectively so as to design algorithms with higher search efficiency and stronger robustness, and thereby improve the quality of the obtained solutions. The main contributions are as follows.

(1) An elitism (elite retention) mechanism effectively prevents the loss of optimal individuals caused by random factors during evolution and guarantees the global convergence of an algorithm. Two implementations of elitism are improved and extended: a region-based (μ+λ) selection method and an external archive with a self-update mechanism, both of which are applied in the algorithm designs below.

(2) To address two weaknesses of the classical multi-objective evolutionary algorithm NSGA-II, namely the weak search capability of the simulated binary crossover (SBX) operator, which lowers search efficiency, and the shortcomings of its elitism mechanism, which harm population diversity, a multi-objective evolutionary algorithm based on a gradually varying crossover distribution index and a layering strategy is proposed. By analyzing the role of the distribution index in SBX, the index is scheduled with a generation-dependent logistic function so that it grows from small to large over the course of evolution. Experiments show that this gradually varying distribution index improves the performance of the algorithm, raises convergence efficiency, and maintains population diversity. To remedy the deficiencies of NSGA-II's elitism, a layered elitist selection mechanism based on the region-based (μ+λ) selection method is proposed: elite individuals are selected by scanning the objective space partition by partition, which avoids coverage gaps on parts of the Pareto-optimal front and preserves population diversity. Numerical simulations show that, on most of the ZDT test functions, the proposed algorithm outperforms NSGA-II.

(3) Balancing convergence speed against diversity is a perennial theme in multi-objective optimization. Differential evolution (DE) is introduced to speed up multi-objective evolutionary optimization, while rank-based mutation and the elimination of individuals with the smallest crowding distance help preserve diversity. The mutation operation of DE is improved by selecting the mutation strategy adaptively according to the non-dominated rank of individuals during evolution; the selection operation is improved by an elitist scheme that chooses the next generation by repeatedly eliminating the individual with the smallest crowding distance; and the control parameters, the differential scaling factor F and the crossover probability CR, are designed as linearly decreasing functions of the generation number. Numerical simulations on standard test functions, compared with classical multi-objective optimization algorithms, show good approximation and uniformity of the obtained solution sets as well as good stability, making this a practical and effective method for multi-objective optimization.

(4) Particle swarm optimization (PSO) is applied to multi-objective optimization, and the external archive with a self-update mechanism is used in the design of the elitism mechanism, yielding a multi-objective particle swarm algorithm with adaptive reproduction in the external archive. To counter the difficulties that arise when PSO is applied to multi-objective problems, such as premature convergence to local optima, poor convergence accuracy, and the difficulty of choosing a global best particle, the following improvements are made: ① because PSO is sensitive to its initial solutions, opposition-based learning is used to generate the initial population and improve its quality; ② adaptive inertia weight and acceleration-coefficient adjustment strengthen global exploration early in the run and fine-grained local search later; ③ different reproduction strategies, chosen according to the number of non-dominated solutions in the external archive, improve both the number and the quality of the archived non-dominated solutions; ④ the non-dominated solution that has a small convergence distance to the current particle and a large crowding distance is selected as the global best particle; ⑤ when an individual best particle has not improved for several consecutive generations, it is updated by a hill-climbing local search. Numerical simulations show that the proposed algorithm performs better than or comparably to other algorithms.
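As an illustration of the generation-dependent crossover distribution index in contribution (2), the sketch below pairs a logistic schedule with standard simulated binary crossover. The logistic form and the constants eta_min, eta_max, and k are illustrative assumptions; the thesis's exact schedule and parameter values are not reproduced here.

```python
import numpy as np

def eta_schedule(gen, max_gen, eta_min=2.0, eta_max=20.0, k=10.0):
    """Logistic (S-shaped) schedule: the SBX distribution index grows from
    about eta_min early in the run to about eta_max late in the run.
    eta_min, eta_max and the steepness k are illustrative assumptions."""
    t = gen / max_gen                      # normalised generation in [0, 1]
    return eta_min + (eta_max - eta_min) / (1.0 + np.exp(-k * (t - 0.5)))

def sbx_crossover(p1, p2, eta, lower, upper, rng):
    """Standard simulated binary crossover (SBX) for real-coded parents."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return np.clip(c1, lower, upper), np.clip(c2, lower, upper)

# A small index early in the run places offspring far from their parents;
# a large index late in the run keeps offspring close to their parents.
rng = np.random.default_rng(0)
p1, p2 = rng.random(5), rng.random(5)
for gen in (0, 50, 100):
    eta = eta_schedule(gen, max_gen=100)
    c1, c2 = sbx_crossover(p1, p2, eta, 0.0, 1.0, rng)
```

A small index early on favours broad exploration, while a large index later favours fine-grained exploitation near the parents, which is the behaviour the abstract attributes to the gradually increasing index.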
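For the differential-evolution variant in contribution (3), the following sketch shows the control parameters F and CR as linearly decreasing functions of the generation number, together with an NSGA-II-style crowding distance and a truncation loop that repeatedly removes the individual with the smallest crowding distance. The parameter bounds are illustrative assumptions, and the rank-adaptive choice of mutation strategy is omitted.

```python
import numpy as np

def de_parameters(gen, max_gen, f_max=0.9, f_min=0.4, cr_max=1.0, cr_min=0.5):
    """Scaling factor F and crossover probability CR as linearly decreasing
    functions of the generation number (the bounds are illustrative)."""
    t = gen / max_gen
    return f_max - (f_max - f_min) * t, cr_max - (cr_max - cr_min) * t

def crowding_distance(objs):
    """NSGA-II crowding distance for an (n_points, n_objectives) array."""
    n, m = objs.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objs[:, j])
        f = objs[order, j]
        span = f[-1] - f[0]
        dist[order[0]] = dist[order[-1]] = np.inf   # always keep boundary points
        if span > 0:
            dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist

def truncate_by_crowding(pop, objs, size):
    """Repeatedly drop the individual with the smallest crowding distance
    until the population shrinks to the target size."""
    pop, objs = list(pop), list(objs)
    while len(pop) > size:
        i = int(np.argmin(crowding_distance(np.asarray(objs))))
        pop.pop(i)
        objs.pop(i)
    return pop, objs
```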

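Items ① and ② of contribution (4) can be sketched as follows: opposition-based learning builds the opposite population lower + upper − x and keeps the better point of each pair, and the inertia weight shrinks as the run progresses. The pairwise dominance filter and the linear weight schedule are simplifying assumptions; the thesis describes an adaptive inertia-weight and acceleration-coefficient rule whose exact form is not given here.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimisation: objective vector a dominates b."""
    return np.all(a <= b) and np.any(a < b)

def opposition_init(n, lower, upper, evaluate, rng):
    """Opposition-based initialisation: draw a random population, build the
    opposite population lower + upper - x, and keep, pair by pair, whichever
    point dominates the other (keeping the random point on ties) --
    a simple stand-in for the selection rule actually used in the thesis."""
    x = lower + (upper - lower) * rng.random((n, len(lower)))
    x_opp = lower + upper - x
    pop = [b if dominates(evaluate(b), evaluate(a)) else a
           for a, b in zip(x, x_opp)]
    return np.asarray(pop)

def inertia_weight(gen, max_gen, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: large early (global exploration),
    small late (local refinement); illustrative stand-in for the adaptive rule."""
    return w_max - (w_max - w_min) * gen / max_gen

# Example on a toy two-objective function.
rng = np.random.default_rng(1)
f = lambda x: np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])
lower, upper = np.zeros(3), np.ones(3)
swarm = opposition_init(20, lower, upper, f, rng)
w_start, w_end = inertia_weight(0, 200), inertia_weight(200, 200)
```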
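Item ⑤ of contribution (4) refreshes a personal best that has stagnated for several generations with a local hill-climbing search. The sketch below is one plausible reading of that step: perturb the stalled personal best and accept a neighbour only if it Pareto-dominates the current point; the step size, number of trials, and acceptance rule are assumptions rather than the thesis's exact procedure.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimisation: objective vector a dominates b."""
    return np.all(a <= b) and np.any(a < b)

def hill_climb_pbest(pbest, evaluate, lower, upper, rng, step=0.05, trials=20):
    """Refresh a stagnant personal best with a simple hill climber:
    try small random perturbations and move only to dominating neighbours."""
    x, fx = pbest.copy(), evaluate(pbest)
    for _ in range(trials):
        candidate = np.clip(x + rng.normal(0.0, step, size=x.shape), lower, upper)
        fc = evaluate(candidate)
        if dominates(fc, fx):          # accept only strict Pareto improvements
            x, fx = candidate, fc
    return x

# Example: refresh a stalled personal best on a toy bi-objective function.
rng = np.random.default_rng(2)
f = lambda x: np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])
pbest = np.full(3, 0.8)
new_pbest = hill_climb_pbest(pbest, f, np.zeros(3), np.ones(3), rng)
```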