
A Study on the Theory and Applications of Meta-Heuristic Optimization Algorithms

【Author】 徐俊杰

【Supervisor】 忻展红

【Author Information】 Beijing University of Posts and Telecommunications, Management Science and Engineering, PhD, 2007

【Abstract (Chinese)】 Science and technology increasingly exhibit cross-disciplinary and mutually penetrating characteristics; computer science and technology in particular have transformed human production and daily life. However, the computing power of current computers is not unlimited: it often fails to meet expectations on problems that are uncertain, dynamic, nonlinear, or multi-modal, so the exploration of efficient computing techniques has never stopped. Over the past 50 years, meta-heuristic optimization algorithms such as the genetic algorithm and simulated annealing have been studied extensively; by imitating natural phenomena (nature-inspired), these algorithms offer new ideas and means for solving complex problems. This dissertation focuses on two families of meta-heuristic optimization algorithms. The first is swarm intelligence, including ant colony optimization (ACO) and particle swarm optimization (PSO), both population-based bio-inspired algorithms. The second is microcanonical annealing, which is borrowed from physics. Mainly through simulation, the dissertation studies several improvement strategies for these two families of algorithms and their applications. The dissertation consists of nine chapters, organized as follows:

Chapter 1 introduces the research background, briefly reviews the state of the art in swarm intelligence optimization and microcanonical optimization, and summarizes the contents and contributions of this work.

Chapter 2 presents the relevant concepts of meta-heuristic algorithms and divides the optimization patterns of typical meta-heuristics into two categories: the first is based on a single solution, the second on a population strategy; the latter is further subdivided, according to the degree of interaction among individuals, into strongly interacting and weakly interacting types.

Chapter 3 surveys ant colony optimization, focusing on the algorithm's background, the basic algorithm, and its notable improved forms, and briefly summarizes work on continuous problems, convergence analysis, and applications.

Chapter 4 surveys particle swarm optimization, covering the algorithm's background, the basic algorithm and its parameter refinements, notable improved forms, and research on discrete problems; known practical applications are also briefly reviewed.

Chapter 5 introduces the principles of microcanonical annealing. Simulations on classic TSP instances show that its energy descends noticeably faster than that of simulated annealing; although the two algorithms' search quality is roughly equal, microcanonical annealing reaches the optimum or a local optimum in less time, giving it a clear advantage on computation-heavy optimization problems. The simulations also show that the claim of the algorithm's proposer Creutz, that "the initial upper bound of the demon's energy need only exceed the largest energy difference likely to be encountered," is inaccurate: the initial value in fact has an optimal range, and too large a value reduces the chance of finding the optimum. The simulations further reveal that the algorithm is robust when the decreasing coefficient of the demon's energy bound takes a large value; notably, even when the coefficient equals 1, i.e. the energy bound stays constant, the algorithm still performs well as long as the initial value is chosen appropriately. Three improvement strategies are proposed. The first is a state-backtracking mechanism, which stores several recent good states and forces a return to one of them when the system stagnates; experiments show that with proper parameters this strategy improves the ability to escape local optima. The second is an energy-reward strategy, which rewards the demon's energy to some extent when a state is rejected; both bounded and unbounded variants are examined, and a method for accelerating convergence in the unbounded case is proposed. The third is an energy-shrinking strategy, which slightly reduces the demon's energy once it exceeds a threshold in order to accelerate convergence. Finally, microcanonical annealing is applied to the fixed frequency assignment problem; simulations show that it clearly outperforms simulated annealing when frequency resources are scarce.

Chapter 6 presents a particle swarm optimization variant with enhanced reference positions, whose velocity update formula considers the attraction of both the best particle in the neighborhood and the best particle in the swarm, effectively fusing the basic Gbest and Lbest forms. Both deterministic and randomly perturbed weight configurations are designed for the three difference terms in the velocity formula. Simulations show that when the weight of the swarm-best term is randomly perturbed, the improved algorithm outperforms the Lbest algorithm in both success rate and average number of objective evaluations, and is insensitive to the relative sizes of the other two weights.

Chapter 7 applies niching based on shared fitness to particle swarm optimization; niche techniques imitate ecosystems by preventing similar individuals from clustering excessively. Two forms are constructed: F-ShPSO with a fixed neighborhood structure and D-ShPSO with dynamically selected neighbors. Simulations on classic functions show that F-ShPSO's success rate is no lower than that of the Lbest algorithm, with smaller fluctuation in iteration counts and insensitivity to neighborhood size, whereas D-ShPSO performs poorly, indicating that the dynamic neighborhood mechanism is not viable under this strategy.

Chapter 8 introduces a two-stage implementation strategy for particle swarm optimization. In essence, the population is first partitioned into subswarms, and a further iteration is then performed on a temporary swarm composed of the subswarm leaders; the concrete particle update rule can be chosen flexibly. Comparative experiments show that this strategy raises the success rate of finding the optima of multi-modal functions and reduces the basic algorithm's sensitivity to the weights of the two difference terms. The results also validate the estimate of the optimal number of subswarms: a moderate number best balances computational efficiency and effectiveness.

Chapter 9 briefly reviews the dissertation, points out its limitations, and suggests directions for future research.
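The demon dynamics behind microcanonical annealing, including the decreasing energy bound discussed in Chapter 5, can be sketched in a few lines. The following is a minimal Python illustration, not the dissertation's implementation: the parameter values (`e_max`, `alpha`, the sweep counts) and the helper names are illustrative only.

```python
import random

def microcanonical_annealing(f, x0, neighbor, e_max=10.0, alpha=0.95,
                             sweeps=200, moves_per_sweep=100):
    """Minimise f starting from x0 using Creutz's demon dynamics.

    e_max : initial upper bound on the demon's energy -- the quantity
            whose start value the dissertation finds has an optimal
            range rather than merely needing to be "large enough".
    alpha : per-sweep decreasing coefficient applied to the bound;
            alpha = 1 keeps the bound constant.
    """
    x, fx = x0, f(x0)
    best, fbest = x, fx
    demon = e_max                       # the demon starts with a full store
    for _ in range(sweeps):
        for _ in range(moves_per_sweep):
            y = neighbor(x)
            fy = f(y)
            dE = fy - fx
            if dE <= 0:
                # downhill move: accept; the demon absorbs the released
                # energy, but never beyond its current upper bound
                demon = min(demon - dE, e_max)
                x, fx = y, fy
            elif demon >= dE:
                # uphill move: accept only if the demon can pay for it
                demon -= dE
                x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        e_max *= alpha                  # anneal the demon's energy bound
        demon = min(demon, e_max)
    return best, fbest
```

With `alpha = 1.0` the bound stays constant, which is the regime in which Chapter 5 reports that a well-chosen initial `e_max` still yields good search performance.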

【Abstract】 Computer science and technology have penetrated all walks of life and changed human society, from industry to daily life. However, current computers, especially personal computers, are not powerful enough: they require computing tasks to be well defined, fairly predictable, and computable in reasonable time, and they perform unsatisfactorily on uncertain, dynamic, nonlinear, or multi-modal problems. Human beings have therefore never ceased to explore alternative high-performance computing techniques.

Meta-heuristic optimization algorithms are one such alternative and have attracted broad attention in recent years. A meta-heuristic optimization algorithm is a general-purpose heuristic that can be applied to different types of problems; examples include the genetic algorithm and simulated annealing. These nature-inspired algorithms provide novel approaches for attacking the complicated problems mentioned above. This dissertation discusses two kinds of meta-heuristic optimization algorithms. One is swarm intelligence, including ant colony optimization (ACO) and particle swarm optimization (PSO), both bio-inspired population-based algorithms. The other is the microcanonical annealing (MA) algorithm, which comes from physics. Several improved strategies for these algorithms are studied by means of simulation. The dissertation consists of nine chapters, as follows:

Chapter 1 introduces the background of this work and presents brief reviews of swarm intelligence and microcanonical annealing. The main research contents and corresponding contributions are summarized at the end of the chapter.

Chapter 2 provides the relevant concepts of meta-heuristic optimization algorithms and divides them into two categories: the first searches the target space with a single solution; the second seeks the optimal solution in a population-based way. The latter is further divided into two styles: one exploits interaction among the population, the other works on interaction between the individual and the environment.

Chapter 3 reviews ant colony optimization, covering the algorithm's background, basic paradigms and notable improvements, attempts to tackle continuous functions, convergence research, and applications.

Chapter 4 surveys particle swarm optimization, covering the biological background, canonical algorithms, well-known improvements, and attempts to attack discrete problems. Industrial applications of particle swarm optimization are also listed in a short summary.

Chapter 5 explains the theoretical background of microcanonical annealing and its implementation for optimization. Benchmark simulations show that this annealing approach is faster than simulated annealing with a comparable success rate; since microcanonical annealing requires fewer evaluations in successful trials, it is a competitive algorithm in circumstances where computing efficiency matters. The simulations also reveal that there is an optimal range for the initial upper bound (Emax) of the demon's energy (E_D): too large a value decreases the probability of finding the optimal solution. This observation is inconsistent with Creutz, who claimed only that the start value of Emax should be considerably larger than the largest energy gap normally encountered. The decreasing coefficient of Emax should take a large value to provide a high success rate; furthermore, the simulations show that even when the coefficient is set to 1, the algorithm still performs well as long as Emax is given an appropriate start value. Three improvements are proposed in this chapter. The first, a history-state conserving mechanism, saves a number of good states; once the current state cannot be improved for a certain number of iterations, the algorithm samples a random state from the conserved queue and transfers to it. Simulations indicate that this mechanism accelerates the convergence of microcanonical annealing if the relevant parameters are set properly. The second, an energy-enhancement strategy, increases E_D to some extent when the optimization stagnates; enhancement with and without an upper bound are both investigated, and an approach to reduce the number of evaluations under this improvement is also proposed. The last, called energy shrinking, applies a decreasing operation to E_D whenever E_D exceeds a threshold. At the end of the chapter, an application of microcanonical annealing to the frequency assignment problem demonstrates its promise in network optimization.

Chapter 6 discusses an extended particle swarm optimization that combines Gbest and Lbest, considering both the global best particle and the local best neighbor in the velocity update formula. Two weight-assignment rules are proposed: the first fixes the weight of each part, but its performance is not outstanding; the second assigns a random weight to one of the parts. Simulations show that when the weight of the global-best part is a random value, the new algorithm outperforms Lbest in both success rate and evaluation count on the benchmarks, and is not sensitive to the other two weights.

Chapter 7 applies a niche technique called fitness sharing to particle swarm optimization. Niche techniques come from bionomics and prevent similar individuals from gathering around a single individual. Two versions of the algorithm are proposed: one fixes the neighborhood structure (F-ShPSO) and the other constructs the neighborhood dynamically (D-ShPSO).
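The fitness-sharing derating that underlies F-ShPSO and D-ShPSO can be illustrated with a generic sketch (not the dissertation's code; `sigma` and `alpha` are the usual sharing-radius and shape parameters of the standard sharing function, and a maximisation form is assumed):

```python
import math

def shared_fitness(positions, raw_fitness, sigma=1.0, alpha=1.0):
    """Derate each particle's fitness by its niche count (fitness sharing).

    The sharing function is sh(d) = 1 - (d/sigma)**alpha for d < sigma,
    else 0.  The niche count m_i = sum_j sh(d_ij) grows when similar
    particles crowd together, so crowded particles look less attractive
    when neighbourhood bests are selected.
    """
    n = len(positions)
    shared = []
    for i in range(n):
        m = 0.0
        for j in range(n):
            d = math.dist(positions[i], positions[j])
            if d < sigma:
                m += 1.0 - (d / sigma) ** alpha
        shared.append(raw_fitness[i] / m)   # m >= 1 because sh(0) = 1
    return shared
```

An isolated particle keeps its raw fitness (niche count 1), while each member of a tight cluster is penalised in proportion to how crowded its niche is.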
Benchmark simulations show that F-ShPSO provides a success rate comparable to Lbest's with lower fluctuation in evaluation counts, and it is not very sensitive to neighborhood size. D-ShPSO performs badly in the comparison, indicating that the dynamic strategy is not feasible in this context.

Chapter 8 introduces a two-stage implementation strategy for particle swarm optimization. Its essence is to divide the whole population into several subpopulations and then perform an updating process on a temporary colony composed of the best individual of each subpopulation; the individual updating formula is not fixed by the approach. Simulations on multi-modal functions reveal that the strategy enhances the success rate and decreases the sensitivity to the weights in the traditional updating formula. The experiments also demonstrate that the number of subpopulations should lie in an appropriate range to balance success rate and computation cost.

Chapter 9 summarizes the dissertation and points out its limitations and potential directions for future research.
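The Chapter 6 velocity update, which adds a swarm-best attraction to the canonical Lbest formula, might look like the following sketch. This is an illustration, not the thesis's code: the coefficient values (`w`, `c1`, `c2`, `c3_max`) are hypothetical, and only the random perturbation of the gbest weight follows the finding reported above.

```python
import random

def update_velocity(v, x, pbest, lbest, gbest,
                    w=0.729, c1=1.49, c2=1.49, c3_max=1.49):
    """One velocity update blending Lbest and Gbest attractions.

    A third attraction toward the swarm-wide best (gbest) is added to
    the usual Lbest formula; the gbest weight c3 is drawn at random on
    each update, mirroring the randomly perturbed weight configuration
    that the dissertation reports as the better-performing variant.
    """
    c3 = random.uniform(0.0, c3_max)          # random weight on the global term
    return [w * vi
            + c1 * random.random() * (pb - xi)   # cognitive: own best
            + c2 * random.random() * (lb - xi)   # social: neighbourhood best
            + c3 * random.random() * (gb - xi)   # swarm-wide best attraction
            for vi, xi, pb, lb, gb in zip(v, x, pbest, lbest, gbest)]
```

When a particle sits exactly on all three reference positions, only the inertia term `w * v` remains, which makes the update easy to sanity-check.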
