
Research and Application of Discrete Swarm Intelligence Algorithms

Research and Application on Discrete Swarm Intelligence Optimization

【Author】 陈恩修

【Supervisor】 刘希玉

【Author Information】 Shandong Normal University, Information Management and E-Commerce, 2009, Ph.D.

【Abstract (translated from the Chinese)】 Optimization problems arise constantly in scientific research, engineering, and economic management; their goal is to select, from many candidate schemes, the one that minimizes or maximizes an objective function. Optimization methods are applied to a wide range of fields and to problems of many kinds and characteristics, which can be classified according to different principles. By the value type of the decision variables, they can be divided into function optimization problems and combinatorial optimization problems (also called discrete optimization problems). Discrete optimization problems are an important class: with the continuing development of computer science, management science, and modern production technology they keep multiplying, most of them are NP-hard, and they receive close attention from operations research, applied mathematics, computer science, and management science. For a long time researchers have sought efficient algorithms for all kinds of combinatorial problems. These efforts have produced some results, but the NP problem remains one of the most challenging scientific questions of the 21st century and is still unresolved in the computational complexity branch of theoretical computer science. Swarm intelligence optimization is an emerging research field that provides an effective means of solving complex optimization problems and has attracted wide attention. Its representative methods include Ant Colony Optimization, proposed by the Italian researcher Marco Dorigo in 1991, and Particle Swarm Optimization, proposed by James Kennedy and Russell Eberhart in 1995 on the basis of simulating the foraging behavior of bird flocks and fish schools. Because these methods are conceptually simple, need few parameters, are easy to implement, and are particularly well suited to complex combinatorial optimization problems, they were quickly recognized by the international optimization community and have been applied successfully in engineering design, production optimization, and other fields.

This dissertation first analyzes the essence of discrete particle swarm optimization systematically and constructs a simple, efficient binary discrete particle swarm algorithm. It then proposes a new discrete particle swarm algorithm for the linear order problem. Next, it compares various methods of constructing the initial population for the permutation flowshop scheduling problem. Finally, it proposes a parallel ant colony algorithm that reduces premature convergence. The main research work and contributions are summarized as follows.

1. An exploration of the essence of the discrete particle swarm algorithm. The classical discrete particle swarm algorithm is decomposed into its components and analyzed, and on that basis the components are recombined in a new way to build a simple and efficient discrete particle swarm algorithm. In the new algorithm, each particle's new position depends only on its current position, its previous position, its personal best position, and the best position found in its neighborhood. For binary problems whose elements take only the values 0 and 1, the probability that an element takes 0 or 1 at the new position depends only on the proportion of 0s and 1s that the element has at those reference positions: the value of each element at the new position is positively proportional to its values at the current position, the personal best position, and the neighborhood best position, and negatively proportional to its value at the previous position. The algorithm avoids the notion of "velocity", which is hard to interpret in discrete particle swarm optimization, uses no sigmoid function, imposes no range limits on any variable, and relies only on value-proportion probabilities; its formulas are concise and easy to understand. This exploration offers a new angle on binary discrete particle swarm optimization and a starting point for further study of its essence. The algorithm was run on the De Jong Test Suite and compared with the classical discrete particle swarm algorithm; the tests show that it is simple and efficient. In addition, a leader particle (Queen Informant) is introduced. It is updated by a separate, pheromone-like rule and informs all other particles, while only the current global best informs it. Like a quantum-inspired particle it consists of two parts, whose two values for each element represent the proportional probabilities of that element taking 0 and 1. Comparing runs with and without the queen informant, and with the classical algorithm, shows that the queen informant effectively accelerates convergence without increasing the number of function evaluations.

2. A discrete particle swarm algorithm for the Linear Order Problem (LOP). The LOP is an NP-hard combinatorial optimization problem in which each solution is a permutation of n numbers; the rows and columns of a weight matrix are reordered by the same permutation so that the sum of the elements above the main diagonal is maximized. The dissertation proposes a simple discrete particle swarm algorithm that needs no swap, crossover, mutation, insertion, or deletion operators: each particle stores the positions of the elements in the permutation rather than the permutation itself. These positions are regarded as movable left or right, and a particle's velocity is the shift of its elements, which gives velocity a clear, intuitive interpretation. The positions are updated with the continuous particle swarm algorithm, and the elements' relative order in the permutation is then obtained by sorting. Compared with a swap-operator-based particle swarm algorithm on the standard LOLIB benchmark set, the proposed algorithm shows a clear advantage. The approach suits combinatorial optimization problems in which the absolute positions of elements in the solution permutation matter more than their relative positions.

3. A comparative analysis of initial-population construction methods for the Permutation Flowshop Scheduling Problem (PFSP). The first step of a swarm intelligence algorithm is to create an initial set of solutions. Population-based methods such as the Genetic Algorithm (GA), Scatter Search (SS), Particle Swarm Optimization (PSO), the Memetic Algorithm (MA), and the Differential Evolution Algorithm (DE) all require such a population, and all have been applied to this problem. How to construct an initial population of high quality and high diversity so as to reduce overall running time remains an open question. For the PFSP, previous comparative studies were limited to the solution quality of methods that construct a single solution. The dissertation applies a variety of initial-population construction methods to Taillard's PFSP benchmark set (120 instances), compares the quality of the populations they produce, and compares population diversity under several distance measures, providing a basis for further improvement of swarm intelligence algorithms for the PFSP. An effective algorithm depends to a large extent on a good, suitable initial population, and an appropriate construction method can be chosen according to the properties of these schemes and the characteristics of the algorithm at hand.

4. A parallel ant colony algorithm based on a pheromone crossover operator and a repulsive operator. In a parallel ant colony algorithm, several colonies exist side by side. Exploiting this property, the algorithm offers a way to reduce premature local convergence by introducing two new operators: a pheromone crossover operator and a repulsive operator. It adopts a master/slave model with a star topology, the master sitting at the center. Initially, several colonies are randomly initialized and then run in parallel throughout. Each subpopulation is an independently running ant colony optimization system with its own pheromone matrix and parameters α, β, and ρ. When the master detects that a colony has become trapped in a local optimum, it reinitializes that colony using the pheromone crossover operator, the pheromone repulsive operator, and several previously found sub-optimal solutions. Only a few sub-optimal solutions and a few parameters are transferred between the master and the slaves, so the communication volume is minimal. An asynchronous parallel implementation is also given. Applied to the TSP, the algorithm is shown experimentally to improve solving efficiency.

【Abstract】 Many problems in science, engineering, and economics require optimizing a set of decision variables, with the aim of choosing, from many candidate schemes, the one that minimizes or maximizes an objective function. Optimization methods are applied widely and to problems of very different kinds, which can be classified according to different principles. For example, by the value type of the decision variables, problems fall into two classes: function optimization problems and combinatorial optimization problems (also called discrete optimization problems). Discrete optimization is an important class of problems; with the development of computer science, management science, and modern production technology it is receiving increasing attention from operations research, applied mathematics, computer science, and management science, and most of these problems are NP-hard. For many years researchers have sought efficient algorithms for combinatorial problems, and many have been proposed, but the NP problem remains one of the hardest scientific questions of the 21st century and is still unresolved in the computational complexity branch of theoretical computer science. Swarm intelligence optimization is an emerging research area that provides an effective tool for solving complex optimization problems and has attracted extensive attention across research fields. Its most representative methods are Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). ACO was proposed in 1991 by the Italian researcher Marco Dorigo, and PSO was proposed in 1995 by James Kennedy and Russell Eberhart, based on simulating the foraging behavior of bird flocks and fish schools.
These methods were quickly recognized in the international optimization community because they are simple in concept, require few parameters, and are easy to implement, and because they are particularly effective on complex combinatorial optimization problems; they have also been applied successfully in engineering design and production optimization. In this dissertation, the essential components of the discrete particle swarm are first examined systematically and a simple and effective binary discrete PSO is introduced. The dissertation then gives a modified discrete PSO for the linear order problem. Next, it presents a comparison of initial population generation schemes for the permutation flowshop scheduling problem with the makespan criterion. Finally, it proposes a parallel multiple ant colony optimization that can avoid some of the stagnating states of the basic ACO. The main contributions and innovative viewpoints of this dissertation are summarized as follows:

(1) In search of the essential binary discrete particle swarm. The canonical particle swarm algorithm is broken down into its essential components, which are reinterpreted and recombined in a different manner to build a fast and easy binary discrete PSO (BPSO). In this algorithm each particle represents a candidate solution of the problem being solved, in which every solution element is either 0 or 1. In the next position of a particle, the probability of a given element taking the value 0 or 1 is positively proportional to that element's value in the particle's current position, in the personal best position it has experienced, and in the best position experienced by its immediate neighbors in the population topology, and negatively proportional to its value in the particle's previous position. The algorithm does not involve the notion of velocity, which is usually hard to define in discrete PSO; it uses only value-proportion probabilities and needs neither a sigmoid function nor range limits on any variable. The iterative formulas are simple, reliable, and easy to understand, providing a new vantage point on the essence of binary discrete PSO and a solid base for future research. Experimental results show that the algorithm is faster than the standard binary discrete PSO on a suite of test functions, and that accuracy improves on most of the benchmark functions used. A queen informant is also introduced: it informs all the other particles, but only the global best particle updates it. It has two parts, each of the same dimension as an ordinary particle, and each value in the queen informant represents the proportional probability of a particle element taking 0 or 1; its updating rule is the same as that of pheromone in ACO. It does not increase the number of function evaluations, yet it appears to speed up convergence greatly. Simulation experiments also confirm its efficiency and good performance.
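The value-proportion rule in contribution (1) can be made concrete with a minimal Python sketch, assuming equal weights for the four reference positions and a pheromone-style evaporation rate for the queen informant; the coefficients and all function names are illustrative, since the abstract does not state the dissertation's exact formulas.

```python
import random

def new_bit(cur, prev, pbest, nbest):
    # Votes for value 1: the current position, personal best and neighbourhood
    # best count positively; the previous position counts negatively, so a
    # previous 1 becomes a vote for 0.  Equal weights are an assumption.
    votes_for_one = cur + pbest + nbest + (1 - prev)
    return 1 if random.random() < votes_for_one / 4.0 else 0

def move_particle(cur, prev, pbest, nbest):
    """Compute a binary particle's next position element by element."""
    return [new_bit(c, f, p, n) for c, f, p, n in zip(cur, prev, pbest, nbest)]

def update_queen(queen_prob_one, gbest, rho=0.1):
    """Pheromone-style update of the queen informant's 'probability of 1' part:
    it drifts towards the current global best, as pheromone does in ACO.
    The evaporation rate rho is an illustrative value."""
    return [(1 - rho) * q + rho * g for q, g in zip(queen_prob_one, gbest)]
```

With 0/1 inputs, votes_for_one / 4.0 is simply the fraction of the four reference values that "vote" for 1, which matches the value-proportion idea described above.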
(2) Discrete particle swarm optimization for the linear order problem. The linear order problem (LOP) consists of finding a permutation of the columns (and rows) of a matrix of weights so as to maximize the sum of the weights in the triangle above the main diagonal of the matrix; it is NP-hard. The dissertation introduces a modified discrete PSO (DPSO) for the LOP that uses no exchange, crossover, mutation, insertion, or deletion operators. Instead of the elements themselves, the positions of the elements in the solution permutation are stored in the particle. By changing the point of view and conceiving velocity as the shift in the positions of the elements, the particle position can be updated in much the same way as in the canonical continuous PSO, and the updated solution permutation is then obtained by sorting the updated element positions, as sketched after contribution (3) below. Experiments on the LOLIB instances show that the modified DPSO is promising for the linear order problem. It suits permutation problems in which the absolute positions of the elements in the solution matter more than their relative positions.

(3) A comparison of initial population generation schemes for the permutation flowshop scheduling problem with the makespan criterion. Initializing a population of solutions is the first stage of many swarm intelligence techniques, such as the Genetic Algorithm (GA), Scatter Search (SS), Particle Swarm Optimization (PSO), the Memetic Algorithm (MA), and the Differential Evolution Algorithm (DE), all of which have been applied to the permutation flowshop scheduling problem (PFSP). How to create a high-quality, diverse initial population that reduces the computational time of these techniques is an open issue, and previous comparative studies covered only initialization methods that produce a single solution. This part of the dissertation compares the quality and diversity of the populations created by different initial population construction schemes for minimizing makespan in the PFSP on Taillard's (1993) well-known benchmarks. It can serve as a starting point for future work on improving approaches to the PFSP: the efficiency of an algorithm can be increased considerably by a good initial population, and a proper initialization scheme can be selected according to the characteristics of the algorithm and the properties of the schemes analyzed here.
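Returning to contribution (2): the store-positions-then-sort scheme can be sketched briefly. Only the idea of keeping continuous per-element positions, updating them with the canonical continuous PSO rule, and decoding by sorting comes from the text; the coefficient values and function names below are assumptions.

```python
import numpy as np

def decode(positions):
    """Sort the per-element position values: the element with the smallest
    value comes first in the permutation."""
    return np.argsort(positions)

def lop_value(perm, weights):
    """LOP objective: reorder the rows and columns of the weight matrix by
    `perm` and sum the entries strictly above the main diagonal."""
    return np.triu(weights[np.ix_(perm, perm)], k=1).sum()

def pso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49):
    """One canonical continuous PSO update applied to the element positions.
    The coefficients are common textbook values, not the dissertation's."""
    r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```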
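For contribution (3), any such comparison needs two ingredients: a makespan evaluator for a job permutation and a diversity measure over a population. The abstract does not name the specific constructive heuristics or distance measures compared, so the random-permutation scheme and the position-based distance below are only illustrative stand-ins.

```python
import random, itertools

def makespan(perm, p):
    """p[j][m] is the processing time of job j on machine m; return the
    completion time of the last job on the last machine."""
    c = [0.0] * len(p[0])
    for j in perm:
        c[0] += p[j][0]
        for m in range(1, len(c)):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def random_population(n_jobs, size):
    """The simplest construction scheme: independent random permutations."""
    return [random.sample(range(n_jobs), n_jobs) for _ in range(size)]

def position_distance(a, b):
    """Illustrative diversity measure: total displacement of each job
    between two permutations."""
    pos_b = {job: i for i, job in enumerate(b)}
    return sum(abs(i - pos_b[job]) for i, job in enumerate(a))

def average_diversity(pop):
    """Average pairwise distance across the whole population."""
    pairs = list(itertools.combinations(pop, 2))
    return sum(position_distance(a, b) for a, b in pairs) / len(pairs)
```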
(4) A parallel ant colony algorithm using both a pheromone crossover operator and a repulsive operator based on multiple optima, applied to the TSP. In parallel ant colony optimization many ant colonies exist at the same time. Making full use of this characteristic, the dissertation proposes a new method to reduce premature local convergence, introducing two operators: pheromone crossover and a repulsive operator. It employs a master/slave paradigm with a star topology: the master processor sits at the center, and every slave ant colony owns its own pheromone array and parameters. Several slave ant colonies are created first; they then iterate and update their pheromone arrays independently. Once a slave colony reaches a stagnating state, the master processor reinitializes it using both pheromone arithmetic crossover and the repulsive operator based on multiple previously found optima; at the same time the main parameters α, β, and ρ of the algorithm are adjusted adaptively. This avoids some of the stagnating states of basic ant colony optimization. Communication between the master and the slave processors consists only of a few parameters and several sub-optimal solutions, so the communication overhead is very small. An asynchronous parallel version of the algorithm is also presented. Experimental results on the TSP demonstrate the effectiveness of the method.
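A rough sketch of the master-side reinitialization in contribution (4): blend the stagnated colony's pheromone matrix with another colony's, then weaken the edges of previously found sub-optimal tours. The mixing weight, repulsion factor, lower bound, and function names are assumptions; the dissertation's exact crossover and repulsion rules are not given in the abstract.

```python
import numpy as np

def pheromone_crossover(tau_stuck, tau_other, mix=0.5):
    """Arithmetic crossover of two colonies' pheromone matrices
    (an equal-weight blend is assumed here)."""
    return mix * tau_stuck + (1.0 - mix) * tau_other

def repulsive_operator(tau, sub_optima, factor=0.5, floor=1e-4):
    """Weaken the pheromone on edges that appear in already-found
    sub-optimal tours, pushing the reinitialized colony elsewhere."""
    tau = tau.copy()
    for tour in sub_optima:
        for i, j in zip(tour, tour[1:] + tour[:1]):   # closed TSP tour
            tau[i, j] = max(tau[i, j] * factor, floor)
            tau[j, i] = tau[i, j]
    return tau

def reinitialize_colony(tau_stuck, tau_other, sub_optima):
    """Master-side reinitialization of a stagnated slave colony."""
    return repulsive_operator(pheromone_crossover(tau_stuck, tau_other), sub_optima)
```

Because only a few sub-optimal tours and scalar parameters cross the master/slave boundary, this sketch is consistent with the small communication footprint described above.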
