
Exact Penalty Function Methods for Solving Nonlinear Constrained Optimization Problems

【Author】 李冉冉 (Li Ranran)

【Supervisor】 赵文玲 (Zhao Wenling)

【Author Information】 Shandong University of Technology, Applied Mathematics, 2012, Master's thesis

【Abstract】 The exact penalty function method is an important method for solving nonlinear constrained optimization problems. In theory, an exact penalty method only needs to solve the penalty problem for some finite value of the penalty parameter in order to obtain a solution of the constrained problem, thereby avoiding the ill-conditioning that arises as the penalty parameter tends to infinity. Exact penalty functions divide into nondifferentiable and continuously differentiable ones. In general, a simple exact penalty function is necessarily nondifferentiable, which can block fast local convergence in some rapidly convergent algorithms and produce the "Maratos effect". Continuously differentiable exact penalty functions overcome this drawback and therefore enjoy better properties; the augmented Lagrangian function is one such special continuously differentiable exact penalty function.

First, for a general nonlinear constrained optimization model, this thesis proposes a new nonlinear Lagrangian function, discusses its properties at KKT points, and proves that, under mild conditions, the sequence of iterates generated by the dual algorithm based on this function converges locally; error estimates for the solution in terms of the penalty parameter are then given. This provides a new route to solving nonlinear constrained optimization problems.

Next, a twice continuously differentiable smooth approximation of the nonsmooth penalty function is constructed, and error estimates among the optimal values of the original problem, the corresponding nonsmooth penalty problem, and the smoothed penalty problem are given. An algorithm based on the smoothed penalty function is designed and proved globally convergent under mild conditions, and numerical experiments illustrate its effectiveness.

Finally, for cone optimization problems, an iterative algorithm built on the augmented Lagrangian function, a special exact penalty function, is given and shown to possess a weaker form of global convergence: the notion of an ε-global optimal solution is introduced, an ε_k-global optimal solution is obtained at each iteration k, and the resulting sequence converges to an ε-global optimal solution of the original problem, which establishes the ε-global convergence of the algorithm.
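
The abstract contrasts exact and inexact penalties without giving the thesis's concrete formulations, so the following is only a standard point of reference. For a problem

    \min f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \; i = 1, \dots, m,

the classical quadratic penalty and the simple (\ell_1) exact penalty are

    Q_\rho(x) = f(x) + \rho \sum_{i=1}^m \max\{0, g_i(x)\}^2, \qquad P_\rho(x) = f(x) + \rho \sum_{i=1}^m \max\{0, g_i(x)\}.

Minimizers of Q_\rho approach a solution of the constrained problem only as \rho \to \infty, which is the source of the ill-conditioning mentioned above, whereas under standard constraint qualifications P_\rho is exact for every finite \rho above some threshold; the price is that P_\rho is nondifferentiable wherever some g_i(x) = 0, which is what triggers the Maratos effect in fast local methods.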
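
For the equality-constrained case \min f(x) subject to h(x) = 0, the classical Hestenes–Powell–Rockafellar augmented Lagrangian named in the abstract reads

    L_\rho(x, \lambda) = f(x) + \lambda^{T} h(x) + \frac{\rho}{2} \, \|h(x)\|^2.

It is as smooth as f and h, and for a finite \rho its unconstrained minimizers in x at the optimal multiplier \lambda^* coincide with solutions of the constrained problem, which is the sense in which it acts as a continuously differentiable exact penalty. The generalized form for cone constraints used in the thesis is not reproduced in the abstract.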
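
The abstract likewise does not specify the second-order smoothing used in the thesis; a common twice continuously differentiable approximation of t \mapsto \max\{0, t\} with smoothing parameter \varepsilon > 0, given here only as an illustration, is the piecewise polynomial

    p_\varepsilon(t) = \begin{cases} 0, & t \le 0, \\ t^3/\varepsilon^2 - t^4/(2\varepsilon^3), & 0 < t < \varepsilon, \\ t - \varepsilon/2, & t \ge \varepsilon, \end{cases}

which satisfies 0 \le \max\{0, t\} - p_\varepsilon(t) \le \varepsilon/2 for all t. Replacing each \max\{0, g_i(x)\} in P_\rho by p_\varepsilon(g_i(x)) therefore perturbs the penalty value by at most \rho m \varepsilon / 2, which is the kind of optimal-value error estimate between the nonsmooth and smoothed penalty problems that the abstract refers to.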
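
The abstract uses the term ε-global optimal solution without defining it; in the usual sense, a feasible point x_\varepsilon is ε-globally optimal when

    f(x_\varepsilon) \le \inf\{ f(x) : x \ \text{feasible} \} + \varepsilon,

so the convergence claimed for the cone-optimization algorithm is that the iterate at each step k is ε_k-globally optimal and the sequence converges to a point that is globally optimal up to the prescribed tolerance ε.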
