Research on Several Algorithms in Nonlinear Optimization

Some Algorithms in Nonlinear Optimization Problems

【Author】 郑艳梅 (Zheng Yanmei)

【Supervisor】 孙清滢 (Sun Qingying)

【Author Information】 China University of Petroleum, Computational Mathematics, 2007, Master's thesis

【Abstract (Chinese)】 A new interval rule for choosing the direction parameter in the conjugate gradient method for unconstrained optimization is given, which guarantees that the search direction is a sufficient descent direction of the objective function. On this basis, a new memory gradient algorithm is proposed, and its global convergence is proved under the condition that the gradient of the objective function is uniformly continuous; numerical experiments show that the new algorithm is effective. The algorithm is then applied in two directions: (i) after reformulating the complementarity problem as an unconstrained optimization problem, the memory gradient algorithm is used to solve it, and its convergence and linear convergence rate are proved; (ii) after reformulating the semidefinite complementarity problem as an unconstrained optimization problem via the generalized D-gap function, the memory gradient algorithm is used to solve it, and its convergence is proved. Further, by increasing the number of memory terms, the memory gradient algorithm is extended to a three-term memory gradient algorithm. For the step-size selection, a new nonmonotone line-search technique is proposed; it yields a larger step size at each iteration, which favors fast convergence of the algorithm. Global convergence is proved under the condition that the gradient of the objective function is uniformly continuous, and the linear convergence rate is discussed under certain conditions. Finally, the three-term memory gradient algorithm is applied to solve the trust-region subproblem; the method guarantees sufficient descent of the trial step and is combined with a line-search technique: when the trial step is unsuccessful, the trust-region subproblem is not re-solved; instead, an inexact Armijo line search is used to obtain the next iterate, thereby reducing the computational cost. On this basis, a trust-region algorithm with line search is proposed, and its global convergence is proved under certain conditions. Numerical experiments show that the new algorithms are effective.
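The memory gradient iteration described above can be sketched as follows. This is only a minimal illustration, not the thesis's method: the nominal β (a Fletcher–Reeves-style guess), the clipping rule that enforces the sufficient descent condition g_kᵀd_k ≤ −c‖g_k‖², the monotone backtracking Armijo search (the thesis uses a nonmonotone rule), and all parameter values are assumptions made for the example.

```python
import numpy as np

def memory_gradient(f, grad, x0, c=0.5, sigma=1e-4, rho=0.5,
                    tol=1e-8, max_iter=1000):
    """Illustrative memory gradient method: d_k = -g_k + beta_k * d_{k-1},
    with beta_k clipped so that g_k^T d_k <= -c * ||g_k||^2
    (sufficient descent). Parameter choices are hypothetical."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # monotone backtracking Armijo line search (for simplicity)
        fx, gd, t = f(x), g @ d, 1.0
        while f(x + t * d) > fx + sigma * t * gd:
            t *= rho
        x = x + t * d
        g_new = grad(x)
        # nominal beta (Fletcher-Reeves-like guess), then clip it so the
        # new direction remains a sufficient descent direction
        beta = (g_new @ g_new) / max(g @ g, 1e-16)
        gd_prev = g_new @ d
        if gd_prev > 0:
            beta = min(beta, (1 - c) * (g_new @ g_new) / gd_prev)
        d = -g_new + beta * d
        g = g_new
    return x
```

For instance, minimizing f(x) = (x₁ − 1)² + 2(x₂ + 2)² from the origin drives the iterates to the minimizer (1, −2); the clipping step guarantees every direction is a descent direction, so the Armijo backtracking always terminates.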

【Abstract】 A new rule is given for the direction parameter of the conjugate gradient method to ensure that the search direction is a sufficient descent direction. On this basis, a new class of memory gradient methods for unconstrained optimization is presented, and the global convergence of the new algorithm is proved under the condition that the gradient of the objective function is uniformly continuous. Numerical results show that the new algorithm is efficient. The algorithm is then applied in two directions: (i) Based on a reformulation of the complementarity problem as an unconstrained optimization problem, the memory gradient method is used to solve the complementarity problem; the global convergence and linear convergence rate of the new algorithm are shown, and numerical results confirm its efficiency. (ii) Based on a reformulation of the semidefinite complementarity problem as an unconstrained optimization problem via the generalized D-gap merit function, the memory gradient algorithm is used to solve the semidefinite complementarity problem and is proved globally convergent. Furthermore, by increasing the number of memory terms, the memory gradient algorithm is generalized to a three-term memory gradient method. For the step-size selection, a new nonmonotone line-search rule is proposed that yields a larger step size at each iteration, which benefits the fast convergence of the algorithm. The global convergence of the new algorithm is proved under the condition that the gradient of the objective function is uniformly continuous, and its linear convergence rate is discussed under certain conditions. Numerical results show that the new algorithm is efficient. Finally, the three-term memory gradient method is used to solve the trust-region subproblem, which ensures that the trial step is a sufficient descent step.
A new algorithm for unconstrained optimization that combines the trust-region framework with a line search is proposed: when the trial step increases the objective function, the algorithm does not re-solve the subproblem but instead performs an inexact Armijo line search to obtain the next iterate. The convergence of the algorithm is proved under certain conditions. Numerical results show that the new algorithm is efficient.
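The trust-region/line-search combination in the final part of the abstract can be sketched as follows. This is an illustrative simplification, not the thesis's algorithm: the trial step here is a plain Cauchy-type (clipped steepest descent) step rather than a three-term memory gradient solution of the subproblem, acceptance uses a simple Armijo-type decrease test instead of the usual actual-versus-predicted reduction ratio, and all parameter values are assumptions.

```python
import numpy as np

def trust_region_ls(f, grad, x0, delta0=1.0, sigma=1e-4, rho=0.5,
                    tol=1e-8, max_iter=1000):
    """Sketch of a trust-region method with a line-search fallback:
    when the trial step is rejected, backtrack along it with an
    inexact Armijo search instead of re-solving the subproblem."""
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy-type trial step: steepest descent clipped to the region
        d = -g * min(1.0, delta / gnorm)
        fx, gd = f(x), g @ d
        if f(x + d) <= fx + sigma * gd:
            x = x + d
            delta *= 2.0            # successful step: expand the region
        else:
            # failed trial step: backtrack along d with an inexact
            # Armijo search -- the subproblem is NOT re-solved
            t = rho
            while f(x + t * d) > fx + sigma * t * gd:
                t *= rho
            x = x + t * d
            delta = max(t * np.linalg.norm(d), 1e-12)  # shrink region
    return x
```

The computational saving the abstract refers to comes from the `else` branch: a rejected trial step costs only a few extra function evaluations for the backtracking search, instead of another (potentially expensive) subproblem solve.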

  • 【CLC Number】 O224
  • 【Downloads】 323