
Research on a Class of Memoryless Non-Quasi-Newton Algorithms for Nonlinear Optimization Problems

【Author】 于静静 (Yu Jingjing)

【Advisor】 焦宝聪 (Jiao Baocong)

【Author Information】 Capital Normal University (首都师范大学), Applied Mathematics, 2006, Master's thesis

【Abstract】 Finding fast and effective algorithms for nonlinear optimization problems has long been an active research direction among optimization specialists. Theory and practical experience have established quasi-Newton methods and conjugate gradient methods as two of the most effective classes of algorithms for unconstrained optimization. The former have many advantages: each iteration requires only first derivatives and no Hessian matrix needs to be computed; when H_k is positive definite, the directions generated by the algorithm are all descent directions; and the algorithms possess quadratic termination. Under suitable conditions, [11,25,26,27,28] and related work established the superlinear convergence of the Broyden family excluding the DFP algorithm, together with an n-step quadratic rate of convergence. The drawback of quasi-Newton algorithms is their large storage requirement, so memory difficulties may arise on large-scale problems. The basic idea of conjugate gradient methods is to combine conjugacy with the steepest descent method; they use little memory, possess quadratic termination, and show good numerical performance. However, when the objective function is a general nonlinear function, the convergence of the various conjugate gradient methods is hard to guarantee even under exact line search. Weighing the advantages and drawbacks of these two classes of algorithms, [3] proposed a class of memoryless quasi-Newton algorithms for unconstrained optimization. Compared with conjugate gradient methods for unconstrained optimization, memoryless quasi-Newton methods add little in either memory or per-iteration computation, yet their computational performance is considerably better. Based on the non-quasi-Newton equation, and combining it with the memoryless quasi-Newton method of [3], this thesis presents a new class of algorithms for unconstrained nonlinear optimization. Numerical experiments show that these algorithms are computationally efficient and particularly well suited to large-scale optimization problems.

Chapter 1 briefly introduces the formulation of optimization problems and the optimality conditions commonly used to identify optimal solutions, and reviews several classes of derivative-based descent algorithms for unconstrained optimization.

Chapter 2 studies whether the memoryless quasi-Newton family is globally convergent for unconstrained optimization under nonmonotone line search, and proves that the memoryless quasi-Newton algorithm is globally convergent when the objective function is convex.

Chapter 3 derives the memoryless non-quasi-Newton formula and studies its global convergence under inexact line search when the objective function is uniformly convex. It also combines the memoryless non-quasi-Newton formula with the Perry-Shanno memoryless quasi-Newton formula to obtain a hybrid memoryless non-quasi-Newton update, and proves that this update is globally convergent under the Wolfe line search when the objective function is convex.
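For readers unfamiliar with the terminology above, the following is a minimal sketch in standard notation; the thesis's own non-quasi-Newton equation is not reproduced in this record, so the modified secant condition shown last is an illustrative form from the literature, not necessarily the one used here. With iterates x_k, gradients g_k = ∇f(x_k), and differences s_k = x_{k+1} − x_k, y_k = g_{k+1} − g_k, a quasi-Newton method requires the new Hessian approximation to satisfy the secant (quasi-Newton) equation

$$ B_{k+1} s_k = y_k . $$

A memoryless quasi-Newton method restarts the update from the identity at every iteration; taking H_k = I in the inverse BFGS update gives

$$ H_{k+1} \;=\; I \;-\; \frac{s_k y_k^{\mathsf T} + y_k s_k^{\mathsf T}}{s_k^{\mathsf T} y_k} \;+\; \Bigl(1 + \frac{y_k^{\mathsf T} y_k}{s_k^{\mathsf T} y_k}\Bigr)\frac{s_k s_k^{\mathsf T}}{s_k^{\mathsf T} y_k}, \qquad d_{k+1} = -H_{k+1} g_{k+1}, $$

so only the vectors s_k and y_k need to be stored, never an n × n matrix. Non-quasi-Newton (modified secant) equations replace y_k with a vector that also uses function values; one common form in the literature is

$$ B_{k+1} s_k = \tilde{y}_k, \qquad \tilde{y}_k = y_k + \frac{\vartheta_k}{s_k^{\mathsf T} s_k}\, s_k, \qquad \vartheta_k = 2\,(f_k - f_{k+1}) + (g_k + g_{k+1})^{\mathsf T} s_k, $$

which reduces to the standard secant equation when f is quadratic.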

【Abstract】 Seeking algorithms for unconstrained optimization that are both theoretically fast and practically effective has long been a topic of great interest to optimization specialists and engineers. Favorable numerical experience and theory have established quasi-Newton methods and conjugate gradient methods as two of the most efficient classes of methods for unconstrained optimization. The former show clear advantages: only gradients are needed and the Hessian matrix need not be computed, and a descent direction results whenever the approximating matrix is positive definite. It has been shown that the iterates generated by the Broyden class, except for the DFP method, converge superlinearly [11,26,27,28,29] and possess an n-step convergence rate. The disadvantage of quasi-Newton algorithms is their large memory requirement, so memory difficulties may be encountered on large problems. The basic idea of conjugate gradient methods is to combine conjugacy with the steepest descent method; they use little memory and offer quadratic termination and effective numerical performance. However, when the objective is a general nonlinear function, the convergence of conjugate gradient methods can hardly be guaranteed, even under exact line search. Considering the advantages and disadvantages of the above two classes of algorithms, paper [3] gives a class of memoryless quasi-Newton algorithms for unconstrained optimization. Compared with conjugate gradient methods for unconstrained optimization, the memoryless quasi-Newton algorithm increases neither the memory nor the per-iteration computation by much, but its numerical performance is much better than that of conjugate gradient methods. Based on the non-quasi-Newton equation, and combining it with the memoryless quasi-Newton algorithm of paper [3], we give a new class of algorithms for unconstrained optimization. Numerical experiments indicate that this class of algorithms is very effective, particularly for solving large-scale optimization problems. In Chapter 1, we first introduce the development of optimization and some common optimality conditions used to identify optimal solutions, and we review several classes of derivative-based descent methods commonly used for unconstrained optimization.
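The Wolfe line search invoked in the convergence results is the standard one; stated here for completeness (the constants c_1, c_2 are the generic ones, not values taken from the thesis), a step size α_k along a descent direction d_k is accepted when

$$ f(x_k + \alpha_k d_k) \;\le\; f(x_k) + c_1 \alpha_k\, g_k^{\mathsf T} d_k, \qquad g(x_k + \alpha_k d_k)^{\mathsf T} d_k \;\ge\; c_2\, g_k^{\mathsf T} d_k, \qquad 0 < c_1 < c_2 < 1. $$

The first condition enforces sufficient decrease of the objective, and the second rules out steps that are too short.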

  • 【CLC Number】O221.2
  • 【Cited By】1
  • 【Downloads】159