
大规模矩阵特征值及线性系统的Krylov子空间算法研究

Krylov Subspace Methods for Large Matrix Eigenvalue and Linear System Problems

【Author】 牛强 (Niu Qiang)

【Supervisor】 卢琳璋 (Lu Linzhang)

【Author Information】 Xiamen University, Computational Mathematics, 2008, PhD

【摘要 (Abstract)】 The numerical solution of large-scale matrix eigenvalue problems and large sparse linear systems has become a core task in science, engineering, industrial simulation, finance, and many other fields. Since solving these problems accounts for a large share of the total computation time, solving such large-scale matrix computation problems efficiently can greatly improve the efficiency of the overall simulation. Over the past twenty years, algorithms for these problems have been a focus of computational mathematics, with very active research both in China and abroad. The solution algorithms, in particular Krylov subspace type algorithms, have made considerable progress, but many problems remain open. In view of the importance and long-range significance of these problems, this thesis studies numerical algorithms for large-scale matrix computations: on the one hand, algorithms for large-scale eigenvalue problems together with their convergence and stability; on the other hand, acceleration schemes, preconditioning techniques, and the related convergence analysis for large sparse linear systems. The thesis consists of eight chapters.

Chapter 1 reviews the origins, research history, and current state of large-scale eigenvalue problems and sparse linear systems, as well as the basic methods for solving them.

Chapters 2 and 3 deal with the computation of partial eigenpairs of large nonsymmetric matrices. Chapter 2 proposes an Arnoldi-type algorithm for computing a few interior eigenpairs of large matrices, analyzes its convergence and its relation to the harmonic Arnoldi algorithm, and reports numerical examples. The comparisons show that when the dimension of the approximation subspace is small, the proposed algorithm converges faster; as the subspace dimension increases, both algorithms converge noticeably faster, and for larger subspace dimensions their convergence behavior is similar.

Chapter 3 discusses a class of block Arnoldi-type algorithms with deflated vectors for large eigenvalue problems. At each restart, the approximate eigenvectors obtained in the previous cycle are used to construct the initial block vector, and an "inexact deflation" is applied during the construction of the orthonormal basis. On the one hand, the new search subspace contains the approximate eigenvectors that have already converged; on the other hand, through the inexact deflation the number of columns of the initial block decreases as approximate eigenpairs converge, so that the new approximation subspace becomes more favorable for the eigenpairs that have not yet converged. The method therefore overcomes the inability of single-vector Krylov subspaces to compute multiple eigenvalues, and it is more stable and more efficient than traditional block Krylov subspace methods. An analysis of the approximation subspace shows that the method asymptotically acquires the advantages of the restarted Arnoldi algorithm augmented with eigenvectors: as the approximate eigenpairs converge, the subspace generated at each restart becomes increasingly favorable for the unconverged eigenpairs. Numerical experiments show that the algorithm avoids the irregular convergence of block Krylov subspace methods and exhibits very smooth convergence behavior.

Chapters 4 to 7 address the solution of large sparse linear systems.

Chapter 4 first observes experimentally that the angle between residual vectors produced every other restart cycle of GMRES-E, the GMRES (Generalized Minimum RESidual) algorithm restarted with added approximate eigenvectors, is often very small. On this basis, a doubly augmented restarted GMRES algorithm (LGMRES-E) is proposed, which adds error-correction vectors on top of GMRES-E. Numerical experiments show that the new restarting scheme effectively corrects the frequently occurring small skip angles of the residual vectors, so that the approximation subspaces built at successive restarts keep an appropriate degree of orthogonality, which to some extent compensates for the loss of overall subspace dimension caused by restarting; at the same time, because approximate eigenvectors are still added, the new method can effectively deflate the small eigenvalues that slow down convergence. The experiments show that this doubly augmented method works very well for linear systems whose coefficient matrices have a few relatively small eigenvalues, and that it accelerates the convergence of GMRES-E.

Chapter 5 proposes retaining part of the error-correction information at each restart of GCRO-DR and adding it to the new search subspace. This strategy keeps successive approximation subspaces suitably linearly independent and can effectively shorten the phases of slow convergence that GCRO-DR often exhibits. When only the most recently generated correction vector is retained, the analysis shows that the modification is very natural and requires only minimal changes. When more than one correction vector is added, the chapter focuses on an inexact approach that, for the same subspace dimension, keeps the cost of the modified algorithm close to that of the original while achieving a clear acceleration. The convergence of the two methods is compared for single linear systems and for sequences of linear systems, and the influence of the number of added correction vectors on convergence is discussed. Numerical experiments show that the new method effectively accelerates GCRO-DR; for sequences of linear systems arising from the simulation of mechanical fatigue and fracture, the convergence speed improves by nearly 10%.

Chapter 6 discusses tangential frequency filtering decompositions and combination preconditioning. We first observe that the conventional right filtering decomposition can also be carried out under a left filtering condition, and on this basis propose a two-sided tangential frequency filtering preconditioner. The vector of all ones is chosen as the filtering vector, which has several advantages: it avoids the preprocessing step required by similar algorithms to compute the filtering vector, and, for the filtering preconditioner built with ones as the left filtering vector, if a suitable initial guess is chosen then the preconditioned Krylov subspace iteration keeps the sum of the residual vector equal to zero throughout, i.e. the material balance error is zero. Combining the preconditioner obtained from the tangential frequency filtering decomposition with the classical ILU(0) preconditioner in a multiplicative way, two combination preconditioning techniques are discussed. Spectral analysis shows that the combined preconditioner absorbs the advantages of both preconditioners and clusters the eigenvalues of the preconditioned matrix well around 1. Finally, for large sparse linear systems arising from the discretization of a class of nonlinear partial differential equations, several types of combination preconditioners are compared; the numerical results demonstrate the robustness and efficiency of the proposed method.

Chapter 7 discusses preconditioning techniques for saddle point problems. Based on a tangential frequency filtering decomposition of the (1,1) block of the saddle point problem and on the structure of constraint preconditioners, a preconditioner of tangential frequency filtering decomposition type is discussed. For saddle point problems arising from the discretization of the Stokes problem and the Oseen equations, the behavior of the preconditioner under grid refinement and increasing Reynolds number is analyzed. The numerical results show that, with the inner iteration tolerance and the Reynolds number fixed, the number of iterations grows as the grid is refined but only mildly, and that, with the grid and the inner iteration tolerance fixed, the iteration count is essentially unaffected by changes of the Reynolds number, a property much better than that of ILU preconditioners of the same type. The second part of the chapter proposes a class of preconditioners that generalize the Schilders factorization. Theoretical analysis shows that the spectrum of the preconditioned matrix is the same as that obtained when Schilders factorization type constraint preconditioners are applied to standard saddle point problems; the proposed preconditioner is thus a natural extension of Schilders factorization type constraint preconditioners to generalized saddle point problems. Because of the nonzero (2,2) block of the generalized saddle point problem, applying the preconditioner involves Schur complement type computations. For a class of special problems, an inexact variant that avoids the Schur complement computation is discussed; the corresponding numerical experiments are still in preparation.

The last chapter proposes a modified tangential frequency filtering decomposition preconditioner (MTFFD) and analyzes it by Fourier analysis. Using the two-dimensional Poisson equation as a model problem, the optimal order of the modification and the optimal parameter are determined by Fourier analysis, which shows that the condition number of the MTFFD-preconditioned matrix is O(h^{-2/3}). This indicates that MTFFD should perform better than BILU and MBILU. The conclusions of the Fourier analysis are verified by experiments, and numerical examples show that the MTFFD preconditioner outperforms TFFD.

【Abstract】 Large scale eigenvalue problems and large sparse linear systems of equations are at the heart of many large scale computations in science, engineering, industrial simulation, finance, and other fields. A considerable share of the CPU time of a complete simulation is spent solving linear systems or computing a few meaningful eigenpairs, so the whole simulation can be accelerated considerably if the associated matrix computation problems are solved efficiently. Over the past twenty years these large scale matrix computation problems have been among the most active areas of computational mathematics and have been widely investigated by experts all over the world. Many successful algorithms have been proposed; in particular, Krylov subspace type algorithms have made considerable progress. However, many problems remain unsolved, and many algorithms still need to be analyzed and improved. In view of the importance and long-range significance of these problems, we develop and analyze Krylov subspace type algorithms for solving large scale eigenvalue problems and large sparse linear systems of equations. The thesis comprises eight chapters.

In Chapter 1 we first give a brief review of the background of eigenvalue problems and of projection type solvers for them, and then briefly recall the development of iterative methods and preconditioning techniques for solving large sparse linear systems of equations.

Chapters 2 and 3 deal with large scale eigenvalue problems. In Chapter 2 we propose an Arnoldi-type algorithm for computing a few interior eigenvalues of large matrices. We analyze its convergence and compare it with the harmonic Arnoldi algorithm. The results show that the algorithm is efficient, especially when the subspace dimension is small; as the subspace dimension is enlarged, both methods improve quickly and their convergence behaviors become close to each other.

In Chapter 3 we present a class of deflated block Arnoldi type methods for computing partial eigenpairs of large matrices, and analyze the Rayleigh-Ritz procedure and its refined variants. At the beginning of each restart the computed approximate eigenvectors are used to form the next initial block vector, and an inexact deflation procedure is applied during the orthogonalization process, so that the size of the initial block is automatically reduced by one whenever an eigenpair converges. We show that this procedure not only retains the information of the approximate eigenvectors that have already converged, but also makes the newly generated approximation subspace more favorable for the unconverged eigenpairs. Theoretical analysis shows that the methods are robust and inherit the advantages of the regular block Krylov subspace methods for computing multiple or closely clustered eigenvalues. Qualitative analysis shows that the methods also gradually acquire the merits of restarted Arnoldi methods augmented with approximate eigenvectors. Numerical experiments show that the new methods mitigate the irregular convergence behavior of block Krylov subspace methods and exhibit rather smooth convergence.
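As background for the interior-eigenvalue discussion of Chapter 2, the following minimal sketch (not taken from the thesis) builds an Arnoldi decomposition with modified Gram-Schmidt and extracts harmonic Ritz values with respect to a target shift tau, i.e. the standard extraction underlying the harmonic Arnoldi algorithm used as the comparison baseline; the function names and the test matrix are illustrative only.

```python
import numpy as np

def arnoldi(A, v1, m):
    """Arnoldi process (modified Gram-Schmidt): A @ V[:, :m] = V @ H, V is n-by-(m+1)."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)   # assumed nonzero (no breakdown) in this sketch
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def harmonic_ritz_values(H, tau):
    """Harmonic Ritz values of A with respect to the target tau, obtained from the
    Arnoldi relation by solving
    (H_m - tau I + h_{m+1,m}^2 (H_m - tau I)^{-H} e_m e_m^T) y = (theta - tau) y."""
    m = H.shape[1]
    Hs = H[:m, :m] - tau * np.eye(m)
    em = np.zeros(m); em[-1] = 1.0
    f = np.linalg.solve(Hs.conj().T, em)           # (H_m - tau I)^{-H} e_m
    theta = np.linalg.eigvals(Hs + (H[m, m - 1] ** 2) * np.outer(f, em))
    return theta + tau

# Tiny usage example: approximate interior eigenvalues of a random matrix near tau = 0.5.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / np.sqrt(200)
V, H = arnoldi(A, rng.standard_normal(200), m=30)
print(np.sort_complex(harmonic_ritz_values(H, tau=0.5))[:5])
```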
From Chapter 4 to Chapter 7 we investigate iterative solvers for large sparse linear systems.

In Chapter 4, based on the observation that the skip angles between residual vectors of GMRES-E (restarted GMRES augmented with approximate eigenvectors) are often small, we propose to accelerate the convergence of GMRES-E by additionally augmenting the subspace with error approximations, which results in a doubly augmented restarted GMRES method (LGMRES-E). We demonstrate that the combined method gains the advantages of both approaches: (i) it effectively deflates eigenvalues of small magnitude that may hamper convergence; (ii) it partially recovers the linear independence of the bases generated in successive cycles. The numerical results show that the method efficiently accelerates the convergence of GMRES-E and is especially suitable for linear systems whose coefficient matrices have a few relatively small eigenvalues.

In Chapter 5 an accelerating strategy for the GCRO-DR method is proposed. By recycling some of the error approximations generated in previous restart cycles, the new method mitigates the occurrence of small skip angles in GCRO-DR and thus effectively shortens its phases of slow convergence. If just one error approximation is retained, we show that the acceleration can be implemented in a natural way with only minor modifications. If more than one error approximation is retained, we analyze an inexact variant that is cheap and effective: for the same subspace dimension its cost is close to that of the original algorithm, yet it converges noticeably faster. Applications of the new method to a single linear system and to sequences of linear systems are discussed, and its efficiency is illustrated by examples from practical applications; in particular, for sequences of linear systems arising from fatigue and fracture of engineering components, the new method reduces the overall cost by nearly 10%.

In Chapter 6 the tangential frequency filtering decomposition and combination preconditioning are discussed. We first show that the conventional right filtering decomposition can also be carried out so that a left filtering property is satisfied, and then introduce a two-sided tangential frequency filtering decomposition. By combining the new filtering preconditioner with the classical ILU(0) preconditioner in multiplicative ways, composite preconditioners are produced; analysis shows that the composite preconditioners benefit from the strengths of both factors. For the filtering vector we show that the vector of all ones is a reasonable choice: it is cheap and avoids the preprocessing that other methods need in order to build the filtering vector, and, with an appropriate initial guess, the preconditioner that uses ones as the left filtering vector keeps the sum of the residual vector equal to zero throughout the iterations, i.e. the material balance error is zero. Numerical tests show that the composite preconditioners are rather robust and efficient for block tridiagonal linear systems arising from the discretization of partial differential equations with strongly varying coefficients.
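The material balance property mentioned for Chapter 6 follows directly from the left filtering condition ones^T M = ones^T A: after one update x <- x + M^{-1} r the residual sum ones^T r vanishes and stays zero, since ones^T r_new = ones^T r - ones^T A M^{-1} r = ones^T r - ones^T M M^{-1} r = 0. The toy check below illustrates this with a hypothetical diagonal preconditioner built from the column sums of A; it is not the thesis's TFFD construction, only a minimal preconditioner that happens to satisfy the same left filtering condition with the ones vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = 4.0 * np.eye(n) + rng.standard_normal((n, n)) / n   # toy, diagonally dominant matrix
b = rng.standard_normal(n)
e = np.ones(n)

# Hypothetical preconditioner satisfying the left filtering condition e^T M = e^T A:
# a diagonal matrix whose entries are the column sums of A.
M = np.diag(e @ A)

# Richardson-type iteration x <- x + M^{-1} r: the residual sum e^T r is nonzero at
# k = 0 and then drops to (numerically) zero for every later step.
x = np.zeros(n)
for k in range(5):
    r = b - A @ x
    print(k, abs(e @ r))
    x = x + np.linalg.solve(M, r)
```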
In Chapter 7 we discuss preconditioning techniques for saddle point problems. Using a tangential frequency filtering decomposition of the (1,1) block together with the structure of constraint preconditioners, we propose a tangential frequency filtering decomposition type preconditioner for saddle point problems, test it on saddle point systems obtained by discretizing the Stokes and Oseen equations, and analyze its behavior as the grid is refined and as the Reynolds number increases. The numerical results show that, for a fixed inner iteration tolerance and a fixed Reynolds number, the number of outer iterations increases only slowly as the grid is refined, and that, for a fixed inner iteration tolerance and a fixed grid, the outer iteration count is only slightly influenced by the Reynolds number; the latter property is much more appealing than that of ILU preconditioners of the same type. In the latter part of this chapter we propose a class of Schilders factorization type constraint preconditioners for regularized saddle point problems. The spectral analysis shows that the preconditioned matrix inherits the spectral properties obtained when Schilders factorization type constraint preconditioners are applied to standard saddle point problems; the proposed factorization therefore naturally generalizes the Schilders factorization to generalized saddle point problems. However, because of the nonzero (2,2) block of the regularized saddle point problem, applying the preconditioner involves Schur complement type linear systems, which generally poses difficulties. For some special regularized saddle point problems that often arise in practical applications, an inexact variant of the preconditioner that avoids the Schur complement computation is analyzed. Numerical tests are under preparation.

In the last chapter a modified tangential frequency filtering decomposition (MTFFD) preconditioner is proposed. The optimal order of the modification and the optimal relaxation parameter are determined by Fourier analysis, which shows that the condition number of the preconditioned matrix is O(h^{-2/3}); the spectral distribution of the preconditioned matrix can also be predicted from the Fourier results. The performance of MTFFD is compared with that of the tangential frequency filtering decomposition (TFFD) preconditioner on a variety of large sparse matrices arising from the discretization of PDEs with discontinuous coefficients. The numerical results show that the MTFFD preconditioner is much more efficient than the TFFD preconditioner.
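To make the constraint preconditioning idea of Chapter 7 concrete, the sketch below applies a generic constraint preconditioner to a small standard saddle point system: the constraint blocks are kept exact while the (1,1) block is replaced by a simple approximation (here, hypothetically, its diagonal; the thesis instead uses a tangential frequency filtering decomposition of the (1,1) block). The eigenvalues of the preconditioned matrix cluster at 1, as the standard theory for constraint preconditioners predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))        # constraint block, full row rank with probability 1
Z = np.zeros((m, m))

K = np.block([[A, B.T], [B, Z]])       # standard saddle point matrix
G = np.diag(np.diag(A))                # hypothetical cheap approximation of A
P = np.block([[G, B.T], [B, Z]])       # constraint preconditioner: constraints kept exact

evals = np.linalg.eigvals(np.linalg.solve(P, K))
print(np.sort(evals.real))
# Theory for constraint preconditioners (Keller, Gould & Wathen) gives the eigenvalue 1
# with multiplicity 2m; that eigenvalue can be defective, so the computed values may
# deviate from 1 by roughly sqrt(machine epsilon).
print("eigenvalues within 1e-6 of 1:", int(np.sum(np.abs(evals - 1.0) < 1e-6)))
```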

  • 【Online Publication Contributor】 Xiamen University
  • 【Online Publication Year/Issue】 2009, Issue 08