
Stability and Nonlinear Approximation of Several Classes of Neural Networks

【Author】 Quan Zhiyong (全志勇)

【Supervisor】 Huang Lihong (黄立宏)

【Author Information】 Hunan University, Applied Mathematics, 2014, PhD

【Abstract (Chinese)】 This thesis studies the existence and global stability of equilibrium points for several classes of neural network models by means of topological degree theory, the homeomorphism mapping principle, novel inequality techniques, and Lyapunov functionals, and establishes easily verifiable LMI-based conditions. The conditions obtained weaken those in existing results and thus provide theoretical guidance for designing practically stable neural network systems. The last two chapters study the approximation-error estimation problem for neural networks with two weights. The thesis consists of six chapters.

Chapter 1 first describes artificial neural networks and their structure, then introduces the classes of neural network models studied in this thesis and explains the research background and main contributions. The chapter ends with the notation and several lemmas used throughout.

Chapter 2 removes the boundedness and monotonicity assumptions on the activation functions found in the existing literature, requiring only that they satisfy a global Lipschitz condition. Using degree theory, novel inequality techniques, and constructed Lyapunov functionals, the same LMI-based sufficient condition is obtained for both the existence and the global asymptotic stability of the equilibrium point of BAM neural networks with reaction-diffusion terms and distributed delays. Examples and numerical simulations illustrate the effectiveness of the results.

Chapters 3 and 4 prove, by methods different from those of Chapter 2, the existence, uniqueness, and global exponential stability of the equilibrium point for delayed inertial BAM neural networks and Cohen-Grossberg neural networks, respectively. The systems are first reduced to first-order differential equations by variable substitution; LMI-based sufficient conditions are then established via the homeomorphism mapping principle, novel inequalities, and suitable Lyapunov functionals. Chapter 3 removes the boundedness assumption on the activation functions of BAM neural networks, requiring only a global Lipschitz condition; Chapter 4 removes the differentiability and monotonicity assumptions on the behaved functions of Cohen-Grossberg neural networks, again requiring only a global Lipschitz condition. Because the nonlinearity of the amplification functions in Cohen-Grossberg networks complicates the stability analysis, Chapters 3 and 4 differ in both method and difficulty. Both chapters close with illustrative examples and numerical simulations.

Chapter 5 constructs a class of neural networks with two weights and a single hidden layer. With the modulus of smoothness as the measuring tool, inequality techniques show that, when the number of hidden nodes is sufficiently large, these networks can approximate any nonlinear Lp-integrable function arbitrarily well in the Lp metric, and that their approximation ability is better than that of the BP neural networks constructed in the existing literature. The result also removes the assumption in the existing literature that the activation function is odd, and the thresholds and direction weights of the constructed networks differ from those in the existing literature.

Chapter 6 constructs a new class of neural networks with two weights and sigmoidal functions. With the modulus of continuity as the measuring tool, and using Fourier series, the approximate-partition-of-unity technique, and inequality methods, it is proved that, provided the number of hidden nodes is sufficiently large, these networks can approximate any nonlinear continuous function arbitrarily well, again with better approximation ability than the BP neural networks constructed in the existing literature.
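In practice, LMI-based conditions of the kind described above are checked numerically. As a minimal illustrative sketch (not the thesis's actual conditions, whose LMIs involve delay and reaction-diffusion terms), the classical Lyapunov inequality AᵀP + PA ≺ 0 for a hypothetical linearized system can be certified by solving the corresponding Lyapunov equation and checking positive definiteness:

```python
import numpy as np

# Hypothetical system matrix standing in for a network's linearization;
# the values are made up for the demonstration.
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
n = A.shape[0]
Q = np.eye(n)

# Solve the Lyapunov equation A^T P + P A = -Q by vectorization:
# vec(A^T P + P A) = (I (x) A^T + A^T (x) I) vec(P).
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# P symmetric positive definite => the LMI A^T P + P A < 0 is feasible,
# certifying global asymptotic stability of x' = A x.
stable = bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
print(stable)  # True for this Hurwitz A
```

Larger delay-dependent LMIs are handled the same way, just with block matrices and a semidefinite-programming solver (e.g. the MATLAB LMI toolbox mentioned in the English abstract).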

【Abstract】 In this thesis, some novel LMI-based sufficient conditions for the existence and global stability of the equilibrium points of several classes of neural networks are obtained by using degree theory, the LMI method, new inequality techniques, and constructed Lyapunov functionals; the conditions can be easily verified via the MATLAB LMI toolbox. Furthermore, the approximation errors of neural networks with two weights are estimated in the last two chapters. The thesis is divided into six chapters.

As the introduction, Chapter One first describes the artificial neural network and its structure. Several classes of neural networks to be discussed in this thesis are then briefly addressed, along with the motivation and outline of the work. Some notations, definitions, and several lemmas are also listed.

In Chapter Two, under the assumption that the activation functions satisfy only global Lipschitz conditions, a novel LMI-based sufficient condition for the global asymptotic stability of the equilibrium point of a class of BAM neural networks with reaction-diffusion terms and distributed delays is obtained by using degree theory, new inequality techniques, and constructed Lyapunov functionals. The boundedness and monotonicity assumptions on the activation functions made in existing papers are removed. A numerical example is also provided to show the effectiveness of the derived LMI-based stability condition.

In Chapter Three, the existence and global exponential stability of equilibrium points for inertial BAM neural networks with time delays are investigated. The system is first transformed into first-order differential equations via a suitable variable substitution. Using homeomorphism theory, novel inequalities, and constructed Lyapunov functionals, an LMI-based sufficient condition for the existence and uniqueness of the equilibrium point is obtained; a further LMI-based condition ensures its global exponential stability. These results extend and improve some earlier publications, and a numerical example illustrates them. Chapter Four treats Cohen-Grossberg neural networks similarly, but the analysis is more difficult.

In Chapter Five, a neural network with two weights and one hidden layer is constructed to approximate Lp integrable functions. Using inequality techniques and the modulus of smoothness as a metric tool, we show not only that the constructed network can approximate any Lp integrable function arbitrarily well in the Lp metric as long as the number of hidden nodes is sufficiently large, but also that it has better approximation ability than the BP neural network constructed in the existing literature. Compared with the existing result, the assumption that the activation functions are odd is removed, and the input weights and thresholds are different from those in the existing result.

Finally, in Chapter Six, the technique of approximate partition of unity, Fourier series, and inequality techniques are used to construct a neural network with two weights and sigmoidal functions. Using inequality techniques and the modulus of continuity as a metric tool, we prove that this network has better approximation ability than the BP neural network constructed in the existing literature.
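The variable substitution described for Chapter Three can be illustrated on a single hypothetical inertial neuron (the scalar parameters below are invented for the demonstration, not taken from the thesis). Substituting y = x' + x turns the second-order equation x'' = -a·x' - b·x + c·f(x) + I into a first-order pair, and both forms produce the same trajectory:

```python
import math

# Hypothetical scalar inertial neuron: x'' = -a*x' - b*x + c*tanh(x) + I
a, b, c, I = 2.0, 1.0, 0.5, 0.1
f = math.tanh

def second_order(x0, v0, h=1e-3, steps=5000):
    # Explicit Euler on the original second-order equation (v = x').
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + h * v, v + h * (-a * v - b * x + c * f(x) + I)
    return x

def first_order(x0, v0, h=1e-3, steps=5000):
    # Substitution y = x' + x gives the first-order system
    #   x' = y - x
    #   y' = x'' + x' = (1 - a)*(y - x) - b*x + c*f(x) + I
    x, y = x0, v0 + x0
    for _ in range(steps):
        dx = y - x
        dy = (1 - a) * dx - b * x + c * f(x) + I
        x, y = x + h * dx, y + h * dy
    return x

# The two integrations agree up to floating-point rounding.
print(abs(second_order(0.3, 0.0) - first_order(0.3, 0.0)) < 1e-6)  # True
```

The same trick, applied coordinate-wise with a matrix-valued substitution, reduces the inertial BAM system to a first-order system amenable to the homeomorphism and Lyapunov-functional arguments.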
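The approximation results of Chapters Five and Six have the familiar shape "more hidden nodes, smaller error." A toy single-hidden-layer sigmoidal network makes this concrete; this is a textbook step-stacking construction for a continuous target, not the thesis's two-weight network:

```python
import math

def sigma(t):
    # Numerically stable logistic sigmoid.
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    z = math.exp(t)
    return z / (1.0 + z)

def sigmoid_net(f, n, s=500.0):
    # One hidden layer of n sigmoidal nodes approximating f on [0, 1]:
    # node k contributes a near-step of height f(k/n) - f((k-1)/n)
    # centred at (k - 0.5)/n; s is the steepness of each sigmoid.
    jumps = [(f(k / n) - f((k - 1) / n), (k - 0.5) / n)
             for k in range(1, n + 1)]
    def g(x):
        return f(0.0) + sum(j * sigma(s * (x - c)) for j, c in jumps)
    return g

def f(x):
    return x * x  # target continuous function

errs = {}
for n in (8, 32):
    g = sigmoid_net(f, n)
    errs[n] = max(abs(f(i / 400) - g(i / 400)) for i in range(401))

print(errs[32] < errs[8])  # True: more hidden nodes => smaller uniform error
```

The uniform error of this construction is governed by the modulus of continuity ω(f, 1/n), which is the kind of quantity the thesis's sharper estimates (using the modulus of smoothness) improve upon.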

  • 【Online Publication Contributor】 Hunan University
  • 【Online Publication Year and Issue】 2014, Issue 12