
Research on Neural Network Learning Algorithms

【Author】 刘军 (Liu Jun)

【Supervisor】 邱晓红 (Qiu Xiaohong)

【Author Information】 Jiangxi Normal University (江西师范大学), Computer Software and Theory, 2009, Master's thesis

【Abstract】 With the latest resurgence of research interest in neural networks, they have been widely applied in scientific computing, pattern classification, feature extraction, finance, the defense industry, aerospace, intelligent control, and many other fields. A neural network can approximate any nonlinear function and has strong generalization ability, so it can realize a required functional mapping; these strengths are why it is so widely used. Neural networks also offer a good approach to modeling: for systems that are complex, uncertain, or provide little information, a network can establish the input-output relationship and fulfil the required function without an explicit theory, greatly reducing the difficulty of system design. The learning algorithm is a central issue, because training a neural network is in essence an optimization problem, and the techniques currently used come from optimization theory. Gradient descent (back-propagation) and its improved variants are the main training algorithms, but they suffer from long training times, sensitivity to weight initialization, the lack of global search ability, and forgetting of earlier training samples; these problems are caused mainly by the learning algorithm, so a suitable training algorithm would resolve them. According to the "no free lunch" theorems for optimization, meeting these requirements generally means paying more in time and space complexity. Training on complex sample sets or complex systems usually takes a great deal of time: as the network grows, the required training time rises sharply while the achieved accuracy falls far short of what is needed, because computing the gradient vector and the Hessian matrix becomes very expensive. This thesis therefore proposes a modular neural network design to address the problem. Based on the Fourier transform, the thesis also models a Fourier neural network, which realizes the function mapping through an explicit mathematical formula, and gives a learning algorithm with better training accuracy. After studying particle swarm optimization (PSO), the thesis proposes a hybrid BP-PSO algorithm that combines PSO with gradient descent: PSO provides the global search, the NW method initializes the network weights and velocities, regularization modifies the objective function, the LM algorithm trains simple networks, the conjugate gradient method trains complex networks, neural network ensembles improve generalization, and the coefficients of each transfer function are optimized. The work and contributions of this thesis are: (1) a new approach to training neural networks, together with views on how to implement it and how to set its parameters; (2) a new line of thought for learning algorithms of recurrent (feedback) neural networks.
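
For reference, the following are the conventional textbook forms of the techniques the abstract names: a regularized objective, the Levenberg-Marquardt step (presumably what "LM" refers to here), and the PSO update. The abstract does not give the thesis's exact notation or parameter settings, so the symbols below are the standard ones rather than the author's.

```latex
% Regularized objective: E_D = data error, E_W = sum of squared weights,
% \alpha, \beta = regularization coefficients.
F(\mathbf{w}) \;=\; \beta E_D + \alpha E_W,
\qquad E_D = \sum_i \left(t_i - y_i\right)^2,
\qquad E_W = \sum_j w_j^{2}

% Levenberg-Marquardt step: J = Jacobian of the error vector e,
% \mu = damping factor, I = identity matrix.
\Delta \mathbf{w} \;=\; -\left(J^{\top} J + \mu I\right)^{-1} J^{\top} \mathbf{e}

% PSO update for particle i: \omega = inertia weight, c_1, c_2 = acceleration
% constants, r_1, r_2 \sim U(0,1), p_i = personal best, g = global best.
v_i \;\leftarrow\; \omega\, v_i + c_1 r_1 \,(p_i - x_i) + c_2 r_2 \,(g - x_i),
\qquad x_i \;\leftarrow\; x_i + v_i
```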
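
To make the two-stage "global search, then gradient refinement" idea concrete, here is a minimal, self-contained Python/NumPy sketch on a toy regression task: PSO searches the flattened weight vector of a small one-hidden-layer network, and plain gradient descent then refines the best particle. The network size, PSO constants, and all names are illustrative assumptions; the thesis's actual hybrid BP-PSO additionally involves NW initialization, regularization, LM and conjugate-gradient training, and network ensembles, none of which are reproduced here.

```python
# Sketch of the hybrid idea: PSO explores the weight space globally, then
# gradient descent refines the best particle. Everything below (toy task,
# tiny network, PSO constants) is an illustrative assumption, not the
# thesis's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: approximate y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
T = np.sin(X)

H = 8                          # hidden units
DIM = 1 * H + H + H * 1 + 1    # total number of weights and biases


def unpack(w):
    """Split a flat weight vector into the network's parameter arrays."""
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2 * H]
    W2 = w[2 * H:3 * H].reshape(H, 1)
    b2 = w[3 * H]
    return W1, b1, W2, b2


def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    hidden = np.tanh(X @ W1 + b1)
    return hidden @ W2 + b2, hidden


def mse(w):
    y, _ = forward(w, X)
    return float(np.mean((y - T) ** 2))


def grad(w):
    """Back-propagation gradient of the mean squared error."""
    W1, b1, W2, b2 = unpack(w)
    y, hidden = forward(w, X)
    n = X.shape[0]
    dy = 2.0 * (y - T) / n                   # dE/dy
    gW2 = hidden.T @ dy
    gb2 = dy.sum()
    dh = (dy @ W2.T) * (1.0 - hidden ** 2)   # back through tanh
    gW1 = X.T @ dh
    gb1 = dh.sum(axis=0)
    return np.concatenate([gW1.ravel(), gb1, gW2.ravel(), [gb2]])


# --- Stage 1: PSO global search over the flat weight vector ---------------
P, ITERS = 30, 200
pos = rng.uniform(-1, 1, (P, DIM))
vel = rng.uniform(-0.1, 0.1, (P, DIM))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((P, DIM)), rng.random((P, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

# --- Stage 2: gradient-descent refinement of the best particle ------------
w = gbest.copy()
for _ in range(3000):
    w -= 0.05 * grad(w)

print(f"MSE after PSO: {mse(gbest):.4f}, after refinement: {mse(w):.4f}")
```

The division of labor mirrors the one described in the abstract: the population-based stage reduces the sensitivity to weight initialization and the risk of poor local minima, while the gradient stage supplies fast local convergence around the best solution found.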
