
Convergence of BP Algorithm for Training MLP with Linear Output

Abstract: The capability of multilayer perceptrons (MLPs) to approximate continuous functions with arbitrary accuracy has been demonstrated over the past decades. The back propagation (BP) algorithm is the most popular learning algorithm for training MLPs. In this paper, a simple iteration formula is used to select the learning rate for each cycle of the training procedure, and a convergence result is presented for the BP algorithm for training an MLP with one hidden layer and a linear output unit. The monotonicity of the error function during the training iteration is also guaranteed.
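
The learning-rate iteration formula itself appears in the full paper, not in this abstract. As a rough illustration of the setting described above, the sketch below implements batch BP for an MLP with one hidden sigmoid layer and a single linear output unit, and chooses the learning rate for each training cycle by simple backtracking so that the squared error decreases monotonically. The backtracking rule and all names (forward, gradients, train, eta0, and so on) are assumptions for illustration only, not the paper's actual formula.

# Minimal sketch (not the paper's exact method): batch BP for an MLP with one
# hidden sigmoid layer and a single linear output unit. The per-cycle learning
# rate is found by backtracking so the error is monotonically non-increasing,
# mirroring the monotonicity property stated in the abstract.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(V, w, X):
    """Hidden weights V: (n_hidden, n_in); linear output weights w: (n_hidden,)."""
    H = sigmoid(X @ V.T)          # hidden activations, shape (n_samples, n_hidden)
    return H, H @ w               # linear output unit

def error(V, w, X, y):
    _, out = forward(V, w, X)
    return 0.5 * np.sum((out - y) ** 2)

def gradients(V, w, X, y):
    H, out = forward(V, w, X)
    delta = out - y                        # output error, shape (n_samples,)
    grad_w = H.T @ delta                   # gradient w.r.t. output weights
    dH = np.outer(delta, w) * H * (1 - H)  # backpropagated hidden-layer error
    grad_V = dH.T @ X                      # gradient w.r.t. hidden weights
    return grad_V, grad_w

def train(X, y, n_hidden=5, cycles=200, eta0=1.0, shrink=0.5, seed=0):
    rng = np.random.default_rng(seed)
    V = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    w = rng.normal(scale=0.1, size=n_hidden)
    for _ in range(cycles):
        gV, gw = gradients(V, w, X, y)
        eta, E_old = eta0, error(V, w, X, y)
        # Backtracking: halve eta until the error strictly decreases.
        while error(V - eta * gV, w - eta * gw, X, y) >= E_old and eta > 1e-12:
            eta *= shrink
        V, w = V - eta * gV, w - eta * gw
    return V, w

With a synthetic data set, e.g. X = np.random.rand(100, 3) and y = np.sin(X).sum(axis=1), train(X, y) returns weights whose training error is non-increasing across cycles; the paper's contribution is an explicit iteration formula for this learning rate together with a convergence proof.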
Funding: This research was supported by the National Natural Science Foundation of China (10471017).
Keywords: Multilayer perceptron; BP algorithm; Convergence; Monotonicity; Neural network
Author information: Corresponding author. E-mail: wuweiw@dlut.edu.cn; W.B.Liu@kent.ac.uk