Abstract
The capability of multilayer perceptrons (MLPs) to approximate continuous functions with arbitrary accuracy has been demonstrated over the past decades. The back-propagation (BP) algorithm is the most popular learning algorithm for training MLPs. In this paper, a simple iteration formula is used to select the learning rate for each cycle of the training procedure, and a convergence result is presented for the BP algorithm applied to an MLP with one hidden layer and a linear output unit. The monotonicity of the error function during the training iteration is also guaranteed.
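For orientation, the sketch below shows batch BP training of the network class the abstract describes: one hidden layer of sigmoid units feeding a single linear output unit, with gradient-descent updates on a squared-error function. The paper's specific iteration formula for selecting the learning rate each cycle is not given in the abstract, so a fixed rate `eta` stands in here as a placeholder assumption; the function and parameter names are likewise illustrative, not the authors'.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, y, n_hidden=8, n_cycles=1000, eta=0.1, seed=0):
    """Batch BP for an MLP with one sigmoid hidden layer and a linear output.

    NOTE: eta is a fixed placeholder; the paper selects the learning rate
    per training cycle via an iteration formula not stated in the abstract.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_in = X.shape
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))  # input-to-hidden weights
    v = rng.normal(scale=0.1, size=n_hidden)          # hidden-to-output weights
    for _ in range(n_cycles):
        H = sigmoid(X @ W)      # hidden activations, shape (n_samples, n_hidden)
        out = H @ v             # linear output unit
        err = out - y           # residuals
        # Gradients of the error function E = 0.5 * sum(err**2)
        grad_v = H.T @ err
        grad_W = X.T @ (np.outer(err, v) * H * (1.0 - H))
        v -= eta * grad_v
        W -= eta * grad_W
    return W, v

# Usage: fit a noisy one-dimensional target
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.sin(3 * X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=50)
W, v = train_bp(X, y)
```

With a suitably chosen learning rate each cycle, the error E decreases monotonically; the convergence analysis of this scheme is the subject of the paper.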
Funding
This research was supported by the National Natural Science Foundation of China (10471017).
Author information
Corresponding author. E-mail: wuweiw@dlut.edu.cn; E-mail: W.B.Liu@kent.ac.uk