1 Rumelhart D E, Hinton G E, Williams R J. Learning internal representations by error propagation[A]. Rumelhart D E, McClelland J L. Parallel distributed processing: explorations in the microstructure of cognition, volume 1[C]. Cambridge, MA: MIT Press, 1986. 318~362.
3 Fahlman S E. Faster-learning variations on back-propagation: an empirical study[A]. Touretzky D, Hinton G, Sejnowski T. Proceedings of the 1988 Connectionist Models Summer School[C]. Carnegie Mellon University, 1988. 38~51.
4 Jacobs R A. Increased rates of convergence through learning rate adaptation[J]. Neural Networks, 1988, 1: 295~307.
5 Shah S, Palmieri F. MEKA-a fast, local algorithm for training feedforward neural networks[A]. Proceedings of the International Joint Conference on Neural Networks[C]. New York: IEEE Press, 1990. 41~46.
6 Watrous R L. Learning algorithms for connectionist networks: applied gradient methods of nonlinear optimization[A]. Proceedings of IEEE International Conference on Neural Networks[C]. New York: IEEE Press, 1987. 619~627.
7 Shah S, Palmieri F, Datum M. Optimal filtering algorithms for fast learning in feedforward neural networks[J]. Neural Networks, 1992, 5(5): 779~787.
8 Riedmiller M, Braun H. A direct adaptive method for faster backpropagation learning: the RPROP algorithm[A]. Ruspini H. Proceedings of the IEEE International Conference on Neural Networks (ICNN)[C]. New York: IEEE Press, 1993. 586~591.
9 Fletcher R, Reeves C M. Function minimization by conjugate gradients[J]. Computer Journal, 1964, 7: 149~154.
10 Powell M J D. Restart procedures for the conjugate gradient method[J]. Mathematical Programming, 1977, 12: 241~254.