Funding: Project (50276005) supported by the National Natural Science Foundation of China; Projects (2006CB705400, 2003CB716206) supported by the National Basic Research Program of China.
Abstract: To avoid unstable learning, a stable adaptive learning algorithm was proposed for discrete-time recurrent neural networks. Unlike dynamic gradient methods such as backpropagation through time and real-time recurrent learning, in the proposed algorithm the weights of the recurrent neural networks were updated online on the basis of Lyapunov stability theory, so learning stability was guaranteed. With the inversion of the activation function of the recurrent neural networks, the proposed learning algorithm can be easily implemented to solve time-varying nonlinear adaptive learning problems, and fast convergence of the adaptive learning process can be achieved. Simulation experiments in pattern recognition show that with the proposed learning algorithm only 5 iterations are needed to store a 15×15 binary image pattern and only 9 iterations are needed to realize an analog vector perfectly as an equilibrium state.
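As an illustration of the Lyapunov-based idea, the minimal Python sketch below drives a weight matrix W so that a stored pattern becomes an equilibrium of x(k+1) = f(W x(k)); inverting the activation turns the equilibrium condition into a linear target, and the normalized update makes the squared error, a natural Lyapunov candidate, shrink at every step. The names (W, f, eta) and the update rule itself are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def f(s):
    return np.tanh(s)

def f_inv(x, eps=1e-6):
    # inversion of the activation, clipped away from +/-1 for numerical safety
    return np.arctanh(np.clip(x, -1 + eps, 1 - eps))

def train_to_equilibrium(x_star, n_iters=50, eta=0.8, seed=0):
    """Adapt W so that x_star becomes an equilibrium: f(W x_star) = x_star.

    Via the inverse activation the condition becomes the linear target
    W x_star = f_inv(x_star); the normalized rank-one update below scales
    the error as e -> (1 - eta) e, so for eta in (0, 2) the Lyapunov
    candidate V = ||e||^2 strictly decreases at every step.
    """
    rng = np.random.default_rng(seed)
    n = x_star.size
    W = 0.1 * rng.standard_normal((n, n))
    s_star = f_inv(x_star)                  # pre-activation target
    for k in range(n_iters):
        e = s_star - W @ x_star             # equilibrium error
        if np.linalg.norm(e) < 1e-9:
            return W, k
        W += eta * np.outer(e, x_star) / (x_star @ x_star)
    return W, n_iters

# a +/-0.9-valued vector standing in for a 15x15 binary pattern
pattern = np.sign(np.random.default_rng(1).standard_normal(225)) * 0.9
W, iters = train_to_equilibrium(pattern)
print("equilibrium reached after", iters, "iterations")
```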
Funding: Projects (60874030, 60835001, 60574006) supported by the National Natural Science Foundation of China; Projects (07KJB510125, 08KJD510008) supported by the Natural Science Foundation of Jiangsu Higher Education Institutions of China; Project supported by the Qing Lan Program, Jiangsu Province, China.
Abstract: The problem of passivity analysis for a class of discrete-time stochastic neural networks (DSNNs) with time-varying interval delay was investigated. Delay-dependent sufficient criteria were derived in terms of linear matrix inequalities (LMIs). The results are shown to generalize some previous results and to be less conservative than existing works. Meanwhile, the computational complexity of the obtained stability conditions is reduced because fewer variables are involved. A numerical example is given to show the effectiveness and benefits of the proposed method.
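For readers unfamiliar with LMI-based criteria, the sketch below shows the general pattern of checking such a condition numerically with cvxpy. It tests the basic discrete-time Lyapunov LMI rather than the paper's delay-dependent passivity LMIs, which are not reproduced here; the matrix A is an arbitrary example.

```python
import numpy as np
import cvxpy as cp

# Verify feasibility of the discrete-time Lyapunov LMI
#   P > 0,  A^T P A - P < 0,
# which certifies stability of x(k+1) = A x(k). The paper's criteria
# follow the same "find a feasible matrix variable" pattern.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P @ A - P << -eps * np.eye(n)]     # Lyapunov decrease
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)  # 'optimal' => a feasible P exists, the LMI criterion holds
```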
Abstract: The dynamic behavior of discrete-time cellular neural networks (DTCNNs) that are strict with zero threshold value is studied, mainly in asynchronous mode and in synchronous mode. In general, a k-attractor of a DTCNN is not a convergent point, but it is proved in this paper that a k-attractor is a convergent point if the strict DTCNN satisfies certain conditions. The attraction basin of the strict DTCNN is studied, one example is given to show that previous conclusions were wrong, and several results are presented. The obtained results on k-attractors and attraction basins not only correct the previous results but also provide a theoretical foundation for performance analysis and new applications of DTCNNs.
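The following hedged sketch illustrates the kind of dynamics at issue: a zero-threshold (strict) DTCNN-style update run synchronously, reporting whether the orbit settles at a convergent point (a fixed point) or falls into a cycle. The template T and the sgn(0) convention are assumptions chosen for illustration.

```python
import numpy as np

def sgn(s):
    return np.where(s >= 0, 1, -1)   # one common convention for sgn(0)

def step_sync(T, x):
    # synchronous mode: all cells update at once
    return sgn(T @ x)

def step_async(T, x, i):
    # asynchronous mode: only cell i updates
    x = x.copy()
    x[i] = 1 if T[i] @ x >= 0 else -1
    return x

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 6))
T = (T + T.T) / 2                    # symmetric template
x = sgn(rng.standard_normal(6))

hist = [x.copy()]
for k in range(64):
    x = step_sync(T, x)
    hist.append(x.copy())
    if np.array_equal(hist[-1], hist[-2]):
        print("reached a convergent point (fixed point)")
        break
    if len(hist) > 2 and np.array_equal(hist[-1], hist[-3]):
        print("entered a 2-cycle: an attractor that is not a convergent point")
        break
```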
Funding: This project was supported by the National Natural Science Foundation of China (60074008).
Abstract: We propose a new approach for analyzing the global asymptotic stability of extended discrete-time bidirectional associative memory (BAM) neural networks. Using the Euler rule, we discretize the continuous-time BAM neural networks as extended discrete-time BAM neural networks with non-threshold activation functions. We present some conditions under which the neural networks have unique equilibrium points. To judge the global asymptotic stability of the equilibrium points, we introduce a new neural network model, the standard neural network model (SNNM). For SNNMs, we derive sufficient conditions for the global asymptotic stability of the equilibrium points, formulated as linear matrix inequalities (LMIs). We transform the discrete-time BAM into an SNNM and apply the general result on SNNMs to determine the global asymptotic stability of the discrete-time BAM. The proposed approach extends the known stability results, is less conservative, can be verified easily, and can also be applied to other forms of recurrent neural networks.
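A minimal sketch of the Euler discretization step the abstract refers to, assuming a standard continuous-time BAM form; the matrices W1 and W2, the inputs I and J, the tanh activations (non-threshold, as the abstract requires), and the step size h are all illustrative, and the paper's exact model may differ.

```python
import numpy as np

# Continuous-time BAM (assumed form):
#   dx/dt = -x + W1 f(y) + I,   dy/dt = -y + W2 g(x) + J
# One explicit Euler step with step size h yields the discrete-time BAM:
#   x(k+1) = (1 - h) x(k) + h (W1 f(y(k)) + I), and symmetrically for y.
def euler_bam_step(x, y, W1, W2, I, J, h=0.1, f=np.tanh, g=np.tanh):
    x_next = (1 - h) * x + h * (W1 @ f(y) + I)
    y_next = (1 - h) * y + h * (W2 @ g(x) + J)
    return x_next, y_next

rng = np.random.default_rng(0)
n, m = 4, 3
W1, W2 = rng.standard_normal((n, m)), rng.standard_normal((m, n))
I, J = np.zeros(n), np.zeros(m)
x, y = rng.standard_normal(n), rng.standard_normal(m)
for _ in range(500):                 # iterate the discretized BAM
    x, y = euler_bam_step(x, y, W1, W2, I, J)
print("state after 500 steps:", x, y)
```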
Abstract: The discrete Hopfield neural network with delay is an extension of the discrete Hopfield neural network. As is well known, the stability of neural networks is not only the most basic and important problem but also the foundation of the networks' applications. The stability of discrete Hopfield neural networks with delay is investigated mainly by using Lyapunov functions. Sufficient conditions for the networks with delay to converge towards a limit cycle of length 4 are obtained. Also, some sufficient criteria are given to ensure that the networks have neither a stable state nor a limit cycle of length 2. The results obtained here generalize the previous results on the stability of discrete Hopfield neural networks with and without delay.
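A small sketch, under assumed weights, of a discrete Hopfield iteration with a one-step delay together with a period detector; since the pair (x(k-1), x(k)) determines the entire future of the orbit, a repeated pair reveals the period of the limit cycle, with period 1 corresponding to a stable state.

```python
import numpy as np

# Assumed delayed update: x(k+1) = sgn(W0 x(k) + W1 x(k-1)),
# with the sgn(0) = 1 convention; W0, W1 are illustrative.
def sgn(s):
    return np.where(s >= 0, 1, -1)

def iterate(W0, W1, x0, x1, n_steps=1100):
    # n_steps exceeds the 4**n possible (x(k-1), x(k)) pairs for n = 5,
    # so a repetition is guaranteed by the pigeonhole principle
    states = [tuple(x0), tuple(x1)]
    prev, cur = x0, x1
    for _ in range(n_steps):
        nxt = sgn(W0 @ cur + W1 @ prev)
        states.append(tuple(nxt))
        prev, cur = cur, nxt
    return states

def detect_period(states):
    """Return the period of the orbit's tail (1 means a stable state)."""
    seen = {}
    for k in range(1, len(states)):
        key = (states[k - 1], states[k])   # full system state is the pair
        if key in seen:
            return k - seen[key]
        seen[key] = k
    return None

rng = np.random.default_rng(0)
n = 5
W0, W1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
x0, x1 = sgn(rng.standard_normal(n)), sgn(rng.standard_normal(n))
print("period:", detect_period(iterate(W0, W1, x0, x1)))
```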
Funding: This project was supported by the National Natural Science Foundation of China (60074008, 60274007, 60274026) and the National Doctoral Foundation of China (20010487005).
Abstract: A type of stochastic interval delayed Hopfield neural network of the form du(t) = [-A_I u(t) + W_I f(t, u(t)) + W_I^τ f^τ(u_τ(t))] dt + σ(t, u(t), u_τ(t)) dw(t) for t ≥ 0, with initial value u(s) = ζ(s) for -τ ≤ s ≤ 0, has been studied. By using the Razumikhin theorem and Lyapunov functions, some sufficient conditions for the global asymptotic robust stability and global exponential stability of such systems have been given. All the results obtained are generalizations of some recent ones reported in the literature for uncertain neural networks with constant delays or for their certain cases.
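An Euler-Maruyama simulation sketch of a system of this type; the matrices, the diffusion term σ, the constant initial history, and all step sizes are assumptions chosen for illustration, not the paper's parameters.

```python
import numpy as np

# Simulate du = [-A u + W f(u) + W_tau f(u(t - tau))] dt + sigma(...) dw
# by the Euler-Maruyama scheme, with a constant history standing in for
# the initial function zeta(s) on [-tau, 0].
def simulate(A, W, W_tau, tau=1.0, T=10.0, dt=0.01, sigma_scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    d = int(round(tau / dt))                 # delay measured in steps
    steps = int(round(T / dt))
    u = np.zeros((steps + 1, n))
    u[0] = rng.standard_normal(n)            # constant history zeta(s) = u(0)
    f = np.tanh
    for k in range(steps):
        u_del = u[max(k - d, 0)]             # delayed state u(t - tau)
        drift = -A @ u[k] + W @ f(u[k]) + W_tau @ f(u_del)
        diff = sigma_scale * (u[k] + u_del)  # a simple sigma(t, u, u_tau)
        dw = rng.standard_normal(n) * np.sqrt(dt)   # Brownian increment
        u[k + 1] = u[k] + drift * dt + diff * dw
    return u

n = 3
A = 2.0 * np.eye(n)
rng = np.random.default_rng(1)
W, W_tau = 0.3 * rng.standard_normal((n, n)), 0.3 * rng.standard_normal((n, n))
traj = simulate(A, W, W_tau)
print("final state:", traj[-1])   # decays toward zero when stability holds
```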