Journal Articles
18 articles found
1. Hausdorff Dimension of Range and Graph for General Markov Processes
Authors: CHEN Zhi-He. 《应用概率统计》, CSCD, PKU Core, 2024, Issue 6, pp. 942-956.
Abstract: We establish the Hausdorff dimension of the graph of general Markov processes on R^d based on probability estimates of the processes staying in or leaving small balls in small time. In particular, our results indicate that, for symmetric diffusion processes (with α = 2) or symmetric α-stable-like processes (with α ∈ (0,2)) on R^d, it holds almost surely that dim_H Gr_X([0,1]) = 1_{α<1} + (2 − 1/α)·1_{α≥1, d=1} + (d ∧ α)·1_{α≥1, d≥2}. We also systematically prove the corresponding results on the Hausdorff dimension of the range of the processes.
Keywords: Markov process; Hausdorff dimension; range; graph
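The closed-form dimension expression above is straightforward to evaluate; a minimal sketch (the function name and the piecewise encoding are ours, not the paper's):

```python
def graph_dim(alpha: float, d: int) -> float:
    """Hausdorff dimension of the graph Gr_X([0,1]) per the stated formula:
    1 for alpha < 1; 2 - 1/alpha when alpha >= 1 and d = 1;
    min(d, alpha) when alpha >= 1 and d >= 2."""
    if alpha < 1:
        return 1.0
    if d == 1:
        return 2.0 - 1.0 / alpha
    return min(float(d), alpha)

# Brownian motion on the line (alpha = 2, d = 1): dimension 3/2.
print(graph_dim(2.0, 1))  # → 1.5
```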
2. Markov repairable systems with stochastic regimes switching (Cited: 5)
Authors: Liying Wang, Lirong Cui, Mingli Yu. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2011, Issue 5, pp. 773-779.
Abstract: Compared with the classical Markov repairable system, the Markov repairable system with stochastic regime switching introduced in this paper provides a more realistic description of practical systems. It can model the dynamics of a repairable system whose performance regimes switch according to external conditions; for example, power and communication systems usually adjust their operating regimes to track demand variation and reduce cost. The transition rate matrices under distinct operating regimes are assumed to be different, and the sojourn times in distinct regimes are governed by a finite-state Markov chain. Using the theory of Markov processes, ion channel theory, and Laplace transforms, the up time of the system is studied. A numerical example illustrates the results and examines the effect of the regime sojourn times on the availability and the up time.
Keywords: Markov repairable system; up time; stochastic regime switching; Markov process
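As a toy version of the regime-switching idea (the two-regime structure and all rates below are invented for illustration, not taken from the paper): a single repairable unit's failure/repair rates depend on an external two-state regime chain, and its long-run availability can be approximated by weighting the per-regime availability μ/(λ+μ) by the regime chain's stationary distribution, an approximation that is exact when regime switching is slow relative to failure and repair:

```python
def stationary_two_state(a: float, b: float) -> tuple:
    """Stationary distribution of a two-state CTMC with rates a (1->2), b (2->1)."""
    return b / (a + b), a / (a + b)

def availability(lam: float, mu: float) -> float:
    """Steady-state availability of one unit: failure rate lam, repair rate mu."""
    return mu / (lam + mu)

def regime_weighted_availability(rates, a, b):
    """Weight per-regime availabilities by the regime stationary probabilities."""
    pi = stationary_two_state(a, b)
    return sum(p * availability(lam, mu) for p, (lam, mu) in zip(pi, rates))

# Regime 1: harsh environment (lam=0.2); regime 2: mild (lam=0.05).
print(round(regime_weighted_availability([(0.2, 1.0), (0.05, 1.0)], a=0.1, b=0.1), 4))  # → 0.8929
```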
3. Exponential stability of impulsive jump linear systems with Markov process (Cited: 3)
Authors: Gao Liju, Wu Yuqiang. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2007, Issue 2, pp. 304-310.
Abstract: Exponential stability is investigated for a class of continuous-time linear systems with a finite-state Markov chain form process and impulsive jumps at the switching moments. The conditions, based on the average dwell time and the ratio of the expected total time spent on unstable subsystems to the expected total time spent on stable subsystems, assure exponential stability with a desired stability degree irrespective of the impact of the impulsive jumps. A uniform boundedness result is obtained for the case in which the switched system is subjected to the impulsive effect of the excitation signal at some switching moments.
Keywords: jump systems; exponential stability; average dwell time; Markov process
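A numeric check of the classic average dwell time bound τ_a > ln(μ)/λ0 can be sketched as follows; the bound and symbols follow the standard switched-systems literature (Lyapunov-function jump factor μ ≥ 1, common decay rate λ0 > 0), not necessarily the exact conditions derived in this paper:

```python
import math

def min_average_dwell_time(mu: float, lam0: float) -> float:
    """Classic bound: exponential stability is retained when the average dwell
    time exceeds ln(mu) / lam0 (mu >= 1 bounds the Lyapunov-function jump at
    switches, lam0 > 0 is the decay rate between switches)."""
    return math.log(mu) / lam0

def dwell_time_ok(switch_times, horizon, mu, lam0, n0=1.0):
    """Check N(0, T) <= n0 + T / tau_a for an observed switching signal."""
    tau_a = min_average_dwell_time(mu, lam0)
    return len(switch_times) <= n0 + horizon / tau_a

print(round(min_average_dwell_time(mu=2.0, lam0=0.5), 3))  # → 1.386
```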
4. Robust H∞ Control for Uncertain Markovian Jump Linear Time-Delay Systems (Cited: 2)
Authors: Zhong Maiying, Zhu Kunping, Tang Bingyong (Business and Management School, Donghua University, Shanghai 200051, P. R. China). Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2002, Issue 1, pp. 13-20.
Abstract: This paper studies robust stochastic stabilization and robust H∞ control for linear time-delay systems with both Markovian jump parameters and unknown norm-bounded parameter uncertainties. The problem is solved via a stochastic Lyapunov approach and the linear matrix inequality (LMI) technique. Sufficient conditions for the existence of a stochastically stabilizing, robust H∞ state feedback controller are presented in terms of a set of solutions of coupled LMIs. Finally, a numerical example demonstrates the practicability of the proposed methods.
Keywords: feedback control; linear algebra; linear equations; linear systems; Lyapunov methods; Markov processes; robustness (control systems)
5. Modeling and inferring 2.1D sketch with mixed Markov random field
Authors: Anlong Ming, Yu Zhou, Tianfu Wu. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2017, Issue 2, pp. 361-373.
Abstract: This paper presents a method of computing a 2.1D sketch (i.e., a layered image representation) from a single image with a mixed Markov random field (MRF) under the Bayesian framework. The model consists of three layers: the input image layer; the graphical representation layer of the computed 2D atomic regions and 3-degree junctions (such as T or arrow junctions); and the 2.1D sketch layer. There are two types of vertices in the graphical representation of the 2D entities: (i) regions, which act as the vertices found in a traditional MRF, and (ii) address variables assigned to the terminators decomposed from the 3-degree junctions, a new type of vertex for the mixed MRF. The inference problem is formulated as computing the 2.1D sketch from the 2D graphical representation under the Bayesian framework, with two components: (i) region layering/coloring based on the Swendsen-Wang cuts algorithm, which infers the partial occluding order of regions, and (ii) address variable assignment based on Gibbs sampling, which completes the open bonds of the terminators of the 3-degree junctions. The method is tested on the D-Order dataset, the Berkeley segmentation dataset, and the Stanford 3D dataset; the experimental results show the efficiency and robustness of the approach.
Keywords: graphic methods; image segmentation; inference engines; Markov processes; structural frames
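Gibbs sampling of the kind used for the address variables can be illustrated on a toy MRF fragment: two binary variables coupled by an attractive pairwise potential. The potential and all parameters below are invented for illustration and are far simpler than the paper's 2.1D model:

```python
import math, random

def gibbs_two_binary(beta: float, n_samples: int, seed: int = 0):
    """Gibbs sampler for p(x1, x2) ∝ exp(beta * (2*x1-1) * (2*x2-1)), x_i in {0,1}.
    Each sweep resamples each variable from its conditional given the other."""
    rng = random.Random(seed)
    x = [0, 1]
    samples = []
    for _ in range(n_samples):
        for i in (0, 1):
            other = 2 * x[1 - i] - 1                    # neighbor as a +/-1 spin
            p1 = 1.0 / (1.0 + math.exp(-2.0 * beta * other))  # P(x_i = 1 | other)
            x[i] = 1 if rng.random() < p1 else 0
        samples.append(tuple(x))
    return samples

samples = gibbs_two_binary(beta=1.0, n_samples=5000)
agree = sum(a == b for a, b in samples) / len(samples)
print(round(agree, 2))  # with beta=1 the exact agreement probability is ~0.88
```

With an attractive coupling (beta > 0) the two variables agree most of the time, which is the mechanism that lets address variables settle into consistent assignments.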
6. FTC of hidden Markov process with application to resource allocation in air operation
Authors: Neng Eva Wu, Matthew Charles Ruschmann. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2011, Issue 1, pp. 12-21.
Abstract: This paper investigates the feedback control of a hidden Markov process (HMP) in the face of loss of some observation processes. The control action facilitates or impedes particular transitions from an inferred current state in an attempt to maximize the probability that the HMP is driven to a desirable absorbing state. The control problem is motivated by the need for judicious resource allocation to win an air operation involving two opposing forces. The effectiveness of a receding horizon control scheme based on the inferred discrete state is examined. Tolerance to loss of the sensors that help determine the state of the air operation is achieved through a decentralized scheme that estimates a continuous state from measurements of linear models with additive noise. The discrete state of the HMP is identified using three well-known detection schemes. The sub-optimal control policy based on the detected state is implemented on-line in a closed loop, where the air operation is simulated as a stochastic process with SimEvents, and the measurement process is simulated for a range of single-sensor loss rates.
Keywords: hidden Markov process (HMP); decentralization; information fusion; fault-tolerant estimation; air operation; receding horizon control (RHC)
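State inference for an HMP can be sketched with the standard forward (filtering) algorithm; this is a generic filter, not the paper's three specific detection schemes, and the two-state model below is invented:

```python
def hmm_filter(A, B, pi, obs):
    """Forward algorithm: return P(state | observations so far) after each step.
    A[i][j]: transition prob i->j; B[i][k]: prob of emitting symbol k in state i."""
    n = len(pi)
    alpha = [pi[j] * B[j][obs[0]] for j in range(n)]
    z = sum(alpha)
    alpha = [a / z for a in alpha]          # normalize to a belief vector
    beliefs = [alpha[:]]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]         # predict, then weight by likelihood
        z = sum(alpha)
        alpha = [a / z for a in alpha]
        beliefs.append(alpha[:])
    return beliefs

# Two hidden states; state 1 is absorbing; noisy binary observations.
A = [[0.9, 0.1], [0.0, 1.0]]
B = [[0.8, 0.2], [0.2, 0.8]]
pi = [1.0, 0.0]
beliefs = hmm_filter(A, B, pi, [1, 1, 1])
print(round(beliefs[-1][1], 3))  # → 0.708
```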
7. Recorded recurrent deep reinforcement learning guidance laws for intercepting endoatmospheric maneuvering missiles (Cited: 1)
Authors: Xiaoqi Qiu, Peng Lai, Changsheng Gao, Wuxing Jing. Defence Technology, SCIE/EI/CAS/CSCD, 2024, Issue 1, pp. 457-470.
Abstract: This work proposes a recorded recurrent twin delayed deep deterministic (RRTD3) policy gradient algorithm to address the challenge of constructing guidance laws for intercepting endoatmospheric maneuvering missiles under uncertainties and observation noise. The attack-defense engagement scenario is modeled as a partially observable Markov decision process (POMDP). Given the benefits of recurrent neural networks (RNNs) in processing sequence information, an RNN layer is incorporated into the agent's policy network to alleviate the bottleneck that traditional deep reinforcement learning methods face on POMDPs. The measurements from the interceptor's seeker during each guidance cycle are combined into one sequence as the input to the policy network, since the detection frequency of an interceptor is usually higher than its guidance frequency. During training, the hidden states of the RNN layer in the policy network are recorded to overcome the partial observability that this RNN layer introduces inside the agent. The training curves show that the proposed RRTD3 improves data efficiency, training speed, and training stability, and the test results confirm the advantages of the RRTD3-based guidance laws over several conventional guidance laws.
Keywords: endoatmospheric interception; missile guidance; reinforcement learning; Markov decision process; recurrent neural networks
8. Fuzzy Q learning algorithm for dual-aircraft path planning to cooperatively detect targets by passive radars (Cited: 7)
Authors: Xiang Gao, Yangwang Fang, Youli Wu. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2013, Issue 5, pp. 800-810.
Abstract: The passive detection problem discussed in this paper involves searching for and locating an aerial emitter by two aircraft using passive radars. To improve the detection probability and accuracy, a fuzzy Q learning algorithm for dual-aircraft flight path planning is proposed. The passive detection task model of the dual aircraft is set up based on a partition of the target active radar's radiation area. The problem is formulated as a Markov decision process (MDP) by using fuzzy theory to generalize the state space and by defining the transition functions, action space, and reward function appropriately. Details of the path planning algorithm are presented. Simulation results indicate that the algorithm can provide adaptive strategies for the dual aircraft to control their flight paths to detect a non-maneuvering or maneuvering target.
Keywords: Markov decision process (MDP); fuzzy Q learning; dual-aircraft coordination; path planning; passive detection
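Tabular Q-learning on an MDP can be sketched with a toy deterministic chain (a plain, non-fuzzy variant of our own; the chain, rewards, and discount below are invented and far simpler than the paper's fuzzy-state formulation). Using exhaustive (s, a) sweeps with learning rate 1 reduces Q-learning to Q-value iteration, so convergence is guaranteed:

```python
def q_value_iteration(n_states=5, gamma=0.9, sweeps=50):
    """Q-learning with exhaustive (s, a) sweeps and learning rate 1 on a
    deterministic chain: action 0 moves left, action 1 moves right,
    reward 1.0 on entering the absorbing right end."""
    goal = n_states - 1
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(sweeps):
        for s in range(goal):                 # the goal state is absorbing
            for a in (0, 1):
                s2 = max(s - 1, 0) if a == 0 else s + 1
                r = 1.0 if s2 == goal else 0.0
                target = 0.0 if s2 == goal else max(Q[s2])
                Q[s][a] = r + gamma * target  # lr = 1 Bellman backup
    return Q

Q = q_value_iteration()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]
print(policy)  # → [1, 1, 1, 1]  (always move toward the goal)
```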
9. A guidance method for coplanar orbital interception based on reinforcement learning (Cited: 6)
Authors: ZENG Xin, ZHU Yanwei, YANG Leping, ZHANG Chengming. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2021, Issue 4, pp. 927-938.
Abstract: This paper investigates a guidance method based on reinforcement learning (RL) for coplanar orbital interception in a continuous low-thrust scenario. The problem is formulated as a Markov decision process (MDP) model, and a well-designed RL algorithm, experience-based deep deterministic policy gradient (EBDDPG), is proposed to solve it. By taking advantage of prior information generated through an optimal control model, the proposed algorithm not only resolves the convergence problem of common RL algorithms but also successfully trains an efficient deep neural network (DNN) controller for the chaser spacecraft to generate the control sequence. Numerical simulation results show that the proposed algorithm is feasible and that the trained DNN controller improves efficiency over traditional optimization methods by roughly two orders of magnitude.
Keywords: orbital interception; reinforcement learning (RL); Markov decision process (MDP); deep neural network (DNN)
10. Optimal index shooting policy for layered missile defense system (Cited: 2)
Authors: LI Longyue, FAN Chengli, XING Qinghua, XU Hailong, ZHAO Huizhen. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2020, Issue 1, pp. 118-129.
Abstract: To cope with the increasing threat of ballistic missiles (BMs) within a shorter reaction time, the shooting policy of a layered defense system needs to be optimized. The main decision-making problem of shooting optimization is choosing the next BM to shoot according to the previous engagements and their results, thus maximizing the expected return from BMs killed or minimizing the cost of BM penetration. Motivated by this, this study determines an optimal shooting policy for a two-layer missile defense (TLMD) system, considering a scenario in which the TLMD system shoots at a collection of BMs one at a time and seeks to maximize the return obtained from BMs killed before the system's demise. To provide a policy analysis tool, a general model for shooting decision-making is developed in which the shooting engagements are described as a discounted-reward Markov decision process. The index shooting policy is a strategy that effectively balances the shooting returns against the risk that the defense mission fails. Numerical results show that the index policy outperforms a range of competitors, especially in mean return and mean number of BMs killed.
Keywords: Gittins index; shooting policy; layered missile defense; multi-armed bandit problem; Markov decision process
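The flavor of the sequencing problem can be shown with a brute-force baseline on a drastically simplified model of our own (not the paper's: each engagement kills its BM with probability p earning reward r, and the defense system survives to the next engagement with a fixed probability c; no index computation, just exhaustive search over shooting orders):

```python
from itertools import permutations

def expected_return(order, c=0.8):
    """E[reward] when shooting targets in the given order; after each shot the
    system survives to the next engagement with probability c."""
    total, alive = 0.0, 1.0
    for p, r in order:
        total += alive * p * r     # reward collected only if still operating
        alive *= c
    return total

targets = [(0.9, 1.0), (0.5, 5.0), (0.7, 2.0)]   # (kill prob, reward) per BM
best = max(permutations(targets), key=expected_return)
print([t[1] for t in best])  # → [5.0, 2.0, 1.0]
```

Under this geometric-discounting toy model the optimal order simply ranks targets by p·r, which hints at why an index policy (one priority number per target) can be optimal in richer models.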
11. A novel dynamic call admission control policy for wireless network (Cited: 1)
Authors: HUANG Guosheng, CHEN Zhigang, LI Qinghua, ZHAO Ming, GUO Zhen. Journal of Central South University, SCIE/EI/CAS, 2010, Issue 1, pp. 110-116.
Abstract: To address resource scarcity in wireless communication, a novel dynamic call admission control scheme for wireless mobile networks was proposed. The scheme establishes a reward-computing model of call admission in a wireless cell based on a Markov decision process and dynamically optimizes the call admission process according to the principle of maximizing the average system reward. Extensive simulations examine the performance of the model against other policies in terms of new-call blocking probability, handoff-call dropping probability, and resource utilization rate. Experimental results show that the proposed scheme adapts better to changes in traffic conditions than existing protocols. Under high call traffic load, the handoff-call dropping probability and new-call blocking probability can be reduced by about 8%, and the resource utilization rate can be improved by 2%-6%, reaching about 85%.
Keywords: wireless network; call admission control; quality of service; Markov decision process
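The reward-maximizing admission idea can be sketched with value iteration on a uniformized single-cell toy model of our own (not the paper's model): one call class, reward R per accepted call, holding cost h per call in progress, and uniformized arrival/departure probabilities lam and mu per step:

```python
def admission_policy(K=10, lam=0.2, mu=0.5, R=5.0, h=1.0, gamma=0.95, iters=2000):
    """Discounted value iteration after uniformization (lam + mu <= 1 assumed).
    State = number of calls in the cell; decision = accept or block an arrival."""
    V = [0.0] * (K + 1)
    for _ in range(iters):
        nv = []
        for n in range(K + 1):
            accept = V[n + 1] + R if n < K else V[K]
            arrival = max(accept, V[n])          # blocking keeps the state
            depart = V[n - 1] if n > 0 else V[0]
            nv.append(-h * n + gamma * (lam * arrival + mu * depart
                                        + (1 - lam - mu) * V[n]))
        V = nv
    # accept an arrival iff the accept branch beats blocking
    return [n < K and V[n + 1] + R > V[n] for n in range(K + 1)]

policy = admission_policy()
print(policy[0], policy[-1])  # accept when empty; a full cell must block
```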
12. Distributed cooperative task planning algorithm for multiple satellites in delayed communication environment (Cited: 2)
Authors: Chong Wang, Jinhui Tang, Xiaohang Cheng, Yingchen Liu, Changchun Wang. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2016, Issue 3, pp. 619-633.
Abstract: Multiple Earth-observing satellites need to communicate with each other to jointly observe many targets on the Earth. Factors such as external interference cause delays in satellite information interaction, so the integrity and timeliness of the information used for decision making cannot be ensured, and the quality of the planning result suffers. Therefore, the effect of communication delay is considered in the multi-satellite coordination process. First, a distributed cooperative optimization problem for multiple satellites in the delayed communication environment is formulated. Second, based on an analysis of the temporal sequence of tasks in a single satellite and the dynamically decoupled characteristics of the multi-satellite system, the environment information of multi-satellite distributed cooperative optimization is constructed on the basis of a directed acyclic graph (DAG). A cooperative optimization decision-making framework and model are then built according to the decentralized partially observable Markov decision process (DEC-POMDP). After that, satellite coordination strategies for different communication-delay conditions are analyzed, and a unified strategy for handling communication delay is designed. An approximate cooperative optimization algorithm based on simulated annealing is proposed. Finally, the effectiveness and robustness of the method are verified via simulation.
Keywords: Earth-observing satellite (EOS); distributed cooperative task planning; delayed communication; decentralized partially observable Markov decision process (DEC-POMDP); simulated annealing
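The annealing component can be illustrated generically (toy permutation objective and geometric cooling of our own choosing; none of this reflects the paper's DEC-POMDP encoding):

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=3000, seed=0):
    """Generic SA: accept worse moves with probability exp(-delta / T),
    geometric cooling, and track the best solution seen."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy task: reorder 1..8 to minimize the number of adjacent descents.
cost = lambda p: sum(p[i] > p[i + 1] for i in range(len(p) - 1))
def neighbor(p, rng):
    i, j = rng.randrange(len(p)), rng.randrange(len(p))
    q = list(p); q[i], q[j] = q[j], q[i]
    return q

best, fbest = simulated_annealing(cost, neighbor, x0=[8, 7, 6, 5, 4, 3, 2, 1])
print(fbest)  # 0 once fully sorted; small otherwise
```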
13. A new analytical algorithm for computing probability distribution of project completion time
Authors: HOU Zhenting, ZHANG Xuan, KONG Xiangxing. Journal of Central South University, SCIE/EI/CAS, 2010, Issue 5, pp. 1006-1010.
Abstract: An analytical algorithm is presented for the exact computation of the probability distribution of the project completion time in stochastic networks, where the activity durations are mutually independent and continuously distributed random variables. First, stochastic activity networks are modeled as a continuous-time Markov process with a single absorbing state by the well-known method of supplementary variables; the time from the initial state to the absorbing state equals the project completion time. The Markov process is then regarded as a special case of a Markov skeleton process, and, by taking advantage of the backward equations of Markov skeleton processes, a backward algorithm is proposed to compute the probability distribution of the project completion time. Finally, a numerical example demonstrates the performance of the proposed methodology. The results show that the algorithm computes the exact distribution function of the project completion time, from which the expectation and variance are obtained.
Keywords: stochastic activity networks; project completion time; distribution function; Markov process; supplementary variable technique
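For the special case of a pure series network (activities in sequence with exponential durations and distinct rates), the completion-time distribution has a closed form, the hypoexponential CDF. A sketch of that textbook special case (not the paper's general backward algorithm):

```python
import math

def hypoexp_cdf(t: float, rates) -> float:
    """P(T <= t) for T = sum of independent Exp(rate_i) with distinct rates:
    P(T > t) = sum_i [prod_{j != i} r_j / (r_j - r_i)] * exp(-r_i * t)."""
    surv = 0.0
    for i, ri in enumerate(rates):
        coef = 1.0
        for j, rj in enumerate(rates):
            if j != i:
                coef *= rj / (rj - ri)
        surv += coef * math.exp(-ri * t)
    return 1.0 - surv

rates = [1.0, 2.0]                           # two activities in series
print(round(hypoexp_cdf(0.0, rates), 6))     # → 0.0
print(1.0 / 1.0 + 1.0 / 2.0)                 # expected completion time = 1.5
```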
14. Probabilistic Analysis and Multicriteria Decision for Machine Assignment Problem with General Service Times
Authors: Wang Jing. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 1994, Issue 1, pp. 53-61.
Abstract: This paper carries out a probabilistic analysis of a machine repair system with a general service-time distribution by means of generalized Markov renewal processes. Formulas for the steady-state performance measures, such as the distribution of queue sizes, the average queue length, and the degree of repairman utilization, are derived. Finally, the machine repair model and a multiple-criteria decision-making method are applied to the machine assignment problem with a general service-time distribution to determine the optimal number of machines serviced by one repairman.
Keywords: machine assignment problem; queueing model; multicriteria decision; Markov processes
15. Optimal policy for controlling two-server queueing systems with jockeying
Authors: LIN Bing, LIN Yuchen, BHATNAGAR Rohit. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2022, Issue 1, pp. 144-155.
Abstract: This paper studies the optimal policy for joint control of admission, routing, service, and jockeying in a queueing system consisting of two exponential servers in parallel. Jobs arrive according to a Poisson process. Upon each arrival, an admission/routing decision is made, and the accepted job is routed to one of the two servers, each associated with a queue. After each service completion, a server has the option of serving a job from its own queue, serving a jockeying job from the other queue, or staying idle. The system performance comprises the revenues from accepted jobs, the costs of holding jobs in queues, the service costs, and the job jockeying costs. To maximize the total expected discounted return, a Markov decision process (MDP) model is formulated for this system. The value iteration method is employed to characterize the optimal policy as a hedging point policy. Numerical studies verify the structure of the hedging point policy, which is convenient for implementing control actions in practice.
Keywords: queueing system; jockeying; optimal policy; Markov decision process (MDP); dynamic programming
16. Reliability Analysis of Some Typical Repairable Systems with Arbitrary Repair-Time Distribution
Authors: Wang Jing, Yang Deli (Institute of Systems Engineering, Dalian University of Technology, Dalian 116023, China). Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 1992, Issue 4, pp. 61-72.
Abstract: In this paper, the reliability of some typical non-Markov repairable systems, including series systems, m-out-of-n (majority vote) systems, and n:m cross-strapping standby redundant systems with general repair-time distributions, is studied by applying the generalized Markov renewal process (GMRP). The stochastic behavior of these typical systems is analyzed, and formulas for the mean time to first system failure, MTBF, MTTR, and availability are developed.
Keywords: repairable systems; reliability analysis; Markov processes
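For the much narrower i.i.d. case (independent identical units, exponential failure and repair, so unit availability is μ/(λ+μ)), the m-out-of-n steady-state availability reduces to a binomial sum; a sketch of that simplification, not the GMRP treatment used here:

```python
from math import comb

def unit_availability(lam: float, mu: float) -> float:
    """Steady-state availability of one repairable unit:
    failure rate lam, repair rate mu."""
    return mu / (lam + mu)

def m_out_of_n_availability(m: int, n: int, a: float) -> float:
    """System is up when at least m of n independent units are up."""
    return sum(comb(n, k) * a**k * (1 - a)**(n - k) for k in range(m, n + 1))

a = unit_availability(lam=0.1, mu=0.9)             # a = 0.9
print(round(m_out_of_n_availability(2, 3, a), 4))  # → 0.972
```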
17. A GMRP Approach for Reliability Analysis of Repairable Systems
Authors: Wang Jing, Yang Deli (Institute of Systems Engineering, Dalian University of Technology, Dalian 116023, China). Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 1991, Issue 2, pp. 44-54.
Abstract: We propose a mathematical approach for the study of repairable systems with arbitrary distributions. The idea is to define a new type of stochastic process, called a generalized Markov renewal process (GMRP), which can describe the transition behavior of a stochastic process at non-regenerative points. An analytical method for the GMRP is put forward, and formulas are presented for the reliability analysis of repairable systems that can be described by a GMRP with finite states. A signal flow graph technique for system modeling is also summarized. Finally, an analytical model to evaluate the reliability of an m-out-of-n:G system with a general repair-time distribution is developed by means of the GMRP approach.
Keywords: reliability evaluation; Markov process; stochastic processes
18. Computational analysis of (MAP_1,MAP_2)/(PH_1,PH_2)/N queues with finite buffer in wireless cellular networks
Authors: Zonghao Zhou, Yijun Zhu. Journal of Systems Engineering and Electronics, SCIE/EI/CSCD, 2011, Issue 5, pp. 739-748.
Abstract: This paper studies a queueing model with a finite buffer of capacity K in wireless cellular networks, which has two types of arriving calls, handoff calls and originating calls, both following Markov arrival processes with different rates. The channel holding times of the two types of calls follow different phase-type distributions. First, the joint distribution of the two queue lengths is derived; the dropping and blocking probabilities, the mean queue length, and the mean waiting time are then obtained from the joint distribution. Finally, numerical examples show the impact of different call arrival rates on the performance measures.
Keywords: wireless cellular network; queue; Markov arrival process (MAP); phase-type (PH) distribution; handoff call; originating call
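A degenerate special case of such models, Poisson arrivals, exponential holding times, N channels, and no buffer, gives the classic Erlang-B blocking probability, computable by a numerically stable recursion. This is our simplification for intuition; the paper's MAP/PH model requires matrix-analytic methods instead:

```python
def erlang_b(channels: int, offered_load: float) -> float:
    """Blocking probability B(c, a) via the recursion
    B(0) = 1;  B(c) = a * B(c-1) / (c + a * B(c-1))."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# 5 channels, offered load a = 2 Erlangs.
print(round(erlang_b(channels=5, offered_load=2.0), 4))  # → 0.0367
```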