Funding: supported by the National Natural Science Foundation of China (NSFC) (62102232, 62122042, 61971269) and the Natural Science Foundation of Shandong Province (ZR2021QF064).
Abstract: As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique that provides its users with fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, data aggregation from the end side to designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. First, to ensure that all end devices with various weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Unlike traditional data aggregation works, incorporating the MAB gives our algorithm higher efficiency in data aggregation. Through a theoretical analysis, we show that the efficiency of our algorithm is asymptotically optimal. Comparative experiments with previous works are also conducted to demonstrate the strength of our algorithm.
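The paper's own rate-selection algorithm is not reproduced in the abstract. As an illustrative sketch of the general idea of a device learning its update rate with a bandit, a standard UCB1 learner over a discretized set of candidate rates might look like the following; the candidate rates, the Bernoulli reward signal, and the simulated channel are all assumptions made for the example, not the paper's model.

```python
import math
import random

class UCB1RateSelector:
    """UCB1 bandit over a discretized set of candidate update rates.

    Each arm is a candidate update rate; the reward is assumed to be a
    Bernoulli signal (e.g., 1 if the device's update was aggregated
    without excessive channel contention, 0 otherwise).
    """

    def __init__(self, candidate_rates):
        self.rates = candidate_rates
        self.counts = [0] * len(candidate_rates)   # pulls per arm
        self.sums = [0.0] * len(candidate_rates)   # cumulative reward per arm
        self.t = 0                                 # total rounds played

    def select(self):
        self.t += 1
        # Play each arm once before applying the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        # UCB1 index: empirical mean + exploration bonus.
        ucb = [self.sums[i] / self.counts[i]
               + math.sqrt(2.0 * math.log(self.t) / self.counts[i])
               for i in range(len(self.rates))]
        return max(range(len(self.rates)), key=lambda i: ucb[i])

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

# Toy usage: a simulated channel where moderate rates succeed most often.
def simulated_reward(rate):
    return 1.0 if random.random() < 1.0 - abs(rate - 0.4) else 0.0

selector = UCB1RateSelector(candidate_rates=[0.1, 0.2, 0.4, 0.6, 0.8])
for _ in range(5000):
    arm = selector.select()
    selector.update(arm, simulated_reward(selector.rates[arm]))
print("preferred update rate:", selector.rates[max(
    range(len(selector.rates)), key=lambda i: selector.counts[i])])
```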
Abstract: The process of making decisions is something humans do inherently and routinely, to the extent that it appears commonplace. However, in order to achieve good overall performance, decisions must take into account both the outcomes of past decisions and the opportunities of future ones. Reinforcement learning, which is fundamental to sequential decision-making, consists of the following components: (1) a set of decision epochs; (2) a set of environment states; (3) a set of available actions for transitioning between states; (4) state-action-dependent immediate rewards for each action. At each decision epoch, the environment state provides the decision maker with a set of available actions from which to choose. As a result of selecting a particular action in that state, the environment generates an immediate reward for the decision maker and shifts to a different state for the next decision. The ultimate goal of the decision maker is to maximize the total reward over a sequence of time steps. This paper focuses on an archetypal example of reinforcement learning, the stochastic multi-armed bandit problem. After introducing the dilemma, I briefly cover the most common methods used to solve it, namely the UCB and ε_n-greedy algorithms. I also introduce my own greedy implementation, the strict-greedy algorithm, which more tightly follows the greedy pattern in algorithm design, and show that it runs comparably to the two accepted algorithms.
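The strict-greedy algorithm is not specified in the abstract, but one of the two accepted baselines it is compared against, ε_n-greedy, can be sketched compactly: explore uniformly with a probability that decays over time, otherwise exploit the empirically best arm. This is a minimal sketch assuming Bernoulli arms; the decay schedule constant and the toy arm probabilities are assumptions for the example.

```python
import random

def epsilon_n_greedy(pull, n_arms, horizon, c=5.0):
    """eps_n-greedy: explore with probability eps_n = min(1, c*K/n),
    which decays over time, and otherwise exploit the empirical best arm.

    `pull(arm)` returns a stochastic reward for the chosen arm.
    """
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for n in range(1, horizon + 1):
        eps = min(1.0, c * n_arms / n)          # decaying exploration rate
        if random.random() < eps:
            arm = random.randrange(n_arms)      # explore uniformly
        else:
            arm = max(range(n_arms), key=lambda i: means[i])  # exploit
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean
        total += r
    return total

# Toy usage with three Bernoulli arms.
probs = [0.2, 0.5, 0.8]
reward = epsilon_n_greedy(lambda a: 1.0 if random.random() < probs[a] else 0.0,
                          n_arms=3, horizon=10000)
print("total reward:", reward)
```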
Abstract: Monte Carlo tree search (MCTS) combines the feedback-driven optimization of reinforcement learning with the dynamic programming of a growing tree, outputting the best action for the current state while greatly reducing the amount of computation, and has therefore become a key general-purpose method for intelligent systems in many domains in open environments. However, because computational resources are scarce or expensive, a fully exhaustive search of the tree structure is infeasible, so efficiently and sensibly allocating a limited computational budget to obtain the optimal action for the current state is an important research problem. Most existing algorithms take only identification accuracy as the performance metric and validate performance through experimental comparison, lacking an analysis of the identification error and its influencing factors, which reduces the trustworthiness and interpretability of those algorithms. To address this problem, focusing on the fundamental setting of two-player, perfect-information, zero-sum games, this paper proposes DLU, an optimal-action identification algorithm for an abstract MCTS model under a fixed budget (relative entropy confidence interval based pure exploration). First, an estimation method based on relative entropy confidence intervals is proposed to estimate the win rates of leaf nodes, improving the accuracy of tree-node value estimates from the bottom up. Second, the value estimation for first-layer nodes and the optimal-node selection strategy are given to form the complete algorithm. Then, an upper bound on the identification error of the DLU algorithm is derived, and the factors affecting its performance are analyzed. Finally, the algorithm's performance is validated in two settings: synthetic tree models and tic-tac-toe. The experimental results show that on synthetic tree models the relative-entropy-based algorithms achieve higher accuracy, and the more complex the model and the harder the identification task, the more pronounced their performance advantage. In the tic-tac-toe setting, the DLU algorithm effectively identifies the optimal action.
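The DLU algorithm itself is detailed in the paper rather than the abstract, but the relative entropy confidence interval it builds on is, for a Bernoulli win rate, the standard Chernoff/KL interval: the set {q : N · d(p̂, q) ≤ β}, where d is the binary KL divergence and N the number of rollouts. A minimal sketch of computing the interval's upper endpoint by bisection follows; the function names and the threshold β are assumptions made for the example.

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """Binary relative entropy d(p, q) between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_upper_bound(p_hat, n, beta, tol=1e-9):
    """Upper endpoint of {q >= p_hat : n * d(p_hat, q) <= beta}, by bisection.

    p_hat : empirical win rate of a leaf node
    n     : number of rollouts at that node
    beta  : confidence threshold (e.g., log(1/delta))
    """
    lo, hi = p_hat, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if n * kl_bernoulli(p_hat, mid) <= beta:
            lo = mid      # mid is still inside the confidence set
        else:
            hi = mid      # mid is outside; shrink from above
    return lo

# Toy usage: 30 wins out of 50 rollouts, threshold beta = log(1/0.05).
print(kl_upper_bound(p_hat=0.6, n=50, beta=math.log(1 / 0.05)))
```

Because d(p̂, q) is increasing in q for q ≥ p̂, the bisection converges to the unique boundary point; the lower endpoint is obtained symmetrically on [0, p̂].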
Funding: supported by the National Natural Science Foundation of China (71701209, 71771216), the Shaanxi Natural Science Foundation (2019JQ-250), and the China Postdoctoral Fund (2019M653962).
Abstract: In order to cope with the increasing threat of ballistic missiles (BMs) in a shorter reaction time, the shooting policy of the layered defense system needs to be optimized. The main decision-making problem of shooting optimization is how to choose the next BM to shoot at according to the previous engagements and their results, thus maximizing the expected return from BMs killed or minimizing the cost of BM penetration. Motivated by this, this study aims to determine an optimal shooting policy for a two-layer missile defense (TLMD) system. This paper considers a scenario in which the TLMD system wishes to shoot at a collection of BMs one at a time and to maximize the return obtained from BMs killed before the system's demise. To provide a policy analysis tool, this paper develops a general model for shooting decision-making in which the shooting engagements are described as a discounted-reward Markov decision process. The index shooting policy is a strategy that effectively balances the shooting returns against the risk that the defense mission fails, with the goal of maximizing the return obtained from BMs killed before the system's demise. The numerical results show that the index policy outperforms a range of competitors, especially in mean return and mean number of BMs killed.
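The paper's index policy is not fully specified in the abstract. As background, a discounted-reward Markov decision process of the kind it builds on can, for small state spaces, be solved exactly by value iteration. The following is a minimal sketch; the two-state engagement model, transition probabilities, and rewards are toy assumptions for illustration, not the paper's model.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite discounted-reward MDP.

    P : array of shape (A, S, S), P[a, s, s'] = transition probability
    R : array of shape (A, S), immediate reward for action a in state s
    Returns the optimal value function and a greedy policy.
    """
    V = np.zeros(P.shape[1])
    while True:
        # Q[a, s] = R[a, s] + gamma * E[V(s') | s, a]
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=0)

# Toy engagement model (assumed numbers): state 0 = defense system intact,
# state 1 = system demise (absorbing, zero reward).
# Action 0: engage a high-value BM (higher reward, higher risk of demise);
# action 1: engage a low-value BM (lower reward, lower risk).
P = np.array([[[0.7, 0.3], [0.0, 1.0]],
              [[0.9, 0.1], [0.0, 1.0]]])
R = np.array([[5.0, 0.0],
              [2.0, 0.0]])
V, policy = value_iteration(P, R)
print("optimal values:", V, "greedy policy:", policy)
```

An index policy in this spirit would assign each BM a scalar index trading off its kill return against the demise risk of engaging it, then always shoot the BM with the highest index; the sketch above only shows the underlying MDP machinery.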