Journal Articles
3,348 articles found
1. Machine learning models for optimization, validation, and prediction of light emitting diodes with kinetin based basal medium for in vitro regeneration of upland cotton (Gossypium hirsutum L.)
Authors: ÖZKAT Gözde Yalçın, AASIM Muhammad, BAKHSH Allah, ALI Seyid Amjad, ÖZCAN Sebahattin. Journal of Cotton Research, 2025, No. 2, pp. 228-241 (14 pages)
Background: Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes it difficult to develop in vitro regeneration. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7–8 day old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2 MS, 1/4 MS, and full MS + 0.05 mg·L^(-1) KIN, and cultured in a growth room illuminated with a combination of red and blue light-emitting diodes (LED). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of different treatments on shoot regeneration, with artificial intelligence (AI) models used to confirm the findings. Results: GSN-12 exhibited superior shoot regeneration potential compared with STN-468, with an average of 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4 MS postconditioning, and 80% red LED, with a maximum shoot count of 7.75 for GSN-12 under these conditions, while STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boosting (XGBoost), random forest (RF), and the artificial neural network-based multilayer perceptron (MLP) regression model, validated the findings. Conclusion: GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4 MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps improve cotton regeneration efficiency. (A minimal regression sketch follows this entry.)
Keywords: machine learning; cotton; in vitro regeneration; light-emitting diodes; optimization; kinetin
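A minimal sketch of the kind of regression validation the abstract describes, assuming synthetic stand-in data: XGBoost, random forest, and MLP regressors map culture factors to shoot count. The factor encoding, value ranges, model settings, and the availability of the xgboost package are assumptions for illustration, not the study's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRegressor

# Synthetic stand-in data: four input factors and a made-up shoot count.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.choice([0, 1], n),              # cultivar (0 = STN-468, 1 = GSN-12)
    rng.choice([5, 10, 20], n),         # KIN preconditioning, mg/L
    rng.choice([0.25, 0.5, 1.0], n),    # MS strength in postconditioning
    rng.uniform(50, 90, n),             # % red LED
])
y = 3 + 0.8 * X[:, 0] + 0.02 * X[:, 3] + rng.normal(0, 0.5, n)  # synthetic target

models = {
    "XGBoost": XGBRegressor(n_estimators=200, max_depth=3),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 16),
                                      max_iter=2000, random_state=0)),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```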
2. A Boltzmann-optimized Q-learning handover control algorithm for high-speed railways (cited: 2)
Authors: CHEN Yong, KANG Jie. 《控制理论与应用》 (Control Theory & Applications), PKU Core, 2025, No. 4, pp. 688-694 (7 pages)
To address the low handover success rate in 5G-R high-speed railway handover, which stems from using a fixed handover threshold and ignoring effects such as co-channel interference and ping-pong handover, a Boltzmann-optimized Q-learning handover control algorithm is proposed. First, a Q-table indexed by train position-action pairs is designed, and the Q-learning reward function is constructed by jointly considering ping-pong handover, bit error rate, and other factors. Then, a Boltzmann exploration strategy is introduced to optimize action selection and improve the convergence of the handover algorithm. Finally, the Q-table is updated with the co-channel interference of base stations taken into account, yielding the handover decision parameters that control handover execution. Simulation results show that, at different running speeds and in different operating scenarios, the improved algorithm effectively raises the handover success rate compared with conventional algorithms and meets the quality-of-service (QoS) requirements of wireless communication. (A toy sketch of Boltzmann action selection in Q-learning follows this entry.)
Keywords: handover; 5G-R; Q-learning algorithm; Boltzmann optimization strategy
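A toy sketch of the core mechanism named in the abstract: tabular Q-learning with Boltzmann (softmax) action selection. The state/action sets, reward, environment, and temperature are invented placeholders, not the paper's 5G-R handover model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 50, 5          # e.g. discretized train positions x handover actions
Q = np.zeros((n_states, n_actions))
alpha, gamma, tau = 0.1, 0.9, 0.5    # learning rate, discount, temperature

def boltzmann_action(q_row, tau):
    """Sample an action with probability proportional to exp(Q/tau)."""
    prefs = (q_row - q_row.max()) / tau      # shift for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return rng.choice(len(q_row), p=probs)

def step(state, action):
    """Placeholder environment: returns (next_state, reward)."""
    next_state = min(state + 1, n_states - 1)
    reward = -1.0 if action == 0 else rng.normal()   # stand-in for the paper's weighted reward
    return next_state, reward

for episode in range(200):
    s = 0
    while s < n_states - 1:
        a = boltzmann_action(Q[s], tau)
        s_next, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
```

Unlike ε-greedy, the softmax rule explores in proportion to the current value estimates, which is the convergence-oriented design choice the abstract highlights.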
3. Low rank optimization for efficient deep learning: making a balance between compact architecture and fast training
Authors: OU Xinwei, CHEN Zhangxin, ZHU Ce, LIU Yipeng. Journal of Systems Engineering and Electronics, SCIE CSCD, 2024, No. 3, pp. 509-531, F0002 (24 pages)
Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage cost make deep learning difficult to use on resource-constrained devices, and it is not environmentally friendly because of its high power cost. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement with a smaller number of network parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. With a series of integrable techniques discussed, such as sparse pruning, quantization, and entropy coding, they can be ensembled in an integrated framework with lower computational complexity and storage. In addition to a summary of recent technical advances, we present two findings to motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms other conventional sparse measures such as the ℓ_1 norm for network compression. The other is a spatial and temporal balance for tensorized neural networks: for accelerating the training of tensorized neural networks, it is crucial to leverage redundancy for both model compression and subspace training. (A short sketch of the effective-rank computation follows this entry.)
Keywords: model compression; subspace training; effective rank; low-rank tensor optimization; efficient deep learning
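A short sketch of the effective-rank measure mentioned in the abstract, using the usual definition based on the Shannon entropy of the normalized singular values; the example weight matrix is random and purely illustrative.

```python
import numpy as np

def effective_rank(mat, eps=1e-12):
    """exp of the Shannon entropy of the normalized singular values."""
    s = np.linalg.svd(mat, compute_uv=False)
    p = s / (s.sum() + eps)                  # normalized singular values
    entropy = -(p * np.log(p + eps)).sum()   # Shannon entropy
    return float(np.exp(entropy))

W = np.random.randn(256, 64) @ np.random.randn(64, 512)  # at most rank 64
print(effective_rank(W))         # typically well below min(256, 512)
print(np.linalg.matrix_rank(W))  # ordinary rank for comparison
```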
4. Hybrid Q-learning for data-based optimal control of non-linear switching system (cited: 1)
Authors: LI Xiaofeng, DONG Lu, SUN Changyin. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2022, No. 5, pp. 1186-1194 (9 pages)
In this paper, the optimal control of non-linear switching systems is investigated without knowing the system dynamics. First, the Hamilton-Jacobi-Bellman (HJB) equation is derived with consideration of the hybrid action space. Then, a novel data-based hybrid Q-learning (HQL) algorithm is proposed to find the optimal solution in an iterative manner. In addition, theoretical analysis is provided to establish the convergence and optimality of the proposed algorithm. Finally, the algorithm is implemented with an actor-critic (AC) structure, and two linear-in-parameter neural networks are utilized to approximate the functions. Simulation results validate the effectiveness of the data-driven method.
Keywords: switching system; hybrid action space; optimal control; reinforcement learning; hybrid Q-learning (HQL)
5. Finding optimal Bayesian networks by a layered learning method (cited: 4)
Authors: YANG Yu, GAO Xiaoguang, GUO Zhigao. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2019, No. 5, pp. 946-958 (13 pages)
It is impractical to learn the optimal structure of a big Bayesian network (BN) by exhausting the feasible structures, since the number of feasible structures is super-exponential in the number of nodes. This paper proposes an approach to layer the nodes of a BN using conditional independence testing. The parents of a node in a layer can only belong to that layer or to layers that have priority over it. Once a set of nodes has been layered, the number of feasible structures over the nodes can be remarkably reduced, which makes it possible to learn optimal BN structures for larger numbers of nodes with exact algorithms. Integrating the dynamic programming (DP) algorithm with the layering approach, we propose a hybrid algorithm, layered optimal learning (LOL), to learn BN structures. Benefitting from the layering approach, the complexity of the DP algorithm reduces to O(ρ·2^(n-1)) from O(n·2^(n-1)), where ρ < n. Meanwhile, the memory requirement for storing intermediate results is limited to O(C(k#, k#/2)) from O(C(n, n/2)), where k# < n. A case study on learning a standard BN with 50 nodes is conducted. The results demonstrate the superiority of the LOL algorithm, with respect to the Bayesian information criterion (BIC) score, over the hill-climbing, max-min hill-climbing, PC, and three-phase dependency analysis algorithms.
Keywords: Bayesian network (BN); structure learning; layered optimal learning (LOL)
6. Cognitive interference decision method for air defense missile fuze based on reinforcement learning (cited: 1)
Authors: Dingkun Huang, Xiaopeng Yan, Jian Dai, Xinwei Wang, Yangtian Liu. Defence Technology, SCIE EI CAS CSCD, 2024, No. 2, pp. 393-404 (12 pages)
To solve the problem of the low interference success rate of air defense missile radio fuzes caused by the unified interference form of traditional fuze interference systems, an interference decision method based on the Q-learning algorithm is proposed. First, the distance between the missile and the target is divided into multiple states to enlarge the state space. Second, a multidimensional motion space, whose search range changes with the distance of the projectile, is used to select parameters and minimize the number of ineffective interference parameters. The interference effect is determined by detecting whether the fuze signal disappears. Finally, a weighted reward function is used to determine the reward value based on the range state, output power, and parameter-quantity information of the interference form. The effectiveness of the proposed method in selecting the range of motion-space parameters and designing the discrimination degree of the reward function has been verified through offline experiments involving full-range missile rendezvous, and the optimal interference form for each distance state has been obtained. Compared with the single-interference decision method, the proposed decision method can effectively improve the success rate of interference.
Keywords: cognitive radio; interference decision; radio fuze; reinforcement learning; interference strategy optimization
7. Bayesian network learning algorithm based on unconstrained optimization and ant colony optimization (cited: 3)
Authors: Chunfeng Wang, Sanyang Liu, Mingmin Zhu. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2012, No. 5, pp. 784-790 (7 pages)
Structure learning of Bayesian networks is a well-researched but computationally hard task. For learning Bayesian networks, this paper proposes an improved algorithm based on unconstrained optimization and ant colony optimization (U-ACO-B) to overcome the drawbacks of the ant colony optimization algorithm for BN learning (ACO-B). In this algorithm, an unconstrained optimization problem is first solved to obtain an undirected skeleton, and then the ACO algorithm is used to orientate the edges, thus returning the final structure. In the experimental part of the paper, we compare the performance of the proposed algorithm with the ACO-B algorithm. The experimental results show that our method is effective and converges considerably faster than the ACO-B algorithm.
Keywords: Bayesian network; structure learning; ant colony optimization; unconstrained optimization
8. Reinforcement learning based parameter optimization of active disturbance rejection control for autonomous underwater vehicle (cited: 3)
Authors: SONG Wanping, CHEN Zengqiang, SUN Mingwei, SUN Qinglin. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2022, No. 1, pp. 170-179 (10 pages)
This paper proposes a linear active disturbance rejection control (LADRC) method based on the Q-learning algorithm of reinforcement learning (RL) to control the six-degree-of-freedom motion of an autonomous underwater vehicle (AUV). The number of controllers is increased to realize AUV motion decoupling. At the same time, to keep the algorithm from becoming oversized, a simplified Q-learning algorithm tailored to the controlled variables is constructed to realize parameter adaptation of the LADRC controller. Finally, through simulation experiments comparing the fixed-parameter controller with the controller based on the Q-learning algorithm, the rationality of the simplified algorithm, the effectiveness of parameter adaptation, and the unique advantages of the LADRC controller are verified.
Keywords: autonomous underwater vehicle (AUV); reinforcement learning (RL); Q-learning; linear active disturbance rejection control (LADRC); motion decoupling; parameter optimization
9. LSTM-DPPO based deep reinforcement learning controller for path following optimization of unmanned surface vehicle (cited: 3)
Authors: XIA Jiawei, ZHU Xufang, LIU Zhong, XIA Qingtao. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2023, No. 5, pp. 1343-1358 (16 pages)
To solve the path-following control problem for unmanned surface vehicles (USVs), a control method based on deep reinforcement learning (DRL) with long short-term memory (LSTM) networks is proposed. A distributed proximal policy optimization (DPPO) algorithm, a modified actor-critic type of reinforcement learning algorithm, is adapted to improve the controller performance over repeated trials. The LSTM network structure is introduced to handle the strong temporal correlation in the USV control problem. In addition, a specially designed path dataset, including straight and curved paths, is established to simulate various sailing scenarios so that the reinforcement learning controller can obtain as much handling experience as possible. Extensive numerical simulation results demonstrate that the proposed method achieves better control performance in missions involving complex maneuvers than controllers trained with limited scenarios, and it can potentially be applied in practice.
Keywords: unmanned surface vehicle (USV); deep reinforcement learning (DRL); path following; path dataset; proximal policy optimization; long short-term memory (LSTM)
10. Thickness of excavation damaged zone estimation using four novel hybrid ensemble learning models: A case study of Xiangxi Gold Mine and Fankou Lead-zinc Mine in China
Authors: LIU Lei-lei, HONG Zhi-xian, ZHAO Guo-yan, LIANG Wei-zhang. Journal of Central South University, CSCD, 2024, No. 11, pp. 3965-3982 (18 pages)
Underground excavation can lead to stress redistribution and result in an excavation damaged zone (EDZ), which is an important factor affecting excavation stability and support design. Accurately estimating the thickness of the EDZ is essential to ensure the safety of underground excavation. In this study, four novel hybrid ensemble learning models were developed by optimizing the extreme gradient boosting (XGBoost) and random forest (RF) algorithms through simulated annealing (SA) and Bayesian optimization (BO) approaches, namely the SA-XGBoost, SA-RF, BO-XGBoost and BO-RF models. A total of 210 cases were collected from Xiangxi Gold Mine in Hunan Province and Fankou Lead-zinc Mine in Guangdong Province, China, including seven input indicators: embedding depth, drift span, uniaxial compressive strength of rock, rock mass rating, unit weight of rock, lateral pressure coefficient of roadway, and unit consumption of blasting explosive. The performance of the proposed models was evaluated by the coefficient of determination, root mean squared error, mean absolute error, and variance accounted for. The results indicated that the SA-XGBoost model performed best. The Shapley additive explanations method revealed that the embedding depth was the most important indicator. Moreover, the convergence curves suggested that the SA-XGBoost model can reduce the generalization error and avoid overfitting. (A toy simulated-annealing hyperparameter search follows this entry.)
Keywords: excavation damaged zone; machine learning; simulated annealing; Bayesian optimization; extreme gradient boosting; random forest
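An illustrative sketch of the SA-driven hyperparameter search idea, assuming synthetic data and a random-forest surrogate; the parameter ranges, cooling schedule, and scoring are invented for the example and are not the paper's configuration.

```python
import math
import random
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 210 cases with seven indicators.
X, y = make_regression(n_samples=210, n_features=7, noise=10.0, random_state=0)

def score(params):
    model = RandomForestRegressor(n_estimators=params["n_estimators"],
                                  max_depth=params["max_depth"],
                                  random_state=0)
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

def neighbor(params):
    # small random move in the hyperparameter space
    return {"n_estimators": max(10, params["n_estimators"] + random.choice([-20, 20])),
            "max_depth": max(2, params["max_depth"] + random.choice([-1, 1]))}

random.seed(0)
current = {"n_estimators": 100, "max_depth": 5}
current_score, temp = score(current), 1.0
best, best_score = dict(current), current_score
for _ in range(30):                       # short annealing run
    cand = neighbor(current)
    cand_score = score(cand)
    # accept better candidates always, worse ones with temperature-dependent probability
    if cand_score > current_score or random.random() < math.exp((cand_score - current_score) / temp):
        current, current_score = cand, cand_score
        if cand_score > best_score:
            best, best_score = dict(cand), cand_score
    temp *= 0.9                           # geometric cooling
print(best, round(best_score, 3))
```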
11. A Bayesian Network Learning Algorithm Based on Independence Test and Ant Colony Optimization (cited: 21)
Authors: JI Jun-Zhong, ZHANG Hong-Xun, HU Ren-Bing, LIU Chun-Nian. 《自动化学报》 (Acta Automatica Sinica), EI CSCD, PKU Core, 2009, No. 3, pp. 281-288 (8 pages)
Keywords: optimization; stochastic systems; automation; BN
12. Optimal scheduling of multi-reservoir systems based on n-step Q-learning under the discrete four-reservoir benchmark problem (cited: 5)
Authors: HU Hexuan, QIAN Zeyu, HU Qiang, ZHANG Ye. 《中国水利水电科学研究院学报(中英文)》 (Journal of China Institute of Water Resources and Hydropower Research), PKU Core, 2023, No. 2, pp. 138-147 (10 pages)
Reservoir optimal scheduling is an optimization problem with the Markov property. Reinforcement learning is currently a research hotspot for solving Markov decision process problems and performs well on single-reservoir scheduling, but the complexity of multi-reservoir systems makes its application difficult. For the complex multi-reservoir scheduling problem, an optimal scheduling method based on n-step Q-learning under the discrete four-reservoir benchmark problem is proposed. Based on the n-step Q-learning algorithm, a reinforcement learning model of multi-reservoir scheduling is built for the discrete four-reservoir benchmark, and the optimal scheduling scheme is generated by optimizing exploration experience. Experimental results show that, given sufficient exploration experience for learning, one-step Q-learning combined with a penalty function can reach the theoretical optimum. Replacing the penalty function with the feasible direction method to enforce constraints, and building a per-time-step feasible-state table and a per-time-step state-action hash table according to the benchmark constraints, effectively reduces the dimensionality of the state-action space and greatly shortens the optimization time. Different exploration strategies determine the effectiveness of exploration experience and hence the optimization efficiency; for the complex multi-reservoir scheduling problem in particular, an improved ε-greedy strategy is proposed and compared with the traditional ε-greedy, upper confidence bound (UCB), and Boltzmann exploration strategies, verifying its effectiveness. On this basis, n-step returns are introduced to upgrade the method to n-step Q-learning, and suitable hyperparameters such as the number of steps n and the learning rate are determined to further improve optimization efficiency. (A toy n-step Q-learning update sketch follows this entry.)
Keywords: reservoir optimal scheduling; reinforcement learning; Q-learning; penalty function; feasible direction method
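A toy sketch of the n-step Q-learning update on a made-up chain environment; it only illustrates how an n-step return replaces the one-step target and carries none of the reservoir model, constraints, or exploration strategy studied in the paper.

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, n_step = 20, 3, 4
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def env_step(s, a):
    """Placeholder chain environment: action 2 tends to move forward faster."""
    s_next = min(s + (1 if a == 2 else 0) + rng.integers(0, 2), n_states - 1)
    r = 1.0 if s_next == n_states - 1 else -0.01
    return s_next, r, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    buffer = deque()                      # last n (state, action, reward) triples
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r, done = env_step(s, a)
        buffer.append((s, a, r))
        if len(buffer) == n_step:         # enough transitions for an n-step return
            G = sum(gamma**k * buffer[k][2] for k in range(n_step))
            G += 0.0 if done else gamma**n_step * Q[s_next].max()
            s0, a0, _ = buffer.popleft()
            Q[s0, a0] += alpha * (G - Q[s0, a0])
        s = s_next
    # flush remaining transitions at episode end with shorter (non-bootstrapped) returns
    while buffer:
        G = sum(gamma**k * buffer[k][2] for k in range(len(buffer)))
        s0, a0, _ = buffer.popleft()
        Q[s0, a0] += alpha * (G - Q[s0, a0])
```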
13. Q-learning based resource optimization and scheduling for the Industrial Internet (cited: 3)
Authors: ZHANG Yanhua, YANG Le, LI Meng, WU Wenjun, YANG Ruizhe, SI Pengbo. 《北京工业大学学报》 (Journal of Beijing University of Technology), CAS CSCD, PKU Core, 2020, No. 11, pp. 1213-1221 (9 pages)
Facing the growing demand for data transmission and computation in 5G and the Industrial Internet, mobile edge computing has gradually become an emerging solution that can compensate for the limited computing capability of Industrial Internet devices and relieve problems such as network congestion. However, when a large number of devices send computation requests simultaneously, the computing load of edge servers is often exceeded. In addition, Industrial Internet devices are usually equipped with only a limited energy supply and cannot afford tasks with excessive energy consumption, and the huge number of devices also drives system overheads such as network connection and data computation. Therefore, for the computation task offloading problem of machine-type communication devices in Industrial Internet scenarios, a Q-learning based offloading decision method is proposed that takes into account the network environment and server state during offloading and jointly optimizes the delay, energy consumption, and economic cost incurred by offloading. Simulation results show that the proposed optimization framework effectively reduces the total delay, energy, and economic cost of the computation task offloading system.
Keywords: resource optimization; computation task offloading; Industrial Internet; mobile edge computing; Q-learning; machine-type communication devices
14. Daily stochastic optimal scheduling of a combined wind power and pumped-storage system based on the n-step Q-learning algorithm (cited: 7)
Authors: LI Wenwu, MA Haoyun, HE Zhonghao, XU Kang. 《水电能源科学》 (Water Resources and Power), PKU Core, 2022, No. 1, pp. 206-210 (5 pages)
To address the large power deviation and slow convergence of the Q-learning algorithm in solving the daily stochastic optimal scheduling of a combined wind power and pumped-storage system, a daily stochastic scheduling method based on the n-step Q-learning algorithm is proposed. First, the stochastic process of wind power output is treated as a Markov process and a daily stochastic scheduling model of the wind-pumped-storage system is established; next, the advantages of applying the n-step Q-learning algorithm to the scheduling model are analyzed; finally, the scheduling model is solved following the application workflow. A case study shows that the optimization result of n-step Q-learning depends on the values of the step number n and the learning rate, and that the best power-deviation result is obtained when both parameters take moderate values. Compared with standard Q-learning on this problem, n-step Q-learning converges faster, reduces the power deviation by 7.4%, and reduces the solution time by 10.4%, verifying its superiority.
Keywords: stochastic optimal scheduling of wind and pumped storage; reinforcement learning; Q-learning algorithm; n-step bootstrapping
15. A Q-learning based power control algorithm for femtocells (cited: 2)
Authors: LI Yun, TANG Ying, LIU Hanxiao. 《电子与信息学报》 (Journal of Electronics & Information Technology), EI CSCD, PKU Core, 2019, No. 11, pp. 2557-2564 (8 pages)
This paper studies the power control problem of mobile users in macro-femto heterogeneous cellular networks. First, an optimization model is established that maximizes the total energy efficiency of the femtocell subject to a minimum received signal-to-interference-plus-noise ratio constraint. Then, a centralized power control algorithm for femtocells based on Q-learning (PCQL) is proposed; built on reinforcement learning, it can adjust the transmit power of all user terminals in the cell in a unified way without accurate channel state information. Simulation results show that the algorithm achieves effective power control of user terminals and improves the system energy efficiency.
Keywords: centralized power control; Q-learning algorithm; energy efficiency optimization
16. Fault diagnosis model based on multi-manifold learning and PSO-SVM for machinery (cited: 6)
Authors: Wang Hongjun, Xu Xiaoli, Rosen B G. 《仪器仪表学报》 (Chinese Journal of Scientific Instrument), EI CAS CSCD, PKU Core, 2014, Supplement 2, pp. 210-214 (5 pages)
Fault diagnosis technology plays an important role in industry because an unexpected machine fault can bring heavy losses to people and companies. A fault diagnosis model based on multi-manifold learning and a particle swarm optimization support vector machine (PSO-SVM) is studied. The model is applied to a rolling bearing experiment with three kinds of faults. The results verify that the model based on multi-manifold learning and PSO-SVM acquires the fault-sensitive features with good accuracy. (A toy PSO search over SVM hyperparameters follows this entry.)
Keywords: fault diagnosis; multi-manifold learning; particle swarm optimization; support vector machine
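A rough sketch of the PSO-SVM idea: particle swarm optimization searches the SVM (C, gamma) pair with cross-validated accuracy as the fitness. The synthetic data and all PSO constants are assumptions for illustration, not the paper's bearing dataset or settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, n_informative=6, random_state=0)

def fitness(pos):
    C, gamma = 10.0 ** pos            # particles live in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iters = 10, 15
w, c1, c2 = 0.7, 1.5, 1.5                               # inertia and acceleration weights
pos = rng.uniform([-2, -4], [3, 1], (n_particles, 2))   # log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-2, -4], [3, 1])
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, "cv accuracy:", pbest_val.max().round(3))
```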
17. Improved artificial bee colony algorithm with mutual learning (cited: 7)
Authors: Yu Liu, Xiaoxi Ling, Yu Liang, Guanghao Liu. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2012, No. 2, pp. 265-275 (11 pages)
The recently invented artificial bee colony (ABC) algorithm is an optimization algorithm based on swarm intelligence that has been used to solve many kinds of numerical function optimization problems. It performs well in most cases; however, there still exists an insufficiency in the ABC algorithm: it ignores the fitness of related pairs of individuals in the mechanism of finding a neighboring food source. This paper presents an improved ABC algorithm with mutual learning (MutualABC) that adjusts the produced candidate food source toward the higher-fitness one of two individuals selected by a mutual learning factor. The performance of the improved MutualABC algorithm is tested on a set of benchmark functions and compared with the basic ABC algorithm and some classical versions of improved ABC algorithms. The experimental results show that the MutualABC algorithm with appropriate parameters outperforms other ABC algorithms in most experiments. (A hedged sketch of the neighbor-search step follows this entry.)
Keywords: artificial bee colony (ABC) algorithm; numerical function optimization; swarm intelligence; mutual learning
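A hedged sketch of the classic ABC neighbor-search step plus a guessed "mutual learning" bias that pulls the candidate toward the fitter of the two chosen individuals; the weighting scheme here is only an illustration of the idea, not the paper's exact MutualABC rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # benchmark objective (minimize)
    return float(np.sum(x ** 2))

def fitness(x):
    return 1.0 / (1.0 + sphere(x))  # usual ABC fitness transform for f >= 0

dim, n_food = 10, 20
foods = rng.uniform(-5, 5, (n_food, dim))

def neighbor_mutual(i, foods, learn_factor=0.5):
    k = rng.choice([j for j in range(len(foods)) if j != i])
    d = rng.integers(dim)                                    # dimension to perturb
    phi = rng.uniform(-1, 1)
    v = foods[i].copy()
    v[d] = foods[i][d] + phi * (foods[i][d] - foods[k][d])   # classic ABC move
    # mutual-learning style bias toward the fitter of the pair (assumed form)
    better = foods[i] if fitness(foods[i]) >= fitness(foods[k]) else foods[k]
    v[d] += learn_factor * rng.uniform(0, 1) * (better[d] - v[d])
    return v

for _ in range(200):                # greedy employed-bee phase only, for brevity
    for i in range(n_food):
        cand = neighbor_mutual(i, foods)
        if fitness(cand) > fitness(foods[i]):
            foods[i] = cand
print(min(sphere(f) for f in foods))
```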
18. Learning Bayesian network structure with immune algorithm (cited: 4)
Authors: Zhiqiang Cai, Shubin Si, Shudong Sun, Hongyan Dui. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2015, No. 2, pp. 282-291 (10 pages)
Finding reasonable structures from bulky data is one of the difficulties in modeling Bayesian networks (BN), and it is also necessary for promoting the application of BNs. This paper proposes an immune algorithm based method (BN-IA) for learning the BN structure with the idea of vaccination. Furthermore, the methods for extracting effective vaccines from the local optimal structure and root nodes are described in detail. Finally, simulation studies are implemented with the helicopter convertor BN model and the car start BN model. The comparison results show that the proposed vaccines and the BN-IA can learn the BN structure effectively and efficiently.
Keywords: structure learning; Bayesian network; immune algorithm; local optimal structure; vaccination
19. Research on trajectory optimization based on Q-learning (cited: 2)
Authors: ZHOU Yixin, CHENG Ketao, LIU Limin, HE Xianjun, HUANG Zhengui. 《兵器装备工程学报》 (Journal of Ordnance Equipment Engineering), CSCD, PKU Core, 2022, No. 5, pp. 191-196 (6 pages)
To improve the efficiency of trajectory optimization and shorten operational response time, a simply-controlled trajectory optimization method based on the Q-learning algorithm is proposed. First, taking a three-degree-of-freedom (3-DOF) point-mass projectile subject only to gravity and air drag in the vertical plane as the research object, the uncontrolled trajectory equations are established as a reference model and solved with the Runge-Kutta method. On this basis, controlled trajectory optimization models are built with maximum flight range and maximum impact velocity as objectives, respectively, using acceleration commands as the direct control output. With the initial velocity and launch angle fixed, the Q-learning algorithm outputs control commands during the exterior ballistic flight of the projectile, and the trajectory optimization objective is achieved through iterative reinforcement learning. Simulation results show that the missile range under reinforcement learning control increases significantly compared with the uncontrolled case, indicating that the proposed design method can optimize trajectories effectively and efficiently. (A minimal Runge-Kutta reference-model sketch follows this entry.)
Keywords: trajectory optimization; reinforcement learning; Q-learning algorithm; exterior ballistics
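A minimal sketch of the uncontrolled reference model described in the abstract: a point-mass projectile under gravity and air drag in the vertical plane, integrated with a fourth-order Runge-Kutta scheme. The drag model, constants, and initial conditions are assumed for illustration.

```python
import numpy as np

g, rho, cd_area, mass = 9.81, 1.225, 0.001, 5.0    # assumed constants: gravity, air density, Cd*A, mass

def deriv(state):
    """Point-mass dynamics with quadratic drag opposing the velocity."""
    x, y, vx, vy = state
    v = np.hypot(vx, vy)
    drag = 0.5 * rho * cd_area * v / mass          # drag acceleration per unit velocity component
    return np.array([vx, vy, -drag * vx, -g - drag * vy])

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

v0, angle = 300.0, np.deg2rad(45.0)                # assumed launch speed [m/s] and angle
state = np.array([0.0, 0.0, v0 * np.cos(angle), v0 * np.sin(angle)])
dt = 0.01
while state[1] >= 0.0:                             # integrate until impact
    state = rk4_step(state, dt)
print("range ~ %.1f m, impact speed ~ %.1f m/s" % (state[0], np.hypot(state[2], state[3])))
```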
20. A mobile network user behavior mining model and its application in an E-Learning system (cited: 5)
Author: WANG Ling. 《现代电子技术》 (Modern Electronics Technique), PKU Core, 2016, No. 24, pp. 83-87 (5 pages)
By building a behavior mining model for mobile users and applying it to E-Learning over mobile networks, an optimized design of the E-Learning system is realized. A mobile network user behavior mining model based on frequent-itemset association rule analysis is proposed, and the E-Learning system is developed on embedded Linux. The overall design of the E-Learning system is described; SQL driver support is enabled to compile the QWT library for the ARM platform, and the TinyOS communication mechanism is built to transmit wireless message packet groups. Software development mainly covers mobile network user node program design, node program development, and communication with the host computer. Program booting and software porting under the embedded Linux system realize the mining of the mobile network user behavior model and the software design of the E-Learning system. Experimental results show that the proposed behavior mining model has good data mining performance, and the optimized design improves the quality of service the E-Learning system offers to mobile network users, demonstrating good application value.
Keywords: mobile network users; behavior mining; E-Learning system; optimized design