Journal Articles
2 articles found
Collaborative Optimization of Berth and Quay Crane Scheduling for Emission Reduction at Container Terminals
Authors: 杨嘉卉, 尤再进, 倪立夫, 赵煜, 李婉莹. 《计算机工程》 (Computer Engineering), Peking University Core Journal, 2025, No. 10, pp. 381-391 (11 pages)
As the "dual carbon" goals continue to advance, the port industry is undergoing further upgrading. Taking ship exhaust emissions within the port area into account, this study builds a bi-objective berth-quay crane collaborative scheduling optimization model that minimizes ship service cost and emission cost, and designs an improved algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II), namely a reinforcement-learning Q-learning NSGA-II (RL-Q-NSGA-II). In an empirical study of the Chiwan container terminal, the bi-objective emission-reduction collaborative scheduling model is solved with the improved algorithm, the original NSGA-II, and a first-come-first-served (FCFS) scheduling policy, and the results are compared quantitatively. The experiments show that RL-Q-NSGA-II performs better in iteration speed, convergence, and the clustering of Pareto-front solutions. Compared with the original NSGA-II, ship service cost and the cost of ship air-pollutant emissions in the port area improve by 12.19% and 6.04% respectively, and total cost by 8.39%; compared with the FCFS policy, ship service cost and emission cost improve by 18.68% and 3.79% respectively, and total cost by 9.82%. Moreover, port-area ship exhaust emission cost is negatively correlated with service cost: if a terminal considers only ship service efficiency or terminal operating cost, the social cost of port-area exhaust emissions will rise sharply. The model and algorithm can help port operators and shipping companies draw up reasonable berth-quay crane scheduling plans under different scenarios.
Keywords: waterway transportation; multi-objective optimization; non-dominated sorting genetic algorithm II; joint berth-quay crane scheduling; reinforcement learning-Q-learning
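The abstract does not detail how Q-learning steers NSGA-II, but a common hybridization is to let a Q-learning agent adaptively choose which genetic operator to apply each generation. The sketch below illustrates that general mechanism only; the class name, reward scheme, and operator list are illustrative assumptions, not the authors' actual RL-Q-NSGA-II implementation.

```python
import random

class OperatorSelector:
    """Q-learning agent that learns which genetic operator to apply.

    A minimal single-state sketch: the Q-table maps each operator to a
    learned value, and the agent is rewarded when the operator it picked
    improved the population (e.g. advanced the Pareto front).
    """

    def __init__(self, operators, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.operators = operators              # e.g. ["crossover", "mutation"]
        self.alpha = alpha                      # learning rate
        self.gamma = gamma                      # discount factor
        self.epsilon = epsilon                  # exploration probability
        self.q = {op: 0.0 for op in operators}  # single-state Q-table

    def choose(self):
        # epsilon-greedy: explore occasionally, otherwise exploit best Q-value
        if random.random() < self.epsilon:
            return random.choice(self.operators)
        return max(self.q, key=self.q.get)

    def update(self, op, reward):
        # Standard Q-learning update; with a single state, the bootstrap
        # term is simply the best Q-value over all operators.
        best_next = max(self.q.values())
        self.q[op] += self.alpha * (reward + self.gamma * best_next - self.q[op])


# Usage: reward an operator whenever its offspring improve the front
selector = OperatorSelector(["crossover", "mutation", "local_search"])
for _ in range(100):
    op = selector.choose()
    improved = random.random() < 0.5    # placeholder for a real dominance check
    selector.update(op, reward=1.0 if improved else -0.1)
```

Embedded in NSGA-II, `update` would be called after each generation with a reward derived from hypervolume gain or the number of newly non-dominated offspring.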
Supervisory control of the hybrid off-highway vehicle for fuel economy improvement using predictive double Q-learning with backup models (cited by: 1)
Authors: SHUAI Bin, LI Yan-fei, ZHOU Quan, XU Hong-ming, SHUAI Shi-jin. 《Journal of Central South University》, SCIE, EI, CAS, CSCD, 2022, No. 7, pp. 2266-2278 (13 pages)
This paper studied a supervisory control system for a hybrid off-highway electric vehicle under the charge-sustaining (CS) condition. A new predictive double Q-learning with backup models (PDQL) scheme is proposed to optimize engine fuel use in real-world driving and improve energy efficiency with a faster and more robust learning process. Unlike existing "model-free" methods, which follow solely on-policy or off-policy updates of the knowledge bases (Q-tables), the PDQL is developed with the capability to merge both on-policy and off-policy learning by introducing a backup model (Q-table). Experimental evaluations are conducted on software-in-the-loop (SiL) and hardware-in-the-loop (HiL) test platforms built on real-time modelling of the studied vehicle. Compared to standard double Q-learning (SDQL), the PDQL needs only half the learning iterations to achieve better energy efficiency by the end of the learning process. In SiL tests over 35 rounds of learning, the results show that the PDQL improves vehicle energy efficiency by 1.75% over the SDQL. When implemented in HiL under four predefined real-world conditions, the PDQL robustly saves more than 5.03% more energy than the SDQL scheme.
Keywords: supervisory charge-sustaining control; hybrid electric vehicle; reinforcement learning; predictive double Q-learning
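The PDQL scheme extends standard double Q-learning (SDQL), the baseline it is compared against. The sketch below shows only the SDQL update, which the abstract references; the backup-model component and the vehicle state/action spaces are not reproduced, and the toy tables here are illustrative assumptions.

```python
import random

def double_q_update(qa, qb, state, action, reward, next_state,
                    alpha=0.1, gamma=0.95):
    """One standard double Q-learning step.

    With probability 0.5, table A selects the greedy next action while
    table B evaluates it, and vice versa. Decoupling action selection
    from action evaluation reduces the maximization bias of
    single-table Q-learning.
    """
    if random.random() < 0.5:
        selector, evaluator = qa, qb
    else:
        selector, evaluator = qb, qa
    # Greedy next action chosen by the selector table ...
    best = max(selector[next_state], key=selector[next_state].get)
    # ... but valued by the other table.
    target = reward + gamma * evaluator[next_state][best]
    selector[state][action] += alpha * (target - selector[state][action])


# Usage: two tiny Q-tables over states "s"/"t" and actions "l"/"r"
qa = {"s": {"l": 0.0, "r": 0.0}, "t": {"l": 1.0, "r": 0.0}}
qb = {"s": {"l": 0.0, "r": 0.0}, "t": {"l": 0.0, "r": 2.0}}
double_q_update(qa, qb, "s", "l", reward=1.0, next_state="t")
```

In the paper's setting, the state would encode quantities such as battery state of charge and power demand, the action an engine/motor power split, and the reward a fuel-consumption penalty; the PDQL's backup model additionally lets the agent learn from predicted transitions.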