
End-to-end Autonomous Driving Vehicle Steering Angle Prediction Based on Spatiotemporal Features
Abstract: The end-to-end autonomous driving system directly maps sensory inputs to control outputs and has become an important research direction in autonomous driving. To perform accurate and smooth driving actions in dynamic environments, an autonomous vehicle must be able to process spatiotemporal information. We therefore propose a new spatiotemporal model that performs end-to-end prediction of steering angles using a two-stream convolutional neural network (Two-stream CNN) combined with gated recurrent unit (GRU) networks. The proposed model uses RGB images, motion-based optical flow, and GRU networks to fuse the spatial and temporal features of consecutive driving scenes. First, the two CNN branches of the two-stream network extract features: one branch learns spatial features from RGB images, and the other learns temporal features from optical flow. Then, GRU networks model features with short-term temporal dependence. Finally, the steering angle prediction is obtained by fusing the spatiotemporal features. The temporal dynamics captured by the proposed Two-stream C-GRU model thus depend not only on the optical flow representing the displacement of objects between adjacent frames, but also on multiple consecutive frames. We tested the model on a real driving-scene dataset, and the experimental results show that the proposed spatiotemporal model outperforms other mainstream spatiotemporal models in the accuracy and smoothness of steering angle prediction. In particular, compared with the basic two-stream CNN, the proposed Two-stream C-GRU model improves steering angle prediction accuracy and stability by 20% and 6% on test set 1, and by 5% and 10% on test set 2, respectively.
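The pipeline described in the abstract — two CNN branches for RGB and optical-flow frames, feature fusion, and a GRU over the frame sequence predicting a scalar steering angle — can be sketched as follows. This is a minimal illustrative sketch, assuming hypothetical layer sizes and names (`StreamCNN`, `TwoStreamCGRU`, feature dimensions, clip length); it is not the authors' exact architecture.

```python
# Hypothetical sketch of the Two-stream C-GRU idea; all layer sizes,
# class names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class StreamCNN(nn.Module):
    """One CNN branch: maps a single frame (C, H, W) to a feature vector."""

    def __init__(self, in_ch, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse spatial dims
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                     # x: (B*T, C, H, W)
        return self.fc(self.conv(x).flatten(1))


class TwoStreamCGRU(nn.Module):
    """RGB branch + optical-flow branch -> GRU over time -> steering angle."""

    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.rgb_branch = StreamCNN(3, feat_dim)   # spatial features from RGB
        self.flow_branch = StreamCNN(2, feat_dim)  # temporal features from (u, v) flow
        self.gru = nn.GRU(2 * feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # scalar steering angle

    def forward(self, rgb, flow):
        # rgb:  (B, T, 3, H, W) consecutive frames
        # flow: (B, T, 2, H, W) optical flow between adjacent frames
        B, T = rgb.shape[:2]
        f_rgb = self.rgb_branch(rgb.flatten(0, 1)).view(B, T, -1)
        f_flow = self.flow_branch(flow.flatten(0, 1)).view(B, T, -1)
        seq = torch.cat([f_rgb, f_flow], dim=-1)   # fuse spatial + temporal features
        out, _ = self.gru(seq)                     # model short-term dependence
        return self.head(out[:, -1]).squeeze(-1)   # predict angle at the last step


model = TwoStreamCGRU()
rgb = torch.randn(2, 5, 3, 64, 64)    # batch of 2 five-frame clips
flow = torch.randn(2, 5, 2, 64, 64)   # matching optical-flow clips
angle = model(rgb, flow)
print(angle.shape)                    # one predicted angle per clip
```

The key design point the abstract emphasizes is visible here: the GRU consumes fused per-frame features over the whole clip, so the temporal signal is not limited to the two-frame displacement encoded in each optical-flow image.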
Authors: LYU Yi-sheng; LIU Ya-hui; CHEN Yuan-yuan; ZHU Feng-hua (The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China)
Source: China Journal of Highway and Transport (EI, CAS, CSCD, PKU Core), 2022, No. 3, pp. 263-272 (10 pages)
Funding: National Natural Science Foundation of China (61876011); Guangdong Basic and Applied Basic Research Foundation (2019B1515120030)
Keywords: traffic engineering; autonomous driving; spatiotemporal model; steering angle prediction; two-stream convolutional neural networks; gated recurrent unit
Author biography: LYU Yi-sheng (b. 1983), male, from Mengyin, Shandong; associate researcher; Ph.D. in engineering; E-mail: yisheng.lv@ia.ac.cn