Orchard Environment Recognition Based on Deep Residual U-Network
Abstract: The orchard environment is complex and variable, so traditional machine-vision recognition algorithms are easily affected by illumination, shadows, and other factors; their ability to identify targets is limited and their accuracy is low. A deep residual U-shaped network can perform semantic segmentation of trees, drivable paths, debris, and other objects in the orchard environment. The basic structure of the network is a U-shaped network, with residual learning added to the encoder and bottleneck layers: the residual modules increase network depth, strengthen the fusion of semantic information across levels, and improve feature representation and recognition accuracy. The decoder uses upsampling for feature mapping, which is simple and fast, and fuses the encoder's semantic information through skip connections, reducing the number of network parameters and accelerating training. The network was built and the dataset trained with the PyTorch deep learning framework, and the network was compared experimentally with a fully convolutional network (FCN) and a U-shaped network. The results show that the deep residual U-shaped network achieves the highest recognition accuracy, with a mean intersection over union of 83.3%, making it suitable for orchard environment recognition.
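The abstract describes the architecture in enough detail to sketch it in code: a U-shaped encoder-decoder with residual blocks in the encoder and bottleneck, plain upsampling in the decoder, and skip connections that fuse encoder features. The PyTorch sketch below is a minimal illustration of that kind of design, not the authors' implementation; the channel widths, block counts, and the three example classes (tree, drivable path, debris) are illustrative assumptions.

```python
# Minimal sketch of a residual U-shaped segmentation network of the kind the
# abstract describes. Layer widths and class count are illustrative only.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a shortcut connection (residual learning)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the shortcut matches the output channel count.
        self.skip = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                     if in_ch != out_ch else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))


class ResUNet(nn.Module):
    """U-shaped encoder-decoder: residual blocks on the way down,
    bilinear upsampling plus concatenated skip connections on the way up."""
    def __init__(self, in_ch=3, num_classes=3, widths=(64, 128, 256, 512)):
        super().__init__()
        self.enc = nn.ModuleList()
        c = in_ch
        for w in widths:
            self.enc.append(ResidualBlock(c, w))
            c = w
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ResidualBlock(widths[-1], widths[-1] * 2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.ModuleList()
        c = widths[-1] * 2
        for w in reversed(widths):
            # Each decoder block sees upsampled features concatenated with the skip.
            self.dec.append(ResidualBlock(c + w, w))
            c = w
        self.head = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, x):
        skips = []
        for block in self.enc:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for block, skip in zip(self.dec, reversed(skips)):
            x = self.up(x)
            x = block(torch.cat([x, skip], dim=1))
        return self.head(x)  # per-pixel class logits


if __name__ == "__main__":
    # Three illustrative classes: tree, drivable path, debris/other.
    net = ResUNet(in_ch=3, num_classes=3)
    logits = net(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 3, 256, 256])
```

Evaluation in the abstract is reported as mean intersection over union (mIoU): per-class IoU is computed on a labelled validation set and averaged over the classes, which is how the 83.3% figure for the deep residual U-shaped network is obtained.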
Authors: Shang Gaogao (商高高); Zhu Peng (朱鹏); Liu Gang (刘刚) (College of Automotive and Transportation Engineering, Jiangsu University, Zhenjiang 212001, Jiangsu, China)
Source: Computer Applications and Software (《计算机应用与软件》, Peking University Core Journal), 2023, No. 5, pp. 235-242 (8 pages)
Funding: Key Project of the Jiangsu Province Key Research and Development Program (Modern Agriculture) (BE2017333).
Keywords: environment recognition; machine vision; deep residual U-shaped network; semantic segmentation; information fusion
About the authors: Shang Gaogao, associate professor; main research field: intelligent agricultural machinery. Zhu Peng, master's student. Liu Gang, master's student.