Abstract
In the visible spectrum, accurate recognition of target fruits is fundamental to orchard yield estimation and automatic machine picking. However, this task is hindered by many factors, such as the complex unstructured orchard environment and the similar colors of green apples and the background foliage, which significantly restrict the detection accuracy of target fruits and pose great challenges to machine vision recognition. Targeting the varied illumination conditions and fruit postures found in complex orchard environments, this study proposes an optimized fully convolutional one-stage (FCOS) neural network model for green apple recognition. First, building on FCOS, the new model incorporates the feature extraction ability of a convolutional neural network (CNN), eliminates the dependence of earlier detectors on anchor boxes, and predicts fruit confidence and box offsets in a one-stage, fully convolutional, anchor-free manner, which greatly improves recognition speed while maintaining detection accuracy. Second, a bottom-up feature fusion architecture is embedded after the feature pyramid to provide more accurate localization information to the higher levels, further improving green apple detection. Finally, an overall loss function is designed over the three output branches of FCOS to complete iterative training. To simulate real orchard environments as closely as possible, green apple images were collected under different lighting conditions, illumination angles, occlusion types, and camera distances to build the data sets for model training; the optimal model was then evaluated on a validation set covering different scenes. The experimental results show that the proposed model's average precision (AP) is 85.6%, which is 0.9, 10.5, 2.5, and 1.9 percentage points higher than the state-of-the-art detectors Faster R-CNN, SSD, RetinaNet, and FSAF, respectively. In terms of model design, the parameter count of FCOS and the computation required for the whole detection pipeline are 32.0 M and 47.5 GFLOPs (billion floating-point operations), respectively, 9.5 M and 12.5 GFLOPs lower than those of Faster R-CNN. These comparisons show that the new model achieves higher detection accuracy and recognition efficiency in the visible spectrum, providing theoretical and technical support for orchard yield estimation and automatic picking; it can also serve as a reference for recognizing spherical green target fruits of other fruit and vegetable crops.
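The anchor-free prediction scheme summarized in the abstract (each feature-map location directly predicts a confidence score, a centerness value, and four box offsets) can be sketched as follows. This is a minimal NumPy illustration of the general FCOS decoding step, not the authors' implementation; all names and numeric values are hypothetical:

```python
import numpy as np

def decode_fcos_boxes(points, ltrb, scores, centerness, score_thresh=0.05):
    """Decode FCOS-style anchor-free predictions into boxes.

    points     : (N, 2) (x, y) feature-map locations mapped back to image space
    ltrb       : (N, 4) predicted distances (left, top, right, bottom) per point
    scores     : (N,) per-location fruit confidence
    centerness : (N,) predicted centerness in [0, 1]
    """
    x, y = points[:, 0], points[:, 1]
    l, t, r, b = ltrb.T
    # Each location regresses a box directly from its four distances; no anchors.
    boxes = np.stack([x - l, y - t, x + r, y + b], axis=1)  # (x1, y1, x2, y2)
    # Centerness down-weights low-quality boxes predicted far from object centers.
    final_scores = scores * centerness
    keep = final_scores > score_thresh
    return boxes[keep], final_scores[keep]

# Two candidate locations: one confident, one that gets filtered out.
points = np.array([[100.0, 100.0], [50.0, 60.0]])
ltrb = np.array([[10.0, 20.0, 30.0, 40.0], [5.0, 5.0, 5.0, 5.0]])
boxes, conf = decode_fcos_boxes(points, ltrb,
                                scores=np.array([0.9, 0.1]),
                                centerness=np.array([0.8, 0.2]))
# boxes -> [[90., 80., 130., 140.]], conf -> [0.72]
```

The product of confidence and centerness is what the abstract's three output branches (classification, centerness, box regression) feed into at inference time, before non-maximum suppression selects the final apple detections.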
Authors
ZHANG Zhong-hua
JIA Wei-kuan
SHAO Wen-jing
HOU Su-juan
JI Ze
ZHENG Yuan-jie
ZHANG Zhong-hua; JIA Wei-kuan; SHAO Wen-jing; HOU Su-juan; JI Ze; ZHENG Yuan-jie (School of Information Science and Engineering, Shandong Normal University, Ji'nan 250358, China; Key Laboratory of Facility Agriculture Measurement and Control Technology and Equipment of Machinery Industry, Zhenjiang 212013, China; School of Engineering, Cardiff University, Cardiff CF24 3AA, United Kingdom)
Source
《光谱学与光谱分析》
SCIE
EI
CAS
CSCD
PKU Core (Peking University Core Journals)
2022, No. 2, pp. 647-653 (7 pages)
Spectroscopy and Spectral Analysis
Funding
National Natural Science Foundation of China (62072289, 61973141, 81871508)
Key Research and Development Program of Shandong Province (2019GNC106115)
Natural Science Foundation of Shandong Province (ZR2020MF076, ZR2019ZD04)
About the Authors
ZHANG Zhong-hua, born in 1997, is a master's student at the School of Information Science and Engineering, Shandong Normal University; e-mail: zzhs9714@163.com. Corresponding author: JIA Wei-kuan, e-mail: wkjia@sdnu.edu.cn.