Recognition of dense cherry tomatoes based on improved YOLOv4-LITE lightweight neural network
Author:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Fund project:

National Natural Science Foundation of China (52075149); Science and Technology Research Project of Henan Province (212102110029); Open Fund of the Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, and the Jiangsu Province Key Laboratory of Agricultural Equipment and Intelligent High Technology (JNZ201901); Higher Education Teaching Reform Research and Practice Project (Graduate Education) of Henan Province (2019SJGLX063Y)



    Abstract:

    Rapid recognition and localization of fruits under occlusion and adhesion of dense cherry tomatoes is one of the key technologies for improving the working efficiency of cherry tomato picking robots and yield prediction in the facility agriculture environment. This study proposes a cherry tomato recognition and localization method based on an improved YOLOv4-LITE lightweight neural network. To facilitate migration to mobile terminals, MobileNet-v3 is used as the feature extraction network of the model to construct the YOLOv4-LITE network, so as to increase the detection speed of cherry tomato fruit targets. To avoid the loss of detection accuracy caused by replacing the backbone network, the Feature Pyramid Network (FPN) + Path Aggregation Network (PANet) structure is modified: a 104×104 feature layer that favors small-target detection is introduced to achieve fine-grained detection, and depthwise separable convolutions replace ordinary convolutions in the PANet structure to reduce the computational cost and make the network more lightweight. In addition, pre-trained weights are loaded and some layers are frozen during training to improve the generalization ability of the model. The recognition performance was compared with that of YOLOv4 on test sets with the same degree of occlusion or adhesion, and the differences between the models were evaluated by the harmonic mean (F1 score), average precision, and precision. The test results show that, at an overlap (IoU) threshold of 0.50, the proposed dense cherry tomato recognition model achieved an F1 score of 0.99, an average precision of 99.74%, and a precision of 99.15% on the entire test set, which are 0.15, 8.29, and 6.55 percentage points higher than those of YOLOv4, respectively. The weight file is 45.3 MB, about 1/5 that of YOLOv4, and detection of a single 416×416 (pixel) image reaches 3.01 ms per image on a Graphics Processing Unit (GPU). Therefore, the dense cherry tomato recognition model proposed in this study is fast, accurate, and lightweight, and can provide a strong guarantee for the efficient operation of cherry tomato picking robots and for cherry tomato yield prediction in the facility agriculture environment.
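
    The depthwise separable convolution that replaces the ordinary 3×3 convolutions in the PANet branch can be illustrated with a minimal sketch; a PyTorch-style block is assumed here, and the normalization and activation choices are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Sketch of a lightweight replacement for a standard 3x3 convolution:
    a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise
    convolution. BatchNorm + LeakyReLU are assumptions, not the paper's spec."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a 104x104 feature map (416 / 4 = 104), the fine-grained scale
# introduced for small targets, passed through the lightweight block.
x = torch.randn(1, 128, 104, 104)
y = DepthwiseSeparableConv(128, 256)(x)
print(y.shape)  # torch.Size([1, 256, 104, 104])
```

    For a k×k kernel, the parameter count drops from k²·C_in·C_out to k²·C_in + C_in·C_out, which is the main source of the reported reduction in model weight and computation.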

    Abstract:

    Small and hidden fruits of dense cherry tomatoes pose a great challenge to rapid identification and positioning. A robust key technology is in high demand to improve the harvesting efficiency and yield prediction of cherry tomatoes in the facility agriculture environment. In this study, a recognition method was proposed to locate dense cherry tomatoes using an improved YOLOv4-LITE lightweight neural network. MobileNet-v3, which migrates easily to mobile terminals, was selected as the feature extraction network of the model to construct YOLOv4-LITE for a higher detection speed of cherry tomatoes. The Feature Pyramid Network (FPN) + Path Aggregation Network (PANet) structure was modified to avoid the loss of detection accuracy caused by replacing the backbone network. Specifically, a 104×104 feature map was introduced to achieve fine-grained detection of small targets. More importantly, depthwise separable convolution was used in the PANet structure to reduce the number of model calculations. The new network was more lightweight, and the generalization ability of the model was improved by loading pre-trained weights and freezing part of the layers during training. A comparison was made with YOLOv4 on test sets with the same degree of occlusion or adhesion, using F1 and AP to evaluate the difference between the models. The test results show that, with the improved FPN structure on the basis of YOLOv4, AP50 was higher than that of the original YOLOv4, AP75 increased by 15.00 percentage points, and F1 increased by 0.14 and 0.24 under the corresponding IoU thresholds. However, the weight increased by 4 MB, the detection time increased by 0.27 ms per image, and the number of network parameters increased by 14.85%. With the improved FPN structure on the basis of YOLOv4+MobileNet-v3, AP50 increased by 6.58 percentage points, AP75 increased by 21.82 percentage points, and F1 increased by 0.13 and 0.20 under the corresponding IoU thresholds, indicating that YOLOv4 and YOLOv4+MobileNet-v3 missed small targets. Adding the small-target feature map improved the fine-grained detection of the model, but the number of model parameters and the weight size increased accordingly. Therefore, the PANet structure was improved by introducing depthwise separable convolutions while keeping F1, AP, recall, and precision high. The best overall performance was achieved: the model weight was compressed to 45.3 MB, the detection time was 3.01 ms per image, and the number of network parameters was 12 026 685. The new network was 198.7 MB smaller than the original YOLOv4. These results indicate that the improved PANet strategy achieved similar accuracy while effectively reducing memory consumption and the number of model parameters and accelerating model recognition. The F1, AP50, and recall of the proposed recognition model for dense cherry tomatoes on all test sets were 0.99, 99.74%, and 99.15%, respectively, which were 0.15, 8.29, and 6.55 percentage points higher than those of YOLOv4, and the weight size was 45.3 MB, about 1/5 that of YOLOv4. Additionally, detection of a single 416×416 image took 3.01 ms on the GPU. Therefore, the proposed recognition model for dense cherry tomatoes is faster, more accurate, and more lightweight than YOLOv4. The findings can provide strong support for the efficient operation of cherry tomato picking robots and for yield prediction in the facility agriculture environment.
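
    The training strategy of loading pre-trained weights and freezing part of the layers before fine-tuning can be sketched as follows; PyTorch is assumed, and the stand-in model, the placeholder checkpoint path, and the learning rates are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

# Minimal stand-in detector with a "backbone" and a "head"; the real
# YOLOv4-LITE (MobileNet-v3 backbone + FPN/PANet neck + YOLO heads)
# would follow the same freeze-then-fine-tune pattern.
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 18, 1)  # 3 anchors x (4 box + 1 obj + 1 class)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector()

# Load pre-trained backbone weights when a checkpoint is available
# ("backbone_pretrained.pth" is a placeholder path):
# state = torch.load("backbone_pretrained.pth", map_location="cpu")
# model.backbone.load_state_dict(state, strict=False)

# Stage 1: freeze the backbone and train only the remaining layers.
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
# ... train the unfrozen layers for some epochs ...

# Stage 2: unfreeze all layers and fine-tune the whole network at a lower rate.
for p in model.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# ... continue training ...
```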

Cite this article


Zhang Fu, Chen Zijun, Bao Ruofei, Zhang Chaochen, Wang Zhihao. Recognition of dense cherry tomatoes based on improved YOLOv4-LITE lightweight neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(16): 270-278. DOI: 10.11975/j.issn.1002-6819.2021.16.033

History
  • Received: 2021-06-02
  • Revised: 2021-08-14
  • Published online: 2021-09-29