Deep learning-based target recognition and detection for tomato pollination robots

Authors: Yu Xianhai, Kong Deyi, Xie Xiaoxuan, Wang Qiong, Bai Xianwei
Affiliation:
Corresponding author:
CLC number:
Fund project: Supported by the Major Science and Technology Special Project of Anhui Province (202203a06020002)

Abstract:

To meet the demand for automated and intelligent pollination of tomato flowers in plant factories, and to overcome the low detection accuracy and imperfect pollination strategies caused by the small size and varied orientations of tomato flowers during robotic pollination, this study proposes TFDC-Net (Tomato Flower Detection and Classification Network), a tomato flower detection and classification algorithm that combines target detection, flowering-stage classification, and posture recognition. In the target detection stage, an improved YOLOv5s model, ACW_YOLOv5s, is proposed: by adding a Convolutional Block Attention Module (CBAM) to the YOLOv5s network and applying Weighted Boxes Fusion (WBF), the model achieves a precision of 0.957, a recall of 0.942, an mAP0.5 of 0.968, and an mAP0.5-0.95 of 0.620, improvements of 0.028, 0.004, 0.012, and 0.066 over the original YOLOv5s, while reducing missed and false detections. To address pollination of flowers at different flowering stages and with different stamen orientations, the EfficientNetV2 classification network was trained on flowers of three flowering stages and five stamen orientations, yielding a flowering-stage classification model and a posture recognition model. Tested on 300 flowering-stage images and 200 posture images, the two models achieved overall accuracies of 97.0% and 90.5%, respectively. The proposed TFDC-Net algorithm was deployed on a self-developed pollination robot for experimental validation. The results show that the algorithm can perform target detection, flowering-stage classification, and posture recognition of tomato flowers. On this basis, targets were rapidly localized through coordinate transformation, and the end effector of the robot arm completed precise pollination of the stamens, verifying the effectiveness of the algorithm. This work enables target recognition and detection of tomato flowers and can help advance the development and application of pollination robots.
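As a rough illustration of the detection stage described above, the sketch below shows how predicted boxes from a YOLOv5s-family detector could be post-processed with Weighted Boxes Fusion using the open-source ensemble_boxes package. The weights file name, image path, and thresholds are assumptions for illustration, not the authors' released code or settings.

```python
# Minimal sketch (assumptions, not the paper's implementation): fuse the
# detections of a custom YOLOv5 model with Weighted Boxes Fusion (WBF).
import torch
from PIL import Image
from ensemble_boxes import weighted_boxes_fusion

# "acw_yolov5s.pt" is a hypothetical weights file standing in for the trained model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="acw_yolov5s.pt")
model.conf = 0.25  # confidence threshold for raw detections

img = Image.open("tomato_flower.jpg")  # placeholder test image
w, h = img.size
results = model(img)
det = results.xyxy[0].cpu().numpy()  # columns: x1, y1, x2, y2, conf, cls

# WBF expects box coordinates normalised to [0, 1]. Here the boxes of a single
# run are fused; with several runs (e.g. augmented inference), each run would
# contribute one entry to the outer lists.
boxes = [(det[:, :4] / [w, h, w, h]).tolist()]
scores = [det[:, 4].tolist()]
labels = [det[:, 5].tolist()]

fused_boxes, fused_scores, fused_labels = weighted_boxes_fusion(
    boxes, scores, labels, iou_thr=0.55, skip_box_thr=0.1
)
print(f"{len(fused_boxes)} flower boxes after weighted boxes fusion")
```

Unlike non-maximum suppression, WBF averages overlapping predictions instead of discarding all but one, which is why the abstract credits it with making fuller use of the prediction information.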

    Abstract:

Intelligent pollination of tomatoes has been increasingly adopted in plant factories in modern agriculture. However, current detection accuracy cannot fully meet the needs of large-scale production during robotic pollination, and pollination strategies remain imperfect because tomato flowers are small and their postures and orientations vary. In this study, a tomato flower detection and classification approach combining target detection, flowering-stage classification, and posture recognition was proposed using deep learning. According to the characteristics of tomato flowers, the Tomato Flower Detection and Classification Network (TFDC-Net) was divided into two parts: target detection of tomato flowers, and classification of flowering stage and pose. For flower detection, the YOLOv5s network was selected and improved in two ways: a Convolutional Block Attention Module (CBAM) was added to enhance effective features and suppress invalid ones, and Weighted Boxes Fusion (WBF) was adopted to make full use of the prediction information. The network was trained with offline data augmentation to obtain the ACW_YOLOv5s model, which achieved a precision of 0.957, a recall of 0.942, an mAP0.5 of 0.968, and an mAP0.5-0.95 of 0.620, improvements of 0.028, 0.004, 0.012, and 0.066, respectively, over the original YOLOv5s. The actual detection performance of the model was then verified on tomato flowers under various complex conditions, with the original YOLOv5s as the baseline. The tests show that ACW_YOLOv5s alleviates the missed detection of small distant targets and occluded targets, and the false detection of overlapping targets, that occur with the original YOLOv5s. To enable pollination across flowering stages and stamen orientations, the EfficientNetV2 classification network was trained on three flowering stages and five flower postures to obtain a flowering-stage classification model and a posture recognition model, with training accuracies of 94.5% and 86.9%, respectively. Furthermore, 300 flowering-stage images and 200 posture images were selected to validate the classification models; the overall accuracies were 97.0% and 90.5%, respectively. TFDC-Net integrates the ACW_YOLOv5s detection model with the flowering-stage and posture classification models, so that the detection of tomato flowers and the classification of flowering stage and pose fully meet the vision requirements of pollination robots. TFDC-Net was then applied to a self-developed pollination robot. The results show that TFDC-Net can perform target detection, flowering-stage classification, and pose recognition of flowers. The target was localized using coordinate conversion, its true 3D coordinates were obtained in the coordinate system of the robot arm, and these coordinates were fed back to the robot arm to pollinate targets in full bloom with a front-facing attitude. These findings can provide a technical basis for target detection and localization in pollination robots.
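The coordinate-conversion step mentioned above can be pictured with the minimal sketch below: a detected flower's pixel position plus its depth is back-projected through the camera intrinsics and then transformed into the robot-arm base frame with a hand-eye calibration matrix. All numeric values (the intrinsics fx, fy, cx, cy, the transform T_base_cam, and the example pixel) are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch (assumed values): pixel + depth -> camera frame -> arm base frame.
import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0   # example RGB-D camera intrinsics
T_base_cam = np.array([                        # assumed camera pose in the arm base frame
    [0.0, -1.0, 0.0, 0.30],
    [-1.0, 0.0, 0.0, 0.05],
    [0.0, 0.0, -1.0, 0.60],
    [0.0, 0.0, 0.0, 1.0],
])

def pixel_to_base(u: float, v: float, depth_m: float) -> np.ndarray:
    """Back-project pixel (u, v) with depth into camera coordinates,
    then transform the point into the robot-arm base frame."""
    x_cam = (u - cx) * depth_m / fx
    y_cam = (v - cy) * depth_m / fy
    p_cam = np.array([x_cam, y_cam, depth_m, 1.0])   # homogeneous camera-frame point
    return (T_base_cam @ p_cam)[:3]

# Example: flower centre detected at pixel (412, 238), 0.45 m from the camera.
print(pixel_to_base(412, 238, 0.45))
```

In practice T_base_cam would come from hand-eye calibration, and the resulting base-frame point is what the end effector is driven toward for pollination.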

Cite this article


Yu Xianhai, Kong Deyi, Xie Xiaoxuan, Wang Qiong, Bai Xianwei. Deep learning-based target recognition and detection for tomato pollination robots[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, 38(24): 129-137. DOI: 10.11975/j.issn.1002-6819.2022.24.014

History
  • Received: 2022-10-10
  • Revised: 2022-12-13
  • Accepted:
  • Published online: 2023-01-20
  • Published in print: