Crop root segmentation and phenotypic information extraction based on images of minirhizotron

Authors: ZHENG Yili, ZHANG Zhenxiang, XING Da, LIU Weiping

CLC number: S24; TP391.4

Funding: National Natural Science Foundation of China Youth Science Fund (32101590); Beijing Forestry University "5·5 Project" Research Innovation Team Project (BLRC2023C05)



Abstract:

Crop root images collected by the minirhizotron method have complex soil backgrounds, and roots occupy only a small proportion of each image. When a deep-learning model's receptive field is small or its multi-scale feature fusion is insufficient, pixels at root edges are misclassified as soil. Moreover, the minirhizotron method has a long image-acquisition cycle, and it is difficult to collect many valid samples in the early stage, which limits the rapid deployment of root extraction models. To improve the accuracy of root phenotype measurement and optimize the deployment strategy of extraction models, this study designed an in-situ automatic root imaging system to acquire minirhizotron images of crops in real time, constructed a full-scale skip feature fusion mechanism, and used the U2-Net model, which has rich receptive fields, to classify root pixels in minirhizotron images effectively. Combined with data augmentation and transfer-learning fine-tuning, rapid deployment of root extraction models for target species was achieved. Experimental results show that the improved U2-Net model with the full-scale skip feature fusion mechanism achieved an F1 score of 86.54% and an IoU of 76.28% for garlic sprout root segmentation. Compared with the original U2-Net, U-Net, SegNet, and DeeplabV3+_Resnet50 models, the F1 score increased by 0.66, 5.51, 8.67, and 2.84 percentage points, and the IoU increased by 1.02, 8.18, 12.52, and 4.31 percentage points, respectively. Fine-tuning via transfer learning improved F1 and IoU by 2.89 and 4.45 percentage points over mixed training. The coefficients of determination R² between root length, area, and average diameter measured from images segmented by the improved U2-Net model and from manual annotations were 0.965, 0.966, and 0.830, respectively. These results provide a reference for improving root phenotype measurement accuracy from minirhizotron images and for rapid deployment of root extraction models.
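As a rough illustration of how length, area, and mean diameter can be read off a binary segmentation mask, the sketch below makes simplifying assumptions that are not the paper's pipeline (root measurement tools typically use skeleton-based length): it handles a single, roughly horizontal root, taking length as the number of occupied columns and diameter as the mean column thickness.

```python
def root_phenotypes(mask, mm_per_px):
    """Rough phenotype estimates from a binary mask (list of 0/1 rows).

    Assumes one roughly horizontal root: length ~ number of columns
    containing root pixels, mean diameter ~ average column thickness.
    Returns (length_mm, area_mm2, mean_diameter_mm).
    """
    n_rows, n_cols = len(mask), len(mask[0])
    # Thickness of the root in each image column (root pixels per column).
    col_thickness = [sum(mask[r][c] for r in range(n_rows)) for c in range(n_cols)]
    occupied = [t for t in col_thickness if t > 0]
    length_px = len(occupied)          # columns the root passes through
    area_px = sum(occupied)            # total root pixels
    mean_diam_px = area_px / length_px if length_px else 0.0
    return (length_px * mm_per_px,
            area_px * mm_per_px ** 2,
            mean_diam_px * mm_per_px)
```

For example, a 3-row mask whose middle and bottom rows are root across three columns yields a length of 3 px, an area of 6 px², and a mean diameter of 2 px before scaling by the pixel size.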

    Abstract:

    The images of crop roots collected by the minirhizotron method have a complex soil background, and roots occupy a relatively small proportion of each image. Common semantic segmentation models used for root extraction may misclassify pixels at the edges of roots as soil when the receptive field is small or multi-scale feature fusion is inadequate. Moreover, the long image-acquisition cycle and the initial difficulty of collecting a large number of valid samples with the minirhizotron method hinder the rapid deployment of root extraction models. To improve the accuracy of root phenotyping measurements and optimize extraction model deployment strategies, this study devised an in-situ automatic root imaging system to acquire minirhizotron images of crops in real time. A full-scale skip feature fusion mechanism was constructed for the U2-Net model, whose rich receptive fields enable effective classification of root pixels in minirhizotron images. Data augmentation and transfer-learning fine-tuning were integrated to achieve rapid deployment of root extraction models for target species. The full-scale skip feature fusion mechanism fuses the output features of the upper encoder and lower decoder layers of the U2-Net model across all scales and uses them as input features for a given decoder layer, effectively retaining more feature information and enhancing the decoder's ability to restore information. In terms of model deployment, this study compared transfer-learning fine-tuning with mixed training to address the issue of model training with limited samples. The experimental materials comprised garlic sprout root images collected with the self-developed in-situ automatic root imaging system and the publicly available minirhizotron dataset PRMI (plant root minirhizotron imagery).
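A minimal numpy sketch of the full-scale fusion idea described above (the function names, resizing choice, and shapes are illustrative assumptions, not the paper's implementation): every encoder feature map and every deeper decoder output is resized to the target decoder scale and concatenated along the channel axis.

```python
import numpy as np

def resize_nn(feat, out_h, out_w):
    """Nearest-neighbour resize of a (C, H, W) feature map via index maps."""
    c, h, w = feat.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return feat[:, rows][:, :, cols]

def full_scale_fuse(encoder_feats, deeper_decoder_feats, target_hw):
    """Full-scale skip fusion: bring features from all encoder scales and
    all deeper decoder layers to the target decoder resolution, then
    concatenate them along the channel dimension as the decoder input."""
    h, w = target_hw
    pieces = [resize_nn(f, h, w) for f in encoder_feats + deeper_decoder_feats]
    return np.concatenate(pieces, axis=0)
```

With encoder features of 4 and 8 channels and a 16-channel deeper decoder output, the fused input for a 32x32 decoder layer has 4 + 8 + 16 = 28 channels; in a real network a convolution would then compress this concatenation.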
The experimental design included performance comparisons on the PRMI dataset, followed by analysis and validation on the garlic sprout data. The comparison models were the original U2-Net, U-Net, SegNet, and DeeplabV3+_Resnet50. Finally, the predictions of the fine-tuning and mixed-training methods on the garlic sprout test set were compared to identify an effective strategy for rapid model deployment. The experimental results demonstrate that, on the garlic sprout dataset, the improved U2-Net model achieves average F1 and IoU scores of 86.54% and 76.28%, respectively. Compared with the original U2-Net, U-Net, SegNet, and DeeplabV3+_Resnet50 models, the average F1 increases by 0.66, 5.51, 8.67, and 2.84 percentage points, respectively, while the average IoU increases by 1.02, 8.18, 12.52, and 4.31 percentage points, respectively. In practical segmentation, the improved model shows enhanced recognition of root edges, significantly reducing over-segmentation and under-segmentation. On the garlic sprout dataset with limited samples, transfer-learning fine-tuning outperformed mixed training, improving the F1 and IoU metrics by 2.89 and 4.45 percentage points, respectively. For root length, area, and average diameter, the coefficients of determination R² between the model's segmented images and manually labeled images reached 0.965, 0.966, and 0.830, respectively. This study offers a reference for enhancing the accuracy of root phenotype measurement from minirhizotron images and the rapid deployment of root extraction models.
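The F1 and IoU scores reported above are standard pixel-level measures for binary segmentation. A self-contained sketch of how they are computed from flattened 0/1 masks (prediction vs. ground truth):

```python
def f1_and_iou(pred, truth):
    """Pixel-level F1 score and IoU for flattened binary masks (0/1 lists)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # root, predicted root
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # soil, predicted root
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)    # root, predicted soil
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return f1, iou
```

For instance, pred = [1, 1, 0, 1] against truth = [1, 0, 0, 1] gives tp = 2, fp = 1, fn = 0, hence F1 = 0.8 and IoU = 2/3.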

Cite this article:


ZHENG Yili, ZHANG Zhenxiang, XING Da, LIU Weiping. Crop root segmentation and phenotypic information extraction based on images of minirhizotron[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2024, 40(18): 110-119. DOI: 10.11975/j.issn.1002-6819.202403138

History
  • Received: 2024-03-21
  • Revised: 2024-06-14
  • Published online: 2024-09-11