Improved lightweight YOLOv4 model-based method for the identification of shrimp flesh and shell

Authors: CHEN Xueshen, WU Changpeng, DANG Peina, LIANG Jun, LIU Shanjian, WU Tao

Affiliation:

CLC number: S985.2

Fund project: Key-Area Research and Development Program of Guangdong Province (2021B0202060002)
Abstract:

To enable the automatic sorting of bare-flesh shrimp and shell-on shrimp in the mechanical shrimp-shelling process, this study proposes a shrimp flesh/shell identification method based on an improved YOLOv4 model. The CSP-Darknet53 network in YOLOv4 was replaced with GhostNet to strengthen the model's adaptive feature extraction and to reduce its parameter and computation cost. A lightweight attention mechanism was introduced into the Resblock module of the YOLOv4 backbone feature extraction network to enhance its feature extraction capability. The GIoU loss function in YOLOv4 was replaced with the CIoU loss function to improve the regression of the predicted bounding boxes. Comparative experiments with different models were conducted to verify the improvements: the lightweighting results show that the improved YOLOv4 model has the fewest parameters and the lowest computational cost, and the ablation experiments show that the improved YOLOv4 model reaches a mean average precision (mAP) of 92.8%, 6.1 percentage points higher than the original YOLOv4 model. The improved YOLOv4 model was then applied to shrimp flesh/shell identification tests in different scenarios. The results show an overall average identification accuracy of 95.9% for the same shrimp species under different environments, an average accuracy of 90.4% for the same species under different shelling methods, and an average accuracy of 87.2% across different shrimp species. These results can provide technical support for the automatic sorting of bare-flesh and shell-on shrimp.
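To make the backbone swap concrete, the sketch below shows a minimal Ghost module of the kind GhostNet is built from: an ordinary convolution produces a reduced set of intrinsic feature maps, and a cheap depthwise convolution generates the remaining "ghost" maps from them, which is where the savings in parameters and computation come from. This is an illustrative PyTorch sketch only; the channel ratio and kernel sizes are assumptions, not the configuration used in the paper.

    # Minimal Ghost module sketch (illustrative; layer sizes are assumptions,
    # not the paper's exact configuration).
    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        def __init__(self, in_channels, out_channels, ratio=2, kernel_size=1, dw_size=3):
            super().__init__()
            primary_channels = out_channels // ratio          # "intrinsic" feature maps
            cheap_channels = out_channels - primary_channels  # "ghost" feature maps

            # An ordinary convolution produces a reduced set of intrinsic feature maps.
            self.primary_conv = nn.Sequential(
                nn.Conv2d(in_channels, primary_channels, kernel_size, bias=False),
                nn.BatchNorm2d(primary_channels),
                nn.ReLU(inplace=True),
            )
            # A cheap depthwise convolution generates the remaining "ghost" maps from
            # the intrinsic ones, which is where the parameter/FLOP savings come from.
            self.cheap_operation = nn.Sequential(
                nn.Conv2d(primary_channels, cheap_channels, dw_size, padding=dw_size // 2,
                          groups=primary_channels, bias=False),
                nn.BatchNorm2d(cheap_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            primary = self.primary_conv(x)
            ghost = self.cheap_operation(primary)
            return torch.cat([primary, ghost], dim=1)

With the default ratio of 2, half of the output channels come from the cheap depthwise step, so the module needs roughly half the multiply-accumulate operations of a plain convolution of the same output width.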

Abstract (English):

An improved lightweight YOLOv4 model was proposed to realize accurate, real-time, and robust automatic sorting of bare-flesh and shell-on shrimp in the mechanical shrimp-shelling process under complex scenarios. The CSP-Darknet53 network in the YOLOv4 structure was replaced with GhostNet, which improved the model's ability to extract features adaptively and simplified the computation of model parameters. Using GhostNet as the YOLOv4 backbone feature extraction network reduced the complexity of the network and the number of model parameters, benefiting storage capacity and detection efficiency. A lightweight attention mechanism was introduced into the Resblock module of the backbone feature extraction network to enhance its feature extraction capability. The squeeze-and-excitation (SE) attention module strengthens the attention between feature channels: by fitting the relevant feature information to the target channel and suppressing invalid information, it directs the network's attention to the shrimp shell, reducing background interference and improving recognition accuracy. The original GIoU loss function was replaced with the CIoU loss function to improve the regression of the predicted boxes. The CIoU loss makes the results of non-maximum suppression more reasonable and efficient, and yields more accurate predicted boxes by minimizing the distance between the centres of the predicted and labelled boxes. The lightweight GhostNet-YOLOv4 model was compared with the YOLOv7, EfficientNet Lite3-YOLOv4, ShuffleNetV2-YOLOv4, and MobilenetV3-YOLOv4 models; the results showed that GhostNet-YOLOv4 had the fewest parameters and the lowest computational cost. An ablation experiment was designed to verify the contribution of replacing the backbone feature extraction network and embedding the SE attention mechanism. Replacing the CSP-Darknet53 backbone with GhostNet improved the mean average precision (mAP) by 2.9 percentage points and substantially reduced the number of model parameters and the size of the output weights compared with the original model. Adding the SE attention mechanism improved the anti-interference and feature extraction ability, raising the mAP by a further 1.8 percentage points. Replacing the GIoU loss function with the CIoU loss function further improved shrimp recognition accuracy, raising the mAP by another 1.4 percentage points. According to the actual operating environment of the shrimp-shell inspection test bed, image datasets of two classes were produced: bare-flesh shrimp and shell-on shrimp. The GhostNet-YOLOv4, YOLOv3, YOLOv4, and MobilenetV3-YOLOv4 models were tested on these datasets. GhostNet-YOLOv4 achieved a detection accuracy of 95.9% at 25 frames/s, outperforming all other models in detection speed while maintaining detection accuracy. The performance of the GhostNet-YOLOv4 model in identifying shrimp shells was then evaluated under four treatments involving changes in light brightness, speed, shrimp posture, and shrimp species. The shrimp-shell detection test showed that the average recognition accuracy reached 90.4%, fully meeting the operational requirements and indicating that the approach is suitable for deployment on the mobile embedded devices of the test bench. The GhostNet-YOLOv4 model also showed excellent generalization when identifying shrimp shells of species outside the sample set, with an average accuracy of 87.2%.
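As an illustration of the channel attention described in the abstracts, the following is a minimal squeeze-and-excitation (SE) block of the kind that can be embedded in a backbone residual block; the reduction ratio and the exact insertion point are assumptions for illustration, not the authors' design.

    # Minimal SE channel-attention block (illustrative; reduction ratio is an assumption).
    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global spatial average
            self.fc = nn.Sequential(                   # excitation: per-channel weights
                nn.Linear(channels, channels // reduction, bias=False),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels, bias=False),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                               # re-weight feature channels

Likewise, the CIoU loss that replaces GIoU can be sketched directly from its standard formulation: an IoU term, a centre-distance term normalised by the diagonal of the smallest enclosing box, and an aspect-ratio consistency term. The function below is illustrative and not taken from the authors' code; boxes are assumed to be axis-aligned in (x1, y1, x2, y2) form.

    # Standard CIoU loss for (x1, y1, x2, y2) boxes of shape (N, 4); illustrative sketch.
    import math
    import torch

    def ciou_loss(pred, target, eps=1e-7):
        # Intersection over union
        inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
        inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(min=0)
        inter = inter_w * inter_h
        area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
        area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
        iou = inter / (area_p + area_t - inter + eps)

        # Squared distance between box centres, normalised by the diagonal
        # of the smallest enclosing box.
        cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
        cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
        rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
        enc_w = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
        enc_h = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
        c2 = enc_w ** 2 + enc_h ** 2 + eps

        # Aspect-ratio consistency term.
        w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
        w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
        v = (4 / math.pi ** 2) * (torch.atan(w_t / (h_t + eps)) - torch.atan(w_p / (h_p + eps))) ** 2
        alpha = v / (1 - iou + v + eps)

        return 1 - iou + rho2 / c2 + alpha * v

Compared with GIoU, the extra terms penalise centre offset and aspect-ratio mismatch directly, which is the mechanism behind the improved bounding-box regression reported in the abstracts.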

Cite this article:

CHEN Xueshen, WU Changpeng, DANG Peina, LIANG Jun, LIU Shanjian, WU Tao. Improved lightweight YOLOv4 model-based method for the identification of shrimp flesh and shell[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2023, 39(9): 278-286. DOI: 10.11975/j.issn.1002-6819.202303076
Article history:
  • Received: 2023-03-13
  • Revised: 2023-04-06
  • Published online: 2023-05-26