Detection of facial gestures of group pigs based on improved Tiny-YOLO

Authors: Yan Hongwen, Liu Zhenyu, Cui Qingliang, Hu Zhiwei, Li Yanwen

Fund projects: National High Technology Research and Development Program of China (863 Program) (2013AA102306); General Program of the National Natural Science Foundation of China (31772651); Key Research and Development Program (Agriculture) of Shanxi Province (201803D221028-7)


Abstract:

The face of a pig carries rich biometric information, and detecting its facial postures can provide a basis for individual identification and behavior analysis. In group-housing scenarios, however, complex factors such as barn illumination and adhesion between pigs pose great challenges to facial posture detection. Taking group-housed pigs in a real breeding scene as the research object and video frames as the data source, this paper proposes DAT-YOLO, a detection model that combines an attention mechanism with Tiny-YOLO. The model introduces channel-attention and spatial-attention information into the feature extraction process: high-level features guide low-level features in acquiring channel-attention information, and low-level features in turn guide high-level features in spatial-attention screening, which improves the model's feature extraction ability and detection accuracy without a significant increase in the number of parameters. From videos of 35 group-housed pigs aged 20 to 105 days in 5 pens, 504 images containing a total of 3 712 face boxes were extracted and annotated with six posture classes, namely horizontal face, horizontal side-face, bow face, bow side-face, rise face, and rise side-face, to form the training set; another 420 images containing 2 106 face boxes served as the test set. To obtain the model input, each captured frame was pre-processed in two steps: padding with a constant pixel value and scaling. The results show that on the test set DAT-YOLO reaches AP values of 85.54%, 79.30%, 89.61%, 76.12%, 79.37%, and 84.35% on the six posture classes respectively, and its overall mAP across the six classes is 8.39%, 4.66%, and 2.95% higher than that of the Tiny-YOLO model, the CAT-YOLO model (which introduces only channel attention), and the SAT-YOLO model (which introduces only spatial attention), respectively. To further verify how well the attention modules transfer to other models, the two kinds of attention information were introduced into YOLOV3 under the same experimental conditions to build the corresponding attention sub-models. The experiments show that the sub-models based on Tiny-YOLO exceed the YOLOV3 sub-models with the same modules by 0.46%-1.92% in overall mAP. Both the Tiny-YOLO and YOLOV3 series models improve to varying degrees after attention information is added, indicating that the attention mechanism helps detect the different facial postures of group-housed pigs accurately and effectively. In addition, the data were pseudo-balanced at the loss-function level to counter the imbalance in the number of boxes across the facial posture classes, and the causes of the accuracy differences among postures were explored. The study can provide a reference for subsequent individual identification and behavior analysis of pigs.
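The high-low feature interplay described above can be illustrated with a minimal PyTorch sketch. This is an assumption pieced together from the abstract alone, not the authors' DAT-YOLO code: the deep, low-resolution map supplies channel-attention weights that re-weight the shallow map, while the shallow, high-resolution map supplies a spatial-attention mask that re-weights the deep one, and the two refined maps are then fused. The module names (ChannelAttention, SpatialAttention, DualAttentionFusion), the concatenation fusion, and the assumption that both maps share a channel count are all hypothetical.

```python
# Minimal sketch of the dual-attention idea from the abstract (hypothetical
# modules, not the published DAT-YOLO implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel weights derived from the (upsampled) high-level features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, high):                           # high: (B, C, H, W)
        w = F.adaptive_avg_pool2d(high, 1).flatten(1)  # (B, C)
        return self.fc(w).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)

class SpatialAttention(nn.Module):
    """Spatial mask derived from the low-level features."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, low):                            # low: (B, C, H, W)
        avg = low.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        mx, _ = low.max(dim=1, keepdim=True)           # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class DualAttentionFusion(nn.Module):
    """High-level features gate low-level channels; low-level features gate
    high-level spatial positions; the refined maps are concatenated.
    Assumes both feature maps have the same channel count."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, low, high):
        high_up = F.interpolate(high, size=low.shape[2:], mode="nearest")
        low_refined = low * self.ca(high_up)    # channel attention on low
        high_refined = high_up * self.sa(low)   # spatial attention on high
        return torch.cat([low_refined, high_refined], dim=1)

# Shape check with dummy Tiny-YOLO-like feature maps:
fuse = DualAttentionFusion(channels=256)
low = torch.randn(1, 256, 26, 26)   # shallow, high-resolution features
high = torch.randn(1, 256, 13, 13)  # deep, low-resolution features
print(fuse(low, high).shape)        # torch.Size([1, 512, 26, 26])
```

Because the attention branches add only a small fully connected bottleneck and a single 7x7 convolution, the parameter overhead stays small, which is consistent with the abstract's claim that accuracy improves without a significant increase in parameters.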

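The two-step pre-processing (padding pixel values, then scaling) and the loss-level pseudo-balancing also lend themselves to a short sketch. This is a plausible reading rather than the paper's actual code: the gray fill value of 128, the 416x416 input size, and the per-class box counts below are all assumptions (the counts are a hypothetical split that merely sums to the 3 712 training boxes; the abstract gives no per-class figures).

```python
# Minimal sketch, assuming an H x W x 3 BGR frame, a 416 x 416 network input,
# and gray (128) padding -- none of which the abstract specifies.
import numpy as np
import cv2

def letterbox(image, target=416, fill=128):
    """Pad the frame to a square with a constant pixel value, then scale it
    to the network input size (the two-step pre-processing in the abstract)."""
    h, w = image.shape[:2]
    side = max(h, w)
    canvas = np.full((side, side, 3), fill, dtype=image.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = image    # center the original frame
    return cv2.resize(canvas, (target, target))

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # dummy video frame
print(letterbox(frame).shape)                     # (416, 416, 3)

# One common way to pseudo-balance classes inside the loss: weight each of the
# six posture classes by inverse frequency. The counts are hypothetical.
counts = np.array([820, 640, 710, 380, 520, 642], dtype=np.float64)
weights = counts.sum() / (len(counts) * counts)   # rarer classes weigh more
print(weights.round(2))
```

Under this weighting, a class with half as many boxes contributes twice the per-box loss, which is one simple way to keep the rarer postures from being under-trained.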

Cite this article:


Yan Hongwen, Liu Zhenyu, Cui Qingliang, Hu Zhiwei, Li Yanwen. Detection of facial gestures of group pigs based on improved Tiny-YOLO[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(18): 169-179. DOI: 10.11975/j.issn.1002-6819.2019.18.021

History
  • Received: 2019-08-16
  • Revised: 2019-08-28
  • Published online: 2019-10-12