Abstract: Intelligent feeding has been widely used to determine the amount of feed from a prediction of the hunger degree of fish, thereby effectively reducing feed waste in the modern aquaculture industry, especially in outdoor intensive fish breeding environments. However, the redundant data collected by mobile monitoring devices imposes a heavy computational load on most control systems, and accurate classification of the hunger degree of fish remains an unsolved problem. Taking captive perch as the test object, this work designed an image capture system for perch feeding based on the lightweight MobileNetV3-Small neural network. The system consisted of two captive ponds, a camera, and a video recorder. In the test, 4,202 perch were randomly fed with adequate or inadequate feed, and a camera recorded the water surface every day. After two weeks of monitoring, 10,000 images of the perch ingesting condition were collected during the period of 80-110 s after each round of feeding, of which 50% represented the "hungry" condition and the rest the "non-hungry" condition. These initial images were then divided into training, validation, and test sets at a ratio of 6:2:2. Four image processing operations were applied to the training set: random flipping, random cropping, Gaussian noise addition, and color dithering, expanding it from 6,000 to 12,000 images. This augmentation enriched the image features and training samples, yielding a more generalized model. Next, the lightweight MobileNetV3-Small neural network was selected to classify the ingesting condition of the perch. The model was trained, tested, and established on the TensorFlow2 platform, with the training-set images as the input and the ingesting condition as the output. Finally, a two-week feeding contrast test was carried out in the outdoor culture environment to verify the accuracy of the model.
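The four augmentation operations above can be sketched as follows. This is a minimal illustration in NumPy, not the authors' pipeline: the crop size, noise level, and channel-shift range are assumed values chosen for the example, and each original image is paired with one augmented copy to show the 6,000-to-12,000 doubling.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    # Horizontally flip the image with probability 0.5
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_crop(img, size=200):
    # Cut a random size x size window from the image
    h, w = img.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    return img[y:y + size, x:x + size]

def add_gaussian_noise(img, sigma=10.0):
    # Add zero-mean Gaussian noise, then clip back to valid pixel range
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def color_dither(img, max_shift=20):
    # Shift each color channel by a random offset (simple color dithering)
    shift = rng.integers(-max_shift, max_shift + 1, size=(1, 1, 3))
    return np.clip(img.astype(np.int32) + shift, 0, 255).astype(np.uint8)

def augment(img):
    # Apply all four operations in sequence
    return color_dither(add_gaussian_noise(random_crop(random_flip(img))))

# Expanding the training set: each original image plus one augmented copy
original = [rng.integers(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(3)]
expanded = original + [augment(img) for img in original]
print(len(expanded))  # twice the original count
```

In practice the cropped images would be resized back to the network's input resolution before training.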
The 4,202 perch were divided into two groups for this test, 2,096 in the test group and 2,106 in the control group, where the amount of feed was determined by the model's classification and by conventional experience, respectively. The total mass and number of fish in the two groups were recorded at the beginning and end of the test, as well as the total amount of feed consumed. The MobileNetV3-Small model achieved an overall accuracy of 99.60% on the test set, with an F1 score of 99.60%. It also presented the smallest floating-point operation count (582 M FLOPs) and the highest average classification rate (39.21 frames/s), compared with the ResNet-18, ShuffleNetV2, and MobileNetV3-Large deep learning models. Its overall accuracy was 12.74, 23.85, 3.6, and 2.78 percentage points higher than those of the traditional machine learning models KNN, SVM, GBDT, and Stacking, respectively. Furthermore, the test group achieved a lower feed conversion ratio of 1.42 and a higher weight gain ratio of 5.56%, compared with the control group, indicating that the MobileNetV3-Small model classified the ingesting condition better in a real outdoor culture environment. Consequently, the classification of the ingesting condition can be expected to support efficient decision-making on the amount of fish feed, particularly benefiting fish growth. The findings provide a reference for efficient and intelligent feeding in intensive culture environments.
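The evaluation quantities used above (F1 score, feed conversion ratio, weight gain ratio) follow standard definitions, sketched below. The numeric inputs are hypothetical values chosen only to illustrate the formulas; they are not the trial's raw data.

```python
def f1_score(tp, fp, fn):
    # F1 is the harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def feed_conversion_ratio(feed_consumed, mass_final, mass_initial):
    # FCR: feed consumed per unit of weight gained (lower is better)
    return feed_consumed / (mass_final - mass_initial)

def weight_gain_ratio(mass_final, mass_initial):
    # WGR: relative weight gain over the trial, as a percentage
    return (mass_final - mass_initial) / mass_initial * 100

# Illustrative counts and masses (kg), not the paper's measurements
f1 = f1_score(tp=996, fp=4, fn=4)
fcr = feed_conversion_ratio(710.0, mass_final=900.0, mass_initial=400.0)
wgr = weight_gain_ratio(mass_final=900.0, mass_initial=400.0)
print(round(f1, 3), round(fcr, 2), round(wgr, 1))  # 0.996 1.42 125.0
```

A lower FCR means less feed is needed per unit of growth, which is why the test group's 1.42 indicates more efficient feeding than the control group's.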