Abstract: A Fully Convolutional Network (FCN) is proposed to extract ridge centerlines of a maize field from Unmanned Aerial Vehicle (UAV) remote sensing images, in support of global path planning for agricultural robots navigating between rows of a cornfield. The concept of the ridge area (R-area) was introduced: the area obtained by sweeping a straight line of fixed width perpendicularly along the ridge centerline. Semantically segmenting the R-area turns the ridge into a defined semantic region without sharp boundaries. A dataset was built for extracting the centerlines of farmland ridges, and the FCN was used to segment the R-area. The maize centerlines were manually annotated and rasterized in the remote sensing images; the annotations were then Gaussian-blurred and thresholded to obtain fixed-width label images. The annotated and original images were divided into blocks with a sliding window, and these blocks were used for training. The precision, recall, and harmonic mean (F1 score) on the stitched test-set images, across models trained at each line width, were 66.1%-83.4%, 51.1%-73.9%, and 57.6%-78.4%, respectively. The trained FCN model was then used to predict the images of the verification field. The model showed strong robustness in complex situations, such as weeds between rows, uneven growth, and sprinklers above the crops. The predicted blocks were placed back in their original positions to obtain the R-area distribution map, which was then divided by projection to acquire the ridge centerlines. 19 339 slices were obtained by segmented projection, the number of slices equaling the pixel height of the original maize remote sensing orthophoto, and the center point of each ridge was obtained by projecting each slice.
The center points were then connected to form a centerline distribution map, which can be applied directly to agricultural robot navigation. An experiment was designed to explore the effect of the R-area line width on model training and on the extracted centerlines, comparing the confusion matrices of models trained with different ridge line widths and the centerline accuracy within different error ranges. The results demonstrated that the model performed best at a line width of 9 pixels; widths larger or smaller than this degraded performance. The optimal accuracy of the ridge centerline was 91.2% within a deviation range of about 77 mm, and 61.5% within a deviation range of about 31.5 mm. Extracting the ridge centerline was thus transformed into the semantic segmentation of the R-area in UAV remote sensing images, and the FCN can be expected to segment ridges as semantic regions without obvious boundaries. This finding enables semantic segmentation networks in deep learning to support global path planning for agricultural robots in intelligent farming.
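The "accuracy within a deviation range" metric above can be sketched as the fraction of predicted centers falling within a millimeter tolerance of the ground truth. Everything here is an assumption for illustration: the per-slice matching of predicted and true center columns, and the ground sampling distance `gsd_mm` (mm per pixel), which the abstract does not state.

```python
import numpy as np

def accuracy_within_tolerance(pred_cols, true_cols, tol_mm, gsd_mm=15.75):
    # Fraction of predicted ridge centers within tol_mm of the ground truth.
    # pred_cols and true_cols are already-matched center columns per slice.
    # gsd_mm is a hypothetical ground sampling distance, not the paper's value.
    dev_mm = np.abs(np.asarray(pred_cols, dtype=float)
                    - np.asarray(true_cols, dtype=float)) * gsd_mm
    return float(np.mean(dev_mm <= tol_mm))
```

With this definition, widening the tolerance can only raise the score, matching the pattern in the abstract where accuracy is higher at the ~77 mm range than at the ~31.5 mm range.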