Abstract: Accurate extraction of the spatial distribution of crops is of great significance for management decision-making in modern agriculture, and remote sensing images are now widely used as an important data source for mapping crop distribution. There is therefore a high demand for extracting high-quality features for crop mapping from remote sensing images. In this study, Sentinel-2A images were selected to extract a high-precision spatial distribution of winter wheat, with emphasis on unifying the data scale and fusing the features. The red edge bands were utilized to characterize the important features of winter wheat, and the visible and red edge bands were combined to effectively reduce pixel misclassification. Firstly, since the red edge bands (20 m) and the visible bands (10 m) of Sentinel-2A differ in resolution, a downscaling model, Red Edge Down Scale (REDS), was established to unify the spatial scale of the data. A generative adversarial network was constructed for the three red edge bands B5, B6 and B7; the spatial resolution of the B11 shortwave infrared (SWIR) band was likewise improved from 20 m to 10 m for consistency with the visible and red edge bands, and the edge degradation caused by simple resampling (such as nearest-neighbor interpolation) was thereby avoided. Secondly, the inputs of REDS consisted of a low-resolution channel and a high-resolution channel, carrying the spectral and texture information, respectively; the spatial structure information was passed from the high-resolution into the low-resolution channel, so that the improved model produced high-resolution red edge and SWIR imagery. Thirdly, the basic input data were assembled from the three downscaled red edge bands, the visible bands with a resolution of 10 m, and three remote sensing index products, namely the Enhanced Vegetation Index (EVI), the Normalized Difference Vegetation Index (NDVI), and the Normalized Difference Red-Edge1 (NDRE1). Fourthly, a semantic feature extraction model, the Red Edge and Vegetation Index Feature Network (REVINet), was constructed using a convolutional neural network: the encoding and decoding units were built from residual networks, a linear model was used to fuse the multi-scale features output by the decoding units, and a SoftMax function served as the classifier for pixel-by-pixel classification. Finally, the segmentation results and the spatial distribution of winter wheat were generated to verify the REVINet model against the ERFNet, U-Net, and RefineNet models. The experimental results show that REVINet extracted smoother contour edges of the winter wheat planting areas with less misclassification, and its recall (92.15%), precision (93.74%), accuracy (93.09%), and F1 score (92.94%) were better than those of the other models, indicating ideal performance. The spatial distribution over the whole research area demonstrated that the winter wheat in China was mainly distributed south of the Great Wall in 2022, and the extracted areas showed relatively high accuracy and good agreement with the statistics released by the National Statistical Department in 2021.
Therefore, the proposed data organization and feature extraction can be expected to serve the mapping of the spatial distribution of winter wheat using Sentinel-2A imagery. The findings can also provide a technical reference for applying Sentinel-2A data in the agricultural field.
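As a minimal illustration of the three index products assembled into the basic input data, the following sketch computes EVI, NDVI, and NDRE1 from Sentinel-2 surface-reflectance bands. The band mapping (B2 blue, B4 red, B5 red edge 1, B8 NIR) and the NDRE1 form (B8 - B5) / (B8 + B5) are common conventions, not formulas confirmed by the paper.

```python
# Sketch only: index products from Sentinel-2 reflectance arrays in [0, 1].
# Band mapping and the exact NDRE1 formula are assumptions, not from the paper.
import numpy as np

def compute_indices(b2_blue, b4_red, b5_red_edge1, b8_nir, eps=1e-6):
    """Return NDVI, EVI, and NDRE1 rasters; eps guards against division by zero."""
    ndvi = (b8_nir - b4_red) / (b8_nir + b4_red + eps)
    evi = 2.5 * (b8_nir - b4_red) / (b8_nir + 6.0 * b4_red - 7.5 * b2_blue + 1.0 + eps)
    ndre1 = (b8_nir - b5_red_edge1) / (b8_nir + b5_red_edge1 + eps)
    return ndvi, evi, ndre1

# Usage with random reflectance patches standing in for real imagery.
rng = np.random.default_rng(0)
b2, b4, b5, b8 = (rng.uniform(0.0, 0.5, (256, 256)) for _ in range(4))
ndvi, evi, ndre1 = compute_indices(b2, b4, b5, b8)
```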
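The abstract describes REVINet only at the level of residual encoding/decoding units and a per-pixel SoftMax classifier; its actual layer configuration is not given. The sketch below is therefore a generic, hypothetical residual encoder-decoder in PyTorch, assuming nine input channels (three 10 m visible bands, three downscaled red edge bands, three index products) and a binary wheat/non-wheat output.

```python
# Hypothetical sketch: a generic residual encoder-decoder with a per-pixel
# SoftMax head. This is NOT the published REVINet architecture; the channel
# count and layer layout are assumptions for illustration.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip connection

class TinySegNet(nn.Module):
    """Toy encoder-decoder: two downsampling and two upsampling stages."""
    def __init__(self, in_ch=9, num_classes=2, width=32):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, width, 3, padding=1)
        self.enc1 = nn.Sequential(ResidualBlock(width),
                                  nn.Conv2d(width, width * 2, 3, stride=2, padding=1))
        self.enc2 = nn.Sequential(ResidualBlock(width * 2),
                                  nn.Conv2d(width * 2, width * 4, 3, stride=2, padding=1))
        self.dec2 = nn.ConvTranspose2d(width * 4, width * 2, 2, stride=2)
        self.dec1 = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.head = nn.Conv2d(width, num_classes, 1)  # per-pixel class logits

    def forward(self, x):
        x = self.stem(x)
        x = self.enc2(self.enc1(x))
        x = self.dec1(self.dec2(x))
        return torch.softmax(self.head(x), dim=1)  # pixel-by-pixel probabilities

probs = TinySegNet()(torch.randn(1, 9, 128, 128))  # -> (1, 2, 128, 128)
```

In practice a model like this would be trained on the logits with a cross-entropy loss; the SoftMax is shown explicitly only to match the classifier described in the abstract.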
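Finally, the four evaluation metrics reported for REVINet (recall, precision, accuracy, F1 score) can be computed from a predicted and a reference wheat mask as follows; this is a standard binary-classification computation, sketched here for completeness.

```python
# Minimal sketch: binary (wheat / non-wheat) evaluation metrics from the
# pixel-level confusion counts, as reported in the abstract.
import numpy as np

def binary_metrics(pred, truth):
    """pred, truth: boolean arrays where True marks winter-wheat pixels."""
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, accuracy, f1

# Usage with synthetic masks: flip ~7% of the reference pixels as "errors".
rng = np.random.default_rng(1)
truth = rng.random((512, 512)) > 0.5
pred = truth ^ (rng.random((512, 512)) > 0.93)
print(binary_metrics(pred, truth))
```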