Abstract: Remote sensing has become one of the most important means of interpreting land cover types in recent years. However, the scattered distribution of land use types, fragmented agricultural landscapes, and complex crop planting structures in most agricultural areas of China remain a great challenge. Previous work has mostly relied on a single data source and a few types of features, so there is a strong demand for classifying land use types accurately and efficiently in order to reduce the misclassification found in earlier studies. In this study, an accurate and rapid classification was carried out on an image collection in the Google Earth Engine (GEE) environment using multiple remote sensing data sources and multiple features. Taking the Nuomuhong area, Qinghai Province, China as the study area, a distribution map of the main land cover types was produced to support the evaluation of agricultural planting and ecological security. The data sources were Sentinel-1 synthetic aperture radar (SAR) data and Sentinel-2 and Gaofen-2 (GF-2) multispectral data. Spectral band, vegetation index, texture, and polarization features were calculated to construct the classification feature space. Feature optimization and the Random Forest (RF) algorithm were adopted for supervised classification, in which the redundancy among the multiple features was reduced to improve the computational efficiency of the classifier. The performance of the constructed multi-feature sets was evaluated in the collaborative classification of the multi-source data. The results show that the overall accuracy and Kappa coefficient reached 97.62% and 0.9716, respectively, for the collaborative classification using band, vegetation index, and texture features from the Sentinel-1 and Sentinel-2 data, which was higher than the accuracy obtained with a single data source or a subset of the features. In a comparative classification, the overall accuracy was 95.91% and the Kappa coefficient was 0.9511. When the band, vegetation index, texture, and polarization features extracted from Sentinel-1, Sentinel-2, and GF-2 data were used together for ground object classification, the overall accuracy and Kappa coefficient reached 96.67% and 0.9602, respectively. In general, collaborative classification with multiple data sources and multiple features achieved higher accuracy than classification with a single data source or fewer features. Texture features also affected the classification differently for images with different spatial resolutions, so texture features should be extracted from imagery at an appropriate resolution to obtain optimal results. Consequently, combining multi-source data and multiple features on a cloud platform can be widely applied to ground object classification and can effectively improve the classification accuracy of crops. Accurate crop extraction can support decision-making on crop pattern change, yield estimation, and food security early warning.
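
To illustrate the workflow summarized above, the following is a minimal sketch in the Google Earth Engine Python API. It assumes a hypothetical study-area geometry (aoi) and a hypothetical labeled sample asset (samples) with a 'landcover' property; the dataset IDs, band choices, date range, and classifier parameters are illustrative and not the authors' exact configuration, and the feature-optimization step is omitted.

```python
import ee

ee.Initialize()

# Hypothetical study-area rectangle and training samples (placeholders).
aoi = ee.Geometry.Rectangle([96.2, 36.3, 96.6, 36.6])
samples = ee.FeatureCollection('users/example/nuomuhong_samples')  # hypothetical asset

# Sentinel-2 surface reflectance composite: spectral bands and NDVI.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(aoi)
      .filterDate('2020-05-01', '2020-09-30')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median()
      .clip(aoi))
ndvi = s2.normalizedDifference(['B8', 'B4']).rename('NDVI')

# GLCM texture features from the near-infrared band (requires an integer image).
texture = (s2.select('B8').toInt()
           .glcmTexture(size=3)
           .select(['B8_contrast', 'B8_ent', 'B8_corr']))

# Sentinel-1 dual-polarization (VV/VH) backscatter composite.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2020-05-01', '2020-09-30')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .select(['VV', 'VH'])
      .median()
      .clip(aoi))

# Stack band, vegetation-index, texture, and polarization features.
stack = (s2.select(['B2', 'B3', 'B4', 'B8'])
         .addBands(ndvi)
         .addBands(texture)
         .addBands(s1))
bands = stack.bandNames()

# Sample the feature stack at the labeled points and train a Random Forest.
training = stack.sampleRegions(collection=samples,
                               properties=['landcover'],
                               scale=10)
rf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty='landcover', inputProperties=bands)

# Classify the feature stack to obtain the land cover map.
classified = stack.classify(rf)
```

In practice, accuracy assessment (overall accuracy and Kappa coefficient) would be computed from an independent validation sample with an error matrix, and the feature set would be pruned before training to reduce redundancy, as described in the study.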