Dai Jianguo, Zhang Guoshun, Guo Peng, Zeng Tiaojun, Cui Meina, Xue Jinli. Classification method of main crops in northern Xinjiang based on UAV visible waveband images[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(18): 122-129. DOI: 10.11975/j.issn.1002-6819.2018.18.015


    Classification method of main crops in northern Xinjiang based on UAV visible waveband images

    • Abstract: Accurate classification of crop types is the basis of field operations and management. Using visible-light images acquired in a UAV remote sensing experiment, this paper applied color space conversion and texture filtering to construct 27 texture and low-pass filter features of hue, saturation, and intensity. The ReliefF-Pearson feature reduction method was then used to remove redundant features with weak discriminative ability and high correlation. Finally, classification models were trained on the selected features, and their accuracy was compared and validated against manual classification results. The results show that the selected features H-CLP, H-Ent, I-Cor, I-CLP, I-Ent, S-CLP, and I-Var are the optimal features for classifying the main crops of northern Xinjiang from visible-light images; they fully characterize the imagery while reducing data redundancy. The support vector machine (SVM) classifier achieved the highest accuracy, with an overall classification accuracy of 83.77%, followed by the ANN and KNN classifiers. Pixel-level crop classification in the verification area confirmed that the SVM method performed best, with classification accuracies above 80% for cotton, corn, alfalfa, and marrow. This study provides a reference for surveying crop planting information from UAV visible-light imagery.
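    The feature-construction step summarized above (color space conversion followed by gray-level co-occurrence matrix texture filtering and low-pass filtering) can be sketched as follows. This is a minimal Python illustration, not the authors' code: the input file name, the 32-level quantization, the HSV color space standing in for the paper's hue/saturation/intensity channels, and the whole-image GLCM (the paper applies texture filtering per pixel with a moving window) are all assumptions.

        # Sketch: build color-texture features from a UAV RGB image.
        # Assumptions: hypothetical file name, HSV as a stand-in for HSI,
        # 32-level quantization, one GLCM per channel instead of a moving window.
        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage import io, color
        from skimage.feature import graycomatrix, graycoprops

        rgb = io.imread("uav_visible.tif")[:, :, :3] / 255.0   # hypothetical input image

        hsv = color.rgb2hsv(rgb)                               # approximation of the HSI channels
        channels = {"H": hsv[..., 0], "S": hsv[..., 1], "I": hsv[..., 2]}

        features = {}
        for name, band in channels.items():
            levels = 32                                        # quantization level (assumed)
            q = np.clip((band * (levels - 1)).astype(np.uint8), 0, levels - 1)
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                                symmetric=True, normed=True)
            for prop in ("correlation", "homogeneity", "dissimilarity", "contrast"):
                features[f"{name}-{prop}"] = graycoprops(glcm, prop)[0, 0]
            p = glcm[:, :, 0, 0]
            features[f"{name}-Ent"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # GLCM entropy
            features[f"{name}-Var"] = np.var(q.astype(float))                # variance
            # "Convolution low pass" layer: a simple mean (low-pass) filter of the band.
            features[f"{name}-CLP"] = uniform_filter(band, size=3)

        print({k: v for k, v in features.items() if np.isscalar(v)})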

       

      Abstract: Northern Xinjiang is a major agricultural production base in China, and determining the cropping structure of its main crops is important for food security. Accurate acquisition of field crop planting information is the basis of agricultural production and the key to accurate yield estimation. This paper selected farmland of the 8th Division of the Xinjiang Production and Construction Corps as the experimental and verification regions. In August 2017, a CW-20 fixed-wing UAV carrying a Parrot Sequoia camera captured visible-light images of the area. The red, green, and blue bands were used for color space conversion and texture filtering, the texture and structural features of the images were explored in depth, and the automatic extraction of crop planting information was realized. First, color space transformation and gray-level co-occurrence matrix texture filtering were used to obtain 27 texture and low-pass filter features of hue, saturation, and brightness. Second, the classification weights and correlation coefficients of the 24 texture features and 3 low-pass features were calculated with the ReliefF-Pearson feature reduction method, and redundant features with weak classification ability and high correlation were rejected. Hue convolution low pass, hue entropy, intensity correlation, hue homogeneity, intensity convolution low pass, intensity entropy, intensity homogeneity, intensity dissimilarity, saturation convolution low pass, saturation correlation, and intensity variance were the optimal features for visible waveband image classification. Based on these optimal features, K-nearest neighbor (KNN), support vector machine (SVM), Naive Bayes (NB), artificial neural network (ANN), and decision tree (C5.0) classification algorithms were used to train model parameters and assess the classification effect. All methods achieved satisfactory overall classification, with accuracies above 80%. Among them, the SVM method had the highest accuracy, reaching 83.77% on the testing set, followed by the ANN and KNN methods. Finally, the SVM, KNN, and ANN methods were used to perform pixel-level supervised classification of the images in the experimental and verification regions, and the classification results were evaluated against visual interpretation maps. The results show that the SVM, KNN, and ANN methods classify the crop categories satisfactorily and preserve field boundaries relatively completely, but their classification of non-crop categories is unsatisfactory. According to the confusion matrix, the SVM method had the highest classification accuracy, with an overall accuracy of 80.74% and a Kappa coefficient of 0.75. The classification accuracies of cotton, corn, marrow, and alfalfa were 82.78%, 85.49%, 92.65%, and 80.84%, respectively, while those of the trees and other categories were only 64.58% and 59.28%, respectively. The overall accuracies of the KNN and ANN methods were above 74%. In summary, based on the optimal color-texture features selected by ReliefF-Pearson, the SVM classification method is the most stable and reliable for crop classification and can be used to classify the main crops in northern Xinjiang. However, all methods operate at the pixel level; because the canopy structure of the same crop varies, some internal areas may be misclassified, and object-oriented segmentation could improve the classification accuracy. In addition, different crops differ in growth period, and the choice of identification time strongly affects the accuracy of each category; combining crop phenology information with time-series images could yield better classification results. The method presented here is applicable to obtaining planting information of the main crops in northern Xinjiang from UAV (unmanned aerial vehicle) visible waveband images and provides a methodological reference for large-scale crop planting information surveys.
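      The feature-selection and classification steps described above (ReliefF weighting, Pearson-correlation pruning, SVM training, confusion-matrix and Kappa evaluation) can be sketched as follows. This is a simplified illustration with synthetic placeholder data: the single-neighbor ReliefF variant, the 0.9 correlation threshold, the number of retained features, and the SVM hyperparameters are assumptions and do not reproduce the authors' implementation.

        # Sketch: ReliefF-Pearson feature selection followed by SVM classification.
        # Data are synthetic placeholders; thresholds and hyperparameters are assumed.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, confusion_matrix, cohen_kappa_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 27))        # 27 color-texture features (placeholder)
        y = rng.integers(0, 6, size=600)      # 6 land-cover classes (placeholder)

        def relieff_weights(X, y, n_samples=200):
            """Simplified ReliefF: one nearest hit / nearest miss per sampled instance."""
            span = X.max(axis=0) - X.min(axis=0) + 1e-12
            w = np.zeros(X.shape[1])
            for i in rng.choice(len(X), size=n_samples, replace=False):
                d = np.linalg.norm(X - X[i], axis=1)
                d[i] = np.inf
                hit = np.argmin(np.where(y == y[i], d, np.inf))
                miss = np.argmin(np.where(y != y[i], d, np.inf))
                w -= np.abs(X[i] - X[hit]) / span / n_samples    # penalize within-class differences
                w += np.abs(X[i] - X[miss]) / span / n_samples   # reward between-class differences
            return w

        # Rank features by ReliefF weight, then drop any feature whose Pearson
        # correlation with an already-kept, higher-ranked feature is too high.
        weights = relieff_weights(X, y)
        corr = np.abs(np.corrcoef(X, rowvar=False))
        kept = []
        for f in np.argsort(weights)[::-1]:
            if all(corr[f, k] < 0.9 for k in kept):    # 0.9 threshold is an assumption
                kept.append(f)
        kept = kept[:7]                                # retain a small optimal subset

        # Train an SVM on the selected features and evaluate with a confusion matrix.
        Xtr, Xte, ytr, yte = train_test_split(X[:, kept], y, test_size=0.3, random_state=0)
        svm = SVC(kernel="rbf", C=10, gamma="scale").fit(Xtr, ytr)
        pred = svm.predict(Xte)
        print("overall accuracy:", accuracy_score(yte, pred))
        print("kappa:", cohen_kappa_score(yte, pred))
        print(confusion_matrix(yte, pred))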

       
