Self-adaptive segmentation method of cotton in natural scene by combining improved Otsu with ELM algorithm
Abstract
In order to segment cotton from the background accurately and quickly in unstructured natural environments, an improved scene-adaptive optimization segmentation algorithm was proposed. Firstly, the traditional Otsu algorithm was applied iteratively for cotton segmentation by analyzing the gray-histogram distribution of the acquired images. After each segmentation pass, the background pixels were set to zero and the gray-histogram distribution of the remaining target pixels was analyzed. The iteration stopped when the histogram showed a steep unimodal distribution, and the final segmentation threshold was taken as the optimized threshold. Because some background pixels remained in the results of the first step, secondly, a threshold selection rule based on a statistical analysis of the RGB (red, green, blue) values was used to separate cotton from the background, and cotton pixels were extracted after morphological processing. Thirdly, cotton pixels, represented by their RGB values, were labeled as positive samples, and an equal number of background pixels were labeled as negative samples; feature vectors shared by the positive and negative samples were removed to improve accuracy. These steps produced a large-scale set of training samples automatically. In the fourth step, the samples were used to train an ELM (extreme learning machine) classification model for cotton segmentation. In the fifth step, the RGB values of all pixels in the test images were normalized and passed to the classification model, whose output was 1 for pixels belonging to cotton and 0 for pixels belonging to the background. A further morphological procedure was employed to remove noise segments from the output. In the final step, the centroids of the connected regions with a roundness greater than 0.3 were extracted.
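The iterative Otsu refinement in the first step can be sketched as follows. This Python/NumPy sketch is illustrative only: the unimodality test (a secondary-peak heuristic) and its parameters are assumptions standing in for the paper's histogram analysis, not the authors' exact criteria.

```python
import numpy as np

def otsu_threshold(vals):
    """Classic Otsu on a 1-D array of gray levels: pick the threshold that
    maximizes the between-class variance of the 256-bin histogram."""
    hist = np.bincount(vals, minlength=256).astype(float)
    probs = hist / hist.sum()
    omega = np.cumsum(probs)                     # class-0 probability
    mu = np.cumsum(probs * np.arange(256))       # class-0 first moment
    mu_t = mu[-1]                                # global mean gray level
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                   # undefined splits are ignored
    sigma_b = (mu_t * omega - mu) ** 2 / denom   # between-class variance
    return int(np.nanargmax(sigma_b))

def is_steep_unimodal(vals, window=20, ratio=0.3, smooth=5):
    """Heuristic stopping test (an assumption): the smoothed histogram is
    'steeply unimodal' if no secondary peak outside the main peak's window
    reaches `ratio` of the main peak's height."""
    hist = np.bincount(vals, minlength=256).astype(float)
    h = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    m = int(np.argmax(h))
    rest = h.copy()
    rest[max(0, m - window):m + window + 1] = 0  # mask out the main peak
    return rest.max() < ratio * h[m]

def iterative_otsu(gray, max_iter=10):
    """Re-apply Otsu to the surviving foreground, zeroing background pixels
    after each pass, until the remaining histogram is steeply unimodal."""
    work = gray.copy()
    t = otsu_threshold(work.ravel())
    for _ in range(max_iter):
        work[work <= t] = 0                      # suppress background pixels
        fg = work[work > 0]
        if fg.size == 0 or is_steep_unimodal(fg):
            break                                # unimodal foreground: stop
        t = otsu_threshold(fg)
    return t
```

Each pass discards the darkest remaining class, so the threshold climbs toward the boundary of the brightest (cotton) mode instead of stopping at the first background/vegetation split.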
If the pixel distance between two centroids was less than 40 pixels, a new centroid was placed at their midpoint, so that boll parts separated by a leaf were reconnected. In this way, unsupervised sampling and segmentation of cotton under unstructured lighting conditions were achieved, and the coordinates of the cotton bolls were derived. The proposed algorithm was verified by a cotton segmentation experiment. The original datasets were collected on both sunny and cloudy days at the cotton breeding station in Sanya, Hainan, China. The training datasets consisted of labeled samples generated from 20 images, 10 taken on a sunny day and 10 on a cloudy day. Another 60 images, 30 from a sunny day and 30 from a cloudy day, constituted the testing datasets. For each image in the testing datasets, accuracy was determined by comparing the number of identified cotton bolls with the actual number. With the proposed method, the cotton was identified after segmentation; the average processing time was 0.58 s, and the average recognition rates for sunny-day and cloudy-day images were 94.18% and 97.56%, respectively. A comparative analysis was conducted between the proposed method and popular methods under Windows 10 (64-bit) with an Intel Core i7-7400 CPU at 3.00 GHz, 12 GB RAM, and MATLAB R2016a. For the segmentation of sunny-day images, the accuracies of the proposed method, SVM (support vector machine), and BP (back propagation) with only RGB features as training samples were 94.57%, 77% and 83.35%, respectively, with segmentation times of 0.58, 177.31, and 6.52 s. With Tamura texture and RGB features as training samples, the accuracies of the proposed method, SVM, and BP were 75.68%, 80.54%, and 86.17%, respectively, with segmentation times of 467.87, 678.35, and 551.01 s. The segmentation results for cloudy-day images were similar to those for sunny days, with slightly higher accuracy.
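The centroid-merging rule described above (any two centroids closer than 40 pixels collapse to their midpoint) can be sketched as a small helper. The greedy pairwise loop below is a hypothetical implementation; the paper does not specify a merge order.

```python
import math

def merge_centroids(centroids, min_dist=40.0):
    """Repeatedly merge any pair of centroids closer than `min_dist` pixels
    into their midpoint, so boll fragments split by a leaf collapse into a
    single detection. Greedy order is an illustrative assumption."""
    pts = [tuple(map(float, c)) for c in centroids]
    changed = True
    while changed:
        changed = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if math.dist(pts[i], pts[j]) < min_dist:
                    mid = ((pts[i][0] + pts[j][0]) / 2.0,
                           (pts[i][1] + pts[j][1]) / 2.0)
                    # drop the pair, keep its midpoint, and rescan
                    pts = [p for k, p in enumerate(pts) if k not in (i, j)]
                    pts.append(mid)
                    changed = True
                    break
            if changed:
                break
    return pts
```

For example, centroids at (10, 10) and (30, 10) are 20 pixels apart and merge into (20, 10), while a centroid at (200, 200) is left untouched.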
These results indicate that the proposed method outperforms popular methods in both segmentation speed and accuracy. Moreover, the proposed method avoids extracting texture features for every pixel, which ensures better real-time performance.
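For reference, the core of an ELM classifier of the kind trained in the fourth step fits in a few lines: random input weights and biases, a sigmoid hidden layer, and output weights solved in closed form by least squares. The class below is a minimal NumPy sketch under assumed settings (hidden-layer size, activation, class name), not the authors' implementation.

```python
import numpy as np

class SimpleELM:
    """Minimal extreme learning machine for binary pixel classification."""

    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Random, untrained input-to-hidden weights (the defining ELM trait).
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden layer
        # Output weights via the Moore-Penrose pseudoinverse (least squares).
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return (H @ self.beta > 0.5).astype(int)  # 1 = cotton, 0 = background
```

Because only the output weights are fitted, and in closed form, training is fast, which is consistent with the short segmentation times reported above.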