Zhang Zhibin, Zhao Shuailing, Luo Xiwen, Wei Fengqi. Matching method of green crops based on SURF feature extraction[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2015, 31(14): 172-178. DOI: 10.11975/j.issn.1002-6819.2015.14.024

    Matching method of green crops based on SURF feature extraction

    • Abstract: Three-dimensional (3D) reconstruction of crop images and binocular vision guidance of agricultural robots remain active research topics, in which feature extraction and matching of green crops is a key technology for monitoring crop growth or providing 3D guidance information for agricultural robots, because it directly affects the accuracy of the 3D crop information obtained. This paper proposes a feature extraction and matching method for green crops, focusing on the crop regions of an image that carry sufficient crop information. First, a pre-treatment in RGB (red, green and blue) color space segments the green crop from the background. A morphological opening with an optimal structuring-element size then filters noise such as isolated points and small regions caused by weeds, small stones, shadows and residue. Considering field conditions, SURF is adopted to obtain feature points because of its repeatability, distinctiveness and robustness at acceptable computational cost: its detector and descriptor are scale- and rotation-invariant, and the descriptor is 64-dimensional. Feature points are obtained in 2 steps: first, the Hessian matrix is used to detect candidate points, non-maxima suppression searches for the extreme points, and interpolation localizes them; second, the descriptors are computed with respect to a dominant orientation, which is the main factor of the rotation invariance. Finally, the Euclidean distance between candidate points in the left and right images measures the similarity of each candidate pair, and the ratio of the nearest distance to the second-nearest distance determines whether a pair is a match: if the ratio is smaller than the set threshold, the match is accepted; otherwise it is rejected. To further improve accuracy, a complete constraint matching is applied to suppress wrong matches. The constraint consists of 3 components: first, a local epipolar line constraint, which requires the matching points to lie within a certain region; second, a left-right constraint, which requires the x coordinate of a point in the left image to be larger than that of its corresponding point in the right image; third, a uniqueness constraint, by which a point in the left image may match only one point in the right image. The experimental results show that with each of the 3 constraints used alone, matching accuracy decreases as the ratio threshold rises; when the complete constraint matching is applied, the accuracy varies only slightly. After the constraint procedure, the corresponding pairs are sorted by their descriptor distances; the smaller the distance, the higher the probability of a correct match. Thirteen groups of images of celery, green vegetables and cabbage under various illumination conditions were used to test the algorithm. The experimental results show that SURF is superior to SIFT and can be used to obtain 3D crop information for agricultural machinery vision systems. The mean feature-point extraction rates of SURF and SIFT are 1.2% and 3.3%, respectively; their mean matching accuracies are 94.8% and 92.4%, respectively; and their time consumption is 4.6 s and 4.8 s, respectively.
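    The abstract does not spell out the exact RGB-space segmentation rule, so the sketch below assumes the commonly used excess-green index (ExG = 2G - R - B) with Otsu thresholding, followed by the morphological opening described for noise removal; the kernel size and the function name segment_green_crop are illustrative choices, not taken from the paper.

    import cv2
    import numpy as np

    def segment_green_crop(bgr, kernel_size=5):
        """Return a binary mask of the green crop region in a BGR image."""
        b, g, r = cv2.split(bgr.astype(np.float32))
        exg = 2.0 * g - r - b                      # excess-green index (assumed choice)
        exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Opening removes isolated points and small regions caused by weeds,
        # small stones, shadows and residue, as described in the abstract.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                           (kernel_size, kernel_size))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)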
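    The SURF detection and the nearest/second-nearest ratio test can be sketched with OpenCV as follows. Note that SURF lives in the opencv-contrib xfeatures2d module and is only available in builds with the non-free algorithms enabled; the Hessian threshold of 400 and the ratio of 0.7 are assumed illustrative values, not the settings used in the paper.

    import cv2

    def surf_ratio_match(gray_left, gray_right, ratio=0.7, hessian_threshold=400):
        """Detect SURF keypoints (Hessian detector, 64-D descriptors) and keep
        matches passing the nearest/second-nearest ratio test on Euclidean distance."""
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
        kp_l, des_l = surf.detectAndCompute(gray_left, None)
        kp_r, des_r = surf.detectAndCompute(gray_right, None)

        # Brute-force matcher with L2 (Euclidean) distance, two nearest neighbours.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(des_l, des_r, k=2)

        good = []
        for pair in knn:
            if len(pair) < 2:
                continue
            m, n = pair
            # Accept only when the nearest distance is clearly smaller than the
            # second-nearest; raising the ratio admits more (and noisier) matches.
            if m.distance < ratio * n.distance:
                good.append(m)
        return kp_l, kp_r, good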
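    A minimal sketch of the three-part constraint and the final distance-based sorting, applied to the ratio-test matches above. The 5-pixel epipolar band and the reading of the uniqueness constraint as a one-to-one filter over right-image points are assumptions made for illustration, not details given in the abstract.

    def constrain_matches(kp_l, kp_r, matches, epipolar_tol=5.0):
        """Filter DMatch objects with epipolar, left-right and uniqueness
        constraints, then sort survivors by descriptor distance (the smaller the
        distance, the higher the probability of a correct match)."""
        best_for_right = {}
        for m in matches:
            xl, yl = kp_l[m.queryIdx].pt
            xr, yr = kp_r[m.trainIdx].pt
            if abs(yl - yr) > epipolar_tol:    # local epipolar line constraint
                continue
            if xl <= xr:                       # left-right (positive disparity) constraint
                continue
            # Uniqueness: keep only the closest claimant of each right-image point.
            prev = best_for_right.get(m.trainIdx)
            if prev is None or m.distance < prev.distance:
                best_for_right[m.trainIdx] = m
        return sorted(best_for_right.values(), key=lambda m: m.distance)

    A call such as constrain_matches(*surf_ratio_match(left_gray, right_gray)) would then yield corresponding pairs ordered so that the most reliable matches come first.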