Fruit tree row recognition with dynamic layer fusion and texture-gray gradient energy model
Abstract
Row-following operation has become one of the key technologies for smart orchard management, and machine vision is widely expected to recognize the fruit tree rows. However, the complex orchard environment, such as variable lighting conditions and occluding branches, poses a great challenge to image acquisition, and both color image processing and depth information analysis of orchard images suffer from interference. In this study, a fruit tree row recognition method was established to concurrently capture the texture features and depth information of the orchard image using machine vision. The method consisted of two parts, color-depth image layer fusion and a texture-gray gradient energy model, in order to improve the image segmentation that combines the color image with the depth image. Firstly, a binocular stereo vision system was built and calibrated to acquire the orchard color image and depth image at the same time, according to the orchard planting features and row recognition requirements. The gray value of the saturation (S) channel of the HSV color space was used as a dynamic fusion factor for the layer fusion of the orchard color and depth images, in order to avoid the loss of texture features, color, or depth information caused by layer fusion with a static fusion factor. Secondly, the texture and gray gradient features were calculated in the fused orchard image. A comparative analysis found that there was less interference in the trunk regions; more importantly, the trunk regions showed an outstanding gray gradient without a texture response. An energy model was then established to combine the texture and gray gradient features, and the trunk regions and the background in the layer-fused orchard image were easily segmented at the minimum energy of the model. The trunk region segmentation was thereby completed with the layer fusion and the energy model. The intersection points of the trunks and the ground were taken as the characteristic points for row line fitting, and the fruit tree row lines were fitted to the detected characteristic points using the Least Squares Method (LSM). Finally, a series of experiments was carried out to evaluate the accuracy of the model, including static fruit tree row recognition and machine-vision navigation of a mobile platform. Six conditions were selected for image acquisition to verify the adaptability to different illumination conditions and driving directions, covering front, back, and side lighting with strong or weak illumination. The image processing results showed that the fruit tree row angle recognition accuracy of the model was higher than that of the texture or gray gradient feature alone. The maximum and average deviations of the fruit tree row angle between the identified and the actual values were 5.56° and 2.81°, respectively; compared with the row recognition algorithms based on the texture feature and the gray gradient feature, the average deviation of the proposed algorithm was reduced by 2.37° and 1.25°, respectively. Row-following driving experiments were also carried out several times in actual orchards during the summer full-leaf and autumn defoliation periods.
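As an illustration of the dynamic layer fusion step, the sketch below (not the authors' implementation) shows how the saturation channel of the HSV color space could serve as a per-pixel fusion factor between the gray layer of the color image and a normalized depth layer. The function name fuse_color_depth, the use of OpenCV/NumPy, and the normalization choices are assumptions made here for illustration only.

import cv2
import numpy as np

def fuse_color_depth(color_bgr, depth):
    """Blend a color image with an aligned depth map, using the HSV
    saturation (S) channel as a per-pixel (dynamic) fusion factor.
    Minimal sketch only; the exact weighting in the paper may differ."""
    # Saturation in [0, 1] acts as the dynamic fusion factor
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float32) / 255.0

    # Normalize both layers to [0, 1] so they can be blended directly
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-6)

    # Well-saturated pixels lean on the color layer; weakly saturated
    # pixels (e.g. washed-out or shadowed areas) lean on the depth layer
    fused = s * gray + (1.0 - s) * d
    return (fused * 255.0).astype(np.uint8)

A typical call would pass the rectified left image and the disparity-derived depth map produced by the binocular system described above.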
The machine-vision navigation tests on the mobile platform showed that the maximum and average deviations between the driving track and the center line of the orchard row were 12.2 cm and 5.94 cm, respectively, at a platform speed of 0.6 m/s. Consequently, the machine-vision navigation can fully meet the navigation requirements of the orchard mobile platform. The findings can lay a sound foundation for optimizing orchard automatic navigation systems using machine vision.
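The texture-gray gradient energy segmentation and the least-squares row-line fit described above can likewise be sketched in a toy form. The abstract does not give the actual energy formulation, so this version uses the local standard deviation as a stand-in texture term and the Sobel magnitude as the gray gradient term, and assigns low energy where the texture is weak but the gradient is strong, which is the combination attributed to trunk regions above; the weights, the quantile threshold, and the way intersection points are picked are illustrative assumptions.

import cv2
import numpy as np

def segment_trunks_and_fit_row(fused, w_texture=1.0, w_gradient=1.0, quantile=0.1):
    """Toy energy-based trunk segmentation plus LSM row-line fit on the
    fused image; the real energy model and thresholds may differ."""
    f = fused.astype(np.float32)

    # Texture term: local standard deviation (low inside smooth trunk regions)
    mean = cv2.blur(f, (9, 9))
    sq_mean = cv2.blur(f * f, (9, 9))
    texture = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

    # Gray gradient term: Sobel magnitude (high at trunk boundaries)
    gx = cv2.Sobel(f, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(f, cv2.CV_32F, 0, 1, ksize=3)
    gradient = cv2.magnitude(gx, gy)

    # Energy is low where the texture is weak and the gradient is strong
    energy = (w_texture * texture / (texture.max() + 1e-6)
              - w_gradient * gradient / (gradient.max() + 1e-6))

    # Keep the lowest-energy pixels as trunk candidates
    mask = (energy <= np.quantile(energy, quantile)).astype(np.uint8)

    # Trunk-ground intersection points: the lowest pixel of each component
    n_labels, labels = cv2.connectedComponents(mask)
    points = []
    for k in range(1, n_labels):
        ys, xs = np.nonzero(labels == k)
        i = np.argmax(ys)  # image rows grow downward, so max y is the bottom
        points.append((xs[i], ys[i]))

    # Least Squares Method (LSM) fit of the row line through those points
    pts = np.array(points, dtype=np.float32)
    slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return mask, pts, (slope, intercept)

In a navigation pipeline, the fitted slope and intercept would then be converted to the row heading angle and lateral offset used to steer the mobile platform.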