Luo Lufeng, Zou Xiangjun, Cheng Tangcan, Yang Zishang, Zhang Cong, Mo Yuda. Design of virtual test system based on hardware-in-loop for picking robot vision localization and behavior control[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(4): 39-46. DOI: 10.11975/j.issn.1002-6819.2017.04.006

    Design of virtual test system based on hardware-in-loop for picking robot vision localization and behavior control

    • Abstract: In the process of developing a picking robot prototype, picking tests are traditionally performed in the orchard, where they are constrained by factors such as the harvesting season, weather conditions, and the test site. As a result, the vision and control algorithms designed for picking robots cannot be verified effectively or in a timely manner, and the prototype development cycle is prolonged. To test the vision and control algorithms of a picking robot, a hardware-in-the-loop virtual experimental system based on binocular stereo vision was designed in this paper for a grape-picking robot; it was composed of hardware and software units. The hardware units consisted of a binocular camera, grape clusters, imitation grape leaves and stems, a support structure for the grape clusters with its guide rail, a calibration board, and so on. The software units included a vision processing module and a virtual picking robot. Firstly, spatial information such as the picking point and the anti-collision bounding volume of the grape cluster was extracted by binocular stereo vision. The picking point on the peduncle of the grape cluster was detected using a minimum-distance constraint between the barycenter of the grape cluster's pixel region and the lines detected in the ROI (region of interest) of the peduncle. The anti-collision bounding volume of the grape cluster was calculated by transforming the spatial coordinates of the picking point and all detected grape berries into the coordinate system of the grape cluster. Secondly, three-dimensional models of the picking robot were constructed from the existing 6-degree-of-freedom prototype in our laboratory. The Denavit-Hartenberg (D-H) method was adopted to establish the robot coordinate transformations. The forward and inverse kinematics were solved using the inverse transformation method, and a unique inverse solution was obtained. Thirdly, the motion path of the picking robot was planned based on artificial potential field theory. Collisions between the robot manipulator and the grape clusters in the virtual environment were detected using a hierarchical bounding box algorithm, which verified the feasibility of the planned path. The motion simulation of the virtual picking robot was implemented by combining modular programming with a routing communication mechanism. Finally, the spatial information of the grape clusters was extracted by application code written in Visual C++ with OpenCV (open source computer vision library), and the path planning and motion simulation of the virtual picking robot were performed on the virtual reality platform EON with Visual C++ and JavaScript. The hardware-in-the-loop virtual experimental platform was established by combining binocular stereo vision with the virtual picking robot. On this platform, 34 tests were performed in the laboratory by changing the position of the grape clusters while the binocular cameras remained fixed. Each test included three steps: visual localization, path planning, and the clamping and cutting operation. Visual localization succeeded in 29 of the tests and failed in 5; of the 5 failures, 2 were caused by incorrect picking point detection and 3 by failed stereo matching of the picking point. One test failed in path planning although the grape cluster had been located correctly, and the clamping and cutting operation ran smoothly in every test for which an anti-collision path was planned successfully. In general, the success rates of visual localization, path planning, and the clamping and cutting operation were 85.29%, 82.35%, and 82.35%, respectively. The results showed that the method developed in this study can be used to verify and test the visual localization and behavior control algorithms of the picking robot, thereby supporting the development, testing, and continuous improvement of harvesting robots.
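As an illustration of the picking-point detection described in the abstract, the following C++/OpenCV sketch applies a minimum-distance constraint of the kind mentioned there: candidate lines are detected in the peduncle ROI, and the line whose midpoint lies closest to the barycenter of the grape-cluster pixel region is kept, its midpoint serving as the 2-D picking point before stereo matching. The function name `detectPickingPoint`, the Canny/Hough parameters, and the ROI handling are assumptions for illustration, not the authors' implementation.

```cpp
// Minimal sketch (not the authors' code): among the lines detected in the
// peduncle ROI, keep the one whose midpoint is closest to the barycenter of
// the grape-cluster mask, and return that midpoint as the 2-D picking point.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <limits>

cv::Point2f detectPickingPoint(const cv::Mat& gray,          // full grayscale image
                               const cv::Mat& clusterMask,   // binary mask of the grape cluster
                               const cv::Rect& peduncleRoi)  // ROI above the cluster
{
    // Barycenter (centroid) of the grape-cluster pixel region.
    cv::Moments m = cv::moments(clusterMask, /*binaryImage=*/true);
    cv::Point2f barycenter(static_cast<float>(m.m10 / m.m00),
                           static_cast<float>(m.m01 / m.m00));

    // Candidate peduncle lines inside the ROI (parameters are illustrative).
    cv::Mat edges;
    cv::Canny(gray(peduncleRoi), edges, 50, 150);
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 30, 20, 5);

    // Minimum-distance constraint: choose the line closest to the barycenter.
    cv::Point2f best(-1.f, -1.f);
    double bestDist = std::numeric_limits<double>::max();
    for (const cv::Vec4i& l : lines) {
        cv::Point2f mid(peduncleRoi.x + (l[0] + l[2]) * 0.5f,
                        peduncleRoi.y + (l[1] + l[3]) * 0.5f);
        double d = std::hypot(mid.x - barycenter.x, mid.y - barycenter.y);
        if (d < bestDist) { bestDist = d; best = mid; }
    }
    return best;  // (-1, -1) if no line was detected
}
```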
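The robot modeling step relies on Denavit-Hartenberg coordinate transformations. The sketch below only covers the forward part: it chains standard D-H link transforms to obtain the end-effector pose of a 6-DOF arm. The (d, a, alpha) table holds placeholder values, since the prototype's parameters are not given in the abstract, and the inverse solution via the inverse transformation method is omitted.

```cpp
// Minimal sketch (placeholder D-H table): forward kinematics of a 6-DOF arm
// by chaining standard Denavit-Hartenberg link transforms.
#include <opencv2/core.hpp>
#include <array>
#include <cmath>

static const double kPi = 3.14159265358979323846;

// Standard D-H link transform for joint angle theta and link constants (d, a, alpha).
static cv::Matx44d dhTransform(double theta, double d, double a, double alpha) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    return cv::Matx44d(ct, -st * ca,  st * sa, a * ct,
                       st,  ct * ca, -ct * sa, a * st,
                        0,       sa,       ca,      d,
                        0,        0,        0,      1);
}

// End-effector pose T_0^6 for joint angles q[0..5].
// The (d, a, alpha) values below are placeholders, not the prototype's parameters.
cv::Matx44d forwardKinematics(const std::array<double, 6>& q) {
    const double d[6]     = {0.20, 0.00, 0.00, 0.25, 0.00, 0.10};
    const double a[6]     = {0.00, 0.30, 0.05, 0.00, 0.00, 0.00};
    const double alpha[6] = {kPi / 2, 0.0, kPi / 2, -kPi / 2, kPi / 2, 0.0};

    cv::Matx44d T = cv::Matx44d::eye();
    for (int i = 0; i < 6; ++i)
        T = T * dhTransform(q[i], d[i], a[i], alpha[i]);
    return T;
}
```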
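Path planning follows artificial potential field theory. The fragment below is a minimal, generic potential-field step rather than the paper's planner: an attractive force pulls the tool point toward the goal (the picking point), spherical stand-ins for the anti-collision bounding volume repel it within an influence radius, and the position advances one small step along the resulting force. All gains, radii, and the spherical obstacle model are assumptions for illustration.

```cpp
// Minimal sketch of one artificial-potential-field step: attraction toward
// the goal plus repulsion from spherical obstacles within an influence radius.
#include <opencv2/core.hpp>
#include <vector>

struct SphereObstacle { cv::Vec3d center; double radius; };

cv::Vec3d potentialFieldStep(const cv::Vec3d& pos, const cv::Vec3d& goal,
                             const std::vector<SphereObstacle>& obstacles,
                             double kAtt = 1.0, double kRep = 0.05,
                             double influence = 0.30, double stepSize = 0.01)
{
    // Attractive force: proportional to the vector pointing at the goal.
    cv::Vec3d force = kAtt * (goal - pos);

    // Repulsive force from each obstacle whose surface is within 'influence'.
    for (const SphereObstacle& obs : obstacles) {
        cv::Vec3d away = pos - obs.center;
        double dist = cv::norm(away) - obs.radius;   // distance to the surface
        if (dist > 1e-6 && dist < influence) {
            double mag = kRep * (1.0 / dist - 1.0 / influence) / (dist * dist);
            force += (mag / cv::norm(away)) * away;  // push away from obstacle
        }
    }

    // Advance one small step along the resulting force direction.
    double n = cv::norm(force);
    return (n > 1e-9) ? pos + (stepSize / n) * force : pos;
}
```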
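Collision detection between the manipulator and the grape cluster uses a hierarchical bounding box algorithm. A generic sketch of that idea is shown below: an axis-aligned bounding box (AABB) overlap test is applied to a coarse parent box first, and child boxes are examined only when the parent overlaps, so most of the hierarchy is pruned. The node layout and names are illustrative, not the paper's data structure.

```cpp
// Minimal sketch of hierarchical bounding-box collision checking:
// test a coarse AABB first, and only descend to child boxes if it overlaps.
#include <vector>

struct Aabb {
    double min[3], max[3];
    bool overlaps(const Aabb& o) const {
        for (int k = 0; k < 3; ++k)
            if (max[k] < o.min[k] || o.max[k] < min[k]) return false;
        return true;
    }
};

struct BvhNode {
    Aabb box;                      // bounding box of this node
    std::vector<BvhNode> children; // empty for leaf nodes
};

// True if any leaf box of the hierarchy overlaps the query box
// (e.g. a box swept around a manipulator link along the planned path).
bool collides(const BvhNode& node, const Aabb& query) {
    if (!node.box.overlaps(query)) return false;   // prune the whole subtree
    if (node.children.empty()) return true;        // leaf-level overlap
    for (const BvhNode& child : node.children)
        if (collides(child, query)) return true;
    return false;
}
```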