    Luo Lufeng, Zou Xiangjun, Cheng Tangcan, Yang Zishang, Zhang Cong, Mo Yuda. Design of virtual test system based on hardware-in-loop for picking robot vision localization and behavior control[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(4): 39-46. DOI: 10.11975/j.issn.1002-6819.2017.04.006


    Design of virtual test system based on hardware-in-loop for picking robot vision localization and behavior control

    • Abstract: Field experiments with picking robots are easily constrained by factors such as the harvesting season, weather, and test site. To assist in testing the visual localization and behavior-control algorithms of a picking robot, a virtual experiment for a grape-picking robot based on hardware-in-the-loop simulation was designed. First, binocular stereo vision was used to extract spatial information of the grape cluster, such as the picking point and the anti-collision bounding volume. Then, taking the laboratory's existing 6-DOF picking-robot prototype as the model, a three-dimensional virtual simulation model was built; the D-H method was used to establish the robot coordinate transformations, and the forward and inverse kinematics of the robot in the virtual environment were solved. Next, based on the spatial information of the grape cluster extracted by the physical vision system, the visual localization and picking behavior of the robot were simulated and programmed on the virtual-reality platform EON using VC++, JavaScript and other programming languages, yielding a hardware-in-the-loop simulation platform that couples the physical vision system with a virtual picking robot. Finally, 34 virtual trials of the grape-picking robot were carried out on this platform; the success rates of the visual localization, path planning, and stem clamping-and-cutting stages were 85.29%, 82.35%, and 82.35%, respectively. The results show that the method can be effectively applied to verify and test the visual localization and behavior algorithms of picking robots.
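
    For reference, the D-H coordinate transformation mentioned above follows the standard convention: each joint i contributes a homogeneous transform built from the link parameters (θi, di, ai, αi), and the end-effector pose is the product of the six link transforms. This is the generic textbook form only; the specific link parameters of the 6-DOF prototype are not reproduced here. In LaTeX notation:

    {}^{i-1}T_i =
    \begin{bmatrix}
    \cos\theta_i & -\sin\theta_i\cos\alpha_i &  \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
    \sin\theta_i &  \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
    0            &  \sin\alpha_i             &  \cos\alpha_i             & d_i             \\
    0            &  0                        &  0                        & 1
    \end{bmatrix},
    \qquad
    {}^{0}T_6 = \prod_{i=1}^{6} {}^{i-1}T_i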

       

      Abstract: In the process of developing a picking-robot prototype, picking tests are traditionally performed in the orchard, where they are limited by factors such as the harvesting season, weather conditions, and venue. As a result, the vision and control algorithms designed for picking robots cannot be verified effectively or in a timely manner, and the prototype development cycle is prolonged. To test the vision and control algorithms of a picking robot, a hardware-in-the-loop virtual experimental system based on binocular stereo vision was designed for a grape-picking robot in this paper; it was composed of hardware and software units. The hardware units consisted of a binocular camera, grape clusters, imitation grape leaves and stems, a support structure for the grape clusters with its guide rail, a calibration board, and so on. The software units included the vision-processing part and the virtual picking robot. Firstly, spatial information of the grape cluster, such as the picking point and the anti-collision bounding volume, was extracted by binocular stereo vision. The picking point on the peduncle of the grape cluster was detected by using a minimum-distance constraint between the barycentre of the pixel region of the grape cluster and the lines detected in the ROI (region of interest) of the peduncle. The anti-collision bounding volume of the grape cluster was calculated by transforming the spatial coordinates of the picking point and all detected grape berries into the coordinate system of the grape cluster. Secondly, three-dimensional models of the picking robot were constructed according to the 6-degree-of-freedom picking-robot prototype already available in our laboratory. The Denavit-Hartenberg (D-H) method was adopted to establish the robot coordinate transformations. The forward and inverse kinematic solutions of the robot were obtained by using the inverse-transformation method, and a unique inverse solution was determined. Thirdly, the moving path of the picking robot was planned based on artificial potential field theory. Collisions between the robot manipulator and the grape clusters in the virtual environment were detected with a hierarchical bounding-box algorithm, which was used to validate the feasibility of the planned path. The motion simulation of the virtual picking robot was programmed by combining modular programming with a routing communication mechanism. Finally, the spatial information of the grape clusters was extracted by application code written in Visual C++ with OpenCV (open source computer vision library), and the path planning and motion simulation of the virtual picking robot were implemented on the virtual-reality platform EON together with Visual C++ and JavaScript. The hardware-in-the-loop virtual experimental platform was thus established by combining binocular stereo vision with the virtual picking robot. On this platform, 34 tests were performed in the laboratory by changing the position of the grape clusters while the binocular camera remained fixed. Every test included 3 steps: visual localization, path planning, and the clamping-and-cutting operation. Of all the tests, 29 succeeded in visual localization and 5 failed; among the 5 failures, 2 were caused by incorrect picking-point detection and 3 by failed stereo matching of the picking point. One test in which the grape cluster was located correctly failed in path planning, and all clamping-and-cutting operations proceeded smoothly whenever an anti-collision path was planned successfully. In total, the success rates of visual localization, path planning, and the clamping-and-cutting operation were 85.29%, 82.35%, and 82.35%, respectively. The results show that the method developed in this study can be used to verify and test the visual localization and behavior algorithms of picking robots, and thus support the development, testing, and continuous improvement of harvesting robots.
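
      The path-planning step above is based on artificial potential field theory. The C++ sketch below illustrates one gradient step of a generic attractive/repulsive potential field for a point moving toward the picking point while avoiding a single spherical obstacle. It is a minimal illustration under assumed gains and names (potentialFieldStep, kAtt, kRep, rho0, step are all hypothetical), not the implementation used on the EON platform, which must also account for the manipulator links and the grape-cluster bounding volume.

```cpp
// Minimal sketch of one attractive/repulsive potential-field step in 3-D,
// assuming a point "end-effector" and a single spherical obstacle.
// Gains and radii are illustrative, not taken from the paper.
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static double norm(const Vec3& v) { return std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]); }

// One step along the combined force: attraction toward the goal plus
// repulsion from the obstacle when inside its influence radius rho0.
Vec3 potentialFieldStep(const Vec3& q, const Vec3& goal, const Vec3& obstacle,
                        double kAtt = 1.0, double kRep = 0.5,
                        double rho0 = 0.10, double step = 0.01) {
    Vec3 toGoal  = sub(goal, q);      // attractive direction
    Vec3 fromObs = sub(q, obstacle);  // repulsive direction
    double rho   = norm(fromObs);     // distance to the obstacle

    Vec3 force{};
    for (int i = 0; i < 3; ++i) force[i] = kAtt * toGoal[i];
    if (rho < rho0 && rho > 1e-6) {   // repulsion acts only inside the influence radius
        double mag = kRep * (1.0 / rho - 1.0 / rho0) / (rho * rho);
        for (int i = 0; i < 3; ++i) force[i] += mag * fromObs[i] / rho;
    }

    double f = norm(force);
    Vec3 next = q;
    if (f > 1e-9)
        for (int i = 0; i < 3; ++i) next[i] += step * force[i] / f;  // fixed-length step along the force
    return next;
}

int main() {
    Vec3 q{0.0, 0.0, 0.0}, goal{0.3, 0.2, 0.5}, obstacle{0.15, 0.10, 0.25};
    for (int k = 0; k < 200 && norm(sub(goal, q)) > 0.01; ++k)
        q = potentialFieldStep(q, goal, obstacle);
    std::printf("reached (%.3f, %.3f, %.3f)\n", q[0], q[1], q[2]);
    return 0;
}
```

      In a full planner the repulsive term would be evaluated against the whole anti-collision bounding volume and every manipulator link, and the resulting path would still be checked for collisions, as in the hierarchical bounding-box test described in the abstract.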

       
