Abstract:
Robot navigation in edible fungus factories is constrained by narrow aisles, poor GPS signal reception, the sparse spatial distribution of feature points caused by shelf arrangements, and the perception blind spots of single sensors. In particular, a single-line LiDAR cannot scan an entire mushroom rack, leading to incomplete navigation maps, while high navigation accuracy is required because there is little clearance between the robot and the mushroom logs on the racks. In this study, a multi-sensor fusion approach was proposed for robot navigation under the spatial constraints of edible fungus factories. Firstly, an error-state Kalman filter (ESKF) was used to fuse the encoder and inertial measurement unit (IMU) data, in order to improve positioning accuracy and reduce the noise and uncertainty in the data collected from either sensor alone. A dual-LiDAR data fusion was then proposed to combine environmental information from different mounting heights. Secondly, an improved Cartographer-based laser SLAM was used to construct a navigation grid map, and the autonomous navigation framework was realized using Navigation2. During navigation, the behavior tree server continuously switched among the planning, control, and recovery servers, and the resulting velocity command was output to the microcontroller, which controlled the robot's movement. Adaptive Monte Carlo Localization (AMCL) was used for global positioning, the Theta* algorithm was employed in the planning server, and the Dynamic Window Approach (DWA) for local path planning was applied in the control server to guide the robot's movement. Maps constructed using the top LiDAR alone, the bottom LiDAR alone, and the fusion of both LiDARs were compared.
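The encoder/IMU fusion step can be sketched as a minimal 2-D error-state Kalman filter: encoder odometry propagates the nominal state [x, y, θ], an IMU yaw observation corrects the heading, and the estimated error is injected back into the nominal state at each correction. The class name, noise values, and measurement model below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class MiniESKF:
    """Minimal error-state Kalman filter sketch: encoder odometry drives the
    nominal state; an IMU yaw observation corrects the heading error.
    All names and noise values are illustrative, not from the paper."""

    def __init__(self):
        self.x = np.zeros(3)                   # nominal state [x, y, theta]
        self.P = np.eye(3) * 1e-3              # error-state covariance
        self.Q = np.diag([1e-4, 1e-4, 1e-5])   # encoder process noise
        self.R = np.array([[1e-4]])            # IMU yaw measurement noise

    def predict(self, v, w, dt):
        """Propagate the nominal state with encoder velocity v and yaw rate w."""
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
        # Jacobian of the motion model with respect to the error state
        F = np.array([[1, 0, -v * np.sin(th) * dt],
                      [0, 1,  v * np.cos(th) * dt],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def correct(self, yaw_imu):
        """Fuse an absolute yaw observation from the IMU."""
        H = np.array([[0.0, 0.0, 1.0]])           # observe theta only
        innov = np.array([yaw_imu - self.x[2]])   # measurement residual
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        dx = (K @ innov).ravel()                  # estimated error state
        self.x += dx                              # inject error into nominal state
        self.P = (np.eye(3) - K @ H) @ self.P
```

A full ESKF would also track IMU biases and use a quaternion attitude; this sketch keeps only the prediction/correction structure relevant to the fusion described above.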
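One simple way to realize the dual-LiDAR fusion of environmental information from two heights, assuming both planar scans share the same angular resolution and a common frame, is to keep the nearest valid return per bearing so that an obstacle seen at either height appears in the merged scan. The function below is an illustrative sketch, not the authors' fusion method.

```python
import numpy as np

def merge_scans(ranges_top, ranges_bottom, r_max=10.0):
    """Combine two same-resolution planar scans taken at different heights
    into one virtual scan by keeping, per bearing, the nearest return.
    Invalid returns (inf/NaN or beyond r_max) defer to the other sensor."""
    top = np.asarray(ranges_top, dtype=float)
    bot = np.asarray(ranges_bottom, dtype=float)
    top = np.where(np.isfinite(top) & (top <= r_max), top, np.inf)
    bot = np.where(np.isfinite(bot) & (bot <= r_max), bot, np.inf)
    return np.minimum(top, bot)
```

On a bearing where only the top LiDAR sees the rack frame, the merged scan keeps that return; where only the bottom LiDAR sees an obstacle, its return survives instead, so gaps left by either sensor alone are filled.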
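The distinguishing step of Theta* over grid-constrained A* is its any-angle shortcut: a node may connect directly to its parent's parent whenever the straight segment between them is collision-free on the occupancy grid. A minimal Bresenham-style line-of-sight check (a hypothetical helper, not from the paper) can be sketched as:

```python
def line_of_sight(grid, a, b):
    """Bresenham-style check that no occupied cell lies on the segment a-b.
    grid is a 2-D list of 0 (free) / 1 (occupied); a and b are (row, col)."""
    (r0, c0), (r1, c1) = a, b
    dr, dc = abs(r1 - r0), abs(c1 - c0)
    sr = 1 if r1 >= r0 else -1
    sc = 1 if c1 >= c0 else -1
    err = dr - dc
    r, c = r0, c0
    while True:
        if grid[r][c]:            # an occupied cell blocks the segment
            return False
        if (r, c) == (r1, c1):    # reached the endpoint unobstructed
            return True
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r += sr
        if e2 < dr:
            err += dr
            c += sc
```

Calling this check during node expansion lets the planner replace staircase-shaped grid paths with straight segments, which suits the long, narrow aisles between mushroom racks.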
The top LiDAR failed to detect the gaps among the mushroom logs as obstacles, while the bottom LiDAR could not scan the mushroom logs and captured only part of the mushroom rack, so each single LiDAR produced an incomplete map. The dual-LiDAR fusion, by contrast, recognized both the mushroom logs and the mushroom racks, and its obstacle detection rate was 2.07 percentage points higher than that of a single LiDAR. In the positioning accuracy tests, four target points were randomly selected along the longitudinal aisle, and the positioning error was calculated by comparing the robot's coordinates with the true coordinates of the target points. With the fused encoder and IMU data at a robot speed of 0.40 m/s, the maximum longitudinal, lateral, and angular deviations were 5.80 cm, 3.50 cm, and 3.00°, respectively, with standard deviations of less than 1.47 cm, 1.17 cm, and 1.16°, respectively. The cumulative error of the encoder also increased gradually with the longitudinal displacement. In the navigation tests, the average longitudinal, lateral, and heading deviations between the actual and target navigation paths were 2.24 cm, 1.90 cm, and 2.04°, respectively, when the robot navigated at 0.20 m/s; at 0.50 m/s, the average deviations were 4.10 cm, 2.64 cm, and 2.82°, respectively; and at 0.70 m/s, they were 5.78 cm, 3.80 cm, and 4.00°, respectively. Overall, the robot's average longitudinal and lateral deviations were less than 5.78 cm and 3.80 cm, with standard deviations of no more than 1.63 cm and 1.32 cm, respectively, and the average heading deviation was less than 4.00°, with a standard deviation of no more than 0.84°. Both the positioning and navigation accuracies met the requirements of robot operation. These findings can provide technical support for the intelligent development of the edible fungus industry.
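The per-axis accuracy figures above (mean and standard deviation of longitudinal, lateral, and heading deviations against target poses) can be computed as in the following sketch; the pose layout (longitudinal m, lateral m, heading deg) and the function name are assumptions for illustration.

```python
import numpy as np

def deviation_stats(actual, target):
    """Per-axis deviation statistics between actual and target poses.
    Each pose is (longitudinal m, lateral m, heading deg).
    Returns (mean absolute deviation, standard deviation) per axis."""
    d = np.abs(np.asarray(actual, dtype=float) - np.asarray(target, dtype=float))
    return d.mean(axis=0), d.std(axis=0)
```

Applied to the recorded robot poses at each test speed, this yields exactly the mean/standard-deviation pairs reported in the results.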