Abstract:
With the development and wide application of intelligent navigation, various sensors are employed to ensure safety in the vicinity of an intelligent robot while it is moving. Among them, stereo cameras have become prevalent in recent years because of their capability for distance measurement. Owing to the limitations of traditional detection methods in stereo vision, a method based on point clouds was presented to meet the demand for obstacle detection on a robot while planning its path. The point cloud of an environment containing an obstacle was taken as the object, and a validity box was applied to the space to eliminate points in irrelevant regions. From a top view, the area was divided into grids, and the number of points in each grid cell was taken as the point cloud density. Once the point cloud density at different ranges was calculated, a curve fitting the distribution of density, which decreases with range, was obtained. It indicated that obstacles far from the camera produce fewer points than close ones, leading to mismatches and missed recognition of obstacles. To compensate for the sparse point clouds of obstacles at long range, the density was compensated according to the fitted density-range curve. Obstacles could then be recognized by setting a threshold, so their distance could be measured. The specific space occupied by an obstacle was confirmed by setting one more validity box, and its shape and size could therefore be measured after projecting it onto a front view. Experiments were conducted on the campus of Nanjing Agricultural University, Nanjing, Jiangsu Province, and the source code was implemented and run in MATLAB. Experimental results showed that this method could restore the obstacle information in the point cloud effectively; in the distance measurement test, it achieved a maximum detection range of 28 m with an average error of 2.43%. Several experiments in various environments and weather conditions were conducted as well, which indicated robust performance under changing illumination. In the size measurement tests, it achieved a maximum range of 10 m, and the average errors in length and height were 2.59% and 2.01%, respectively. In summary, this article used a point cloud density map and a density-compensation algorithm to measure both the distance and the size of obstacles in the vicinity of the camera. Unlike conventional image processing methods, it converted three-dimensional point cloud data into two dimensions, using the point cloud density and a grid map to significantly reduce the amount of calculation. Additionally, it functioned well under different weather conditions and in both indoor and outdoor environments, showing robustness over traditional image processing methods that separate obstacles from the background. Some deficiencies remain to be addressed: the current method and programming platform are still too time-consuming to fulfill the demands of real-time detection, but this appears to be a fruitful approach for future study in this field.
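The processing pipeline summarized above (validity-box filtering, top-view gridding, density counting, range compensation, and thresholding) can be sketched in a few lines of MATLAB, the platform named in the abstract. This is a minimal illustration under stated assumptions, not the authors' implementation: the box limits, grid resolution, compensation curve (an inverse-square fall-off is assumed here), and threshold are placeholders, and the paper's fitted density-range curve is not reproduced.

```matlab
% Minimal sketch of the top-view grid density map with range compensation.
% All parameters below are illustrative assumptions, not the paper's values.

pts = randn(5000, 3) .* [8 3 0.5] + [12 0 0.8];    % synthetic N-by-3 cloud [x y z], metres

% 1) Validity box: keep only points inside the region of interest.
box = [0 30; -5 5; 0.1 2.5];                       % [min max] per axis (assumed)
in  = pts(:,1) > box(1,1) & pts(:,1) < box(1,2) & ...
      pts(:,2) > box(2,1) & pts(:,2) < box(2,2) & ...
      pts(:,3) > box(3,1) & pts(:,3) < box(3,2);
pts = pts(in, :);

% 2) Top-view grid: count points per cell to build the density map.
cell_sz = 0.2;                                     % grid resolution in metres (assumed)
ix = floor((pts(:,1) - box(1,1)) / cell_sz) + 1;   % cell index along range (x)
iy = floor((pts(:,2) - box(2,1)) / cell_sz) + 1;   % cell index along lateral axis (y)
nx = ceil((box(1,2) - box(1,1)) / cell_sz);
ny = ceil((box(2,2) - box(2,1)) / cell_sz);
density = accumarray([ix iy], 1, [nx ny]);         % points per grid cell

% 3) Range compensation: scale each row by an assumed density-range curve
%    (inverse-square fall-off), then threshold to flag obstacle cells.
row_range = box(1,1) + ((0.5:nx-0.5)') * cell_sz;  % centre range of each grid row
gain      = (row_range / row_range(1)).^2;         % illustrative compensation factor
density_c = density .* gain;                       % compensated density map
obstacle  = density_c > 50;                        % threshold value is assumed

rows = find(any(obstacle, 2));                     % grid rows containing obstacle cells
if ~isempty(rows)
    fprintf('Nearest obstacle at about %.1f m\n', min(row_range(rows)));
end
```

In this sketch the compensation is applied row-wise because range varies along the forward axis of the top-view grid; the connected obstacle cells in the thresholded map would then delimit the second validity box used for the front-view size measurement described in the abstract.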