A Depth Map Generation Method for Path Planning of Mobile Robot

Unmanned mobile robots have recently found broad application in areas such as industrial and security inspection, disinfection and epidemic prevention, warehousing logistics, and agricultural picking. To drive autonomously from departure to destination, an unmanned mobile robot mounts different sensors to collect information about its surroundings and builds an understanding of the environment from these perceptions. Here we propose a method to generate a high-resolution depth map from a given sparse LiDAR point cloud. Our method fits the point cloud with a 3D curved surface and projects the LiDAR data onto that surface; we then make appropriate interpolations on the surface and finally apply the Delaunay triangulation algorithm to all the data points on it. The experimental results show that our approach can effectively improve the resolution of depth maps obtained from sparse LiDAR measurements.


Introduction
Unmanned mobile robots have recently found broad application in areas such as industrial and security inspection, disinfection and epidemic prevention, warehousing logistics, smart factories, and agricultural picking. To drive autonomously from departure to destination, an unmanned mobile robot needs to mount different sensors to collect various kinds of information about its surroundings and to build an understanding of the environment from these perceptions. The camera and Light Detection and Ranging (LiDAR) are two typical sensors: a camera gathers visual information, while LiDAR obtains geometrical information about a target such as its distance, azimuth, altitude, speed, attitude, and shape. A depth map is a gray-level image that encodes the distance from scene objects to the viewpoint in the luminance intensity. Depth maps have broad applications in computer vision and mobile robotics, such as semantic segmentation, 3D modelling and reconstruction, inspection, and navigation [1][6][7]. In practice, a dense depth map is very hard to generate directly from sensing data because the sensing scope is limited. In general, there are three approaches to depth completion. The first relies only on the visual data collected by cameras [2]; the multi-view stereo (MVS) method is adopted to implement the 3D reconstruction. Such visual 3D reconstructions are not robust, because the visual scene often lacks distinctive features and textures. The second is based on LiDAR point clouds [9]. LiDAR measurements directly provide rich physical information for depth completion, so detailed visual information is not necessary; however, this approach struggles to create dense 3D reconstructions because the limited sensing scope of LiDAR yields sparse measurements. As a result, many researchers are interested in the third approach, which fuses visual data with LiDAR measurements [8].
The fusion of vision data and LiDAR measurements requires calibrating the camera and the LiDAR, that is, computing the camera's intrinsic parameters and the camera-LiDAR extrinsic parameters. This calibration is difficult in practice, and if the parameters are incorrect, the visual data cannot be matched with the LiDAR measurements. Here, we propose a method to generate a high-resolution depth map from sparse LiDAR measurements alone. We consider an indoor environment in which the scene objects are rigid, and a Velodyne VLP-16 LiDAR is adopted in the experiments. Our method first fits the point cloud returned by the LiDAR from the rigid objects with a 3D curved surface, and then projects all the LiDAR data onto that surface. Next, interpolations are applied on the surface to obtain more scene data. Finally, the Delaunay triangulation algorithm is applied to all the data points on the 3D curved surface. The experimental results show that the proposed method can produce high-resolution depth maps.

Method to Densify Depth Map from LiDAR Point Cloud
Because the sensing scope of a LiDAR is very limited, the point cloud data it returns is generally sparse rather than dense. Here a method to generate a high-resolution depth map from sparse LiDAR data is proposed. In this study, an indoor environment is considered and the scene objects are assumed to be rigid, which simplifies several processing steps such as curve fitting and triangulation. This assumption is valid for many applications of unmanned mobile robots, such as warehousing logistics and smart factories. In the proposed method, the point cloud returned by the LiDAR from the rigid objects is first fitted with a 3D curved surface in the physical coordinate system, and then all the LiDAR data are projected onto that surface. Next, in order to densify the data, appropriate interpolations are made on the surface to create more scene points. Finally, the Delaunay triangulation algorithm is applied to all the data points on the 3D surface to produce the depth map.

Curve Fitting and Projection
As we consider rigid scene objects, the surfaces of the objects can be fitted with a 3D curved surface. Typical surfaces are the quadric surface and the plane. In an indoor environment there are always daily objects such as cups, bottles, boxes, furniture (e.g., tables and cabinets), as well as the floor, ceiling, and walls. The point cloud for a cup or a bottle can be fitted with a quadric surface of the form f(x, y, z) = 0, where f is a quadratic polynomial in x, y, and z. On the other hand, the point cloud for the floor, ceiling, or a wall can be fitted with a plane of the form g(x, y, z) = 0, where g is a linear polynomial in x, y, and z. After the surface is fitted, all the point cloud data are projected onto it.
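As an illustration of the fitting and projection steps for the planar case, the sketch below fits a plane g(x, y, z) = ax + by + cz + d = 0 to a noisy wall-like cloud by least squares and projects every point onto it. This is a minimal sketch using NumPy on synthetic data; the function names and the toy cloud are our own, not from the paper.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit a*x + b*y + c*z + d = 0 via SVD.

    The unit normal (a, b, c) is the singular vector of the centered
    point matrix with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # unit normal of the best-fit plane
    d = -normal.dot(centroid)
    return np.append(normal, d)          # coefficients (a, b, c, d)

def project_to_plane(points, plane):
    """Project each point onto the fitted plane surface."""
    normal, d = plane[:3], plane[3]
    dist = points @ normal + d           # signed distances (normal is unit)
    return points - np.outer(dist, normal)

# toy wall-like cloud: the plane z = 2 m plus measurement noise
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, (200, 3))
pts[:, 2] = 2.0 + rng.normal(0.0, 0.01, 200)

plane = fit_plane(pts)
proj = project_to_plane(pts, plane)
```

After projection, every point satisfies the plane equation exactly, which is the precondition for the interpolation and triangulation steps that follow.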

Interpolations
In order to densify the data points on the fitted 3D surface, we interpolate additional points on the surface wherever the distance between two points exceeds a threshold. That is, given any two points p1 and p2, the distance between them is |p1 - p2|; if |p1 - p2| > d, where d is a threshold, we insert a point p on the surface between them, such that p satisfies the surface equation f(x, y, z) = 0 or g(x, y, z) = 0.
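One simple way to realize this rule is recursive midpoint insertion: split any pair farther apart than the threshold, projecting each inserted midpoint back onto the fitted surface. The sketch below is our own illustration (the paper does not specify the insertion scheme), shown on the trivial surface z = 0 so the projection is easy to state.

```python
import numpy as np

def densify(p1, p2, project, delta):
    """Recursively insert midpoints between surface points p1 and p2
    until neighbouring samples are no farther apart than delta.
    `project` maps each candidate back onto the fitted surface so the
    inserted points still satisfy the surface equation."""
    if np.linalg.norm(p2 - p1) <= delta:
        return [p1, p2]
    mid = project((p1 + p2) / 2.0)
    left = densify(p1, mid, project, delta)
    return left[:-1] + densify(mid, p2, project, delta)  # drop duplicate mid

# toy surface: the plane z = 0, so projection just zeroes the z component
project_xy = lambda p: np.array([p[0], p[1], 0.0])
samples = densify(np.zeros(3), np.array([1.0, 0.0, 0.0]), project_xy, delta=0.3)
```

For a unit-length segment and delta = 0.3, this yields five samples spaced 0.25 apart, all lying on the surface.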

Delaunay Triangulation
In this step, the Delaunay triangulation algorithm is applied to all the data points on the 3D curved surface. As a consequence, the surface is split into fragments, where
- each fragment is a curved-edge triangle;
- any two such curved-edge triangles on the surface either do not intersect or share exactly one common edge, never two or more edges at the same time.
All the data points on the surface of a curved-edge triangle are treated as having the same distance to the viewpoint.
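For a plane-fitted patch, the triangulation can be computed in the surface's 2D parameter domain. A minimal sketch using SciPy's Delaunay implementation on toy data (our own illustration; the paper does not name a specific library):

```python
import numpy as np
from scipy.spatial import Delaunay

# densified samples lying on a fitted plane z = 2 m (toy data)
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0.0, 1.0, (50, 2)), np.full(50, 2.0)])

# triangulate in the 2D parameter domain of the fitted surface;
# for a plane these are simply the in-plane (x, y) coordinates
tri = Delaunay(pts[:, :2])

# each simplex is a triangle over three surface points; every point
# inside a triangle is assigned the same distance to the viewpoint
triangles = pts[tri.simplices]      # shape: (n_triangles, 3, 3)
```

The Delaunay property guarantees the triangles cover the convex hull of the samples and intersect only along shared edges, matching the two conditions above.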

Algorithm for Depth Map Generation
The algorithm to generate the depth map consists of the following steps:
1) Fit a 3D curved surface to the point cloud data;
2) Project all the point cloud data onto the fitted surface;
3) Interpolate additional points on the surface;
4) Apply the Delaunay triangulation to the surface points;
5) Create the depth map from the curved-edge triangles.
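The steps above can be sketched end to end for the simplest case, a wall-like patch viewed along the z axis. This is a hedged illustration, not the paper's implementation: the fitted surface is reduced to the plane z = c, the interpolation step 3) is omitted, and the rasterization grid is our own choice.

```python
import numpy as np
from scipy.spatial import Delaunay

def depth_map_from_cloud(points, width=64, height=64):
    """Minimal sketch of steps 1)-5) for a wall-like patch viewed
    along the z axis (fitted surface simplified to the plane z = c;
    the interpolation step is omitted)."""
    # 1) fit: least-squares plane z = c
    c = points[:, 2].mean()
    # 2) project the cloud onto the fitted surface
    proj = points.copy()
    proj[:, 2] = c
    # 4) Delaunay triangulation in the in-plane coordinates
    tri = Delaunay(proj[:, :2])
    # 5) rasterize: pixels covered by a triangle take that triangle's
    #    depth (constant here, since the surface is a plane)
    xs = np.linspace(proj[:, 0].min(), proj[:, 0].max(), width)
    ys = np.linspace(proj[:, 1].min(), proj[:, 1].max(), height)
    grid = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)
    depth = np.full(grid.shape[0], np.nan)
    depth[tri.find_simplex(grid) >= 0] = c
    return depth.reshape(height, width)

rng = np.random.default_rng(2)
cloud = np.column_stack([rng.uniform(0.0, 1.0, (100, 2)), np.full(100, 2.0)])
dmap = depth_map_from_cloud(cloud)
```

Pixels outside the triangulated region remain NaN; in the general quadric case, step 5) would instead evaluate the fitted surface depth per triangle.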

Experimental Results
Here we aim to generate a high-resolution depth map for the unmanned mobile robot so that the robot can perceive its surrounding environment and navigate autonomously. We assume an indoor environment in which the scene objects are rigid. A Velodyne VLP-16 LiDAR is mounted on the robot to collect information about the surroundings, in particular distance information.

Experimental Setup
The LiDAR emits a laser beam toward the target and receives the signal reflected from it. By comparing the received signal with the transmitted signal, the LiDAR obtains the relevant geometrical information about the target, such as its distance, azimuth, altitude, speed, attitude, and even shape. Figure 1 shows the Velodyne VLP-16 LiDAR mounted on the unmanned mobile robot. The major measurement parameters of the Velodyne VLP-16 are: measuring range, 100 m; measuring accuracy, ±3 cm; vertical field of view, 30° (+15° to -15°); vertical angular resolution, 2°; horizontal field of view, 360°; horizontal angular resolution, 0.1° to 0.4°.
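These parameters make the sparsity concrete. Taking the mid-range 0.2° horizontal resolution quoted above, a rough back-of-the-envelope count of returns per revolution (our own illustration, not a figure from the paper) is:

```python
# Rough per-revolution point count for the VLP-16, assuming the
# mid-range 0.2 degree horizontal angular resolution quoted above.
channels = 16                     # 30 deg vertical FOV at 2 deg spacing
azimuth_steps = round(360 / 0.2)  # firings per revolution
points_per_rev = channels * azimuth_steps

# compare with a modest 640x480 dense depth image
coverage = points_per_rev / (640 * 480)
print(points_per_rev, coverage)   # → 28800 0.09375
```

That is under a tenth of the pixels of even a VGA-sized depth image, which is why densification is needed before the point cloud can serve as a depth map.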

Depth Map Generation

Conclusion
In practice, our proposed method can effectively improve the resolution of depth maps created from sparse LiDAR point clouds. The method works well for regular, rigid objects in indoor environments, such as cups, bottles, boxes, and walls. Because of the difficulty of surface fitting and interpolation for irregular objects, however, it cannot create high-resolution depth maps for them. In future work, we plan to propose a method that can increase the resolution of depth maps for irregular objects. We will also integrate visual information into the 3D reconstruction to obtain better outcomes.