Research on laser-based mobile robot SLAM and autonomous navigation

SLAM (Simultaneous Localization and Mapping) and autonomous navigation systems, as fundamental components of mobile robots, largely determine their ability to accomplish tasks. While research on laser-based SLAM and autonomous navigation algorithms in simulated environments has achieved significant success, there is limited research on deploying these algorithms on physical mobile robots and conducting real-world experiments that provide practical data for validating the accuracy and applicability of theoretical models. This paper begins by presenting the system architecture of the mobile robot used in this study. It then delves into SLAM algorithms, analyzing and selecting the Gmapping algorithm as the solution to the SLAM problem. Next, the paper introduces the research on autonomous navigation, providing a detailed explanation of the principles behind the A* global path planning algorithm combined with the Dynamic Window Approach (DWA) for local path planning. Finally, by controlling the mobile robot through a distributed communication network, the paper tests the robot's performance and constructs maps in a real indoor environment, achieving a relative map accuracy error of less than 3% and thus validating the effectiveness of the Gmapping algorithm for practical mapping. Autonomous navigation tasks are also performed, testing the mobile robot's local obstacle avoidance capabilities using temporary obstacles. Overall, this research demonstrates that the designed mobile robot can effectively accomplish SLAM and autonomous navigation tasks. The results of this study can also assist beginners in rapidly and clearly building physical mobile robots and deploying SLAM systems.


Introduction
With the rapid development of intelligent and unmanned technologies in robotics, mobile robots have found extensive applications in autonomous inspection, search and rescue, logistics distribution, and various other scenarios. When facing complex and unknown environments, robots need to accomplish three essential tasks to achieve autonomy and intelligence: the first is localization, the second is mapping, and the third is path planning. The first two tasks are encapsulated in the concept of SLAM (Simultaneous Localization and Mapping) [1], which can be described as follows: in an unknown environment, a robot starts from an unknown position and, during its movement, generates real-time state estimates through sensor feedback to determine its own location. Simultaneously, it constructs an incremental map based on its self-localization. The third task is path planning, whose goal is to rapidly find a collision-free motion path within a workspace containing obstacles, allowing the robot to safely navigate around all obstacles and reach its destination.
In 2017, Chen Zhuo and colleagues from the Academy of Military Medical Sciences, Institute of Health Equipment, established a mathematical model for the SLAM problem and improved the Gmapping package within the ROS framework [2]. They conducted simulation experiments that effectively addressed issues related to real-time data association during robot movement and the computational complexity of updating the robot's state covariance matrix. In 2021, Mao Chengkai and others from Changshu Institute of Technology [3] designed a solution for a miniature mobile robot based on ROS. They conducted simulation experiments in Webots to test SLAM algorithms and local path planning algorithms, confirming the robot's ability to perform the proposed mapping, path planning, and navigation functions. However, these studies were limited to simulated environments. While simulation experiments are often more flexible and cost-effective, they may not fully replicate all aspects of real-world environments. Therefore, it is essential to deploy algorithms on physical mobile robots, conduct real experiments, and provide actual data to validate the accuracy and applicability of theoretical models.
This paper presents an autonomous setup for a mobile robot and conducts real experiments. In these experiments, we employ the Gmapping map construction algorithm and a combination of A* and DWA for path planning to test the robot's performance in an indoor environment. Through real experiments, the mobile robot designed in this paper demonstrates precise localization, map construction, and the ability to address autonomous navigation challenges.
The remaining sections of this paper are organized as follows: Section 2 introduces the basic system structure of the mobile robot. Section 3 outlines the fundamental principles of the Gmapping algorithm in SLAM. Section 4 provides a brief introduction to the A* algorithm and DWA algorithm used for path planning. Section 5 presents and analyzes the results of real experiments. Finally, Section 6 offers conclusions and prospects for the future of robot SLAM and navigation technologies.

Mobile Robot Design Concept
A comprehensive indoor mobile robot must fulfill the basic requirements for autonomous navigation. Based on these practical functional needs, the system design of the mobile robot consists of four main components: sensing, control, actuation, and monitoring. The framework of the mobile robot system is depicted in Figure 1, and the physical representation of the mobile robot is shown in Figure 2.
The sensing component primarily comprises devices such as the laser radar, IMU (Inertial Measurement Unit), and odometry, forming a full set of external and internal sensors. These sensors continuously receive real-time environmental information from both inside and outside the robot. The control component, as the core of the mobile robot, processes the data obtained by the sensing component. It consists mainly of the main control module (Raspberry Pi) and the chassis controller (STM32). The actuation component, controlled by the control component, executes the robot's directional movements based on the control commands issued. It consists mainly of motor drivers and motors. To monitor real-time data and the robot's movement status during its motion, a monitoring component is set up, facilitated by a PC host connected to the main control module. The physical platform shown in Figure 2 comprises the Raspberry Pi, serving as the core controller; motors and encoders, used to acquire feedback on the vehicle's motion speed; the STM32 controller, responsible for controlling the vehicle's motion; a gyroscope, used to obtain attitude information; and the power supply system.

Mobile Robot Algorithm Steps
When executing autonomous navigation tasks, a mobile robot requires prior information in the form of environment maps and motion paths as prerequisites. The specific steps for this design concept are illustrated in Figure 3. Sensors such as the laser radar, IMU, and odometer within the sensing component collect information about environmental obstacles and the robot's own pose and motion. This collected information is then transmitted to the control component. The STM32-based controller within the control component first integrates and processes the data from the various sensors and sends it to the Raspberry Pi. The Raspberry Pi, equipped with a pre-programmed SLAM algorithm, handles the robot's pose localization and local map construction. As the robot traverses the entire environment, the global map is continually updated and refined until it is complete. With the environment map fully constructed, a global path planning algorithm is employed to devise an efficient path from the starting point to the destination. Signals are sent to the motor drivers to control the robot's motion along the designated path. In cases where local obstacles are encountered, a local path planning algorithm re-plans the optimal local path, ultimately allowing the robot to reach its destination and complete the autonomous navigation task.

SLAM Algorithm
The origins of SLAM (Simultaneous Localization and Mapping) can be traced back to 1987, when the Extended Kalman Filter (EKF) was first applied to solve the SLAM problem. EKF SLAM jointly represents the robot's pose and the locations of landmarks using a unified state vector. While EKF SLAM pioneered the application of probabilistic statistics to SLAM, it had some drawbacks, such as significant computational resource requirements, poor algorithm stability, and the inability to directly use the constructed feature maps for the robot's autonomous navigation. Improvements on these issues were made with laser-based SLAM methods using particle filters. In 2003, Montemerlo et al. [4] introduced the FastSLAM 1.0 algorithm, which decomposed the SLAM problem into estimating the robot's pose and the locations of landmarks separately, solved using particle filters and Kalman filters, respectively. However, particle filters suffer from particle degeneracy, leading to the introduction of an improved method for computing importance functions in FastSLAM 2.0 [5]. To further optimize FastSLAM, the Gmapping [6] algorithm was introduced.
Gmapping is primarily based on Rao-Blackwellized particle filter principles, incorporating ideas from 2D laser SLAM algorithms, and is widely used in indoor robotics for laser-based SLAM [7]. The algorithm divides the SLAM problem into two parts: estimating the robot's pose and constructing the environment map, using particle filtering and extended Kalman filtering, respectively.
In Gmapping, when performing simultaneous localization and map construction, the algorithm processes not only the external environment information from the laser sensor but also the robot's odometry information. These two types of information are integrated to construct a grid map. The SLAM problem is thus divided into two aspects: first, estimating the robot's pose based on odometry and laser data, and second, updating the map based on the robot's pose and laser data. The entire SLAM system can be represented as:

p(x_{1:t}, m | z_{1:t}, u_{1:t-1}) = p(x_{1:t} | z_{1:t}, u_{1:t-1}) · p(m | x_{1:t}, z_{1:t})    (1)

In this equation, z_{1:t} represents the observed data from the laser radar, u_{1:t-1} represents the odometry information, x_{1:t} is the robot's trajectory of poses, and m is the environment map. The SLAM problem is thus transformed into the product of two posterior probability densities: the first term on the right-hand side, p(x_{1:t} | z_{1:t}, u_{1:t-1}), estimates the robot's pose based on laser radar observations and odometry information, while the second term, p(m | x_{1:t}, z_{1:t}), updates the map based on robot pose information and laser radar observations.
For the pose estimation term p(x_{1:t} | z_{1:t}, u_{1:t-1}), the general filtering process includes:
(i) Sampling: sample a new pose x_t^{(i)} for each particle from the proposal distribution π, based on the particle set of the previous time step.
(ii) Computing particle weights: calculate the weight w_t^{(i)} of each particle according to the importance sampling principle using Equation (2):

w_t^{(i)} = p(x_{1:t}^{(i)} | z_{1:t}, u_{1:t-1}) / π(x_{1:t}^{(i)} | z_{1:t}, u_{1:t-1})    (2)

The weights are then normalized using the normalization factor η = 1 / Σ_i w_t^{(i)}.
(iii) Resampling: as the number of iterations increases, particle degeneracy can occur [8], where the weights of most particles tend to zero. To address this, high-weight particles are retained and copies of them replace low-weight particles, keeping the total number of particles constant.
(iv) Map update: for each particle, update the map estimate p(m^{(i)} | x_{1:t}^{(i)}, z_{1:t}) using the particle's trajectory x_{1:t}^{(i)} and the current observations z_{1:t}.
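Steps (i)-(iii) can be sketched as a minimal particle filter loop. The following Python sketch uses a toy 1-D state with a Gaussian motion model and a Gaussian observation likelihood; these toy models are illustrative assumptions, not part of the Gmapping implementation itself:

```python
import math
import random

def motion_sample(x, u, noise=0.1):
    # (i) Sampling: propagate a particle through a toy 1-D motion model
    # with Gaussian noise, standing in for the proposal distribution.
    return x + u + random.gauss(0.0, noise)

def observation_likelihood(z, x, sigma=0.5):
    # Unnormalized Gaussian likelihood p(z | x) for a range-style measurement.
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

def filter_step(particles, weights, u, z):
    # (i) sample new poses, (ii) reweight by the observation likelihood,
    # then normalize with eta = 1 / sum of weights.
    particles = [motion_sample(x, u) for x in particles]
    weights = [w * observation_likelihood(z, x)
               for w, x in zip(weights, particles)]
    eta = 1.0 / sum(weights)
    weights = [eta * w for w in weights]
    return particles, weights

def resample(particles, weights):
    # (iii) Resampling: duplicate high-weight particles so the total
    # particle count stays constant, then reset to uniform weights.
    n = len(particles)
    survivors = random.choices(particles, weights=weights, k=n)
    return survivors, [1.0 / n] * n
```

In Gmapping the state is of course a full 2-D pose per particle, each particle carries its own map, and the proposal is refined by the laser observation model described below.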
The motion model based on odometry is widely used as the proposal distribution in traditional sampling due to its simplicity and accessibility. However, mobile robots equipped with laser radar can obtain more precise observations of the external environment. Therefore, when calculating the proposal distribution, Gmapping incorporates the observation model of the laser radar [9], concentrating the sampling process in the meaningful peaks of the likelihood function (Equation 4).

p(x_t | m_{t-1}^{(i)}, x_{t-1}^{(i)}, z_t, u_{t-1}) = p(z_t | m_{t-1}^{(i)}, x_t) · p(x_t | x_{t-1}^{(i)}, u_{t-1}) / p(z_t | m_{t-1}^{(i)}, x_{t-1}^{(i)}, u_{t-1})    (4)
The weights are updated using the recursive formula proposed by Doucet (Equation 5):

w_t^{(i)} = w_{t-1}^{(i)} · p(z_t | m_{t-1}^{(i)}, x_t^{(i)}) · p(x_t^{(i)} | x_{t-1}^{(i)}, u_{t-1}) / π(x_t^{(i)} | x_{1:t-1}^{(i)}, z_{1:t}, u_{1:t-1})    (5)

From the filtering process, it is evident that resampling plays a crucial role in filter effectiveness. Because resampling can also replace and remove potentially good particles, a criterion is needed to judge the quality of the particle set and decide whether resampling is actually necessary. Gmapping uses Equation (6) to make this decision.
N_eff = 1 / Σ_{i=1}^{N} (w̃^{(i)})²    (6)

In Equation (6), w̃^{(i)} represents the normalized weight of particle i, N_eff is the effective number of particles used as an adaptive threshold for resampling, and N is the total number of particles. Resampling is performed only when N_eff falls below N/2, which yields optimal results.
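The criterion in Equation (6) is straightforward to compute. A small illustrative helper (a sketch, not code from the Gmapping package):

```python
def effective_sample_size(weights):
    # N_eff = 1 / sum_i (w_i)^2, computed over normalized weights (Eq. 6).
    total = sum(weights)
    normalized = [w / total for w in weights]
    return 1.0 / sum(w * w for w in normalized)

def should_resample(weights):
    # Gmapping-style rule: resample only when N_eff drops below N / 2,
    # keeping resampling (and hence particle impoverishment) infrequent.
    return effective_sample_size(weights) < len(weights) / 2.0
```

With uniform weights N_eff equals N (no resampling needed); when a single particle dominates, N_eff approaches 1 and resampling is triggered.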
Gmapping is an improved algorithm based on the traditional RBPF (Rao-Blackwellized Particle Filter). Its improvements effectively reduce the number of particles required during sampling, greatly enhancing computational efficiency. Gmapping has been implemented as an open-source SLAM package and is widely used in practice.

Path Planning
In recent years, path planning techniques have been widely utilized in the field of robotics and have become a key technology for achieving autonomous navigation. The task can be described as follows: in an environment with obstacles, a robot uses a planning algorithm, following certain criteria (such as shortest path length, least planning time, or minimum energy consumption), to find an effective path to the target point [11]. Depending on the robot's perception range of the external environment, path planning can be divided into global path planning and local path planning. However, both kinds of planning algorithm have limitations in different practical environments, and to date there is no perfect path planning algorithm that fits all environmental scenarios. Therefore, after completing map construction with the SLAM algorithm, the combined use of the two kinds of path planning algorithm not only lets them complement each other under certain constraints but also enhances the efficiency of finding the optimal path. In this chapter, we conduct research experiments on both planning algorithms based on the overall path planning framework shown in Figure 4.

Global Path Planning
The global path planning method used in the navigation section of this paper is the A* algorithm, which is widely used not only in the navigation of automobiles and mobile robots but also in pathfinding for virtual characters in games.
The A* algorithm is an extension of the Dijkstra algorithm: it guides the search by adding an estimated cost from each expanded node to the target node, thereby minimizing the total number of nodes searched. It therefore combines the characteristics of breadth-first (Dijkstra-style) search and best-first (greedy) search. The path quality evaluation of the A* algorithm is given by Equation (7):

f(n) = g(n) + h(n)    (7)

where n represents the current node; g(n) is the actual cost from the starting node to the current node; h(n) is the heuristic function, representing the estimated cost from the current node to the target node, with common forms including the Euclidean, Manhattan, and Chebyshev distances [12]; and f(n), the sum of g(n) and h(n), represents the total estimated cost of a path from the starting node through n to the target node. In Equation (7), f(n) evaluates each candidate node by considering both the actual cost already incurred and the estimated cost remaining.
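A compact grid-based sketch of A* illustrating Equation (7); the grid encoding (0 = free, 1 = obstacle), 4-connected moves, and unit step costs are assumptions for the example, not details taken from the paper:

```python
import heapq

def astar(grid, start, goal):
    # grid: 2-D list, 0 = free, 1 = obstacle; 4-connected unit-cost moves.
    # Priority queue ordered by f(n) = g(n) + h(n), h = Manhattan distance
    # (admissible for 4-connected grids).
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None  # no collision-free path exists
```

Setting h(n) = 0 everywhere reduces this to Dijkstra's algorithm, which makes the relationship between the two methods concrete.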

Local Path Planning
DWA (Dynamic Window Approach) was originally proposed by Dieter Fox and Sebastian Thrun [13]. It has since been applied primarily in local path planning and dynamic obstacle avoidance for robots, as well as in entertainment games, and its excellent performance has made it one of the main obstacle avoidance algorithms in robot local path planning.
The main principle of the DWA algorithm is to define a "window" within the robot's velocity sampling space, denoted as (v, ω). Within this window, multiple velocity samples are taken, and the obstacle information in the environment is assessed using external sensors. Based on the sampled velocities and the gathered obstacle information, the robot's motion is simulated, and the predicted trajectory toward the target location over a short future time horizon is computed. Finally, the resulting set of candidate trajectories is scored by a specific evaluation mechanism to determine the optimal trajectory for controlling the robot's motion.
Before performing velocity sampling for the robot, it is important to clarify that the robot used in this paper is a two-wheeled differential-drive robot, capable of forward and backward motion as well as rotation.Sampling the linear motion velocity of the robot is relatively straightforward, but when the robot performs rotational motion, a differential approach is required to perform velocity sampling, as shown in Figure 5.

Figure 5. Mobile Robot Motion Model
First, the entire rotational motion of the robot is divided into multiple short time intervals. When the interval Δt is small enough, i.e., when the rotation angle within it tends to zero, the motion during this interval can be treated as linear motion. Assume the robot's velocity is v_t, the angle the robot rotates from time t to t + 1 is Δθ, and the robot's displacements in the Cartesian coordinate system are Δx and Δy.
At this point, the robot's pose update can be written as:

x_{t+1} = x_t + v_t · Δt · cos θ_t
y_{t+1} = y_t + v_t · Δt · sin θ_t
θ_{t+1} = θ_t + ω_t · Δt

Velocity sampling obtains real-time information on the robot's velocity based on this motion model, from which the robot's motion trajectory can be predicted. To ensure the accuracy of the predicted trajectory, a large amount of real-time velocity sampling information is required, and the best motion trajectory is selected according to the real-time situation. However, the sampling process is subject to limiting factors arising from the external environment and the robot itself, mainly in the following three aspects:
1. Robot maximum and minimum velocity constraints: the sampled velocities must lie within the robot's admissible range, V_m = {(v, ω) | v ∈ [v_min, v_max], ω ∈ [ω_min, ω_max]}.
2. Impact of the robot's own motors: in practical applications, different robots are equipped with different motors, whose performance characteristics, particularly motor torque, significantly affect the robot's acceleration and deceleration. The sampled velocity information must be real-time and accurate, so a velocity range constrained by the actual motor performance must be set, as shown in Equation (11):

V_d = {(v, ω) | v ∈ [v_c − a_v·Δt, v_c + a_v·Δt], ω ∈ [ω_c − a_ω·Δt, ω_c + a_ω·Δt]}    (11)

where (v_c, ω_c) is the robot's current velocity and a_v, a_ω are its maximum translational and rotational accelerations.
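The motion update and the motor-limited window of Equation (11) can be sketched as follows; the dictionary of limit values is an illustrative assumption:

```python
import math

def step(x, y, theta, v, w, dt):
    # One short integration step of the differential-drive model: for a
    # small dt the arc is approximated as straight-line motion, as in the
    # pose update equations above.
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

def dynamic_window(v, w, limits, dt):
    # Intersect the robot's absolute velocity limits with the velocities
    # its motors can actually reach within dt (acceleration limits, Eq. 11).
    v_lo = max(limits["v_min"], v - limits["a_v"] * dt)
    v_hi = min(limits["v_max"], v + limits["a_v"] * dt)
    w_lo = max(limits["w_min"], w - limits["a_w"] * dt)
    w_hi = min(limits["w_max"], w + limits["a_w"] * dt)
    return v_lo, v_hi, w_lo, w_hi
```

Candidate (v, ω) pairs for DWA are then drawn from the returned interval rather than from the full velocity space, which is what keeps the search tractable in real time.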

Robot's Obstacle Avoidance Mechanism:
To prevent the robot from colliding with obstacles, a maximum-deceleration constraint is imposed when obstacles are encountered:

V_a = {(v, ω) | v ≤ √(2 · dist(v, ω) · a_v), ω ≤ √(2 · dist(v, ω) · a_ω)}

Here, dist(v, ω) is the clearance function, representing the minimum distance to obstacles along the trajectory generated by the sampled velocity (v, ω).
After velocity sampling, many candidate trajectories for the next time interval are predicted and simulated, and several of them may be feasible. In such cases, each trajectory is scored and the optimal one is selected according to the following evaluation function:

G(v, ω) = α · heading(v, ω) + β · dist(v, ω) + γ · velocity(v, ω)

Here, heading(v, ω) is the heading evaluation function, measuring the angular difference between the robot's orientation at the predicted trajectory endpoint and the direction of the goal navigation point; dist(v, ω) is the clearance function, representing the minimum distance to obstacles along the trajectory for the sampled velocity (v, ω); velocity(v, ω) is the velocity function, representing the magnitude of the robot's velocity; and α, β, γ are the weight coefficients of the three functions.
Because the data obtained from the robot's onboard sensors are discontinuous, each of these functions is normalized during the path planning process. The normalization formula, shown here for the heading term, is:

normal_heading(i) = heading(i) / Σ_{i=1}^{n} heading(i)

where i denotes the i-th evaluated trajectory and n is the number of candidate trajectories. After normalization, the final locally optimal path trajectory is obtained by adjusting the weights α, β, and γ [14].
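The sampling-simulation-scoring loop described above can be sketched as follows. The weights, the simulation horizon, and the unnormalized score are illustrative assumptions (the normalization step is omitted for brevity), and the heading term here uses the angle error at the trajectory endpoint:

```python
import math

def simulate(pose, v, w, dt=0.1, steps=10):
    # Forward-simulate one sampled (v, w) pair over a short horizon.
    x, y, th = pose
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return x, y, th

def score(pose, v, w, goal, obstacles, alpha=1.0, beta=0.5, gamma=1.0):
    # G(v, w) = alpha*heading + beta*dist + gamma*velocity; the heading
    # term rewards a small angle error toward the goal at the endpoint,
    # and dist is the clearance to the nearest obstacle point.
    x, y, th = simulate(pose, v, w)
    heading = -abs(math.atan2(goal[1] - y, goal[0] - x) - th)
    clearance = min(math.hypot(ox - x, oy - y) for ox, oy in obstacles)
    return alpha * heading + beta * clearance + gamma * v

def best_velocity(pose, samples, goal, obstacles):
    # Evaluate every sampled (v, w) pair and keep the maximizer.
    return max(samples, key=lambda s: score(pose, s[0], s[1], goal, obstacles))
```

For a robot at the origin facing a goal straight ahead, a straight, faster sample scores higher than a turning one, which is exactly the behavior the evaluation mechanism is designed to produce.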

Deployment of a Distributed Communication Network
The mobile robot used in this experiment does not come equipped with external hardware devices such as a screen, keyboard, or mouse. Moreover, during the entire mapping and path planning process, the robot remains in motion, so the construction of the map and the robot's own pose and movement information cannot be monitored directly on the robot. Therefore, the deployment of a distributed communication network is necessary. ROS (Robot Operating System) operates on a distributed development model in which each node functions as an independent unit. These nodes communicate efficiently through topics and services in a loosely coupled manner, allowing them to perform different tasks. To establish a distributed communication network, it is only necessary to install the same version of the Ubuntu 18.04 system on the host computer and on the robot's Raspberry Pi. Through environment variable configuration, remote visualization, monitoring, and operation of the robot can then be achieved.
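For reference, a typical ROS 1 environment-variable configuration for such a two-machine setup might look as follows. The IP addresses are placeholders for illustration; `ROS_MASTER_URI` and `ROS_IP` are the standard ROS network variables:

```shell
# On the Raspberry Pi (running roscore), assuming its address is 192.168.1.10:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10

# On the monitoring PC (assuming its address is 192.168.1.20), point at the
# robot's master and advertise the PC's own address for return traffic:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.20
```

With these variables set in each machine's shell (for example, in `~/.bashrc`), visualization tools such as Rviz on the PC can subscribe directly to the topics published on the robot.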

Performance Testing of the Mobile Robot
Over time, the performance of the mobile robot experiment platform may degrade due to hardware and design issues. This particularly affects its speed performance: inaccuracies in the robot's linear and angular velocities may introduce some degree of bias into its map construction and path planning processes. To prevent or mitigate the impact of such issues and ensure the accuracy of experimental results, it is necessary to conduct basic performance tests on the mobile robot before the experiments.
Performance testing primarily focuses on the accuracy of the mobile robot's linear and angular velocities during motion. The linear velocity testing method, as shown in Figure 6, involves marking a 2-meter-long straight-line distance. The robot's linear velocity is set from the Ubuntu host computer, and the robot is controlled to traverse this distance at the set speed. The time taken by the robot to reach the endpoint from its initial position is recorded. The experiment is repeated 20 times, and the average linear velocity is calculated, as shown in Table 1. The angular velocity testing method, as shown in Figure 7, involves setting the robot's angular velocity from the Ubuntu host computer while the robot rotates in place at a fixed position. The time required for the robot to complete three full rotations is recorded. The experiment is repeated 20 times, and the average angular velocity is calculated, as shown in Table 2. From the results in the two tables, it can be observed that the mobile robot used in this experiment exhibits relatively small errors in both linear and angular velocity. The relative errors are all below 3%, indicating that the robot's speed performance is generally good. Any potential impact on subsequent experimental tasks is within an acceptable range, allowing the remaining experiments to proceed.
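The averages and relative errors reported in Tables 1 and 2 follow from simple relations, sketched below; the timing values in the usage comment are hypothetical, not the experiment's data:

```python
import math

def average_linear_velocity(distance_m, times_s):
    # v = d / mean(t) for repeated runs over a fixed straight-line course
    # (2 m in this experiment).
    return distance_m / (sum(times_s) / len(times_s))

def average_angular_velocity(turns, times_s):
    # omega = total rotation angle / mean(t); here three full turns
    # (6*pi radians) were timed per trial.
    return 2.0 * math.pi * turns / (sum(times_s) / len(times_s))

def relative_error_percent(set_value, measured):
    # Relative error between the commanded and measured velocity, in percent.
    return abs(measured - set_value) / set_value * 100.0

# Hypothetical usage: a commanded 0.2 m/s run over 2 m, timed at about 10 s:
# v = average_linear_velocity(2.0, [10.1, 9.9, 10.0])
# err = relative_error_percent(0.2, v)
```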

Robot SLAM Mapping and Autonomous Navigation Experiment Analysis
To validate the practical mapping effect of the SLAM algorithm, this study selected Room 303 in Building 8 of our institute as the indoor real-world environment for the experiment. This laboratory contains various experimental instruments, small desks and chairs, and some other miscellaneous items, making it suitable for the mapping requirements of the experiment, as shown in Figure 8. The mobile robot was controlled via a keyboard control node while data from the laser scanner and odometry were collected simultaneously. The Gmapping algorithm node was then run on a PC, using the uploaded data to construct a map. By employing tf transforms in combination with the results shown in Figure 9, it is possible to represent the error between the odometry coordinate system and the pose optimized by the Gmapping algorithm.

Figure 9. Start of Mobile Robot Mapping
This error typically accumulates gradually as the mobile robot moves a certain distance due to the misalignment between the odometry coordinate system and the map coordinate system.However, during the map construction process, it is crucial to promptly correct this error to enhance the accuracy and precision of map construction.
Figure 10 represents the complete map of the entire experimental environment, showcasing the various areas and rooms within the experimental environment, including the positions and sizes of objects such as desks and chairs.The map's scale and complexity are clearly visible.This map was generated after the mobile robot completed real-time environment scanning.The completeness and accuracy of this map are essential for enabling the mobile robot's autonomous navigation and task execution in a real-world environment.

Figure 10. Completed Map of the Indoor Experiment
To precisely assess the mapping accuracy of the Gmapping algorithm, six dimensions were selected for actual measurement within the laboratory, including the length and width of two office desks, the length of a cardboard box, and the length and width of a storage cabinet. The corresponding distances were then measured on the constructed map using Rviz and compared with the actual values; the relative error was below 3%. To further test the effectiveness of the mobile robot's local path planning and obstacle avoidance capabilities, a trash bin was added as a temporary obstacle on the originally planned path, and the robot's movements were observed, as shown in Figure 12.
During the robot's autonomous navigation task, feedback about the new temporary obstacle in the environment was obtained through laser scanning. As seen in Figure 12(c), the robot effectively selected a way to navigate around the new obstacle when it encountered the obstruction on its straight-line path. Using the Dynamic Window Approach (DWA), the robot detected the obstacle ahead and performed local path planning based on the sensor feedback. It successfully planned a new path around the temporary obstacle and reached the target location. This demonstrates that the robot can make effective decisions and ensure its safety and successful task completion when faced with complex dynamic environments, such as the appearance of new temporary obstacles.

Figure 3. Steps for Mobile Robot Map Construction and Autonomous Navigation

Figure 4. Path Planning Overall Framework

Figure 6. Mobile Robot Linear Velocity Testing

Figure 7. Mobile Robot Angular Velocity Testing

Table 1. Mobile Robot Linear Velocity Accuracy

Table 2. Mobile Robot Angular Velocity Accuracy