Position correction of mobile robot by detecting line landmark using Hough line transform

Mobile robots are used in laboratory automation systems to transport specimens or experiments to their destinations. While moving, the robot accumulates position errors because its wheels skid on the floor and its heading is inaccurate, so it must correct its position and return to the track. By reading landmarks in the environment, the robot can correct its position. This research proposes a camera-based position correction that reads orange lines on the laboratory floor as artificial landmarks. The camera detects each orange line as two parallel edges, and the Hough line transform (HLT) produces a Δρ value between the line edges. The robot calculates its position error from the Δρ value and the largest ρ (ρMAX), which predicts the robot's distance to the line. Compared to the original robot movement in the Webots simulator application, this method helps the robot keep its position on the track. At the goal position, the robot reduces the position error by 90%, leaving an error of around 5 mm from the target position.


Introduction
At the beginning of laboratory automation technology, Edward Bush [1] reviewed the existing technology in the Regional Center Laboratory of Wellmont-Bristol: a mobile robot named Rosie that transports specimens from one room to another and places them in the desired place. In 2021, Anne introduced a workflow management system that integrated a mobile robot into an automated laboratory, called the transportation assistance system control (TASC). It can control and schedule requests and the transportation of experiments from one instrument to another in the laboratory. Kerstin Thurow [2] classifies this kind of system as a laboratory transportation technology.
The robot needs to choose the best route to navigate itself to the goal position. Path planning generates a movement route that guides the robot's moves toward the goal. Qidan Zhu [3] used recurrent fuzzy neural network path planning, with an extended Kalman filter applied to obtain an optimal path in a dynamic environment. Dijkstra path planning was developed by D. Esparza and J. Savage [4] and by Lyle Parungao et al. [5]; Lyle Parungao performed the path planning online on a server and then sent the shortest route to the mobile robot (client). Syaiful Ardy Gunawan [6] used another method, smoothed gradient descent A*, which produces a smoother path than the original one. Farid Bonini et al. [7] proved that A* has a faster computation time than a modified artificial potential field (MPAF) method.
The robot uses odometry to read its motion over time; by accumulating robot movements, it can predict the robot's location. When the robot's wheels skid on the floor, the odometry accuracy drops, so the robot needs to correct its position by reading the environment with sensors such as ultrasonic sensors, LIDAR, or a camera. Qin Li et al. [10] developed accurate ultrasonic sensors on robots. Jongil Lim et al. [11] built a rotating ultrasonic sensor to make the environment reading faster and to reduce the number of ultrasonic sensors needed on one robot. Rui Wang et al. [12] developed a more accurate multi-sensor ultrasonic system. To optimize the environment reading, Denysyuk et al. [13], Y. W. Phang et al. [14], M. Takahashi et al. [16], and S. Premnath et al. [17] used LIDAR, but this method increases the cost of robot development.
A camera is cheaper than LIDAR and reduces the cost of the robot. Mikulová et al. [20] used one to read landmarks in the form of unique QR codes on the room ceiling; the robot then knows its orientation and position in the room by reading a QR code, and if it is not in the desired position, it performs a position correction. Okuyama et al. [21] used QR codes placed on the wall as artificial landmarks for position correction. Position correction can also use lines as a simpler artificial landmark: the camera captures the line and processes it with a line detection method.
The Hough line transform (HLT) is one of the best line detection methods. Ismaïl El Hajjouji et al. [22], Ziqiang Sun [23], and Farid Bounini et al. [24] used HLT to read highway lines in an autonomous car. Ismaïl eliminated horizontal lines to make the computation lighter [22]. Ziqiang used two cameras with different color filters to read the two highway line colors, white and yellow [23]. Farid used an adaptive Kalman filter on the ROI position, which improves the computation [24].
The contribution of this research is reducing the robot's position error through position correction by reading lines on the environment floor, so the robot can reach the goal with higher accuracy. The robot calculates the line distance to correct its position, predicting it from the Δρ value generated by the HLT method. The robot can improve its position by up to 77% compared to a robot without position correction, reducing the position error to 4-5 mm from the desired goal position.

Position correction using line detection
By capturing the line with its camera, the robot can predict its position, and it corrects its position if the predicted current position has shifted from the desired position. Figure 1 shows the design of the proposed method, starting from image acquisition, making sure that the robot is in the right position, and ending by continuing the movement.

This method uses a grid map and adds the line distance as additional information to each grid. The robot calculates the line distance every time it arrives in a new grid and again when it is ready to move to the next grid. If the robot's position is not on the desired track, the robot corrects its position before continuing to move. The method is run in simulation in the Webots application, which allows trial and error until the method achieves the best position correction before it is applied in the real world. The simulation in Webots uses an e-puck robot. The predicted robot position is compared with the Cartesian robot position on the stage to evaluate the performance of this method.

Image identification by HLT
HLT is a line feature extraction technique for images. Its concept is to map pixels from Cartesian space to Hough space: one point (x, y) is defined by ρ and θ, where ρ is the distance from the line to the origin and θ is the angle from the horizontal axis. Each pixel is mapped using equation (1) [22] and drawn as a sinusoid in Hough space. Points whose curves intersect more often than a vote threshold are defined as a line.

ρ = x cos θ + y sin θ (1)

The main limitation of HLT is its high computational cost [24]. The cost was reduced by keeping only the orange line and removing the other objects in the image, which was done with a threshold operation [39]. Converting the image to HSV mode eases the threshold operation for keeping the orange line. Orange lines in HSV space had a maximum threshold of H = 25, S = 255, V = 255 and a minimum threshold of H = 14, S = 200, V = 200. The threshold operation produced a binary image of the line, and the Canny edge detector was then run to detect the edges of the line.
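The HSV threshold step is usually done with OpenCV's `cv2.inRange`; the following is a minimal NumPy equivalent using the paper's orange bounds, so the masking logic is explicit (the input is assumed to already be in OpenCV-style HSV, with H in 0-179):

```python
import numpy as np

# Orange-line HSV bounds from the paper: H in [14, 25], S and V in [200, 255].
LOWER = np.array([14, 200, 200])
UPPER = np.array([25, 255, 255])

def orange_mask(hsv_img):
    """Binary mask keeping only pixels inside the orange HSV range,
    equivalent to cv2.inRange(hsv_img, LOWER, UPPER)."""
    inside = (hsv_img >= LOWER) & (hsv_img <= UPPER)   # per-channel tests
    return np.all(inside, axis=-1).astype(np.uint8)    # pixel passes if all 3 hold

# One orange pixel (H=20) and one blue-ish pixel (H=100).
pixels = np.array([[[20, 230, 240], [100, 230, 240]]], dtype=np.uint8)
print(orange_mask(pixels))   # -> [[1 0]]
```

The resulting binary mask is what the Canny edge detector is run on to extract the two edges of the line.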
Finally, HLT could be applied to these edges to obtain the distance of each edge to the origin (ρ) in Cartesian space; in Hough space, each edge is represented by a peak [36], and a good-quality image produces exactly two peaks. As shown in equation (2), subtracting the minimum ρ (ρmin) from the maximum ρ (ρmax) over all ρ values gives the width of the line (ΔρL):

ΔρL = ρmax − ρmin (2)
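The voting of equation (1) and the two-peak subtraction can be sketched in plain NumPy (a minimal stand-in for an HLT library call such as OpenCV's `HoughLines`; the integer-degree θ grid and the synthetic edge image are illustrative assumptions):

```python
import numpy as np

def hough_accumulator(edge_img, n_thetas=180):
    """Vote each edge pixel (x, y) into Hough space using
    rho = x*cos(theta) + y*sin(theta), i.e. equation (1)."""
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(n_thetas))          # theta = 0..179 degrees
    diag = int(np.ceil(np.hypot(h, w)))               # bound on |rho|
    acc = np.zeros((2 * diag, n_thetas), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)                     # edge pixel coordinates
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_thetas)] += 1    # one vote per theta
    return acc, diag

def line_width(edge_img):
    """Delta-rho between the two strongest peaks: a clean image of one line
    yields exactly two edge peaks, and their rho difference is the line
    width, equation (2)."""
    acc, diag = hough_accumulator(edge_img)
    top2 = np.argsort(acc, axis=None)[-2:]            # two strongest bins
    rho_idx, _ = np.unravel_index(top2, acc.shape)
    rhos = rho_idx - diag
    return rhos.max() - rhos.min()

# Two vertical edges at x = 10 and x = 15 stand in for the two sides
# of a 5-pixel-wide line after Canny edge detection.
edges = np.zeros((60, 60), dtype=np.uint8)
edges[:, 10] = 1
edges[:, 15] = 1
print(line_width(edges))   # -> 5
```

Taking only the two strongest peaks mirrors the "two peaks only" assumption for a good-quality image; a noisy image would need a vote threshold and non-maximum suppression instead.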

Multi-information A* path planning
This method modifies A* path planning to produce a path carrying multiple pieces of information per step, which the robot uses for position correction. Five pieces of additional information are added to the path, as listed in Table 1 below.
Table 1. Additional information on the multi-information A* path planning.

Figure 2 shows three grid levels based on the ΔρL and ρmaxL values that the robot should read at the center of each grid. Grid 1st is the grid where the line is placed; on grid 1st, the robot should produce Δρ1 and ρmax1 by reading the line. On grid 2nd, the robot should produce Δρ2 and ρmax2 by reading the line. Grid 3rd is a grid from which the robot cannot read the line, so ΔρL and ρmax should both be 0.

This path planning uses a grid map with line-position designs to hold the information to be added to the path. Figure 3 shows the grid map with the line-position design used in this research. On the grid map, an explored, available grid with an orange line has a value of 85-98, where each number encodes the line position within the grid. An unavailable explored grid has a value of 99, and an available grid without an orange line has a value of 0. A* path planning searches the route with the standard algorithm, combining heuristic search with shortest-path-based search [8]. Once the path is made, the algorithm attaches the additional information to the path.
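The augmentation step after A* can be sketched as follows (a simplified sketch: the field names and the per-cell lookup are illustrative, and the reference values are the ones reported for the simulation setup, Δρ1 = 15, ρmax1 = 110, Δρ2 = 2, ρmax2 = 7):

```python
# Grid-map cell codes (Figure 3): 0 = available cell without a line,
# 85-98 = available cell with an orange line (the value encodes the
# line position inside the cell), 99 = unavailable cell.
LINE_CODES = frozenset(range(85, 99))

def augment_path(path, grid):
    """Attach reference line readings to each (row, col) step of an A* path.
    Cells holding a line code get the grid-1st and grid-2nd reference
    values; line-free cells get zeros (the grid-3rd case)."""
    info = []
    for r, c in path:
        if grid[r][c] in LINE_CODES:
            info.append({"cell": (r, c), "drho1": 15, "rhomax1": 110,
                         "drho2": 2, "rhomax2": 7})
        else:
            info.append({"cell": (r, c), "drho1": 0, "rhomax1": 0,
                         "drho2": 0, "rhomax2": 0})
    return info

grid = [[0, 85],
        [99, 0]]
path = [(0, 0), (0, 1)]          # route produced by A*
print(augment_path(path, grid)[1]["drho1"])   # -> 15
```

During execution the robot compares its measured Δρ against these per-cell reference values to decide whether a correction is needed.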

Position correction using ΔρL and ρmax
Position correction is executed twice in each grid. The first correction is performed when the robot arrives in a new grid, by comparing ΔρL with ΔρA; the difference between ΔρL and ΔρA is called the robot's position error (eR). The robot moves forward or backward until the position error is zero. Before the second correction, the robot rotates (turns left or right) until its orientation equals the ORIEN value. The second correction is performed before the robot leaves the grid, by comparing ΔρL with ΔρB. The robot then moves to the next grid.
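The first correction step can be sketched as a simple closed loop. This is a toy sketch only: the robot interface (`measure_drho`, `step`), the unit step size, and the linear measurement model are illustrative assumptions, not the paper's API.

```python
def position_error(drho_ref, drho_meas):
    """Error position eR: reference delta-rho for this grid (from the
    path information) minus the measured one (from HLT)."""
    return drho_ref - drho_meas

def correct(robot, drho_ref, tol=0):
    """Move forward/backward until the measured delta-rho matches the
    reference, i.e. until the error position reaches zero."""
    while True:
        e = position_error(drho_ref, robot.measure_drho())
        if abs(e) <= tol:
            return
        robot.step(+1 if e > 0 else -1)    # one unit toward the target

class SimRobot:
    """Toy stand-in: measured delta-rho grows by 1 per unit of offset."""
    def __init__(self, offset):
        self.offset = offset               # units away from the ideal spot
    def measure_drho(self):
        return 15 + self.offset            # ideal grid-1st reading is 15
    def step(self, d):
        self.offset += d

r = SimRobot(offset=3)
correct(r, drho_ref=15)
print(r.offset)   # -> 0
```

The second correction before leaving the grid follows the same loop with ΔρB as the reference, after the rotation to the ORIEN value.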

Testing and results
Testing and simulation were executed in the Webots application, which allows the corrected movement to be compared with robot movement without position correction. The parameters Δρ1 and Δρ2 depend on the position and direction of the camera on the robot. Figure 4 shows the robot specification and the simulation environment. The camera on the e-puck robot was rotated down by 0.48 radians; this camera position lets the line on the floor be read clearly on grid 1st and grid 2nd. Δρ1 has a value of 15, ρmax1 a value of 110, Δρ2 a value of 2, and ρmax2 a value of 7. The map grid was designed with a size of 125 × 125 mm, and the orange lines have a 5 mm width.
The test was done by running the robot along a path generated outside the Webots application to keep the simulation light. Figure 5 shows the path used in the simulation; the yellow grids are the grids where position correction is available. The 1st, 2nd, and 3rd tests ran along the same movement path, but each test had a different random starting position. Table 2 shows the result of each test. When the robot moves, it produces errors, recorded in millimeters (mm). The error of the robot with position correction (eR) was compared with the non-corrected one (eNC). The position error in all trials decreased to as little as 5 mm; compared with the movement without correction, this is an improvement of up to 90%. Figure 6 shows a moving-range chart of the position error from each trial compared with the non-corrected one. The chart shows that whatever the initial eR value, the robot can decrease it. The 1st test has the lowest initial eR (9.9 mm); in its chart, eR decreases whenever the robot performs a position correction, increases while the robot moves without correcting, and decreases again at the next correction. The 2nd and 3rd tests have different initial eR values (21.4 mm and 27.7 mm) but show the same behavior. Because each test starts with a different eR, the 3rd test still has a higher error after its first correction than the 1st and 2nd tests, but after the robot corrects its position again at the end of the path, eR decreases to a level similar to the other tests (around 5 mm).

Summary/Conclusion
In this paper, position correction of a mobile robot was designed using HLT image processing to detect orange lines in the environment, together with multi-information A* path planning. Various tests and step scenarios were applied in simulation. Our experimental results show that the proposed method reduces the position error of the robot's movement compared to the original robot movement in the Webots simulator application. The method helps the robot keep its position on the track, and at the goal position it can reduce the position error by up to 90%. The method still needs to be tested on a larger stage, possibly with more variables, to further increase the performance of the position correction.

Figure 1. Robot position correction using the line detection method.

Figure 6. Moving-range chart of the robot position error.

Table 2. Comparison of experiment results with and without position correction. All error data are given in millimeters.