A graph optimization approach to range-based relative location

With the development of multi-robot cluster systems, research on multi-node relative pose estimation is flourishing. However, in environments without GNSS or fixed anchors, relative pose estimation remains challenging. In this paper, a general algorithm and system framework based on graph optimization is proposed to fuse inter- and intra-robot measurements, as well as mobility models, within a sliding window for relative pose estimation. To verify the feasibility and generality of the algorithm, it is used to design a UWB-based relative positioning system with a "single node, multiple tags" structure, which can simultaneously obtain reliable relative positions and orientations. The multi-source fusion design further improves the accuracy and robustness of relative positioning. Experimental results show that in harsh environments without GNSS and fixed anchor points, the relative pose estimation accuracy of the system can reach centimeter level, making it a strong candidate for relative positioning of multi-robot clusters in complex environments.


Introduction
The multi-robot cluster is becoming a hot research topic [1]. For the positioning and attitude determination of nodes in the cluster, existing technical solutions often rely on fixed-anchor positioning technology [2] or satellite navigation [3,4]. However, such schemes cannot be effectively deployed in open sites where infrastructure transformation is difficult, or in indoor scenes where GNSS is denied, which greatly limits the potential application of multi-robot clusters in complex environments.
To solve the above problems, many beneficial attempts have been made in recent years. For anchor-free positioning, introducing visual feature detection to determine the relative position and pose between nodes is one of the most popular and effective methods [5,6]. However, traditional relative positioning based on visual feature detection remains vulnerable to interference from factors such as the camera field of view and changes in ambient brightness. Therefore, positioning based on image information (such as visual SLAM) has been integrated with traditional range-based positioning [7][8][9], with the joint estimation carried out through optimization algorithms instead of Bayesian filtering [10], thus obtaining better overall performance.
Among various range-based positioning technologies, ultra-wideband (UWB) is more and more widely used in the field of relative positioning due to its high ranging accuracy and high multipath resolution [11]. To overcome the inability of traditional UWB positioning solutions to determine attitude, the method of using multiple UWB tags to achieve attitude determination of a single node has been proposed and applied to personnel cluster tracking.
To realize multi-node relative positioning and attitude determination for multi-robot cluster applications in environments without fixed anchors or satellite navigation, this paper proposes a relative positioning and attitude determination framework based on the fusion of UWB ranging and visual odometry with a graph optimization algorithm. The framework deploys the ranging and odometry information as constraints in a sliding window, estimates the relative position and pose of multiple nodes through graph optimization, and verifies its performance through field experiments. The main contributions of this paper include: 1) A relative positioning algorithm framework for robot cluster applications is proposed. Graph optimization is used as the core algorithm, and high-precision estimation of relative position and attitude is achieved by deploying measurement constraints within and between nodes over a sliding window.
2) By designing a "single robot, multiple tags" system structure, the problem that position and attitude information cannot be obtained simultaneously is solved. Together with the proposed algorithm, the accuracy and robustness of the relative positioning system are guaranteed.

System & algorithm
As shown in Figure 1, the framework proposed in this paper includes two parts: the algorithm and the hardware system deployment. In terms of hardware, one node in the robot cluster is selected as the master node (Robot 1), and the other nodes act as slave nodes (Robot n). The master node carries four UWB tags, and each slave node carries two UWB tags. All nodes carry a camera and an IMU to collect odometry information. In terms of the algorithm, ORB-SLAM3 is used for visual-inertial odometry (VIO), UWB provides ranging information, and the core graph optimization algorithm with a sliding window fuses the ranging information and VIO measurements to estimate the relative position and pose. As necessary supplements, initialization and outlier suppression algorithms are also designed and introduced to further improve the accuracy and stability of the entire system.

Figure 1. Structure of the proposed system. In the system, each robot is equipped with a stereo camera and IMU integrated module and multiple UWB tags. Each robot collects range and odometry measurements from neighbour robots for graph-based optimization with a sliding window.

System formulation
The symbols mentioned in this article are defined as follows. The transformation matrix $^w_tT_i$ denotes the 2D pose of robot $i$ ($i \in [1:N]$) at time $t$ in the world coordinate system, which includes the position vector $^w_tp_i$ and the rotation matrix $^w_tR_i$.

MEIE-2023 Journal of Physics: Conference Series 2591 (2023) 012018
For convenience of description, the accent $\check{(\cdot)}$ denotes a measurement in the robot body frame and $\hat{(\cdot)}$ denotes an estimation in the VIO local frame. Therefore, for a robot cluster, the problem of relative positioning and pose estimation for an arbitrary robot $k$ takes $\hat{T}_k$ obtained by VIO as a measurement, and the pose of robot $i$ in robot $k$'s local frame at time $t$ can be expressed as $^k_t\hat{T}_i$.
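Concretely, the pose of robot $i$ in robot $k$'s frame is the composition of $k$'s inverted pose with $i$'s pose. The following is a minimal numpy sketch of this SE(2) composition; the helper names are ours, not from the paper:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 matrix for a 2D pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def relative_pose(T_w_k, T_w_i):
    """Pose of robot i expressed in robot k's frame: (T_w_k)^-1 @ T_w_i."""
    return np.linalg.inv(T_w_k) @ T_w_i

# Example: robot k at the origin rotated 90 degrees, robot i at (1, 0).
T_k = se2(0.0, 0.0, np.pi / 2)
T_i = se2(1.0, 0.0, 0.0)
T_ki = relative_pose(T_k, T_i)  # robot i seen from robot k
```

In robot k's frame, a point one metre ahead in the world sits one metre to the robot's right, so the translation of `T_ki` is (0, -1).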

Measurement modelling

Inter-robot measurement.
The types of inter-robot measurements include range, angle, etc. Here we focus on range measurement and consider the multi-tag structure. The position of UWB tag $g$ on robot $i$ in the world frame is expressed as

$^w p_{i_g} = {}^w_t R_i \, {}^{b_i} p_{i_g} + {}^w_t p_i, \qquad (1)$

where $^{b_i}p_{i_g}$ is the calibrated extrinsic transformation between the body and tag frames. This representation reflects the geometric relationship between the UWB tags on the same robot and facilitates the construction of the optimizer. Assuming the range measurement noise follows an approximately normal distribution $n_d \sim \mathcal{N}(0, \sigma_d^2)$, the ranging information between tag $g$ on robot $i$ and tag $h$ on robot $j$ is

$d_{i_g j_h} = \big\| {}^w p_{i_g} - {}^w p_{j_h} \big\| + n_d. \qquad (2)$

Considering that there are multiple range measurements between every two robots, all ranging information at the same time is collected in the set $\mathcal{D}^t_{ij} = \{ d_{i_g j_h} \}$.
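Equations (1) and (2) can be sketched in a few lines of numpy; the tag offsets and noise level below are illustrative, not the paper's calibration values:

```python
import numpy as np

def tag_world_position(R_wi, p_wi, p_tag_body):
    """Eq. (1): world position of a tag from the robot pose and the
    calibrated body-frame tag offset."""
    return R_wi @ p_tag_body + p_wi

def range_measurement(p_tag_a, p_tag_b, sigma=0.0, rng=None):
    """Eq. (2): Euclidean distance between two tags, plus optional
    zero-mean Gaussian ranging noise."""
    d = np.linalg.norm(p_tag_a - p_tag_b)
    if sigma > 0.0:
        d += (rng or np.random.default_rng()).normal(0.0, sigma)
    return d

# Robot i at the origin with zero heading; robot j at (3, 4).
# One tag on each robot, mounted 0.34 m forward of the body centre.
R0 = np.eye(2)
pa = tag_world_position(R0, np.array([0.0, 0.0]), np.array([0.34, 0.0]))
pb = tag_world_position(R0, np.array([3.0, 4.0]), np.array([0.34, 0.0]))
d = range_measurement(pa, pb)  # noiseless distance between the two tags
```

With both tags offset identically, the noiseless range equals the 3-4-5 body distance of 5 m.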

Intra-robot measurement.
Intra-robot measurements concern the ego-motion estimation of the robot. As mentioned in Section 1, VIO is used to obtain local pose estimates. Due to long-term drift and differences in the local frames of each robot's odometry, we use the relative pose extracted from odometry in the pose solver rather than the raw odometry directly, which can be expressed as

$\Delta \hat{T}^i_{t-1,t} = \big(\hat{T}^i_{t-1}\big)^{-1} \hat{T}^i_t, \qquad (3)$

where the noise of the relative odometry pose is assumed to be Gaussian, $n_o \sim \mathcal{N}(0, \Sigma_o)$. Furthermore, with the assistance of accelerometers and gyroscopes, the z-axis of the local frame of every robot can be aligned with the direction of gravity.
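The relative-pose extraction of Eq. (3) is a single matrix composition; using the increment cancels the unknown (and drifting) local-frame origin. A small sketch with illustrative poses:

```python
import numpy as np

def pose_mat(x, y, theta):
    """Homogeneous 3x3 matrix for a 2D pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1.0]])

def relative_odometry(T_prev, T_curr):
    """Eq. (3): relative motion between consecutive VIO poses,
    (T_{t-1})^-1 @ T_t, independent of the local-frame origin."""
    return np.linalg.inv(T_prev) @ T_curr

# Two consecutive VIO poses: pure forward motion of 0.5 m.
T0 = pose_mat(2.0, 1.0, 0.0)
T1 = pose_mat(2.5, 1.0, 0.0)
dT = relative_odometry(T0, T1)  # translation (0.5, 0), identity rotation
```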

Mobility modelling.
In practical applications, there are many situations where intra-robot measurements such as odometry are unavailable or lost. However, UWB ranges exhibit strong jitter, and single-shot estimation of the relative pose relying on range alone is unstable. Therefore, it is necessary to impose constraints on the consecutive poses of the robot according to the mobility model of the robot itself:

$T^i_t = T^i_{t-1} \oplus n_m, \quad n_m \sim \mathcal{N}(0, \Sigma_m), \qquad (4)$

where the motion noise $n_m$ is assumed to be Gaussian and $\Sigma_m$ is the information matrix related to the movement characteristics of the robot. When the odometry measurement of some robot is lost, the mobility model replaces the odometry constraint in the optimizer.
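In residual form, the mobility model simply penalizes the pose change between consecutive steps. A minimal sketch, where the weighting below is an illustrative choice of the square-root information matrix rather than the paper's tuning:

```python
import numpy as np

def mobility_residual(pose_prev, pose_curr, info_sqrt):
    """Penalize the change between consecutive poses (x, y, theta) under the
    mobility model; info_sqrt is a square-root information matrix chosen
    from the robot's expected motion per time step."""
    diff = np.asarray(pose_curr, float) - np.asarray(pose_prev, float)
    diff[2] = (diff[2] + np.pi) % (2 * np.pi) - np.pi  # wrap heading
    return info_sqrt @ diff

# A robot expected to move at most ~0.2 m and turn ~0.1 rad per step.
W = np.diag([1 / 0.2, 1 / 0.2, 1 / 0.1])
r = mobility_residual([0.0, 0.0, 0.0], [0.1, 0.0, 0.05], W)
```

Small motions produce small residuals, so when odometry drops out this term keeps consecutive range-only estimates from jumping.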

Graph optimization for relative pose estimation
A sliding window is introduced into the algorithm to process the relative position and orientation information between nodes. The full states $\mathcal{X}_k$ and measurements $\mathcal{Z}_k$ in the window consist of the robot poses and of the range and odometry measurements collected over the window. The poses and measurements in the sliding window form a graph, as shown in Figure 2. Each pose serves as a vertex in the graph optimization, representing a state to be estimated, and each measurement serves as an edge representing a constraint between states. The goal of the graph optimization is to find the optimal pose estimates that satisfy the constraints represented by the edges while minimizing the residual errors.
$\mathcal{D}$ and $\mathcal{O}$ respectively represent the sets of all ranging measurements and all odometry measurements. The odometry residual ensures the local consistency of robot $i$'s poses; when an odometry measurement does not exist, this term is replaced by the mobility model, which has the same form. At the same time, the Huber norm $\rho(\cdot)$ is introduced to suppress possible outlier interference. The latest odometry measurement is used to predict the robot pose, which naturally serves as the initial value for the core optimization algorithm.
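To make the robust least-squares machinery concrete, here is a toy Gauss-Newton solver with Huber reweighting that estimates a single 2D position from ranges to known tag positions. It is a stand-in for the full sliding-window graph (which also carries pose and prior residuals); the anchor layout and Huber threshold are illustrative:

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """IRLS weight corresponding to the Huber norm rho(.), damping
    residuals larger than delta."""
    a = abs(r)
    return 1.0 if a <= delta else delta / a

def solve_position(tags, ranges, x0, iters=30):
    """Tiny Gauss-Newton solver: one unknown 2D position, range residuals
    only, robustly reweighted via the Huber norm."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - tags                    # (n, 2) vectors tag -> estimate
        d = np.linalg.norm(diff, axis=1)   # predicted ranges
        r = d - ranges                     # range residuals
        J = diff / d[:, None]              # Jacobian of ||x - tag|| w.r.t. x
        w = np.array([huber_weight(ri) for ri in r])
        H = (J * w[:, None]).T @ J         # robust-weighted normal equations
        g = (J * w[:, None]).T @ r
        x = x - np.linalg.solve(H, g)
    return x

# Three anchor tags with good geometry (positions illustrative).
tags = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
true_p = np.array([1.0, 2.0])
est = solve_position(tags, np.linalg.norm(tags - true_p, axis=1), x0=[0.5, 0.5])
```

With exact ranges the iteration recovers the true position; with a corrupted range, the Huber weight shrinks that measurement's influence instead of letting it drag the whole estimate.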

Outlier rejection and fast initialization
In practical applications, UWB ranging may be affected by the environment, resulting in outliers or ranging failures. In addition to the outlier suppression described in the previous section, another independent criterion is introduced into the framework to further improve the robustness of the system, namely the $3\sigma$ rule: a range measurement $d_{i_g j_h}$ is rejected if

$\big| d_{i_g j_h} - \| {}^w\hat{p}_{i_g} - {}^w\hat{p}_{j_h} \| \big| > 3\sigma_d,$

where the predicted tag positions are computed from $\hat{T}_i$, the pose of robot $i$ predicted by odometry. At the same time, even when ranging is lost for a period of time, the robot state can still be predicted using the odometry measurements.
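The gate itself is a one-line check; a sketch with an illustrative noise level:

```python
import numpy as np

def is_range_outlier(measured, p_i_pred, p_j_pred, sigma):
    """3-sigma gate: reject a range whose deviation from the
    odometry-predicted inter-tag distance exceeds 3 * sigma."""
    predicted = np.linalg.norm(np.asarray(p_i_pred) - np.asarray(p_j_pred))
    return abs(measured - predicted) > 3.0 * sigma

# Predicted tag positions 5 m apart; ranging noise sigma = 0.1 m.
ok = is_range_outlier(5.1, [0.0, 0.0], [3.0, 4.0], 0.1)   # within the gate
bad = is_range_outlier(6.0, [0.0, 0.0], [3.0, 4.0], 0.1)  # NLOS-like spike
```

The 0.1 m deviation passes, while the 1 m spike typical of a non-line-of-sight reflection is rejected before it enters the optimizer.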
Our system supports fast initialization of all robots' poses in the reference frame. Generally, we set the initial local frame of the robot equipped with four UWB tags as the origin of the reference frame. When the system starts running, only the range edges connected to all robots are used to get rough initial poses. In practice, since the optimization problem is highly nonlinear, our strategy is to use the range measurements between each robot and the reference robot to obtain the initial poses of all robots. After this, the pose can be predicted by odometry and used as the initial value of the sliding-window estimator.
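One way such a rough initial pose can be obtained is closed-form trilateration against the reference robot's tags, followed by a heading estimate from the slave robot's two tag positions. A sketch under those assumptions; the square 0.68 m tag layout below is illustrative:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Closed-form linear trilateration: a rough position from >= 3 anchor
    ranges, suitable as a coarse initial guess before refinement."""
    a, d = np.asarray(anchors, float), np.asarray(ranges, float)
    A = 2.0 * (a[1:] - a[0])
    b = (np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2)) - (d[1:] ** 2 - d[0] ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def heading_from_tags(front_tag, rear_tag):
    """Initial yaw of a two-tag robot from its two trilaterated tags."""
    v = np.asarray(front_tag) - np.asarray(rear_tag)
    return np.arctan2(v[1], v[0])

# The master robot's four tags define the reference frame.
anchors = np.array([[0.0, 0.0], [0.68, 0.0], [0.0, 0.68], [0.68, 0.68]])
p_front = trilaterate(anchors, np.linalg.norm(anchors - np.array([3.0, 2.0]), axis=1))
p_rear = trilaterate(anchors, np.linalg.norm(anchors - np.array([2.32, 2.0]), axis=1))
yaw = heading_from_tags(p_front, p_rear)  # robot pointing along +x: yaw = 0
```

The linear system comes from subtracting the first range equation from the others, which eliminates the quadratic term in the unknown position.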

Experiment
The algorithm and system proposed in this paper are deployed on two identical AGVs (automated guided vehicles) to build a complete test system. As shown in Figure 3, both AGVs carry UWB tags, IMU-integrated cameras and embedded computers. The UWB tags are deployed at the edges of each AGV, 68 cm apart. The camera is a RealSense D435i, and the computer is a Jetson AGX. The raw data from the IMU at 200 Hz and the visual information from the stereo camera at 30 Hz are integrated to determine the visual-inertial odometry of each robot using ORB-SLAM3. All computation is executed onboard the robots. The experiment is divided into two stages. In the first stage, the two AGVs move in parallel to verify the fast initialization capability; in the second stage, they move in relative circles to verify the relative positioning and attitude determination ability. Ground truth is provided by a GNSS-RTK system deployed on the AGVs. Unless otherwise specified, the evaluation criterion is the root mean square error (RMSE).

Parallel motion test
As shown in Figure 4, the two robots make nearly parallel movements to verify the fast initialization ability. With the help of the "single robot, multiple tags" structure, the relative pose initialization between robots can be completed without complex relative motion, which improves the adaptability and robustness of the system. The trajectory obtained by fusing odometry is closer to the ground truth, and the local trajectories show that fusing odometry reduces the effect of range jitter on position estimation, which enables the system to run in complex environments.
The results of relative pose estimation obtained using range only and fusing odometry are shown in Table 1(a). The position error is smaller along the direction connecting the two robots (x-axis), which is consistent with intuition. Moreover, even when only range is used, the relative positioning accuracy is close to 10 cm. That is to say, only one robot or a subset of robots needs to be equipped with odometry, which simplifies deployment, saves costs and expands the applications of our system.

Circle motion test
The second stage of the experiment focuses on verifying the system's relative positioning and attitude determination ability. As shown in Figure 5, the two robots move counterclockwise in a 7-meter square field along roughly cycloidal tracks while running the relative positioning and orientation algorithm. Ground Truth 1 and 2 are the GPS-RTK measurements of robot 1 and robot 2 respectively. To present the relative positioning accuracy more intuitively, we take the GPS-RTK trajectory of robot 2 as the reference (Ground Truth 2) and compare it with the estimated trajectory of robot 2 relative to robot 1. As shown in Figure 5, the tracks are almost identical. Table 1(b) shows the quantitative results, which prove that the relative positioning error can reach centimeter level with the proposed algorithm framework and system, which is the highest accuracy among similar works. The algorithm is implemented on an ARMv8.2 CPU @ 2.26 GHz (Jetson AGX) running Ubuntu 18.04; the project is built upon ROS and C++. All range and odometry measurements are employed for pose estimation, while the oldest measurements in the sliding window are discarded after optimization. The sliding window size is 20, and optimizing the window takes less than 10 milliseconds, enabling accurate, real-time estimation of the robots' relative poses.

Conclusions
In this paper, a graph optimization based relative positioning and pose estimation framework is proposed to fuse inter- and intra-robot measurements over a sliding window. Based on this framework, a UWB + VIO fusion relative localization system is designed, which provides a potential solution for multi-robot relative localization in infrastructure-free environments. The adoption of the "single robot, multiple tags" structure solves the unobservability of the relative pose and enhances the system's suitability for rapid response and high-precision formation. The good relative localization ability from range measurements alone can be used to cope with odometry loss, simplify deployment and extend applications. The fusion of range and odometry measurements improves the accuracy and robustness of relative localization, which enables the system to run in particular environments such as NLOS conditions. In the real-site experiments, the centimeter-level relative pose estimation accuracy proves the performance and application potential of our system. In future work, we still need to improve some aspects. First, we need to refine our approach and apply it to multi-robot systems in 3D space. Another aspect is to integrate collaborative mapping and other functions to extend the application scenarios of the system.

Figure 2 .
Figure 2. Flow diagram of the relative pose estimation algorithm with sliding window. The core optimization algorithm can be expressed as

$\min_{\mathcal{X}_k} \; \| r_p \|^2_{\Sigma_p} + \sum_{d \in \mathcal{D}} \rho\big( \| r_d \|^2_{\Sigma_d} \big) + \sum_{o \in \mathcal{O}} \| r_o \|^2_{\Sigma_o},$

where $\| \cdot \|^2_W$ means the squared Mahalanobis norm with the information matrix $W$, $r_p$ is the residual of the prior item transferred from the previous window, $\Sigma_p$ is the covariance carried over from historical measurements, $r_d$ is the residual of the range measurements and $r_o$ is the residual of the odometry measurements.

Figure 3 .
Figure 3. Schematic diagram of experimental environment, AGV robot and hardware deployment.

Figure 4 .
Figure 4. Results of the parallel motion test. Local trajectories show that the effect of range jitter on position estimation can be reduced by fusing odometry.

Figure 5 .
Figure 5. Results of the circle motion test.

Table 1 .
Accuracy comparison. Results obtained using range only and fusing odometry.