High Precision Vehicle Location Method Based on Multi-sensor Fusion in the Urban Complex Environment

This paper proposes a high-precision vehicle positioning method that achieves centimeter-level accuracy in complex urban environments. Measurements from multiple sensors (GNSS, Lidar, and IMU) are fused within a multi-sensor fusion framework using a Kalman filter, which maintains high-precision positioning even during GNSS outages and significantly improves the accuracy and robustness of the positioning system. Real-vehicle experiments verify the effectiveness of the method: an average RMS accuracy of 0.07 m is achieved, meeting the high-precision positioning requirements of autonomous vehicles.


Introduction
Accurate positioning technology, as a key link in autonomous driving, is responsible for providing real-time motion information of the carrier (such as an autonomous vehicle), including its position, speed, and attitude [1]. At present, the Global Navigation Satellite System (GNSS) is an indispensable technology for vehicle positioning [2]. However, with the densification of urban buildings and three-dimensional traffic, satellite signals in some sections (such as urban canyons, underground tunnels, and the lower decks of urban interchanges) are partially or completely blocked for long periods, causing multipath effects or signal interruption that prevent GNSS from providing accurate positioning information [3]. Multi-sensor data fusion [4] is commonly used to solve this problem. Multi-sensor data fusion merges the information that different sensors report about the same object or environmental feature into a unified description. Because it does not need to store large amounts of observation data, new results can be computed as soon as new observations arrive, which makes it convenient for real-time processing; the data in high-precision positioning is well suited to this approach. At present, the most common integration scheme for vehicle positioning combines GNSS with an Inertial Navigation System (INS) [5][6], but a GNSS/INS system still accumulates positioning errors over time when satellite signals are interrupted for long periods. The literature [7][8] proposed integrating GNSS, INS, and visual sensors, which improved positioning accuracy during GNSS interruptions to a certain extent; however, such systems are strongly affected by illumination and their positioning results are unstable. This paper proposes a high-precision vehicle positioning method based on multi-sensor fusion for the complex urban environment. The method integrates GNSS, INS, and multi-line Lidar, and designs three different fusion models according to the application requirements of different scenarios to meet the demands of vehicle positioning in complex urban environments. The rest of this paper is organized as follows: first, the proposed multi-sensor fusion positioning architecture is presented; second, the multi-sensor registration method and the fusion positioning method are described; finally, the proposed method is verified through experiments in a complex environment.

Multi-sensor fusion architecture proposed in this paper
The inputs of the multi-sensor data fusion positioning system come mainly from the GNSS system, the INS, and a Lidar-based map-matching positioning system. After data preprocessing, data registration, and data fusion, the fusion positioning system outputs the speed, position, and attitude of the vehicle. The multi-sensor data fusion positioning framework proposed in this paper is shown in Figure 1.

Figure 1. Multi-sensor fusion architecture proposed in this paper
Data preprocessing includes the initialization and calibration of the GNSS, INS, and Lidar sensors. Sensor initialization refers to the independent calibration of each sensor relative to the system coordinate frame. Once initialization is completed, the sensors can be used to register data collected from a common target. Data registration merges the observations or plots from one or more sensors with known or confirmed events, so as to maximize the probability that the observations and plots contained in each event set originate from the same entity; specifically, the observations or plots of each batch of targets are matched with the corresponding data in the event set. The specific process is described in Section 3.2.

Multi-sensor fusion registration method
During sensor registration, enough data points are collected to estimate the systematic deviations, and the estimated deviations are then used to correct subsequent sensor data. Sensor registration mainly includes time registration and spatial registration.

Time registration
Time registration synchronizes the measurements that different sensors make of the same target to a common time. Because each sensor measures the target independently and the sampling periods differ (for example, the satellite observation unit and the Lidar have comparatively long sampling periods), the times at which they report to the data processing center also differ. In addition, because of differing communication-network delays, the time required to transmit information between each sensor and the fusion processing center varies, so there may be a time offset between the data sent by different sensors. Asynchronous information must therefore be registered to the same time before fusion. In this paper, sensor time synchronization mainly adopts hardware synchronization, in which a single clock source provides the same reference time to every sensor; each sensor calibrates its own clock against this reference, realizing time synchronization in hardware. First, a unified clock source is set up: since each sensor carries its own timestamp, the unified clock is used to synchronize the timestamps of the different sensors. If a sensor supports hardware triggering, the GNSS timestamp can be used as the triggering benchmark; in this case, the timestamp in the data provided by the sensor is the global (GNSS) timestamp rather than the sensor's own. Second, a hardware synchronization trigger is used. Because the sampling frequencies of the sensors differ (Lidar typically samples at 10 Hz, cameras at 25/30 Hz) and data transmission between sensors incurs some delay, the nearest-neighbor frame can be found by searching for adjacent timestamps. However, if the two timestamps differ greatly while the sensor or an obstacle is moving, the result is a large synchronization error. In this case, a hardware synchronization trigger can be adopted to alleviate the error caused by timestamp searching, and the native frequency of a sensor can also be adjusted, for example setting the camera to 20 Hz, to reduce the time-difference problem.
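The nearest-neighbor timestamp search described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; the function name and the 50 ms rejection threshold are assumptions.

```python
import bisect

def nearest_frame(target_ts, frame_timestamps, max_gap=0.05):
    """Find the frame whose timestamp is closest to target_ts.

    frame_timestamps must be sorted ascending (seconds).
    Returns (index, gap); index is None when the closest frame is
    further away than max_gap, in which case the frames should not
    be paired and a hardware-triggered capture is preferable.
    """
    i = bisect.bisect_left(frame_timestamps, target_ts)
    candidates = []
    if i > 0:
        candidates.append(i - 1)
    if i < len(frame_timestamps):
        candidates.append(i)
    # Pick the neighbor with the smallest absolute time difference.
    best = min(candidates, key=lambda j: abs(frame_timestamps[j] - target_ts))
    gap = abs(frame_timestamps[best] - target_ts)
    if gap > max_gap:
        return None, gap
    return best, gap

# Example: 10 Hz Lidar stamps matched against a camera stamp at t = 0.12 s
lidar_ts = [0.0, 0.1, 0.2, 0.3, 0.4]
idx, gap = nearest_frame(0.12, lidar_ts)  # idx == 1, gap ~ 0.02 s
```

When the moving-scene gap check fails, the caller would fall back to the hardware trigger rather than pair mismatched frames.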

Spatial registration
Spatial registration estimates and compensates sensor errors based on measurements of common objects in space made by multiple sensors. Since each sensor reports its measurements in its own coordinate system, the positioning system must convert them into a common coordinate system. For a set of different subsystems, each using a different coordinate frame, the measurements must be converted into the same coordinate system before fusion; after processing, the results are converted back into each subsystem's coordinate frame and transmitted to it. The registration process is shown in Figure 2. Because of the slant-range and azimuth deviations of the sensors (Δr₁, Δθ₁, Δr₂, Δθ₂), a single real target appears as two targets on the system plane, so spatial registration is required. The registration process is as follows. If noise is ignored, the true slant range and azimuth of the target relative to each sensor are obtained by subtracting the deviations from the measured values, where Δr₁ and Δθ₁ denote the slant-range and azimuth deviations of sensor 1, and Δr₂ and Δθ₂ those of sensor 2. Combining the basic equations of Figure 2 and expanding the resulting equation in a first-order Taylor series with respect to Δr₁, Δθ₁, Δr₂, and Δθ₂ yields a linear relation between the position discrepancy reported by the two sensors and their deviations. This relation provides the basis for a deviation estimation method that is independent of target motion tracking.
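The first-order relation above can be used to estimate the four deviations by stacking several common observations into a linear least-squares problem. The following is a minimal sketch under the measurement model x = x_s + r·sin θ, y = y_s + r·cos θ from Figure 2; the function name and the use of plain least squares are assumptions, not details from the paper.

```python
import numpy as np

def estimate_biases(obs1, obs2, pos1, pos2):
    """Estimate (dr1, dth1, dr2, dth2) from paired observations of
    common targets made by two sensors.

    obs1, obs2: sequences of measured (slant range r, azimuth theta).
    pos1, pos2: (x, y) of each sensor in the navigation frame.
    First-order model: subtracting the bias terms must map both
    sensors onto the same true target position.
    """
    A_rows, b_rows = [], []
    for (r1, t1), (r2, t2) in zip(obs1, obs2):
        # x residual: dr1*sin t1 + r1*dth1*cos t1 - dr2*sin t2 - r2*dth2*cos t2
        A_rows.append([np.sin(t1), r1 * np.cos(t1), -np.sin(t2), -r2 * np.cos(t2)])
        b_rows.append((pos1[0] + r1 * np.sin(t1)) - (pos2[0] + r2 * np.sin(t2)))
        # y residual: dr1*cos t1 - r1*dth1*sin t1 - dr2*cos t2 + r2*dth2*sin t2
        A_rows.append([np.cos(t1), -r1 * np.sin(t1), -np.cos(t2), r2 * np.sin(t2)])
        b_rows.append((pos1[1] + r1 * np.cos(t1)) - (pos2[1] + r2 * np.cos(t2)))
    biases, *_ = np.linalg.lstsq(np.array(A_rows), np.array(b_rows), rcond=None)
    return biases  # (dr1, dth1, dr2, dth2)
```

Each common target contributes two equations in the four unknowns, so two or more well-spread targets make the system solvable, exactly as the target-motion-independent estimation method requires.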

Multi-sensor fusion positioning method based on EKF
The vehicle state for autonomous driving generally includes position (3D coordinates x, y, z), velocity, and orientation (quaternion x, y, z, w), giving a 10-dimensional vector. The input of the vehicle motion model comes from the IMU and contains the accelerations and angular velocities in the x, y, and z directions, a 6-dimensional vector. The vehicle motion model is computed as follows:

p_k = p_{k-1} + Δt·v_{k-1} + (Δt²/2)·(C·f − g)
v_k = v_{k-1} + Δt·(C·f − g)
q_k = q_{k-1} ⊗ q(ω_{k-1}·Δt)

where p_k is the vehicle position at time k, Δt is the discrete time step, v_k is the vehicle velocity at time k, C is the transformation matrix from the sensor coordinate system to the navigation coordinate system, f is the specific force output by the INS, g is the gravitational acceleration, q_k is the vehicle quaternion at time k, ω_{k-1} is the vehicle angular rate at time k−1, and q(·) converts the incremental rotation (Euler angle) over Δt into a quaternion.
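The motion-model propagation can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a z-up navigation frame with g = [0, 0, 9.81], a [w, x, y, z] quaternion ordering, and helper names chosen here for clarity.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def small_angle_quat(theta):
    """Quaternion for a rotation vector theta (axis * angle)."""
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = theta / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_to_rot(q):
    """Rotation matrix (sensor -> navigation) from quaternion."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def propagate(p, v, q, f_b, w_b, dt, g=np.array([0.0, 0.0, 9.81])):
    """One prediction step of the vehicle motion model.

    p, v: position and velocity in the navigation frame (3,)
    q: attitude quaternion [w, x, y, z], sensor-to-navigation
    f_b: specific force from the IMU, sensor frame (3,)
    w_b: angular rate from the IMU, sensor frame (3,)
    """
    C = quat_to_rot(q)
    a = C @ f_b - g                      # acceleration in the navigation frame
    p_new = p + dt * v + 0.5 * dt**2 * a
    v_new = v + dt * a
    q_new = quat_mul(q, small_angle_quat(w_b * dt))
    return p_new, v_new, q_new / np.linalg.norm(q_new)
```

For a stationary, level vehicle the accelerometer reads back gravity, so the gravity term cancels and position and attitude stay constant, which is a quick sanity check on the sign conventions.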
To apply the EKF, the error state is defined as δx = [δp, δv, δφ]ᵀ, where each of δp (position error), δv (velocity error), and δφ (attitude error) is a 3×1 vector, so the error state is 9-dimensional. The motion model of the EKF propagates this error state and its covariance according to the linearized vehicle motion model above.

GNSS measurement model in EKF:
y = h(x) + v = p + v,  H = [I₃ 0₃ 0₃]

where GNSS directly observes the vehicle position p, v is the measurement noise, and the measurement matrix H selects the position components of the error state. The multi-sensor fusion process of IMU+GNSS+Lidar in the EKF is as follows. Step 1: update the vehicle state using the input of the IMU.
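With a 9-dimensional error state and H = [I₃ 0 0], the GNSS position update can be sketched as a few lines of Python. The function name and the noise shapes are illustrative assumptions, not the paper's code.

```python
import numpy as np

def gnss_update(x_err, P, p_pred, z_gnss, R):
    """Error-state EKF update using a GNSS position fix.

    x_err: 9-dim error state [dp, dv, dphi]
    P: 9x9 error covariance
    p_pred: predicted (prior) position, z_gnss: measured position
    R: 3x3 GNSS measurement noise covariance
    Measurement model: z = p + v, so H = [I3 0 0].
    """
    H = np.hstack([np.eye(3), np.zeros((3, 6))])
    y = z_gnss - p_pred                      # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_err = x_err + K @ (y - H @ x_err)
    P = (np.eye(9) - K @ H) @ P
    return x_err, P
```

A Lidar map-matching fix that also reports position could reuse the same update with its own R, which is how the architecture treats both absolute-position sources uniformly.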

Establishment of experimental platform
To verify the performance of the constructed EKF-based multi-sensor fusion positioning method, a real-vehicle experimental platform was built on a Buick Cruise; the experimental vehicle and sensors are shown in Figure 3.

Figure 3. Experimental vehicle and sensor
The vehicle incorporates a low-cost NovAtel Superstar II GNSS receiver, a MEMSIC VG440CA-200 MEMS inertial measurement unit, and a Sagitar Lidar to provide the information required by the combined positioning system. The position measurement accuracy of the GNSS receiver is 5 m, its speed measurement accuracy is 0.05 m/s, and its output frequency is 1 Hz. The Lidar model is the RS-Lidar-32; the measurement accuracy of its output point cloud is 0.02-0.03 m and its output frequency is 10 Hz. The output frequency of the MEMS inertial measurement unit is 100 Hz, and its performance parameters are shown in the following table. In addition, this paper adopts the high-precision SPA-CPT integrated navigation and positioning system developed by NovAtel to provide the location reference; its positioning accuracy is 0.01 m under normal working conditions and 0.02 m after a 10 s GNSS signal interruption.

Real vehicle experiment and result analysis
To verify the performance of the vehicle integrated positioning system, several real-vehicle experiments were carried out under urban conditions. The experimental path is located in Jiangning District, Nanjing, as shown in Figure 4. The path lies in an open urban highway area with good satellite reception; throughout the experiment, GNSS tracked more than 5 satellites. The whole path is about 18 km long, and one experiment takes about 30 minutes. The path covers typical urban driving conditions, including turning, waiting at traffic lights, and driving straight; in addition, frequent acceleration and deceleration were performed deliberately to make the driving conditions more complicated. To evaluate the performance of the integrated positioning system during GNSS interruption, six simulated GNSS outage intervals were inserted into the path. Each interval lasted 40 s and covered road conditions such as right-angle curves, gentle curves, and straight lines, as shown by the red circles in Figure 4.

Performance analysis of integrated positioning system when GNSS is interrupted
Based on the sensor data recorded during the experiment, the performance of the integrated positioning system under occlusion is evaluated. Figure 5 compares the three-dimensional position output of the combined system with the reference trajectory. Table 2 gives the statistics of the plane position error of the fusion positioning system in the case of GNSS failure, where Max denotes the maximum plane position error. According to the analysis of the real-vehicle test results, during the 40 s GNSS outage intervals the average maximum error of the proposed multi-sensor fusion scheme is 0.09 m and the average RMS is 0.07 m, which meets the positioning requirements of unmanned vehicles on complex urban roads.

Conclusion
To meet the high-precision positioning requirements of unmanned vehicles in complex urban environments, this paper designs a multi-sensor fusion positioning algorithm that integrates GNSS, INS, and Lidar, and carries out real-vehicle experiments. The experimental results show that during a 40 s GNSS interruption, the maximum positioning error of the algorithm is 0.18 m, which meets the positioning requirements of unmanned vehicles on complex urban roads.

Figure 2. Schematic diagram of the registration process. In the figure, r₁ and θ₁ denote the slant-range and azimuth measurements of sensor 1, and r₂ and θ₂ those of sensor 2. (x_s1, y_s1) and (x_s2, y_s2) denote the positions of sensors 1 and 2 on the navigation coordinate plane; (x₁, y₁) and (x₂, y₂) denote the measurements of sensors 1 and 2 expressed in the navigation coordinate system. From the right side of the figure, the following basic equations can be derived:

x₁ = x_s1 + r₁·sin θ₁
y₁ = y_s1 + r₁·cos θ₁
x₂ = x_s2 + r₂·sin θ₂
y₂ = y_s2 + r₂·cos θ₂
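The basic equations above are simple polar-to-Cartesian projections with azimuth measured from the y (north) axis; a one-function sketch (the function name is an assumption) makes the convention concrete:

```python
import math

def to_nav(xs, ys, r, theta):
    """Project a (slant range, azimuth) measurement into the navigation
    plane; azimuth theta is measured from the y (north) axis, as in the
    Figure 2 equations x = xs + r*sin(theta), y = ys + r*cos(theta)."""
    return xs + r * math.sin(theta), ys + r * math.cos(theta)
```

Without sensor biases, applying `to_nav` to both sensors' measurements of one target yields the same navigation-plane point; the bias-induced discrepancy between the two projections is exactly what spatial registration estimates.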

Step 2: propagate the error-state covariance (16). Step 3: when a GNSS or Lidar measurement arrives, go to Step 4; otherwise, return to Step 1. Step 4: calculate the error state δx = x̂ − x̌ (17). Step 5: correct the predicted state with the estimated error state. Here x̂ denotes a posterior estimate and x̌ denotes a prior estimate.
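The control flow of Steps 1-5 can be sketched as a simple loop. The function arguments `predict` and `correct` are assumed stand-ins for the IMU propagation (Steps 1-2) and the error-state correction (Steps 4-5) described above; the data layout is illustrative.

```python
from collections import deque

def fusion_loop(imu_samples, measurements, state, predict, correct):
    """Run the Step 1-5 cycle over timestamped data.

    imu_samples:  iterable of (timestamp, imu_data), in time order
    measurements: iterable of (timestamp, z) from GNSS or Lidar
    predict(state, imu) -> prior state       (Steps 1-2)
    correct(state, z)   -> posterior state   (Steps 4-5)
    """
    meas = deque(measurements)
    for imu in imu_samples:
        state = predict(state, imu)          # Steps 1-2: IMU propagation
        if meas and meas[0][0] <= imu[0]:    # Step 3: measurement arrived?
            _, z = meas.popleft()
            state = correct(state, z)        # Steps 4-5: correct with error state
    return state
```

When no measurement is pending, the loop simply keeps propagating with the IMU, which is exactly the behavior that lets the filter bridge GNSS outages.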

Figure 5. Three-dimensional position output of the combined positioning system under GNSS occlusion

Table 2. Error statistics of the integrated positioning system during GNSS interruption in path 1 (unit: m)