Control of autonomous ground vehicles: a brief technical review

This paper presents a brief review of developments in autonomous vehicle systems technology. A concise history of autonomous driver assistance systems is presented, followed by a review of the current state-of-the-art sensor technology used in autonomous vehicles. Standard sensor fusion methods that have recently been explored are discussed. Finally, advances in embedded software methodologies that define the logic between sensory information and actuation decisions are reviewed.


Introduction
Autonomous vehicle technology has been a major research and development topic in the automotive industry during the last decade. The technologies developed in the automotive industry also have direct applications in construction, mining, and agricultural equipment, seaborne shipping vessels, and unmanned aerial vehicles (UAVs). Significant R&D activities in this area date back three decades. Despite the intensity of investment in technology development, it will still take a few decades before an entirely autonomous self-driving vehicle navigates itself through national highways and congested urban cities [1]. People have been trying to build self-driving cars since the invention of the car. The history of automated driving, from its early stages to automated highways and automated vehicles, is reviewed in [2]. A brief history of major autonomous car developments is shown in Fig. 1. The past, present, and potential future of driver assistance systems are reviewed by Bengler et al. [8] and summarized in Fig. 2. Early driver assistance systems were based on sensors that measure the internal status of the vehicle. These sensors enable the control of vehicle dynamics so that the trajectory requested by the driver is followed as closely as possible. In 1995, additional dynamic driving control systems such as electronic stability control (ESC) were introduced. The second generation of driver assistance systems was introduced around the 1990s, based on sensors that measure the external state of the vehicle, with the focus of providing information and warnings to the driver. The following sensors, which measure conditions outside the vehicle and the vehicle's position relative to its environment, are essential in driver assistance systems and autonomous vehicle technologies: vision, LIDAR, radar, ultrasonic range, GPS, and inter-vehicle communication.
The latest generation of driver assistance systems, also called Advanced Driver Assistance Systems (ADAS), defines and controls trajectories beyond the current request of the driver, i.e. overriding driver commands to avoid a collision. All these sensors have overlapping and complementary capabilities. Data fusion strategies that combine real-time information from these multiple sensors are an important part of the embedded control software. Actuation technologies, i.e. computer-controlled steering, throttle, transmission, and braking, are mature and do not present any R&D challenges. The embedded control software development (which includes data fusion from sensors, inter-vehicular communication, and real-time cloud computing support) is the key technical challenge.
On-road vehicle automation requires a standard set of regulations and terminology with a taxonomy and definitions. Some regulations and standards have been released [9]. The new J3016 standard from SAE International simplifies communication and facilitates collaboration within technical and policy domains. According to the standard, as shown in Fig. 3, the levels of driving automation can be divided into Conditional, High, and Full Automation. The standard does not provide complete definitions applicable to lower levels of automation (No Automation, Assisted, or Partial Automation). Active safety and driver assistance systems that intervene to avoid and/or mitigate an emergency situation and then immediately disengage are also not included in the various levels of automation.
The short-term goal is to automate driving in select, well-defined situations, as implemented by some technology companies, for example the testing of self-driving taxi cabs in well-defined suburbs. The long-term goal is to achieve door-to-door automated driving in any situation. Some current ADAS examples include traffic jam assist, collision avoidance assist, and pedestrian and oncoming traffic detection systems.

Sensors: Principles & Limitations
Different sensors and systems are used for navigation and control of autonomous vehicles, as shown in Fig. 4. Prominent sensors such as LIDAR, GPS, radar, vision, ultrasonic, and inertial measurement units, along with their working principles and usage, are discussed in the following sections.

LIDAR
LIDAR (Light Detection and Ranging) transmits a beam of light pulses from a rotating mirror; part of that light reflects back to the sensor, allowing it to detect any non-absorbing object or surface (Fig. 5). The target distance is calculated as the speed of light times half the measured round-trip time, at multiple angles. The sensor scans the field of view in 3D with a finite spatial resolution and frequency and uses the calculated distances to map the surrounding environment (Fig. 5). LIDAR performance is poor in rain, snow, dust, and foggy environments. A high-end LIDAR sensor can measure multiple distances per laser pulse, which helps it see through dust, rain, mostly transparent surfaces such as glass windows, and porous objects like wire fences. A higher-power laser is desired to increase the signal-to-noise ratio, but to prevent damage to the human eye, a laser wavelength of 905 nm is used with a low duty cycle to achieve the desired range [10]. The current cost of LIDAR sensors is relatively high [11,12], and there are some issues with the long-term reliability of their mechanical scanning mechanisms. They have been used heavily in research applications, but were not widely used in automotive OEM safety systems until recently.
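The time-of-flight relation described above can be sketched in a few lines. This is an illustrative calculation only, not code from any particular LIDAR product:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_time_s: float) -> float:
    """Target distance from a LIDAR pulse's round-trip time (time of flight).

    The pulse travels to the target and back, so the one-way distance is
    half the round-trip path: d = c * t / 2.
    """
    return C * round_trip_time_s / 2.0

# A pulse echo arriving 200 ns after emission corresponds to a target
# roughly 30 m away.
print(lidar_range(200e-9))  # ≈ 29.98 m
```

The same round-trip relation underlies pulsed radar ranging, with radio pulses in place of laser pulses.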

Radar
Radar (Radio Detection and Ranging) transmits electromagnetic pulses and senses the echoes to detect and track objects. The frequency of the sensed echoes varies depending on the speed of the object. Radar can measure the relative distance, velocity, and orientation of an object [13,14]. In monostatic radars, in which the transmitter and receiver are co-located, the range of the target is measured as the round-trip travel time of a pulse times the speed of light, divided by two. Automotive radars are typically available for short (≈ 30 m), medium (≈ 60 m), and long-range (≈ 200 m) distances, with frequencies ranging from 3 MHz (very long range) to 100+ GHz (short range) (Fig. 4). Radar requires less computing resources than vision or LIDAR. Typical applications include lane keeping, adaptive cruise control, object detection, etc. [15].

GPS
GPS (Global Positioning System) satellites orbit the earth and broadcast reference signals. A GPS receiver on earth deciphers its actual location with meter-level accuracy. However, so-called "differential GPS" can pinpoint a location with centimeter-level accuracy, which is necessary for the navigation of autonomous vehicles. GPS consists of three segments: space, control, and user. The space segment includes the satellites, the control segment manages and controls them, and the user segment is related to the development of user equipment for both military and civil purposes [16]. GPS-based navigation applications are widely used to accurately estimate the vehicle's location with respect to a map, which is known as localization. However, GPS-based navigation can be inaccurate and can lead to ghosting phenomena.

Vision
A vision system is composed of a camera and an image processing unit (Fig. 6). A typical camera combines a focusing lens with an array of photo-detectors, one for each pixel in the field of view (FOV). The array of photo-detectors sends pixel information to the image processing unit, which processes it with detection algorithms to find desired objects. Vision sensors capture more visual information and hence track the surrounding environment more effectively than other sensors. They are categorized into mono and stereo types. Mono camera systems are often used for lane marking and lane edge detection, basic object detection, road sign detection, and localization. Multiple-camera or stereo systems additionally provide depth for object detection. Their primary advantages are their low cost, off-the-shelf components, and software-based implementation. Their primary disadvantage is handling the full range of ambient and uncontrolled conditions such as lighting, shadowing, reflection, weather, dust, and smoke. Also, processing data from these sensors in real time requires a large amount of computational resources. Even with these limitations, they are extensively used in autonomous vehicles.
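The depth recovery that stereo systems provide follows from the standard pinhole-camera triangulation relation Z = f·B/d (focal length times baseline over disparity). This is a textbook sketch under an idealized rectified-camera assumption, not a description of any specific vision module:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from stereo disparity (idealized pinhole model).

    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centres in metres
    disparity_px: horizontal pixel shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity otherwise)")
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and a 0.12 m baseline, a 10 px disparity
# corresponds to a point 8.4 m ahead.
print(stereo_depth(700.0, 0.12, 10.0))  # 8.4
```

The inverse relation between depth and disparity is why stereo depth accuracy degrades quadratically with distance: at long range, a sub-pixel disparity error translates into a large depth error.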

Inertial Measurement Unit (IMU)
A moving vehicle can experience linear and rotational motion along the x-, y-, and z-axes (longitudinal, lateral, and vertical). An inertial measurement unit measures these motions, including linear accelerations and angular rates. This information is used to improve GPS measurements. An IMU includes accelerometers and gyroscopes, as shown in Fig. 7.

Ultrasonic Range
Ultrasonic sensors are short-range sensors (typically ≈ 2 m) which send an ultrasonic pulse and detect the echoes returned from obstacles using a transmitter-receiver pair. They are mainly used in relatively low-speed ADAS modules like parking space detection and assistance [17] and obstacle detection in congested traffic conditions [18]. They detect reliably under weather conditions like rain, snow, and wind. However, their range limitation restricts how they can be used as OEM vehicle sensors.

Sensor Fusion
In sensor fusion techniques, raw sensor data is received first; feature extraction, clustering, and object detection hypotheses are then conducted. These hypotheses are associated with tracks that maintain state estimates of the detected objects. The associated information is used by state estimators, such as Bayesian filters, to predict the current states [19]. The order in which sensor measurements become available can differ from the order in which the raw data was acquired by the sensors. Buffering information until all the data is available introduces an unneeded dead-time which can degrade the performance of the control system. To avoid this degradation, pseudo-measurements that are aligned in time, or asynchronous tracking systems that employ every measurement upon availability, can be used.
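The asynchronous approach described above can be illustrated with a minimal scalar Kalman filter that folds in each range measurement as it arrives, rather than buffering. All numbers (sensor variances, process noise, the constant-position motion model) are assumed for illustration, and measurements are assumed to arrive in time order; this is a sketch of the idea, not a production tracker:

```python
# Minimal scalar Kalman filter fusing asynchronous range measurements from
# two sensors (e.g. radar and LIDAR) upon availability, instead of buffering.
class RangeTracker:
    def __init__(self, q=0.5):
        self.x = 0.0      # estimated distance to the object (m)
        self.p = 1e6      # estimate variance: effectively "unknown" at start
        self.t = 0.0      # time of the last processed measurement (s)
        self.q = q        # assumed process-noise intensity (m^2/s)

    def update(self, t, z, r):
        """Fold in one range measurement z (variance r) taken at time t."""
        self.p += self.q * (t - self.t)   # predict: uncertainty grows with time
        self.t = t
        k = self.p / (self.p + r)         # Kalman gain: estimate vs. sensor trust
        self.x += k * (z - self.x)        # correct the estimate
        self.p *= 1.0 - k
        return self.x

tracker = RangeTracker()
# Radar (variance 1.0 m^2) and LIDAR (variance 0.04 m^2) report asynchronously;
# each measurement is used the moment it is available.
for t, z, r in [(0.10, 30.2, 1.0), (0.15, 30.0, 0.04), (0.20, 29.9, 1.0)]:
    est = tracker.update(t, z, r)
print(round(est, 2))
```

Note how the gain automatically weights the precise LIDAR reading more heavily than the noisier radar readings. Handling genuinely out-of-sequence measurements (acquired before the last processed one) requires additional machinery such as retrodiction, which this sketch omits.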

Embedded Software: Current Research and Development issues
Embedded software for vehicle control consists of code written mainly to control both longitudinal and lateral maneuvers. In both cases, the control algorithm includes a higher strategy-control level and a lower vehicle-control level. The strategy level makes decisions based on information from all vehicles affected by the maneuver and from the infrastructure. The lower vehicle-control level commands the vehicle's steering, throttle, and brake systems.

Longitudinal Control
Four types of information are necessary for longitudinal control: the speed and acceleration of the host vehicle, the distance to the preceding vehicle, the speed and acceleration of the preceding vehicle, and, in the case of a platoon, the speed and acceleration of the first vehicle. The speed and acceleration of the host vehicle can be measured by speed sensors and accelerometers (on-board OEM vehicle sensors). The distance to the preceding vehicle can be measured using range sensors like LIDAR, vision, radar, and ultrasonic; radar has been the most common range sensor for this purpose [20]. There are two ways to obtain the speed and acceleration of the preceding vehicle. One is to derive them from the host vehicle's own motion and the range-sensor measurements. The other is to communicate this information between the vehicles [21]. The same method can be used in platooning, i.e. the speed and acceleration of the lead (first) vehicle of the platoon are transmitted to the vehicles in the platoon. It should be noted that communication reliability cannot be completely trusted [22].
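How these quantities feed a longitudinal controller can be sketched with a simple constant-time-gap following law. The gains, time gap, and comfort limits below are illustrative assumptions, not values from the cited work:

```python
# Sketch of a constant-time-gap spacing controller for longitudinal control.
# Inputs are the quantities listed above: host speed, the measured gap to the
# preceding vehicle, and the preceding vehicle's speed (from range-rate or
# vehicle-to-vehicle communication). All gains/limits are assumed.
def acc_command(host_speed, gap, lead_speed,
                time_gap=1.5, standstill=5.0, kp=0.25, kv=0.60,
                a_min=-3.0, a_max=2.0):
    """Desired host acceleration (m/s^2) to track a safe following gap.

    host_speed: host vehicle speed (m/s), from on-board sensors
    gap:        measured distance to the preceding vehicle (m), e.g. from radar
    lead_speed: preceding vehicle speed (m/s)
    """
    desired_gap = standstill + time_gap * host_speed   # constant time-gap policy
    gap_error = gap - desired_gap
    speed_error = lead_speed - host_speed
    a = kp * gap_error + kv * speed_error              # PD-style feedback law
    return max(a_min, min(a_max, a))                   # respect comfort limits

# Host at 25 m/s, 40 m behind a lead vehicle doing 23 m/s: the gap is slightly
# short of the desired 42.5 m and the lead is slower, so the command brakes.
print(round(acc_command(25.0, 40.0, 23.0), 3))  # -1.825
```

In a platoon, the same structure is typically augmented with a feedforward term from the lead vehicle's communicated acceleration, which is why the reliability of the communication link matters.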

Lateral Control
The strategic level evaluates the environment for a lane change maneuver, such as the presence of vehicles in the current and adjacent lanes and their dynamics. A strategic-level model called MOBIL (Minimizing Overall Braking Induced by Lane changes) was proposed to derive lane-changing rules for optional and mandatory lane changes under different car-following models [23]. Different lane change trajectories (circular, the cosine approximation to the circular, the polynomial, and the trapezoidal acceleration) have been studied; among them, the trapezoidal acceleration trajectory was the most desirable for transition time and passenger comfort [24]. Two different approaches are presented at the vehicle control level [25]. One approach treats the maneuver as a tracking control problem; the other uses a unified lateral guidance algorithm. In tracking control, a virtual desired trajectory is generated considering lateral acceleration and jerk, and tracked using a sliding mode controller. In the unified lateral guidance approach, a yaw rate generator produces the desired yaw rate for the desired maneuver, either a lane change or lane following. Steering angle commands are generated using the reference yaw rate signal and a yaw rate controller for the lane change [26].
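One of the trajectory families mentioned above is the polynomial. A common illustrative choice (an assumption here, not the specific form used in the cited studies) is a quintic in time with zero lateral velocity and acceleration at both endpoints, which keeps the maneuver smooth:

```python
def lane_change_y(t, lane_width=3.5, duration=5.0):
    """Lateral offset (m) at time t for a polynomial lane-change trajectory.

    Boundary conditions: y(0) = 0, y(T) = lane_width, with zero lateral
    velocity and acceleration at t = 0 and t = T. The quintic
    y = W * (10 s^3 - 15 s^4 + 6 s^5), with s = t/T, satisfies all six.
    """
    s = min(max(t / duration, 0.0), 1.0)  # normalized time, clamped to [0, 1]
    return lane_width * (10 * s**3 - 15 * s**4 + 6 * s**5)

# The vehicle is centred between lanes at the midpoint of the maneuver
# and fully in the target lane at the end:
print(lane_change_y(2.5))  # 1.75
print(lane_change_y(5.0))  # 3.5
```

A vehicle-level tracking controller (e.g. the sliding mode or yaw-rate schemes discussed above) would then steer the vehicle to follow such a reference trajectory.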

Conclusions and Future Research
This paper presented a review of sensors and current advances in control methodologies for autonomous vehicle applications. First, a short history of ADAS was presented and major contributions were summarized. This was followed by highlighting the principles of the major sensors used in autonomous vehicles, including LIDAR, GPS, radar, vision, ultrasonic, and inertial measurement units. Sensor fusion was reviewed and its importance in level-3 self-driving cars was highlighted. Finally, advances in the embedded control of autonomous cars were presented and classified into longitudinal and lateral control schemes. As advances in sensor technology, computing, and communication take place, more complex embedded control systems will make their way into autonomous vehicles. More integration of development tools, processing, sensing, connectivity, mapping, algorithms, and security will be seen in future driverless cars. Further research arising from this literature includes, but is not limited to, testing lane change maneuvers in driverless cars using different control algorithms and simulating ADAS features in various driving scenarios.