Fusion of Virtual Reality Technology and Multi-sensor Mobile Robot

In recent years, with the rapid development of virtual reality technology, VR, AR, and MR have come to be widely used to create learning scenarios for many disciplines, enabling simulation-based training and meeting the need for situational, natural interaction in learning media. VR technology is intuitive and realistic in the study of multi-sensor mobile robot navigation, and it provides an ideal research platform for the autonomous navigation of mobile robots. This paper investigates a new approach that combines virtual reality with multi-sensor information to improve the performance of the overall system.


Overview of Virtual Reality Technology
(1) Definition of virtual reality. Virtual reality technology uses a computer to turn real-world data into electronic signals and, combined with various output devices, transforms them into phenomena that people can perceive. These phenomena may be objects that really exist in everyday life, or things invisible to the naked eye; rendered as three-dimensional models, they give the user visual, auditory, and tactile sensations consistent with the real world. In other words, VR is a comprehensive integration technology involving computer graphics, human-computer interaction, sensing technology, artificial intelligence, and other fields. It uses the computer to generate realistic three-dimensional sight, sound, smell, and other sensations, so that participants can interact with it in real time through natural means such as speech and gesture, creating a humanized, multi-dimensional information space.
(2) Definition of augmented reality. Augmented reality (AR) is a technology that uses computer-generated three-dimensional information to enhance the user's perception of the real world. It is generally believed that AR technology [1] grew out of virtual reality (VR), but the two differ markedly. Traditional VR immerses the user completely in a virtual world, in effect creating another world, whereas AR brings the computer into the user's real world: virtual information is heard, seen, touched, and even smelled to enhance perception of the real environment, shifting the relationship from "people adapting to machines" to "people-oriented" technology.
(3) Definition of mixed reality. As a further development of virtual reality technology, mixed reality (MR) presents virtual-environment information within the real scene, building an interactive feedback bridge between the virtual world, the real world, and the user, so that the user gains a sense of genuinely real experience.

Multi-sensor Mobile Robot
Multi-sensor information fusion mirrors the human ability to integrate the information (sights, sounds, smells, and tactile sensations) detected by the body's sensory organs (eyes, ears, nose, and limbs) with prior knowledge, so as to make judgments about the surrounding environment and the events taking place in it. This process is complex and adaptive: it transforms many kinds of information (images, sound, smell, physical shape, descriptions) into a valuable interpretation of the environment, and it requires many different forms of intelligent processing together with a knowledge base suitable for interpreting the meaning of the combined information. Multi-sensor information fusion is in effect a functional simulation of how the human brain handles complex problems. In a multi-sensor system, the information provided by the various sensors [2] may have very different characteristics: time-varying or time-invariant; real-time or not; fast-changing or slow-changing; fuzzy or definite; precise or incomplete; reliable or unreliable; mutually supporting or complementary; even contradictory or conflicting. The basic principle of multi-sensor information fusion resembles the brain's comprehensive processing of information: the available sensor resources are fully exploited, the sensors and their observations are controlled and used rationally, and the complementary and redundant information they supply in space and time is combined into a consistent interpretation of the observed environment. The goal of information fusion is to derive more effective information than any single sensor provides, through the optimal combination of the sensors' observations; its ultimate aim is to exploit the joint operation of multiple sensors to improve the effectiveness of the entire sensor system. The intelligent robot has become an important trend in robot development.
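As a minimal illustration of fusing redundant observations, the sketch below (an assumption of this edit, not an implementation from the paper) applies inverse-variance weighting to two range readings of the same obstacle, so that the less noisy sensor counts for more and the fused estimate is less uncertain than either input:

```python
def fuse_weighted_average(readings, variances):
    """Fuse redundant measurements of the same quantity by
    inverse-variance weighting: less noisy sensors count for more."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * r for w, r in zip(weights, readings)) / total
    fused_var = 1.0 / total  # fused variance is below every input variance
    return fused, fused_var

# Hypothetical example: a laser (variance 0.01 m^2) and a noisier sonar
# (variance 0.09 m^2) both range the same obstacle.
dist, var = fuse_weighted_average([2.00, 2.30], [0.01, 0.09])
```

Here the laser receives nine times the weight of the sonar, so the fused distance stays close to the laser reading while the sonar still contributes.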
The organic combination of multi-sensor information technology with traditional robots has produced the intelligent robot. Sensors are an important means by which robots obtain external information, and the fusion of sensors with information technology is the basis for achieving robot intelligence.

The Integration of Virtual Reality Technology and Mobile Robot
First, virtual reality technology can accurately recreate environments resembling robot operation and maintenance, which is especially valuable for difficult professional training. AR and VR can likewise be widely used to create learning scenarios for many disciplines, enabling simulation-based training and meeting the need for situational, natural interaction in learning media; their broad application prospects in vocational education therefore deserve serious attention. Second, resources should be integrated to jointly develop and apply virtual reality technology in key professional or curriculum fields such as computer application technology, construction engineering technology, real estate sales, landscape design, and engineering construction; to cultivate skilled vocational talent; to build a shared technology-innovation platform; to establish a virtual reality vocational-education alliance; and to promote the wide adoption of virtual reality technology in vocational education. Multi-sensor information fusion integrates the incomplete information distributed across different locations and supplied by sensors in the local environment, eliminates the redundancy and contradictions that may exist among the sensors, and compensates for them to reduce uncertainty. This yields a relatively complete and consistent perceptual description of the system's environment, improves the speed and correctness of the intelligent system's decision-making, planning, and reaction, and reduces the risk of those decisions. The key to information fusion is that, building on each sensor's individual observations, the optimal combination of information generates more effective information.
Common methods of multi-sensor fusion include [3]: the weighted average method, Bayesian reasoning, D-S evidential reasoning, Kalman filtering, statistical decision theory, neural networks and fuzzy reasoning, and production rules with confidence factors. Multi-sensor information fusion is one of the key technologies of intelligent mobile robots: it makes effective use of multi-sensor information to overcome the incompleteness and uncertainty of any single source, and it recognizes and describes the measured object more accurately and comprehensively, enabling correct judgments and decisions. Mobile robots use multi-sensor information fusion to achieve target recognition, autonomous obstacle avoidance, tracking, positioning, and perception of the robot's own state [4]. At present, many scholars are carrying out extensive research on mobile robot information fusion technology. In this paper, information fusion methods are used to process multi-sensor, multi-type information comprehensively on a mobile robot. Fusion can be carried out at different levels: data-level fusion, feature-level fusion, and decision-level fusion. Here, sensor fusion mainly combines ranging-sensor information, internal dead-reckoning information, and global positioning information. As shown in Figure 1, the control system of the car consists of functional modules such as behavior control and path planning, and the modules are closely connected; a brief analysis follows. The sensors are the input of the entire system. Those configured on the car include an angular-velocity gyroscope, an accelerometer, a trip meter (odometer), infrared sensors, and so on. They form the perception unit of the car body, used to perceive the virtual reality environment and the state of the vehicle itself [5].
This module interacts with the navigation/positioning and perception modules. The sensing module is responsible for processing the information from the sensors; its output includes the map of the virtual reality environment, the pose of the car, the state of the car, and so on. The sensing module not only works with many modules such as behavior control and navigation/positioning, but also interacts directly with the system's behavior control module to produce reactive decisions. The virtual reality environment map is the core of the control system and the result of information synthesis. Its basic elements usually include the current position and attitude of the car, the target position, the distribution of obstacles around the car, and other information; all planning and behavior of the car body are based on this map. The map synthesizes the results of the perception and positioning modules and provides global information for path planning and task planning. The navigation and positioning module localizes the car; its output mainly includes the car's two-dimensional coordinates in the horizontal plane, its heading angle, and so on, and it feeds both the virtual reality environment map and the path planning module. The core of the autonomous driving control system is the path planning module. It integrates the car's position, the virtual reality environment, and other information to plan a collision-free path for the vehicle to reach the target point; it sits above the behavior control module and is the key to autonomous obstacle avoidance. The main task of the behavior control module is to combine the virtual reality environment information with the upper-level path plan to generate motor control instructions [6]. It directly drives the car body's motion mechanism and is the lowest-level control module.
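The navigation and positioning module's core output, the planar coordinates and heading angle, can be propagated between absolute fixes by dead reckoning from the wheel speed and the gyroscope's angular rate. The sketch below is an assumed minimal model of that update, not code from the paper:

```python
import math

def dead_reckon(pose, v, omega, dt):
    """Propagate a (x, y, heading) pose from wheel speed v (m/s) and
    gyro angular rate omega (rad/s) over one time step dt (s)."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
# Drive straight for 1 s, then keep moving while turning 90 degrees.
pose = dead_reckon(pose, v=1.0, omega=0.0, dt=1.0)
pose = dead_reckon(pose, v=1.0, omega=math.pi / 2, dt=1.0)
```

Because each step compounds any speed or gyro error into the pose, dead reckoning drifts over time, which is why the module's output must be fused with the global positioning information described above.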
A reasonable system structure is fundamental to ensuring the system's autonomy and real-time performance. The behavior planning module consists of a path planning control module and a motor control module. The path planning control module operates in two modes, obstacle-free path planning and obstacle path planning; the two modes adopt different control schemes, with different fuzzy controllers and knowledge bases [7]. This structure ensures that low-level decision-making is fast enough to meet the requirements of real-time obstacle avoidance. The output of the path control module includes speed and angle control information. The angle control information is given directly as the control quantity of the car's steering mechanism, which directly sets the car's front-wheel angle. The precision of this control scheme is limited, but its real-time performance is good, making it suitable for obstacle avoidance. The sensors form the perception unit of the car and have two main tasks: sensing the state of the car itself, and sensing the virtual reality environment around the car. The car's own state mainly includes driving speed, angular speed, acceleration, power-battery voltage, remote-control signal, and so on; the virtual reality environment information mainly refers to the distribution of obstacles around the car while driving. The sensors of the smart car are shown in Table 1. Table 1. Brief introduction of the smart car's sensors.
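To make the two-mode structure concrete, the sketch below reduces the behavior controller to a crisp rule table: clear path means full speed straight ahead, otherwise slow down and steer toward the side with more room. This is an assumed simplification for illustration; the paper's actual controllers are fuzzy and backed by knowledge bases, and the thresholds and angles here are invented:

```python
def behavior_control(left_ir, front_ir, right_ir, safe=0.5):
    """Reactive obstacle avoidance: map three infrared range readings
    (metres) to a (speed, front-wheel angle in degrees) command.
    Negative angles steer left, positive angles steer right."""
    if front_ir > safe:            # obstacle-free mode: drive straight
        return 1.0, 0.0
    if left_ir > right_ir:         # obstacle mode: more room on the left
        return 0.3, -30.0
    return 0.3, 30.0               # obstacle mode: more room on the right

speed, angle = behavior_control(left_ir=1.2, front_ir=0.3, right_ir=0.4)
```

A fuzzy controller would replace the hard `safe` threshold with overlapping membership functions so the steering command varies smoothly instead of jumping between fixed angles; the crisp version above only shows where the mode split sits in the control flow.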

Conclusion
Virtual reality (VR) technology integrates multimedia technology, sensor technology, human-computer interface technology, artificial intelligence, computer graphics, research on human behavior, and many other key technologies, and it provides an ideal platform for research on the autonomous navigation of mobile robots. The main development trends in this field are to devise new sensor-information-fusion algorithms that further improve the performance of the fusion system, and to develop software and hardware with parallel computing capability to meet the large data volumes and complex computation that multi-sensor information fusion demands.