Design of a teleoperated and mixed reality-based electric power live line working robot

This paper presents a novel teleoperation force feedback approach combining graphic and haptic feedback for enhanced telepresence. Visual force feedback is modeled through a fitting process, with the force direction established through coordinate transformations. To construct operational environments, a new scene recognition and reconstruction technique is proposed, based on the principles of binocular stereovision. This method is designed primarily for live-line work scenarios, focusing on power line wires. It involves image preprocessing, target image recognition and extraction, binocular stereo measurement, depth point cloud generation, and ultimately the reconstruction of three-dimensional models of the target objects. The result is a novel electric power live-line working robot system with a strong sense of telepresence, based on teleoperation and mixed reality.


Research Overview
Teleoperation is a remote control technique that enables an operator to control a robot from a distant location while receiving relevant sensory information (such as images, forces, and sounds) [1,2]. It ensures that the operator's decisions and commands are promptly conveyed to the robot, allowing the robot to operate as required. Research in teleoperation aims to enhance the operator's sense of telepresence and realism, laying the foundation for robots to perform more complex and semi-autonomous tasks [3]. The study of robot teleoperation holds significant practical importance, as robots can replace humans in environments hazardous to life and health (such as nuclear radiation contamination and chemical reaction facilities), high-cost environments (outer space), and situations with scale limitations (large-scale operational scenarios, micro and nano environments, one-to-many control), among others [4].
Many advanced fields in the defense and civilian sectors have an urgent need for teleoperation technology, and in recent decades teleoperation has made significant progress across many areas of robotics. In the nuclear energy and space sectors, European and American countries have conducted in-depth research for many years [5,6]. In the medical field, remote robotic surgery has become a hot topic of research and application. In the security sector, teleoperated robots are often used for bomb disposal. In the micro and nano domains, teleoperation is required for processing and controlling nanoscale components and assembling microelectromechanical systems (MEMS). In the field of electrical power, many countries have developed teleoperated live-line maintenance robots that perform maintenance without power interruption, minimizing losses and ensuring human safety. All of this suggests that teleoperation technology will penetrate numerous areas of robotics, transforming many aspects of human production and daily life.
In recent years, applying teleoperation technology to robotics has been a hot research topic. In our research on teleoperated dual-arm robots for live-line work in the distribution network, the teleoperation system plays a crucial and indispensable role. Researching teleoperation based on Mixed Reality (MR) to enhance the telepresence experience during robot operation is therefore significant: it improves the robot's capability to replace human workers, the usability of the system for operators, and the overall stability of the robotic system [7].
Existing electric power live-line working robot products often follow a process of "environment modeling - autonomous target recognition - path planning" to complete their tasks. However, due to the complexity and diversity of the objects involved in live-line work, as well as the limited working conditions, achieving full autonomy for these robots remains a significant challenge. In practice, issues such as unsuccessful autonomous wire gripping, suboptimal grasping positions, and difficulties in clamping wires often arise, necessitating manual adjustments at various steps of the operation. As a result, the time taken by the robot often far exceeds the time a human operator would need, which contradicts the initial goal of improving production efficiency.

From Table 1, it can be observed that this solution makes beneficial additions to the traditional approach from the perspectives of perception, decision-making, and execution. While ensuring the safety of robot operations, it enhances operational efficiency, and it can be considered the future direction for the development of electric power live-line working robots. This solution therefore innovates on the technical roadmap: it is based on a master-slave arm operation system combined with mixed reality technology, and introduces an electric power live-line working robot implementation plan based on mixed reality, force feedback, and teleoperation. This approach fundamentally eliminates the hazards of live-line work, significantly improves operational efficiency compared to traditional robot work solutions, and reduces operational complexity.

Figure 1. The system design in the master-slave structure.

The whole system design is shown in Fig. 1. The left part of Fig. 1 is the master station and the right part is the slave station. The operator at the master station grasps the master device to control the two-armed slave robot at the slave station. Because all work on the high-voltage lines is performed at the slave side, this structure ensures the safety of the operator. The teleoperation system established in this paper consists of two parts: the ground-based master station control terminal (the client) and the airborne slave robotic arm terminal (the server). Each end is equipped with an industrial computer, and communication between the computers is achieved through the TCP/IP protocol over a Wi-Fi network, as depicted in Fig. 2. The master control terminal computer is connected to haptic feedback equipment (Touch), a head-mounted display (VR HMD), and an integrated teleoperation control interface; it can send task instructions to the server through button-based controls. The control process is as follows. The slave server is equipped with environmental data collection sensors, including a gimbaled sensor and cameras. Visual and distance data, among others, enter the mixed-reality processing host after basic preprocessing. The sensor information is fused within the host to form a live-line work environment recognizable by human operators, creating a visual mixed-reality work environment. On this basis, the software integrates various types of data, such as point clouds, temperature, humidity, wind speed, and wind direction, into the mixed-reality environment and writes them into a UI control interface. This information is then fed back to the operator's head-mounted display. After observing the mixed-reality information, the operator makes real-time control decisions using the head-mounted display's gyroscopes, handheld control devices, and body posture sensors, which are then transmitted to the front-end collection devices. The entire system forms a closed loop, allowing the operator to respond quickly and intuitively to the work environment, reducing operational complexity and improving operational efficiency.
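The client-server link described above can be sketched as a length-prefixed JSON exchange over a TCP socket. This is only an illustration under assumptions: the message fields (`"cmd"`, `"joints"`) and the 4-byte length framing are hypothetical, not the paper's actual protocol.

```python
import json
import socket


def send_command(sock: socket.socket, joint_angles: list[float]) -> None:
    """Send one master-station command to the slave server.

    The message is JSON, prefixed with a 4-byte big-endian length so the
    receiver can delimit messages on the TCP stream.
    """
    payload = json.dumps({"cmd": "move", "joints": joint_angles}).encode()
    sock.sendall(len(payload).to_bytes(4, "big") + payload)


def recv_message(sock: socket.socket) -> dict:
    """Read one length-prefixed JSON message (commands or sensor feedback)."""
    header = sock.recv(4)
    length = int.from_bytes(header, "big")
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return json.loads(data.decode())
```

In a real deployment the server side would run this receive loop continuously and dispatch each decoded command to the robotic arm controller.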

Implementation of Teleoperation and Mixed Reality Systems
Human-machine interaction mainly consists of three major components: the Mixed Reality and Teleoperation Human-Machine Interface (helmet display + master hand + voice interaction), the Multimedia Display and Control Interface (screen + keyboard + mouse + foot pedal), and the Precise Position Control Remote (spider car joystick + laptop). The teleoperation system is divided into two parts: the ground-based master station control terminal (the client) and the airborne slave robotic arm terminal (the server). Each end is equipped with an industrial computer, and communication between the computers is carried over the TCP/IP protocol using a Wi-Fi network. The master control terminal computer is connected to haptic feedback equipment (Touch) and a head-mounted display (VR HMD), with an integrated teleoperation control interface, and can send task instructions to the server using button-based controls.
To meet the operational requirements mentioned above, the human-machine interaction interface architecture is built on top of the system software. Experience has shown that most live-line maintenance tasks require high levels of flexibility and precision from robots. For example, a common maintenance operation like installing a socket wrench onto a bolt may seem straightforward for a human, but it demands a high level of precision from a robot: the tolerance for error in such a fitting relationship is less than 2 mm. When robots use insulating rods, they also need to adjust the force applied to the rod in real time to ensure effective coordination. Direct force feedback, perceived through the operator's hand, can introduce significant delays between the moment a danger or error occurs and the moment the user perceives it. Therefore, a graphics-based force feedback technology is proposed here to complement direct force feedback.
Establishing graphic force feedback means that users receive force information through visual display and rendering while operating. As users operate the remote robot from the ground via a VR head-mounted display (HMD), they observe the robot's environment as a stereoscopic image rendered separately for each eye. By integrating graphic force feedback into the left- and right-eye images of this stereoscopic view, users can obtain real-time graphical feedback on the forces.
To present the graphic force feedback concisely and conspicuously within the stereoscopic rendering, three pairs of progress bars represent the real-time feedback force's magnitude by their length, as shown in Fig. 3. These progress bars change length dynamically based on the force at the robot's end effector and display different RGB colors in real time within different force ranges: as the force increases, the bars shift from green to orange to red. The operating principle and implementation process are as follows. Force sensors at the robot's end effector sense the applied force, undergo zero calibration, and transmit the force data through the Socket communication protocol to the computer. Within the 3D development engine, the computer receives this information and maps it into the coordinate system of the simulated environment through translation and rotation transformations. The force is then decomposed along the XYZ axes, and the length of each progress bar is determined by the magnitude of the corresponding force component.
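The mapping from decomposed force components to bar length and color could be sketched as follows. The color thresholds, maximum bar length, and full-scale force below are illustrative assumptions; the paper does not give the actual ranges.

```python
# Hypothetical colour bands (N): green -> orange -> red as force grows.
COLOR_BANDS = [(5.0, "green"), (10.0, "orange"), (float("inf"), "red")]
MAX_BAR_LENGTH = 100.0    # bar length in pixels at full scale (assumed)
FORCE_AT_FULL_BAR = 15.0  # force (N) that fills the bar (assumed)


def bar_for_axis(force_component: float) -> tuple[float, str]:
    """Map one decomposed force component to a bar length and colour."""
    magnitude = abs(force_component)
    length = min(magnitude / FORCE_AT_FULL_BAR, 1.0) * MAX_BAR_LENGTH
    for threshold, color in COLOR_BANDS:
        if magnitude <= threshold:
            return length, color


def bars_for_force(force_xyz: list[float]) -> list[tuple[float, str]]:
    """Build one (length, colour) pair per XYZ axis of the end-effector force."""
    return [bar_for_axis(component) for component in force_xyz]
```

Each rendering frame would call `bars_for_force` with the latest calibrated sensor reading and redraw the three progress bars accordingly.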
Displaying the force progress bars prominently at the center of the VR head-mounted display would substantially interfere with the user's field of view, since the operator cannot simultaneously focus on the central field of view and monitor a separate graphic force feedback display with peripheral vision. Therefore, the design places the progress bars near the robot's end effector, with a toggle for attaching them to the end effector. This attachment-based display largely prevents the bars from obstructing the view of the end effector, minimizing their impact on the operator's field of vision.
Furthermore, in addition to graphic force feedback, auxiliary direct force feedback can be employed: the operator receives direct tactile/force sensations through a teleoperation controller while simultaneously sending real-time operational commands. In this case, the raw tactile/force data must be improved through force feedback compensation and simulation algorithms before being transmitted to the force feedback device. This enables the real-time force and torque information from the end effector or joints to be sent back to the control terminal, preventing damage to high-voltage lines, tools and equipment, and the robot itself, and effectively eliminating safety incidents. The electric power live-line working robot consists of multiple robotic arms integrated into a single working platform. It is a complex-configuration robot designed for tasks such as stripping insulation from conductors, cleaning insulation residue, and installing line clamps. The system's implementation involves structural design of the end effector as well as insulation protection design for the robot. These designs are analyzed through modeling and simulation of a virtual prototype, followed by fabrication and assembly of the robot's structure, resulting in the electric power live-line working robot.
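As a rough illustration of such a compensation stage (the paper does not specify its algorithms), the raw sensor signal could be zero-calibrated, low-pass filtered, and saturated before it reaches the haptic device. All parameters below are assumptions chosen for the sketch.

```python
class ForceFeedbackFilter:
    """Illustrative compensation stage for raw force data:
    zero-offset removal, first-order low-pass smoothing, and saturation
    before the signal is rendered on the haptic device."""

    def __init__(self, zero_offset: float = 0.0,
                 alpha: float = 0.3, limit: float = 8.0):
        self.zero_offset = zero_offset  # sensor bias from zero calibration
        self.alpha = alpha              # smoothing factor in (0, 1]
        self.limit = limit              # max force (N) the device renders
        self._smoothed = 0.0

    def step(self, raw_force: float) -> float:
        """Process one raw sensor sample and return the force to render."""
        calibrated = raw_force - self.zero_offset
        # Low-pass filter to suppress sensor noise and jitter.
        self._smoothed += self.alpha * (calibrated - self._smoothed)
        # Saturate so the controller never renders a dangerous force.
        return max(-self.limit, min(self.limit, self._smoothed))
```

The saturation limit doubles as a safety bound: even a sensor fault producing a huge spike would be clipped before reaching the operator's hand.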

Experimental Testing and Results
The structure of this master-slave robot system mainly consists of the master station structure, the slave station structure, and auxiliary components (the quick-change device and end effectors). The entire system is designed with a modular approach. The master-slave robot systems constitute the main hardware of the heterogeneous master-slave teleoperation system, as shown in Fig. 4. The master station is integrated into the control room and primarily consists of a height-adjustable portable monitor stand, a display screen, a force feedback controller, and more. The stand suspends the display and offers height adjustment to accommodate operators of different heights; with adapters, it can also support material trays and other accessories. When changing transport vehicles, the stand and display can be moved as a whole, which is convenient and efficient.
Practical experiments conducted by operators have shown that when performing wire clamping and wire stripping positioning via teleoperation, combining visual force feedback with direct force feedback significantly improves the efficiency and safety of operations. For two novices, after simple training, using only direct force feedback the probability of collisions during the same task ranged from 50% to 60%; in other words, out of 20 experiments, 10 to 12 failed. When direct force feedback was combined with visual force feedback, 18 out of 20 trials succeeded, a success rate of 90%.
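The reported figures can be checked with straightforward arithmetic on the trial counts given above:

```python
def success_rate(successes: int, trials: int) -> float:
    """Success rate as a percentage of trials."""
    return 100.0 * successes / trials


# Direct force feedback only: 10-12 failures out of 20 trials,
# i.e. 8-10 successes, a 40-50% success rate (50-60% collision rate).
direct_only_worst = success_rate(20 - 12, 20)

# Combined graphic + direct force feedback: 18 successes out of 20.
combined = success_rate(18, 20)
```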

Conclusion
This paper addresses scenarios such as live-line wire stripping and live-line disconnection of leads. It employs mixed reality (MR) technology, multi-sensor fusion teleoperation, haptic feedback, low-latency data transmission, and multi-joint robot control. It utilizes a master-slave robotic arm teleoperation mode, constructs a deep visual perception environment using mixed reality, and seamlessly integrates the real operating environment of the robot with the virtual operating environment of the operator. Additionally, it uses force feedback and fine-grained remote-control techniques, enabling operators to perform live-line work in an immersive and safe environment. This approach aims to overcome the limitations of long modeling times and restricted operation types in current automated electric power live-line working robot operations.

Figure 2. Control Process Flowchart.
The system software consists of three main components:
1. Mechanical Arm Motion Control and Environmental Information Collection Software.
2. Mixed Reality Fusion and Modeling Software.
3. Bidirectional Force Feedback Teleoperation Control Software.
The Mechanical Arm Motion Control and Environmental Information Collection Software runs in a Windows + ROS environment. Its primary functions include forward and inverse kinematic calculations for the robotic arm and the initial processing and serialization of environmental information. The Mixed Reality Fusion and Modeling Software interfaces with the former, blending environmental information and the robotic arm's structure into a 3D real-time video stream; it utilizes 3D modeling software such as V-REP and Unity and transmits the integrated stream to the operator's end through communication devices. The Force Feedback Teleoperation Control Software collects controller control signals and feedback force signals; it primarily runs on embedded devices.
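As an illustration of the "initial processing and serialization of environmental information", one possible wire format is a length-prefixed JSON frame per sensor sample. The field names and framing below are assumptions for the sketch, not the paper's actual format.

```python
import json
import struct
import time


def serialize_environment(temperature: float, humidity: float,
                          wind_speed: float, wind_direction: float) -> bytes:
    """Pack one environmental sample into a length-prefixed JSON frame
    suitable for streaming to the mixed-reality fusion host."""
    record = {
        "timestamp": time.time(),
        "temperature_c": temperature,
        "humidity_pct": humidity,
        "wind_speed_mps": wind_speed,
        "wind_direction_deg": wind_direction,
    }
    payload = json.dumps(record).encode()
    # 4-byte big-endian length prefix delimits frames on the stream.
    return struct.pack(">I", len(payload)) + payload
```

On the receiving side, the fusion host would read the 4-byte prefix, then the payload, and merge the decoded record into the mixed-reality UI.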

Figure 3. Multi-Modal Control and Graphic Force Feedback Illustration.

Figure 4. Model and Test Photos of the Electric Power Live Line Working Robot System.

Table 1. Design Solution Comparison.