Pointer instrument positioning and indication recognition algorithm based on YOLOv5s

Substation inspection robots suffer from low accuracy when positioning pointer instruments and reading their indications. This paper proposes an indication recognition algorithm comprising dial area extraction, scale line extraction and dial center determination, zero scale line determination, and indication recognition. An experimental system for pointer instrument positioning and indication recognition is built to verify the proposed algorithm. The results show positioning accuracies of 90.5% and 100% for the two instrument types, and an average reading recognition error of 0.126 on ten positioned instruments. The proposed pointer instrument positioning and indication recognition algorithm can therefore improve the efficiency of substation instrument recognition.


Introduction
Based on autonomous navigation, the substation inspection robot integrates multiple sensors to realize instrument reading recognition and equipment thermal anomaly recognition. The pointer instrument has the advantages of low price, simple structure, convenient maintenance, and strong stability, and is therefore widely used in substations. However, the pointer instrument has no data interface and cannot support automatic indication acquisition. Its indication records are mainly collected manually, and the working state of the instrument is checked against the recorded value. This method has disadvantages such as a heavy workload and strong dependence on the subjective judgment of the staff, which may cause omission and misreading. Substation inspection robots carrying detection devices can identify pointer instrument readings automatically, accurately, and quickly, reducing the inspection workload and improving instrument identification efficiency. However, because of environmental effects and the variety of instrument types, current recognition accuracy cannot meet the requirements for wide application. It is therefore significant to study automatic positioning and reading of pointer instruments for substation inspection robots.
The common detection methods for pointer instruments are mainly based on machine vision and artificial intelligence. Zhang et al. [1] used the visual saliency region of the pointer instrument to construct an instrument detection model, used an undirected-graph sorting algorithm to suppress interference from non-pointer images, and extracted the instrument pointer well. Liu et al. [2] preprocessed the instrument image with super-resolution methods such as non-local denoising and depth denoising, segmented and located the pointer using the instance segmentation algorithm Mask R-CNN, and finally read the indication using the angle method. Li et al. [3] used the Faster R-CNN algorithm to detect the dial area and the pointer, and extracted the pointer from the detected pointer area using the Hough transform.
Although research on the positioning and indication recognition of pointer instruments has made great progress, instrument positioning in substation inspection still relies on manual work, which is time-consuming and labor-intensive. Pointer instrument reading recognition based on artificial intelligence suffers from limited inference speed due to network complexity, and, given the limited computing power available, existing models cannot be deployed directly on embedded development platforms. To address these problems, this paper adopts YOLOv5s, the smallest model in the high-performing YOLOv5 family, as the basis for locating the pointer instrument, and then uses digital image processing technology to identify the indication and achieve better reading results.

Design of pointer instrument detection algorithm
The overall process of the automatic positioning and reading recognition system is as follows: first, a data set is collected and built from the laboratory and the network, and the YOLOv5s model is trained; the inspection robot then reaches the specified detection point, where the trained model locates the pointer instrument in real time; the instrument dial area is extracted according to the detected bounding box, and finally the indication is read.
Redmon et al. proposed the YOLO algorithm by formulating object detection as a regression problem [4]. The YOLO algorithm uses a single end-to-end network, going from the input of the original image to the output of object locations and categories. With successive improvements, the algorithm has been widely used in target detection, and YOLOv5 was updated to version 5.0 in April 2021 [5]. The YOLOv5 algorithm offers high precision, small file size, and high speed, so it is easy to deploy on embedded development boards and is suitable for mobile platforms such as inspection robots. YOLOv5s is the smallest variant, and the weight file of version 5.0 is only 14 MB. The instrument detection algorithm in this paper adopts YOLOv5s version 5.0.

The network model of YOLOv5s
The YOLOv5s network structure consists of four parts, whose functions are introduced as follows.
Input: Random scaling, cropping, and arrangement with Mosaic data enhancement adapt well to small-target detection, enrich the data set, and reduce the GPU requirements of training. The optimal anchor box values for different training sets are calculated adaptively. Adaptive image scaling reduces the black-edge padding at both ends and the associated computation, improving the target detection rate.
Backbone: Focus and CSP structures are used. The Focus structure performs a slicing operation: after slicing and a convolution with 32 convolution kernels, an original image of size 608×608×3 is transformed into a feature map of size 304×304×32. Two CSP structures are designed in YOLOv5s: CSP1_X is used in the Backbone network and CSP2_X in the Neck network.
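The slicing step of the Focus structure can be sketched in NumPy. This is a minimal illustration only: it assumes a channel-last layout (unlike the channel-first tensors YOLOv5 actually uses) and omits the convolution that follows.

```python
import numpy as np

def focus_slice(img):
    """Interleaved 2x2 slicing used by the YOLOv5 Focus layer:
    every second pixel in height and width goes into a separate
    slice, and the four slices are stacked along the channel axis,
    halving the spatial size and quadrupling the channels."""
    return np.concatenate(
        [img[0::2, 0::2], img[1::2, 0::2], img[0::2, 1::2], img[1::2, 1::2]],
        axis=-1,
    )

x = np.random.rand(608, 608, 3).astype(np.float32)
y = focus_slice(x)
print(y.shape)  # (304, 304, 12)
```

A 608×608×3 input thus becomes 304×304×12, and the subsequent convolution with 32 kernels yields the 304×304×32 feature map described above.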
Neck: FPN and PAN structures are adopted. The FPN structure conveys strong semantic features from top to bottom; a feature pyramid containing two PAN structures is added to it to convey strong positioning features from bottom to top. Aggregating features from different backbone layers into different detection layers enhances the network's feature fusion ability [6].
Prediction: The error between the predicted and true values is evaluated using GIOU_Loss as the bounding-box loss function, which remains informative even when the predicted and true boxes do not overlap. Prediction boxes with lower scores are discarded by non-maximum suppression [7].
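GIOU_Loss is defined as 1 − GIoU, where GIoU extends IoU with a penalty based on the smallest enclosing box. A minimal sketch for axis-aligned boxes, assuming an (x1, y1, x2, y2) corner format:

```python
def giou(box_a, box_b):
    """Generalized IoU: GIoU = IoU - |C \\ (A u B)| / |C|, where C is
    the smallest box enclosing both A and B. Unlike plain IoU, it is
    negative (not just zero) for disjoint boxes, so the loss
    1 - GIoU still provides a useful gradient."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # smallest enclosing box C
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 (identical boxes)
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # negative (disjoint boxes)
```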

Training parameter settings
The network training parameters are set as shown in Table 1.

Evaluation indicators
The evaluation indexes of the experiments are the precision rate P, the recall rate R, the mean average precision at an IoU threshold of 0.5 (mAP_0.5), and the mean average precision averaged over IoU thresholds from 0.5 to 0.95 (mAP_0.5:0.95). The precision rate is the proportion of detections reported as instruments that truly are instruments, shown in Equation (1). The recall rate is the proportion of all instrument samples that are detected, shown in Equation (2). Larger values indicate a better instrument recognition effect.

P = TP / (TP + FP) (1)
R = TP / (TP + FN) (2)
where TP (true positive) means the target is an instrument and is detected as an instrument, FP (false positive) means the target is not an instrument but is detected as one, and FN (false negative) means the target is an instrument but is not detected. As shown in Figure 2, the precision rate and recall rate change with the iteration number, and both stabilize above 95% at around 200 training rounds.
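Under these definitions, Equations (1) and (2) reduce to two ratios over the detection counts. The counts in the example below are hypothetical and not from the paper's experiments:

```python
def precision_recall(tp, fp, fn):
    """Precision P = TP / (TP + FP); recall R = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# hypothetical counts: 95 correct detections, 5 false alarms, 3 misses
p, r = precision_recall(tp=95, fp=5, fn=3)
print(p)  # 0.95
```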
The average precision (AP) is the area under the curve formed by the precision and recall rates. The mean average precision (mAP) is the mean of the AP over all learned categories [8]. Figure 3 shows that mAP stabilizes at around 200 training rounds, with mAP_0.5 settling near 1.0, so the model training results are ideal.

Design of pointer instrument number recognition algorithm
After the positioning of the pointer instrument is completed, the dial indication is identified; this comprises dial area extraction, scale line extraction, dial center determination, zero scale line determination, and indication identification.

Dial area extraction
After the instrument type and position are detected by the YOLOv5 algorithm, the instrument area is cropped from the original image according to the detection result. Although target detection removes part of the background, the Hough circle transform is further used to extract the dial area so that the instrument reading can be obtained more accurately [9]. The instrument image is mean-filtered and converted to grayscale, and the dial area is obtained after the probabilistic Hough circle detection finds the center position and circle radius. Finally, the dial area is extracted from the cropped instrument image to obtain the dial ROI; the process images of dial area extraction are shown in Figure 4.
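Once a circle center and radius have been found (here assumed given; in the paper they come from probabilistic Hough circle detection, e.g. OpenCV's HoughCircles), extracting the dial ROI amounts to masking pixels outside the circle and cropping its bounding square. A NumPy-only sketch on a synthetic image:

```python
import numpy as np

def crop_dial(img, cx, cy, r):
    """Zero out everything outside the circle (cx, cy, r) and crop
    the image to the circle's bounding square -- the dial ROI."""
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    if img.ndim == 3:
        mask = mask[..., None]  # broadcast over color channels
    out = np.where(mask, img, 0)
    y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
    x0, x1 = max(cx - r, 0), min(cx + r + 1, w)
    return out[y0:y1, x0:x1]

# synthetic white image with a hypothetical detected circle
img = np.full((100, 100, 3), 255, dtype=np.uint8)
roi = crop_dial(img, cx=50, cy=50, r=30)
print(roi.shape)  # (61, 61, 3)
```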

Scale line extraction and determination of the center of the instrument dial
Under actual working conditions, there is a large deviation between the center found by the Hough circle transform and the true center of the instrument dial. The scale line contours are therefore extracted using the characteristics of their connected regions, and the dial center is determined by fitting the scale lines and finding their intersection. A binary image is obtained by median filtering, graying, and adaptive threshold segmentation of the dial area image. The scale line contours are screened by the area, length-width ratio, and position characteristics of their connected domains, and each contour is fitted to a straight line by the least squares method. The line set is randomly divided into two halves, and the intersection points between lines of the two halves are computed. The intersections are accumulated in a two-dimensional zero array of the image size, the corresponding position being incremented by 1 for each intersection. Finally, the dial center is taken as the point with the largest count. The scale line and dial center test results are shown in Figure 5.
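The intersection-voting step can be sketched as follows. The tick lines here are synthetic lines radiating from a known point, standing in for the least-squares fits of real scale line contours:

```python
import math
import numpy as np

def line_through(p, q):
    """Line ax + by = c through two points."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersect(l1, l2):
    """Intersection of two ax + by = c lines (None if parallel)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def vote_center(lines, shape):
    """Split the line set in two, intersect lines across the halves,
    accumulate intersections in an image-sized array, and return the
    most-voted pixel as the estimated dial center."""
    acc = np.zeros(shape, dtype=int)
    half = len(lines) // 2
    for l1 in lines[:half]:
        for l2 in lines[half:]:
            pt = intersect(l1, l2)
            if pt is None:
                continue
            x, y = int(round(pt[0])), int(round(pt[1]))
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                acc[y, x] += 1
    cy, cx = np.unravel_index(np.argmax(acc), shape)
    return int(cx), int(cy)

# synthetic tick lines radiating from (50, 50)
lines = []
for deg in (0, 30, 60, 90, 120, 150):
    t = math.radians(deg)
    dx, dy = math.cos(t), math.sin(t)
    lines.append(line_through((50 + 20 * dx, 50 + 20 * dy),
                              (50 + 40 * dx, 50 + 40 * dy)))
cx, cy = vote_center(lines, (100, 100))
print(cx, cy)  # 50 50
```

Voting on intersections rather than averaging them makes the estimate robust to a few badly fitted tick lines.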

Pointer extraction
After the scale line area is removed from the binary image of the dial, the pointer contour still cannot be extracted directly because of dial text, shadows, and instrument patterns. The instrument pointer is therefore extracted by Hough line detection, and the pointer line is obtained by skeleton refinement and least squares linear fitting. The pointer extraction is shown in Figure 6.
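The final fitting step can be illustrated with a least-squares line through thinned pointer pixels. The synthetic points below stand in for the skeleton of a real pointer, and the y = kx + b form assumes the pointer is not near-vertical:

```python
import numpy as np

def fit_pointer_line(points):
    """Least-squares fit y = k*x + b through pointer skeleton pixels
    (a stand-in for the skeleton-refinement + fitting step)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.vstack([x, np.ones_like(x)]).T
    (k, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return k, b

# synthetic skeleton pixels lying near y = 0.5 x + 10
rng = np.random.default_rng(0)
xs = np.arange(0, 50, dtype=float)
ys = 0.5 * xs + 10 + rng.normal(0, 0.1, xs.size)
k, b = fit_pointer_line(np.column_stack([xs, ys]))
```

A production implementation would use a parametric or total least-squares fit so that near-vertical pointers are handled as well; this sketch only shows the principle.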

Zero scale line determination and indication recognition
Zero scale line determination for pointer instruments can be done by manual selection or by the maximum-minimum scale line method [10]. The manual selection method wastes manpower, while the maximum-minimum scale line method is not applicable to instruments with a full-circle scale. An instrument indication recognition algorithm based on zero scale line marking is therefore proposed; it is suitable for fixed instruments that are few in number and easy to mark. During instrument image detection, the zero scale line is marked with the mouse. Instrument indications can be calculated by the angle method or the distance method; the angle method is used in this paper. The instrument reading result is shown in Figure 7.
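The angle method can be sketched as follows. The dial geometry (zero scale line at 225°, full scale at −45°, i.e. a 270° clockwise sweep) and the full-scale value of 1.6 are hypothetical, and standard mathematical axes (y up) are assumed:

```python
import math

def reading_by_angle(center, zero_pt, full_pt, tip_pt, full_scale):
    """Angle method: reading = full_scale * (clockwise angle swept by
    the pointer from the zero scale line) / (clockwise angle between
    the zero and full-scale lines)."""
    def ang(p):  # counterclockwise angle of the vector center -> p
        return math.atan2(p[1] - center[1], p[0] - center[0])
    def cw(frm, to):  # clockwise travel from angle `frm` to angle `to`
        return (frm - to) % (2 * math.pi)
    span = cw(ang(zero_pt), ang(full_pt))
    travel = cw(ang(zero_pt), ang(tip_pt))
    return full_scale * travel / span

# hypothetical gauge: zero at 225 deg, full scale 1.6 at -45 deg,
# pointer pointing straight up -> exactly mid-scale
r = reading_by_angle(center=(0.0, 0.0),
                     zero_pt=(-1.0, -1.0),
                     full_pt=(1.0, -1.0),
                     tip_pt=(0.0, 1.0),
                     full_scale=1.6)
print(r)  # ~0.8 (mid-scale)
```

In image coordinates, where y grows downward, the clockwise convention flips; the marked zero scale line plays the role of zero_pt above.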

Instrument reading recognition error test
To test the accuracy of the algorithm, 10 instrument images with different readings are collected for the reading recognition experiment. The results are shown in Table 2. The proposed algorithm has a high recognition ability for clear pointer instrument images, with an average reading recognition error of 0.126. The experimental results reveal that the recognition ability and accuracy of the proposed algorithm are strong.

Conclusion
According to the characteristics of current substation inspection and of pointer instrument positioning and indication recognition, a pointer instrument positioning and indication recognition algorithm is proposed for substation inspection robots. The artificial-intelligence-based YOLOv5s algorithm is adopted to locate the pointer instrument, greatly reducing the labor burden. Using digital image processing technology, this paper designs an algorithm to identify the indication, which avoids the subjective errors of inspectors and is more accurate and efficient.
Data set
A pointer instrument data set was made: 1538 pointer instrument pictures (1076 training images and 462 test images) were collected from the laboratory and the network. Partial test images are shown in Figure 1.

Figure 4. Screenshot of the dial area extraction process.
Firstly, an automatic positioning and data acquisition algorithm for pointer instruments based on YOLOv5s is proposed. The YOLOv5s network model is trained on a self-made instrument data set and can automatically identify instrument types and locate instrument positions in images; tests show high accuracy. Secondly, after the instrument is located by the target detection algorithm, a pointer indication recognition algorithm based on image processing technology is proposed, which mainly includes dial area extraction, scale line extraction, determination of the dial center point, pointer extraction, and determination and reading of the zero scale line. The test results show a reading recognition error of 0.126, indicating that the proposed algorithm has good recognition accuracy.

Table 2. Instrument reading recognition results.