Optimization of Edge Extraction Algorithm for Objects in Complex Background

In the field of artificial intelligence, machine vision is promising because of its low cost and ease of deployment. Feature extraction is one of the core steps of machine vision. The Canny algorithm, which extracts edges from gray-level changes, cannot directly separate an object from its background, and edge information is also lost when the Gaussian filter smooths the image. This research proposes an improved edge extraction algorithm based on the Canny algorithm. We use the image difference method and the object envelope method to separate the object from the background, which greatly reduces the data to be processed and improves the speed and accuracy of the algorithm. We then replace Gaussian filtering with bilateral filtering, which adds a gray-difference weight to the Gaussian spatial weight and therefore retains more edge information. Finally, we improve the gradient operator by setting its coefficients inversely proportional to the Euclidean distance between each pixel and the center pixel. The average performance of the new gradient operator in the center of the image is better than that of the Sobel operator. Simulation experiments show that the algorithm has good detection accuracy. Compared with traditional algorithms, the improved algorithm is not affected by complex backgrounds, reduces the influence of lighting, and produces clear edges in the central area of the image.


Introduction
In the era of Industry 4.0, with intelligent manufacturing at its core, computer vision is widely used in quality management, quality inspection, three-dimensional measurement, and other areas. In computer vision applications, image pre-processing and feature extraction are indispensable stages of two-dimensional image processing, and their efficiency and accuracy directly affect the speed and accuracy of the overall pipeline. During pre-processing, a filter extracts the target information from the raw data, but an overly aggressive filter will smooth out useful information and lose image edge detail.
Image feature extraction refers to separating the features of a two-dimensional image in computer vision. Feature extraction evaluates every pixel in an image and determines whether it belongs to a feature: an isolated special point, a regular continuous curve, or a connected region. In edge extraction research at home and abroad, the widely recognized operators include the Robert, Prewitt, Sobel, Laplace, LOG, and Canny operators [1]. The most classic edge detection algorithm is the Canny edge detector, developed by John F. Canny in 1986 [2]. The algorithm can effectively remove irrelevant information and extract edge contour information from the background image.

Improved edge extraction algorithm
In practical applications, adapting the algorithm to the environmental conditions increases measurement speed while meeting the required accuracy. In industrial applications and scientific research, controlling the scene during data collection is equally important.
In computer vision processing, the choice of light source is also very important. Sunlight is relatively uniform, but its intensity and direction are uncontrollable and measurements under it are unstable, so image acquisition is generally carried out indoors. When collecting images indoors, diffuse white light should be used in order to reduce shadows and distinguish the object from the background. This reduces shadow effects, increases the contrast between the object and the background, and helps separate the measured object from the rest of the image.
Under the experimental conditions described above, after the image is collected, the measured object and the background are segmented before the image is filtered. The image difference method separates the measured object by comparing the background image with an image of the object taken against the same background. Its basic operation is to subtract the two images: where the pixel gray levels are the same, the computed value is 0 or close to 0, and the pixel is suppressed, treated as background information, and removed; the remaining pixels retain their original information, so the shared background is eliminated. The algorithm is given by formulas (1) and (2):

Sub(i,j) = |Com(i,j) - Back(i,j)|    (1)

Tar(i,j) = Com(i,j), if Sub(i,j) > T; otherwise 0    (2)

Sub(i,j) is the result of subtracting the background image from the measured-object image. Com(i,j) is the pixel gray value of the image containing the measured object, Back(i,j) is the pixel gray value of the background image, and Tar(i,j) is the pixel gray value of the segmented target object. In practice, the gray-value difference of a background point between the two images is allowed to be close to, but not exactly, zero, which improves error tolerance: a reasonably small threshold T determines whether a pixel in the object image is a background pixel. The image difference method can quickly extract the target object when the color difference between the object and the background is obvious. When the colors of the measured object and the background are close, T must be adjusted to recover the object image. Therefore, a scene whose color differs strongly from the object is generally used as the background, so that the measured object can be separated quickly and completely.
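Formulas (1) and (2) can be sketched as follows; the threshold T=25 is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def difference_segment(com, back, T=25):
    """Image difference method: a minimal sketch of formulas (1) and (2).

    com, back: grayscale images as uint8 arrays of the same shape;
    T: small tolerance threshold for background gray-value variation
    (T=25 is an assumed value, not taken from the paper)."""
    # Formula (1): absolute gray-value difference (int16 avoids uint8 wrap-around)
    sub = np.abs(com.astype(np.int16) - back.astype(np.int16))
    # Formula (2): keep the original pixel where the difference exceeds T
    tar = np.where(sub > T, com, 0).astype(np.uint8)
    return tar
```

Using a signed intermediate type for the subtraction matters: subtracting uint8 arrays directly would wrap around and misclassify dark-object-on-bright-background pixels.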
After the measured object is extracted, it must be pre-processed. Gaussian filtering blurs the image because it considers only positional information during filtering. In edge regions, where the pixel gray value jumps, this is counterproductive and loses useful edge information.
Bilateral filtering is a non-linear filter that adds a pixel gray-value weight term on top of Gaussian filtering. In flat areas of the image, the pixel values change very little, the gray-similarity weight is close to 1, and the result is equivalent to Gaussian blurring. In edge areas, the pixel gray values change sharply, the gray-value weight term dominates, and the edge information is therefore preserved. The weights are given by formulas (3) and (4):

G_s(p,q) = exp(-||p - q||^2 / (2*sigma_s^2))    (3)

G_r(p,q) = exp(-|I_p - I_q|^2 / (2*sigma_r^2))    (4)

Formula (3) is the spatial weight, where ||p - q|| is the Euclidean distance between pixel positions p and q; formula (4) is the gray-similarity weight, where I_p and I_q are the pixel gray values. After the object is extracted and filtered, the pre-processing before edge extraction is essentially complete.
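A minimal numpy-only sketch of the bilateral filter built from those two weights; the window radius and sigma values are illustrative assumptions:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Bilateral filter sketch combining formulas (3) and (4).

    Spatial weight: Gaussian of the Euclidean distance ||p - q||;
    range weight: Gaussian of the gray difference |I_p - I_q|.
    radius/sigma_s/sigma_r are illustrative assumptions."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # formula (3), fixed per window
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_r = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # formula (4)
            wgt = g_s * g_r
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

On a flat region g_r is about 1 everywhere and the filter degenerates to a Gaussian blur; across a strong edge, pixels on the far side get near-zero range weight, so the edge is not smeared.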

Experimental verification of algorithm
To verify the accuracy and speed of the proposed algorithm, we compared the improved algorithm with other edge extraction algorithms, judging its effect by the computation time, the number of false edges, the number of missing edges, and the edge integrity. The research implements the algorithms with MATLAB and with OpenCV in a Python environment, records the relevant data from the simulation experiments, and finally compares the results.

Image pre-processing
Under uniform indoor lighting, the study uses a diffuse reflection light source to illuminate the object directly. To remove the influence of the background, the background is first removed by the image difference method: the object image and the background image are subtracted to obtain the absolute pixel gray-value differences, and a threshold is then obtained from the histogram of the result. Pixels whose absolute difference exceeds the threshold are object points and are extracted, and the envelope of these object points is then drawn by function fitting. When the light illuminates the object frontally, pixel values near the center of the object may be close to the background, so the envelope method preserves all areas within the outline of the object, reducing the influence of light. The flowchart of the image difference method and the envelope method is shown in Figure 1. Com(i,j) is the pixel gray value of the image containing the object, and Back(i,j) is the pixel gray value of the background image. Sub(i,j) is the absolute value of the result of subtracting Back(i,j) from Com(i,j). Thresh is the strict threshold for judging whether a pixel is a target object point. P is the scatter plot of the points satisfying the strict threshold, and E is the envelope of P. Tar(i,j) is the object image with the background removed. By traversing all the pixels of the image for the difference test and the envelope test, the background information is removed and an image containing only the object is obtained.
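The envelope step can be illustrated with a deliberately simplified stand-in for the paper's fitted envelope E: for each row, everything between the leftmost and rightmost detected object pixel is kept, so interior pixels whose gray value happens to match the background are recovered. This row-wise fill is an assumption for illustration, not the paper's fitting procedure:

```python
import numpy as np

def envelope_fill(mask):
    """Simplified envelope of a boolean object mask.

    For each row, keep all columns between the leftmost and rightmost
    object pixel. A hypothetical stand-in for the fitted envelope E;
    like it, this recovers interior holes left by the difference test."""
    out = np.zeros_like(mask, dtype=bool)
    for i, row in enumerate(mask):
        cols = np.flatnonzero(row)        # column indices of object pixels
        if cols.size:
            out[i, cols[0]:cols[-1] + 1] = True
    return out
```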
Then the object image is bilaterally filtered to remove noise while preserving the edges. The research uses the Otsu method to obtain the threshold for the subsequent processing. The principle of the Otsu method is to automatically determine a threshold that maximizes the between-class variance of the target class and the background class.
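Otsu's criterion can be sketched directly from the gray-level histogram; the between-class variance is evaluated for every candidate threshold and the maximizing one is returned:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of the two classes split at that threshold."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to t
    mu = np.cumsum(prob * np.arange(256))    # class-0 cumulative mean mass
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        # between-class variance sigma_b^2(t)
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)         # degenerate splits -> 0
    return int(np.argmax(sigma_b))
```

On a clearly bimodal image the returned threshold falls between the two modes, separating object from background automatically.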

Edge extraction
After filtering, we use a new 3×3 gradient operator to calculate the gray gradient in the horizontal and vertical directions, and from these obtain the gradient magnitude and direction. Gx and Gy are the gradient operators in the horizontal and vertical directions, with each coefficient inversely proportional to the Euclidean distance between that pixel and the center pixel. G is the two-dimensional gradient magnitude computed from the horizontal and vertical components, and θ is the gradient direction, ranging from 0 to 2π. Compared with the Sobel operator, the new operator performs better at the center of the image, and its denoising effect is better far from the image center.
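One plausible reading of the inverse-distance weighting, since the paper's kernel formulas are not reproduced here, is that the 4-neighbors (distance 1) keep weight 1 while the corner neighbors (distance sqrt(2)) get weight 1/sqrt(2). Treat these exact coefficients as an assumption:

```python
import numpy as np

# Hypothetical reconstruction of the new operator: coefficients are the
# inverse Euclidean distances to the center pixel (assumed, not from the paper).
c = 1 / np.sqrt(2)
Gx = np.array([[-c, 0, c],
               [-1, 0, 1],
               [-c, 0, c]])
Gy = Gx.T  # vertical operator is the transpose

def gradient(img):
    """Apply Gx/Gy to each 3x3 window; return magnitude G and direction theta."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (Gx * patch).sum()
            gy[i, j] = (Gy * patch).sum()
    G = np.hypot(gx, gy)                            # gradient magnitude
    theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # direction in [0, 2*pi)
    return G, theta
```

For comparison, the Sobel kernel gives the corners weight 1 and the 4-neighbors weight 2; the inverse-distance weighting shifts relative emphasis toward the horizontally and vertically adjacent pixels.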
After capturing a chessboard image with a ZED 2 camera, the results of using the new operator and the Sobel operator are shown in Figure 2 below.

Figure 2. the results of using the Sobel operator and the new operator
To test the effects of the different gradient operators, the image difference method and the object envelope method are not used to remove the background here. The left picture shows the chessboard extracted by the Canny operator with the Sobel operator as the gradient operator; the second and third squares from the left in the second and third rows of the chessboard show edge loss. The right picture shows the chessboard extracted by the Canny operator with the new operator as the gradient operator: the chessboard edges are complete, and there is less useless background information around the chessboard than with the Sobel operator. Therefore, we adopt the new gradient operator in the improved Canny algorithm. Through the above steps, we obtain the binary edge map of the object. We use the improved edge extraction algorithm to extract the edge of a pillow, and the result is shown in Figure 3 below.

Experimental results
When the background is complex, the improved algorithm has a clear advantage and can completely remove the background contour information. Against a monochrome background, the edge information produced by the improved algorithm is clear and complete, no overlapping multiple edges appear, and the impact of texture and lighting is minimized. Comparing the specific parameters of the algorithms shows their strengths and weaknesses more intuitively; the results are shown in Table 1. According to the specific requirements of edge extraction, the algorithms are compared on several parameters, such as efficiency, noise, false edges, and the clarity and integrity of edges. Apart from its slower computation speed, the improved edge extraction algorithm has a clear advantage in every respect. In particular, for edge extraction against a complex background, the improved algorithm removes the background information well, which general edge extraction algorithms do not consider.

Conclusion
This article applies the principles of computer vision to extract the edges of objects in complex backgrounds and optimizes the whole process. First, the background is removed by the image difference method and the object envelope method, which reduces the computation and removes useless information. The background-free image is then bilaterally filtered to remove noise while retaining detail. A new gradient operator is used in edge extraction, making the edge information in the central part of the image more complete. The algorithm is implemented with the OpenCV toolbox in a Python environment. Compared with traditional algorithms, the improved algorithm has higher accuracy and wider application scenarios. In the future, we need to improve the speed of feature extraction, reduce the false edges caused by gloss changes on metal parts, and reduce the impact of holes in parts.