Research on Traditional Image Segmentation Methods for Oil Drilling Pipe Defects

This study explores the feasibility and efficacy of conventional image segmentation technology for diagnosing failures in oil drilling pipe images, and envisions an intelligent approach to diagnosing defects in oil drilling pipes. The paper examines traditional image segmentation methods in light of the characteristics of oil drilling pipe defect images, devises experiments tailored to these defect images, and applies several traditional segmentation methods for comparison and evaluation. The experimental results demonstrate that traditional image segmentation methods exhibit a measurable degree of efficacy in detecting defects in oil drilling pipes, with segmentation based on the Canny edge detection operator proving the most effective.


Failure of Petroleum Pipe
Petroleum tubing denotes a conduit material used for the transportation of oil and gas. Typically fabricated from steel, alternative materials such as plastics and composites may also be employed. Petroleum pipes require robustness, corrosion resistance, and high-pressure endurance to maintain their integrity and stability during the conveyance of oil and gas. These pipes are often buried underground to protect them from natural elements and human-induced damage. The significance of petroleum pipes lies in their pivotal role in extracting and transporting oil and gas. Failure of petroleum pipes arises from ruptures, oil leaks, deformation, corrosion, and other problems that occur during service, resulting in functional inadequacy. With the continuous development of petroleum exploration and production technology, the use of petroleum pipes and equipment keeps increasing, and failure accidents occur from time to time, posing hidden dangers to the efficient and safe development of petroleum resources and causing substantial losses of labour, material, and time. By analysing the causes of failure at the failed parts, targeted measures can be proposed to prevent similar accidents and effectively reduce or avoid their occurrence. Typically, image inspection is conducted manually, making it susceptible to external factors that may introduce errors into the analysis. In recent years, the application of computer vision techniques to micro-image analysis of materials has emerged as a popular field of research in materials science. Remarkable progress has been made in developing highly robust and efficient segmentation techniques, with machine learning emerging as a potent tool for image segmentation. These methodologies have demonstrated outstanding performance across various material applications.

Defect Detection
Defect detection involves the process of identifying and localizing flaws or irregularities in materials, products, or systems. Its purpose is to identify potential defects, damage, deficiencies, or anomalies using diverse techniques and methodologies while comparing them to specific standards or specifications. Defect detection finds widespread application in numerous domains, including manufacturing, materials science, electronics, and medical imaging. In manufacturing, defect detection is employed to ensure product quality, guaranteeing compliance with specifications and customer requirements. In materials science, defect detection aids in assessing the structural integrity and performance of materials. In the 1980s, Academician Li Hailin of the Institute of Engineering Materials at China National Petroleum Corporation established the State Key Laboratory of Service Behaviour and Structural Safety of Petroleum Pipes and Equipment Materials, along with the National Petroleum Pipe Quality Supervision and Inspection Center. These platforms played a pivotal role in failure analysis of petroleum piping and equipment materials, providing crucial technological support for the development of China's petroleum industry failure analysis technology system [1][2]. During the service of petroleum pipes, various defects may occur, such as cracks, corrosion, and fatigue. Effectively segmenting macroscopic defect images makes it possible to distinguish and locate defects relative to other regions, aiding engineers in accurately analysing the position and morphology of these flaws. The specific morphological characteristics of defects visually manifest the failure phenomena caused by service conditions or internal factors, supplying precise image information for subsequent failure analyses. Analysing the segmented defect images enables accurate identification and diagnosis of specific failure behaviours and types. However, current macroscopic defect images often contain considerable noise, which may compromise the accuracy of intelligent failure diagnosis. To mitigate the influence of irrelevant content in the image information, this study proposes enhancing the processing efficiency of petroleum pipe defect visual diagnosis by employing image segmentation methods, thereby providing streamlined and reliable image feature information [3].

Image Segmentation
Historically, image segmentation methods were first applied to medical image processing, segmenting targets in medical images for subsequent analysis and diagnosis. Given the distinct contrast between background and target in medical images, threshold-based methods predominantly perform coarse pixel-level segmentation in this domain [4][5]. As visual tasks encompass increasingly complex scenes, the technical requirements for segmentation become more stringent. Consequently, segmentation methods based on thresholds, edges [6], regions [7][8][9], clustering [10], graph theory [11], and specific theories [12] have emerged, improving segmentation efficiency. With the ongoing progress of deep learning, the fusion of deep learning and image processing has become a prevailing trend. Deep learning-based image segmentation relies primarily on end-to-end training to learn semantic features of each pixel, enabling per-pixel classification. This approach obviates the need for cumbersome preprocessing, leveraging iterative learning of image features to avoid significant data loss; as a result, deep learning methods yield more effective and efficient segmentation. The accuracy of image segmentation achieved through deep learning has reached or surpassed that of traditional methods, rendering it applicable across diverse visual scenarios [13][14].

Traditional Image Segmentation Methods
Conventional image segmentation algorithms primarily encompass threshold-based segmentation, edge detection, and region growing, as well as algorithms grounded in graph theory and specific theories [15]. Among these, threshold-based segmentation is the simplest and most widely employed: it classifies the pixels of an image into two distinct categories by applying a threshold value. The region growing algorithm, by contrast, seeks out similar pixels within a pixel's neighbourhood and merges them to form cohesive regions. Edge detection algorithms perform segmentation based on the edge information present in the image, with commonly used algorithms including the Canny and Sobel operators. Graph theory-based algorithms leverage the pixel neighbourhood within the image for segmentation; commonly employed algorithms include the minimum cut and maximum flow algorithms.

Image segmentation algorithm based on threshold segmentation
In fact, traditional machine vision usually comprises two steps, image preprocessing and target detection, and image segmentation is the key bridge between them, with image binarization methods the most commonly used. Gray-scale threshold segmentation is in effect binarization: a threshold value is selected and the image is transformed into a black-and-white binary image. Threshold-based segmentation is a very commonly used traditional algorithm owing to its intuitive application, simple implementation, and fast computation. Usually, for a given grayscale image in a visual task, the image is assumed to consist of target-object pixels and background pixels, and extracting the target object from the image is the main task of segmentation. A common way to realize this task is to set a fixed threshold separating target-object pixels from background pixels, with the region above the threshold taken as the target region and the remainder as the background region. When there is a large gray-level difference between the target and background regions, threshold segmentation is especially effective: pixels whose gray level exceeds a critical value are set to the maximum gray level and those below it to the minimum, thereby binarizing the image. Thresholding methods are further divided into the global method and the local method, also known as adaptive thresholding. In this paper, we mainly implement global thresholding, and thresholds are selected by the following three methods: the mean method, the bimodal method, and the OTSU method [16].
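The fixed-threshold binarization described above can be sketched in a few lines of Python (a minimal illustration with hypothetical names; the grayscale image is represented as a nested list of gray levels, and pixels above the threshold are mapped to the maximum gray level):

```python
def binarize(image, threshold):
    """Global thresholding: pixels above `threshold` become 255
    (foreground/target), all others become 0 (background)."""
    return [[255 if px > threshold else 0 for px in row] for row in image]

# Tiny 3x3 grayscale patch: one bright "target" pixel on a dark background.
patch = [
    [20,  30, 25],
    [28, 200, 33],
    [22,  31, 27],
]
binary = binarize(patch, 128)  # only the centre pixel survives as 255
```

The choice of `128` is arbitrary here; the following subsections describe how the iterative, bimodal, and OTSU methods select this value from the image itself.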

Iterative Method.
The iterative method is an image segmentation algorithm rooted in the concept of successive approximation. Its procedure is as follows: select an initial threshold T0; divide the image into two parts, A and B, according to this threshold; compute the average gray values of A and B, denoted A_average and B_average; then calculate T1 = (A_average + B_average)/2. If abs(T1 − T0) < 1 (or some smaller tolerance), the iteration stops and T1 serves as the segmentation threshold; otherwise T1 is assigned to T0 (T0 = T1) and the process repeats from the second step. As the iteration progresses, the final convergent value is adopted as the segmentation threshold. In experiments conducted with MATLAB, the converged gray threshold is 148.8358. To accelerate convergence, careful consideration is given to the choice of the initial threshold T: when the target and background areas are comparable in size, it is advisable to set T to the average gray value of the entire image, whereas when there is a significant disparity in size between them, a better choice is the midpoint between the maximum and minimum gray values.
In 1966, Prewitt introduced the histogram bimodal method, which selects the gray level at the valley between two peaks as the threshold when the gray-level histogram exhibits a distinct bimodal shape. The bimodal method is one of the most frequently employed threshold segmentation techniques. When there is significant contrast between the gray values of the object and the background, the histogram is bimodal, with the two peaks representing the foreground and background of the image, respectively. The valley between the peaks comprises a relatively small number of pixels situated near the edges; this minimum generally serves as the optimal threshold for binarization, ensuring a well-defined separation between foreground and background. The underlying assumption is that if an image contains discernible targets and background, its gray-level histogram follows a bimodal distribution, and the gray level at the valley between the two peaks is chosen as the threshold. If the background gray value can reasonably be regarded as constant throughout the image and all objects have similar contrast to the background, a correctly chosen fixed global threshold yields good results. The bimodal method is typically applied to images whose histograms display typical bimodal features, where it offers both good segmentation and algorithmic efficiency. Its limitations should be noted, however: it is only suitable for images with a typical bimodal histogram, it is susceptible to noise interference, it is unsuitable when the two peaks differ greatly in height or when the valley between them is wide and flat, and it does not apply to single-peak histograms. As depicted in Figure 1, the histogram thresholding process is exemplified: when there is noticeable contrast between the gray values of the target and the background, the histogram exhibits two peaks, and the gray-level minimum between them serves as the threshold for binarization.
The OTSU method, also known as Otsu's method, was introduced in 1979 by the Japanese scholar Otsu. Its fundamental idea is to maximize the between-class variance: the image is divided into two classes, C0 and C1, and the gray level at which the between-class variance reaches its maximum is the optimal threshold. The segmentation principle is as follows. Let the image size be M×N and the gray-level range [0, L−1]; let ni be the number of pixels with gray level i, so that the probability of gray level i is pi = ni/(M×N). Pixels with gray level in [0, t] are assigned to class C0, and pixels with gray level in [t+1, L−1] to class C1. Let P0(t) and P1(t) denote the probabilities of occurrence of classes C0 and C1, and u0(t) and u1(t) their average gray levels. We can get:
P0(t) = Σ_{i=0}^{t} pi,  P1(t) = Σ_{i=t+1}^{L−1} pi = 1 − P0(t),
u0(t) = Σ_{i=0}^{t} i·pi / P0(t),  u1(t) = Σ_{i=t+1}^{L−1} i·pi / P1(t),
with the overall mean gray level u = P0(t)·u0(t) + P1(t)·u1(t). The between-class variance δ²(t) of the image can then be expressed as:
δ²(t) = P0(t)·[u0(t) − u]² + P1(t)·[u1(t) − u]²
When the between-class variance reaches its maximum, the corresponding gray level is the optimal threshold, that is, the Otsu threshold:
T = argmax_{0 ≤ t ≤ L−1} δ²(t)
In reality, when the contrast between the target and background regions varies across an image, a single global threshold may prove inefficient. Hence, different thresholds based on the local feature distribution of the given image can be employed, dividing the image into multiple subregions and dynamically selecting the threshold for each point within a specified neighbourhood range, thereby accomplishing comprehensive segmentation. The OTSU method traverses all gray values of the image, determining the between-class variance at each candidate threshold and retaining the maximum obtained through this iterative process; the threshold T corresponding to the largest between-class variance is then used for segmentation. MATLAB calculations reveal that the grayscale threshold yielding the maximum between-class variance is 148, as illustrated in Figure 2.
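The iterative method and the OTSU method described above can be sketched in plain Python (a hypothetical illustration: the image is flattened to a list of integer gray levels, and degenerate cases such as an empty class are not handled):

```python
def iterative_threshold(pixels, eps=1.0):
    """Iterative threshold selection: start from the mean gray value,
    split the pixels into two groups A (above T) and B (at or below T),
    and replace T with the midpoint of the two group means until the
    change falls below `eps`."""
    t = sum(pixels) / len(pixels)          # initial threshold T0
    while True:
        a = [p for p in pixels if p > t]   # tentative target pixels
        b = [p for p in pixels if p <= t]  # tentative background pixels
        t_new = (sum(a) / len(a) + sum(b) / len(b)) / 2
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the gray level t maximizing the between-class
    variance d2(t) = P0(t)*(u0(t) - u)^2 + P1(t)*(u1(t) - u)^2."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    prob = [h / n for h in hist]
    u = sum(i * pi for i, pi in enumerate(prob))  # overall mean gray level
    best_t, best_var = 0, -1.0
    p0 = u0_sum = 0.0
    for t in range(levels - 1):
        p0 += prob[t]                      # P0(t), accumulated incrementally
        u0_sum += t * prob[t]
        p1 = 1.0 - p0                      # P1(t)
        if p0 == 0 or p1 == 0:
            continue
        u0, u1 = u0_sum / p0, (u - u0_sum) / p1
        var = p0 * (u0 - u) ** 2 + p1 * (u1 - u) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a cleanly bimodal gray-level distribution both methods land between the two modes, which is the behaviour the MATLAB experiments in this section rely on.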

Image Segmentation Algorithm Based on Edge Detection
Edge detection represents a significant approach to image segmentation, focused on identifying areas of the image with notable variations in gray level or structure, which indicate discontinuous edge regions. Different levels within the image exhibit distinct gray values, with pronounced edges typically manifested along borders; leveraging this characteristic enables effective segmentation. Typically, the first step is to determine the edge pixels within the image, after which these pixels are connected to form the boundary of the desired region. Differential operators, the Canny operator, and the LoG (Laplacian of Gaussian) operator are commonly employed for edge detection; among the differential operators, the Sobel, Roberts, and Prewitt operators are widely used.

Differential operator method.
The Sobel edge detection algorithm, while relatively straightforward, demonstrates practical efficiency surpassing that of Canny edge detection. Although it may not match the accuracy of the Canny detector, the Sobel algorithm remains the preferred choice in many practical applications. The Sobel operator combines Gaussian smoothing with differential operations, giving it robust anti-noise capability and wide utility; it shines particularly when high efficiency is required and fine texture is of less concern. In contrast, the Roberts operator computes the gradient from diagonal differences; the magnitude of the gradient indicates the strength of the edge, and the gradient direction is orthogonal to the edge's orientation. The Prewitt operator is another first-order differential operator; unlike Roberts, it employs a 3×3 template and uses the gray-level differences between a pixel's upper and lower, left and right neighbours to achieve more pronounced edge responses in both the horizontal and vertical directions. Through experiments conducted with MATLAB, the edge detection images produced by these three operators can be compared.
In 1986, John F. Canny introduced a sophisticated multilevel edge detection algorithm known as the Canny operator. This pioneering algorithm helped establish the theory of edge detection, providing a computational approach to identifying the crucial points of transition within an image. Its underlying principle is to locate local maxima of the image gradient. Despite its age, the Canny algorithm continues to serve as a benchmark in edge detection and finds extensive application. Its primary objective is to attain an optimal solution for edge detection, pinpointing the locations of the most pronounced changes in gray intensity within an image. The key criteria for evaluating edge detection techniques are a low error rate, precise localization, and minimal response.
The LoG (Laplacian of Gaussian) operator represents an enhancement of the Laplace operator. The Laplace operator is a straightforward second-order derivative operator characterized by its scalar nature, linearity, displacement invariance, and a frequency-domain response that is zero at the origin; when images are filtered with the Laplace operator, their mean gray level becomes zero. A drawback of this operator is its susceptibility to noise, rendering it impractical for edge detection on its own. Therefore, in practice, the image is first smoothed and filtered, after which the Laplace operator is applied to detect edges within the image.
Fig. 6. LoG operator
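As a concrete illustration of a differential operator, the Sobel gradient computation can be sketched as follows (a minimal pure-Python version with hypothetical names; the magnitude thresholding and non-maximum suppression steps used by Canny are omitted):

```python
import math

# 3x3 Sobel kernels: horizontal (GX) and vertical (GY) gradient estimates.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    """Gradient magnitude sqrt(gx^2 + gy^2) at each interior pixel;
    the one-pixel border is left at zero for simplicity."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out
```

On a vertical step edge the horizontal kernel GX responds strongly while GY cancels out, which is why the operator localizes edges in both directions with two separate templates.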

Region-based image segmentation algorithm
The region-based image segmentation method explores regions through the intrinsic spatial information embedded within the image. It categorizes pixel points and forms cohesive regions based on their similarity in image content, complementing the concept of boundary-based image segmentation. The region-based methods most commonly employed are the region growing method and the split-merge method [17][18].

Region growing method.
The region growing image segmentation method progressively amalgamates pixels or sub-regions into larger coherent regions according to specific criteria. The crux of the method lies in the careful selection of suitable growth criteria and of the initial growth starting point. Comparative experiments make it evident that different growth criteria and starting points have a discernible impact on the process and outcome of region growing. Typically, the starting point is chosen either manually or automatically by an algorithm, while the growth criterion is established from the colour, texture, and spatial information inherent in the given image. As illustrated in Figure 7, the image segmentation results obtained through the region growing method at thresholds of 60, 80, and 100 are presented. Nonetheless, the basic region growing method still leaves ample room for improvement and optimization; for instance, incorporating multiple seed points and employing adaptive thresholding could yield enhanced results. Moreover, in this method the threshold value regulates the growth condition, ensuring that the disparity between the gray values of two pixels does not exceed the threshold; this differs from the threshold segmentation method, where the threshold is compared against the pixel's gray value itself.
Fig. 7. Comparison of region growing methods
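The region growing procedure described above can be sketched as a breadth-first search (a minimal illustration with hypothetical names; here a 4-connected neighbour joins the region when its gray-level difference from the seed pixel is within the threshold, which is one of several common variants of the growth criterion):

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from `seed` = (row, col): a 4-connected neighbour
    joins the region when the absolute gray-level difference to the seed
    pixel is at most `threshold`.  Returns the set of region coordinates.
    Comparing against the running region mean or against the neighbouring
    pixel instead are common variants of this criterion."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - seed_val) <= threshold):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

The threshold here bounds the gray-level disparity between pixels, in line with the distinction from threshold segmentation noted above.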

Split-merge method.
The split-merge image segmentation method iteratively divides and combines regions to obtain distinct sub-regions within an image. Its fundamental concept resembles successive subdivision, wherein regions satisfying a similarity criterion are merged after segmentation. The challenge lies in establishing an appropriate similarity criterion for the initial division and for the subsequent splitting and merging operations [19].
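The splitting half of the split-merge method can be sketched as a quadtree recursion (a hypothetical illustration assuming a square image whose side is a power of two; the subsequent merging of adjacent similar blocks is omitted):

```python
def split(image, y0, x0, size, tol):
    """Recursive splitting step of split-and-merge: a square block is
    kept whole when its max-min gray-level range is within `tol`
    (the similarity criterion); otherwise it is split into four
    quadrants.  Returns a list of (row, col, size) leaf blocks."""
    vals = [image[y][x] for y in range(y0, y0 + size)
                        for x in range(x0, x0 + size)]
    if max(vals) - min(vals) <= tol or size == 1:
        return [(y0, x0, size)]            # homogeneous leaf block
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += split(image, y0 + dy, x0 + dx, half, tol)
    return blocks
```

A full split-merge implementation would follow this with a pass that merges adjacent leaf blocks satisfying the same similarity criterion.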

Clustering-based image segmentation algorithm
For a given set of samples, clustering-based segmentation divides the sample set into K clusters according to the distances between samples, such that points within a cluster are as closely connected as possible while the distance between clusters is as large as possible. Taking a colour image as an example: establish a spatial Cartesian coordinate system with the three RGB channels of the image as the x-, y-, and z-axes; each pixel of the image then establishes a one-to-one (bijective) mapping with a point in this coordinate system.
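The clustering idea above can be sketched as a plain k-means over RGB triples (a minimal, deterministic illustration with hypothetical names; practical implementations would use random initialisation with restarts and a convergence test):

```python
def kmeans(points, k, iters=20):
    """Plain k-means on RGB triples: assign each point to its nearest
    centre (squared Euclidean distance in RGB space), then recompute
    each centre as the mean of its cluster, and repeat."""
    # Deterministic initialisation: evenly spaced input points.
    centres = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centres[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster turns up empty
                centres[j] = tuple(sum(ch) / len(cl) for ch in zip(*cl))
    return centres
```

Segmenting an image then amounts to replacing each pixel by its cluster index, which is the basis of the cluster-count comparison shown in Figure 8.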

Conclusion
Researchers in the field have conducted numerous studies on how to assess the impact of image segmentation precisely and thoroughly [20,21], largely because no objective criterion for the success of algorithmic segmentation has yet been established. As a result, evaluating the segmentation quality of image segmentation algorithms has gained significant research significance. At present, evaluation methodologies fall into two main categories: direct methods and indirect methods. Whereas indirect methods test, compare, and evaluate the segmentation results, direct methods concentrate on examining the algorithm's principle and performance, with the drawback that the application context of the algorithm is not taken into account [22]. In practice, petroleum industry practitioners can evaluate the segmentation quality of a test image according to application requirements specified in advance, or according to their own experience and preferences, completing the evaluation in the form of quality scores; accordingly, the evaluation of image segmentation methods in this paper is primarily based on human visual assessment.
In conclusion, the following deductions can be made: the threshold-based image segmentation methods exhibit slight disparities in the segmentation outcomes for the case of oil pipe failure images, with the optimal threshold value being 148.Among the image segmentation techniques based on edge detection, the Sobel operator demonstrates the most favourable results among the differential operators.
The efficacy of image segmentation in region-based and clustering-based methods depends on the appropriate threshold values and the number of clusters employed.Overall analysis indicates that the Canny operator method, among the traditional image segmentation approaches, yields the most exceptional image segmentation outcomes.

Fig. 1. Comparison of iterative methods. Figure (a) showcases a grayscale macro image of a failed oil drilling pipe, while figure (b) shows the corresponding grayscale histogram: the horizontal axis gives the gray-scale values ranging from 0 to 250, and the vertical axis the probability of occurrence of each gray-scale value.

Fig. 8. Image segmentation results with different numbers of clusters