Research on Residual Strength Prediction Model of Corroded Pipeline Based on Improved PSO-BP

In this paper, an improved particle swarm optimization BP neural network model is proposed. To address the premature convergence and loss of population diversity in the late stage of the standard particle swarm optimization (PSO) algorithm, an improved particle swarm optimization algorithm (IPSO) is put forward and used to optimize the BP neural network, which improves the learning ability of the BP neural network and accelerates its convergence. The BP neural network optimized by IPSO is used to construct the IPSO-BPNN prediction model, which is applied to the prediction of the residual strength of corroded pipelines. Experiments were carried out on a data set of real pipeline burst tests. The results show that the prediction accuracy of IPSO-BPNN is better than that of BPNN and PSO-BPNN.


Introduction
In recent years, with the increasing demand for natural gas in China, the mileage of natural gas pipelines laid has also increased year by year. However, as pipeline service life grows, pipelines suffer various kinds of corrosion, and serious corrosion may even lead to leakage, pipe burst and other dangerous accidents [1]. For a corroded pipeline, the evaluation of its residual strength is an indispensable link. The most commonly used evaluation methods [2][3] for the residual strength of corroded pipelines are formula-based calculation and finite element analysis. However, formula-based calculation has low accuracy and is not universal, while finite element analysis is too complicated. To solve these problems, an improved particle swarm optimization BP neural network model is proposed in this paper.
The main research work of this paper is as follows: aiming at the problems of the standard particle swarm optimization algorithm, an improved particle swarm optimization algorithm is proposed. First, a nonlinear decreasing inertia weight is designed for the particle velocity update. Second, the concept of a crossover operator is introduced.


Since the weights and thresholds of the BP neural network are randomly initialized, the network converges slowly and easily falls into local optima. In view of these problems, this paper uses the improved particle swarm optimization algorithm to optimize the BP neural network and help it obtain better initial weights and thresholds.
The BP neural network optimized by the improved particle swarm optimization algorithm was used to construct the IPSO-BPNN prediction model, which was applied to the prediction of the residual strength of corroded pipelines; experiments were conducted on a data set of real pipeline burst tests, and the model was compared with BPNN and PSO-BPNN [4][5]. The results show that the prediction accuracy of IPSO-BPNN is better than that of BPNN and PSO-BPNN, which verifies the effectiveness of the method for residual strength evaluation of corroded pipelines.

Improvement of the inertia weight
In the standard particle swarm optimization (PSO) algorithm, the inertia weight is a fixed constant, which leads to the following problems: in the early stage of evolution, a too-small inertia weight narrows the search range and makes the algorithm prone to falling into local optima; in the later stage of evolution, a too-large inertia weight is not conducive to the convergence of the algorithm. Therefore, this paper adopts a nonlinear decreasing weight strategy:

w(t) = w_max - (w_max - w_min) * (t / t_max)^2  (1)

where w_max and w_min are the maximum and minimum values of the inertia weight respectively (generally w_max = 0.9, w_min = 0.4), t_max is the maximum number of iterations, and t is the current iteration number.
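The nonlinear decreasing inertia weight described above can be sketched in a few lines of Python. The quadratic form used here is an assumption; the text only specifies a nonlinear decrease from w_max to w_min.

```python
# Sketch of a nonlinear decreasing inertia weight (the quadratic decay is an
# assumed form; w_max = 0.9 and w_min = 0.4 follow the text).
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Inertia weight at iteration t, decreasing from w_max to w_min."""
    return w_max - (w_max - w_min) * (t / t_max) ** 2

print(inertia_weight(0, 1000))     # w_max at the first iteration
print(inertia_weight(1000, 1000))  # w_min at the last iteration
```

Early iterations keep the weight near w_max for a wide search, and the decay accelerates toward w_min to aid convergence.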

Introduction of the crossover operator
In order to solve the premature convergence problem of the traditional particle swarm optimization (PSO) algorithm, the crossover (hybridization) idea of the genetic algorithm is borrowed. In each iteration, a certain number of particles are selected according to the crossover rate and placed into a crossover pool. The particles in the pool are randomly paired to produce the same number of offspring particles, and the offspring particles replace their parent particles, thereby enhancing the diversity of the swarm. The position of an offspring is obtained by crossing the positions of its parents:

x_s1 = i * x_f1 + (1 - i) * x_f2  (2)
x_s2 = i * x_f2 + (1 - i) * x_f1  (3)

where x_f denotes the position of a parent particle, x_s denotes the position of an offspring particle, and i is a random number between 0 and 1.
The velocity of an offspring points along the sum of the parent velocities while keeping the corresponding parent's speed:

v_s1 = |v_f1| * (v_f1 + v_f2) / |v_f1 + v_f2|  (4)
v_s2 = |v_f2| * (v_f1 + v_f2) / |v_f1 + v_f2|  (5)

where v_f denotes the velocity of a parent particle and v_s denotes the velocity of an offspring particle.
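The crossover operation can be sketched as follows. The velocity rule here follows the standard breeding-PSO convention (offspring velocities point along the sum of the parent velocities with the parent's speed), which is an interpretation of the formulas above rather than code from the paper.

```python
import numpy as np

def crossover(x_f1, x_f2, v_f1, v_f2, rng):
    """Produce two offspring particles from two parents.

    Positions are arithmetic crossovers of the parent positions; each
    offspring velocity keeps its parent's speed but points along the
    normalized sum of the parent velocities (a breeding-PSO convention).
    """
    i = rng.random()  # random number in [0, 1)
    x_s1 = i * x_f1 + (1 - i) * x_f2
    x_s2 = i * x_f2 + (1 - i) * x_f1
    direction = (v_f1 + v_f2) / np.linalg.norm(v_f1 + v_f2)
    v_s1 = np.linalg.norm(v_f1) * direction
    v_s2 = np.linalg.norm(v_f2) * direction
    return x_s1, x_s2, v_s1, v_s2

# Illustrative usage with assumed toy values
rng = np.random.default_rng(0)
x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
xs1, xs2, vs1, vs2 = crossover(x1, x2, v1, v2, rng)
```

Note that the offspring positions always lie on the segment between the two parents, which is what injects diversity without throwing particles far outside the searched region.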

Test functions experiment
In order to verify the effectiveness of the improved PSO algorithm proposed in this paper, four benchmark functions are selected, including the Sphere function. The performance of the improved algorithm was tested and compared with the standard particle swarm optimization (PSO), the linearly decreasing weight particle swarm optimization (LPSO), the adaptive weight particle swarm optimization (APSO) and the random weight particle swarm optimization (RPSO) in terms of the optimization process and results. In this experiment, MATLAB R2014b on the Windows 10 operating system was used for the simulations. Each function was tested in 10 dimensions; each algorithm was run 30 times with 1000 iterations per run, and the average of the 30 runs was taken as the result to compare the performance of the algorithms. IPSO denotes the improved particle swarm optimization algorithm proposed in this paper.

Experimental procedure
In essence, this experiment is an optimization problem: the minimum value of each of the four test functions must be found. The main steps of the experiment are as follows: (1) selection of the fitness function; (2) initialization of the particle swarm; (3) update of the particle swarm; (4) crossover operation on the particle swarm.
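The steps above can be sketched for the Sphere function. This is a minimal illustration: the swarm size, acceleration coefficients, velocity clamp and 20% crossover rate are assumed values, not the paper's settings, and the velocity crossover is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Sphere benchmark: global minimum 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

dim, n_particles, t_max = 10, 30, 300
c1 = c2 = 2.0                                   # acceleration coefficients (assumed)
x = rng.uniform(-5.0, 5.0, (n_particles, dim))
v = rng.uniform(-1.0, 1.0, (n_particles, dim))
pbest, pbest_val = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for t in range(t_max):
    w = 0.9 - (0.9 - 0.4) * (t / t_max) ** 2    # nonlinear decreasing weight
    r1, r2 = rng.random((2, n_particles, dim))
    v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), -5, 5)
    x = x + v
    # Crossover step: blend randomly paired particles (20% rate assumed;
    # velocity crossover omitted for brevity).
    idx = rng.permutation(n_particles)[:6]
    for a, b in zip(idx[::2], idx[1::2]):
        p = rng.random()
        x[a], x[b] = p * x[a] + (1 - p) * x[b], p * x[b] + (1 - p) * x[a]
    val = sphere(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(sphere(gbest))  # close to the true minimum 0
```

The loop mirrors steps (1)-(4): the Sphere value is the fitness, the swarm is initialized uniformly, velocities and positions are updated with the decreasing weight, and the crossover injects diversity each iteration.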

Simulation comparison
Based on the four benchmark test functions in Table 1, the improved PSO algorithm was verified and the results were compared. The optimal value (Best), mean optimal solution (Mean) and standard deviation (SD) of each algorithm were compared; Best and Mean evaluate optimization accuracy, while SD evaluates optimization stability. The experimental results for 10 dimensions are shown in Table 1. As can be seen from Table 1, after 1000 iterations, the improved PSO algorithm shows clear advantages over the comparison algorithms in both optimization accuracy and stability.

Improved PSO algorithm to optimize BP neural network
In the traditional BP neural network algorithm, the weights and thresholds of the network are randomly initialized, which leads to blind training, makes the network easy to fall into local optima, and thus results in poor generalization ability and low reliability. Using the improved PSO algorithm (IPSO) proposed in this paper to optimize the weights and thresholds of the BP neural network can improve its learning ability, speed up its convergence, and give full play to its strong nonlinear mapping ability. In this paper, IPSO is adopted to optimize the BP neural network model. The algorithm steps are as follows:
Step 1. Establish a three-layer network topology and determine the number of nodes in the input, hidden and output layers.
Step 2. Initialize the particle swarm parameters.
Step 3. Run the IPSO training: calculate the fitness value of each particle, update the velocity and position of each particle, and update the inertia weight.
Step 4. Perform crossover: according to the crossover probability, select a certain number of particles for crossover and compute the positions and velocities of the offspring particles.
Step 5. Repeat Step 3 and Step 4 until the error requirement is met or the maximum number of iterations is reached, then go to Step 6.
Step 6. Assign the global optimum found in Step 5 to the initial weights and thresholds of the BP neural network. Train the network, output the predicted values, calculate the network error, back-propagate the error to adjust the weights and thresholds, and repeat until the termination condition of the BP algorithm is satisfied or the maximum number of BP iterations is reached.
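A common way to implement Step 3's fitness evaluation is to encode all weights and thresholds of the three-layer network as one flat particle vector and score it by training MSE. The layer sizes, tanh activation and MSE fitness below are illustrative assumptions, not values given in the paper.

```python
import numpy as np

def decode(particle, n_in, n_hid, n_out):
    """Unpack a flat particle into the weights and thresholds of a 3-layer net."""
    s = 0
    W1 = particle[s:s + n_in * n_hid].reshape(n_in, n_hid); s += n_in * n_hid
    b1 = particle[s:s + n_hid]; s += n_hid
    W2 = particle[s:s + n_hid * n_out].reshape(n_hid, n_out); s += n_hid * n_out
    b2 = particle[s:s + n_out]
    return W1, b1, W2, b2

def fitness(particle, X, y, n_hid=8):
    """Fitness of a particle = training MSE of the network it encodes."""
    n_in, n_out = X.shape[1], 1
    W1, b1, W2, b2 = decode(particle, n_in, n_hid, n_out)
    h = np.tanh(X @ W1 + b1)          # hidden layer (tanh activation assumed)
    pred = (h @ W2 + b2).ravel()      # linear output layer
    return np.mean((pred - y) ** 2)

# Particle dimension for 5 inputs, 8 hidden nodes (assumed), 1 output:
n_dim = 5 * 8 + 8 + 8 * 1 + 1  # 57 parameters per particle
```

In Step 6, the best particle found by IPSO is decoded the same way and used as the BP network's starting point before gradient training.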

Data sources
In this paper, 79 groups of real pipe burst test data collected from the literature [6] were used. Five parameters that mainly affect the residual strength of the pipeline (pipe steel grade, pipe diameter, pipe wall thickness, defect depth and defect length) were selected as inputs, and the burst pressure, i.e., the residual strength, was taken as the predicted output. See Table 2 for the specific data.
Table2. Experimental data of corroded pipeline.

Experimental procedure
The main steps of the experiment are as follows: (1) selection of the fitness function; (2) data input and preprocessing; (3) network construction; (4) optimization of the neural network by the IPSO algorithm; (5) network training and prediction.
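The data input and preprocessing step is typically done with min-max scaling of each input parameter. This is a sketch under assumptions: the [-1, 1] target range and the example values are illustrative, not taken from Table 2.

```python
import numpy as np

def minmax_scale(X, lo=-1.0, hi=1.0):
    """Scale each column (input parameter) of X to [lo, hi]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    scaled = lo + (hi - lo) * (X - x_min) / (x_max - x_min)
    return scaled, x_min, x_max  # keep x_min/x_max to scale test data identically

# Illustrative values only, e.g. pipe diameter (mm) and wall thickness (mm)
X = np.array([[300.0, 6.35], [450.0, 9.52], [600.0, 12.70]])
Xs, x_min, x_max = minmax_scale(X)
print(Xs.min(axis=0), Xs.max(axis=0))  # each column spans [-1, 1]
```

Test samples must be scaled with the training set's x_min and x_max, not their own, so the network sees a consistent input range.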

Comparative analysis of experimental results
From the sample data of corroded petroleum pipelines, 66 groups were randomly selected as training samples and the remaining 13 groups were used as test samples. BPNN, PSO-BPNN and IPSO-BPNN models were established respectively. The predicted output values and error analysis results of the three prediction models are shown in Table 3.
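Error analysis of the kind tabulated for such comparisons can be computed as below. MAPE and RMSE are assumed metric choices; the paper does not specify which error measures appear in Table 3, and the numbers here are illustrative, not from the data set.

```python
import numpy as np

def error_report(y_true, y_pred):
    """Signed relative error per sample (%), MAPE (%) and RMSE."""
    rel_err = (y_pred - y_true) / y_true * 100.0
    mape = np.mean(np.abs(rel_err))
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return rel_err, mape, rmse

# Illustrative burst pressures (MPa), not values from Table 2 or 3
y_true = np.array([10.0, 20.0, 25.0])
y_pred = np.array([10.5, 19.0, 26.0])
rel, mape, rmse = error_report(y_true, y_pred)
```

Comparing such metrics on the 13 held-out test samples is what supports a claim that one model predicts more accurately than another.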
Table3. Output values and error analysis results of each prediction model.