Research on intelligent control system of permanent magnet motor for high-speed flywheel energy storage system based on deep learning

With the continuous development of society, energy issues attract increasing attention, and energy storage has become a hot research direction. Despite advances, the control system of the permanent magnet motor in the high-speed flywheel energy storage system still struggles to regulate the magnetic suspension bearing and the motor speed effectively. To address this issue, this paper presents an intelligent control system for the permanent magnet motor of the high-speed flywheel energy storage system based on deep learning. The approach employs deep neural networks to model, optimize, and regulate the flywheel energy storage system. The essence of flywheel energy storage lies in converting electrical energy into mechanical energy and reconverting it into electrical energy during output. It offers high energy density, high power density, long cycle life, fast charging and discharging, maintenance-free operation, and environmental friendliness. A permanent magnet motor uses permanent magnets to generate its magnetic field; it features high efficiency, high power density, and low rotor loss, and remains the most widely used motor in flywheel energy storage systems. An intelligent control system applies artificial intelligence technology to make complex systems adaptive, self-learning, and self-organizing, and is distinguished by robust nonlinear processing capability and fault tolerance. The deep-learning-based intelligent control system for the permanent magnet motor of the high-speed flywheel energy storage system can improve the performance, efficiency, and reliability of the flywheel energy storage system, reduce costs and risks, and is suitable for electric vehicles, rail transit, power grid frequency regulation, and other fields.
In this paper, a convolutional neural network and the PSO algorithm are combined to obtain the PSNN neural network structure, which predicts the speed of the motor so as to achieve its control. The reliability of the structure is verified by experiments.


Introduction
With the development of science and technology, energy storage resources are widely used in all walks of life, but research and surveys show that new energy power generation suffers from instability and intermittency. With the continuous construction of new energy power stations, the originally stable and reliable power grid is developing into a fluctuating and unbalanced state, making grid regulation and safe operation more difficult year by year. To solve this problem, electric energy must be stored and released through energy storage devices to balance and optimize the power grid. An energy storage device converts electrical energy into another form of energy and stores it so that it can be converted back to electrical energy when needed. Common energy storage devices include batteries, supercapacitors, flywheels, pumped hydro storage, and compressed air energy storage [1]. Diverse energy storage devices possess unique traits, encompassing capacity, efficiency, lifespan, and cost considerations; the choice and fine-tuning of these devices should be tailored to specific needs and requirements.
The permanent magnet synchronous motor [2] is a high-efficiency, high-performance, high-reliability motor. It replaces the conventional field winding excitation with permanent magnet excitation, simplifying the motor's structure, reducing processing and assembly costs, and eliminating the need for collector rings and brushes, which are prone to reliability issues. This change enhances the motor's operational reliability. Moreover, since no excitation current is required, there is no excitation loss, resulting in improved motor efficiency. Permanent magnet synchronous motors come in two main structures: surface-mounted and built-in, classified by the position of the permanent magnet within the rotor core. In the surface-mounted type, the permanent magnet is mounted directly onto the rotor's surface. This design offers a simple structure, lower manufacturing cost, and reduced moment of inertia; as a result, surface-mounted permanent magnet synchronous motors find widespread application.
The combination of energy storage devices and permanent magnet motors is an effective method to improve energy storage efficiency and performance. It enables direct drive or power generation of the energy storage device without intermediate links such as transmissions or converters, reducing energy loss and cost. The synergy between energy storage equipment and permanent magnet motors takes diverse forms, such as flywheel energy storage, compressed air energy storage, and pumped water energy storage. Flywheel energy storage converts electrical energy into mechanical energy, storing it in a high-speed rotating flywheel; when the electrical energy needs to be released, the flywheel converts the mechanical energy back into electrical energy. Flywheel energy storage has the advantages of fast response, long cycle life, and no pollution, but also disadvantages such as limited rotor strength, large loss, and large volume. The outer-rotor high-speed permanent magnet synchronous motor, tailored for flywheel energy storage, offers an optimized solution. By amalgamating the flywheel with the rotor of the permanent magnet synchronous motor, it creates a unified structure in which the flywheel functions both as an energy storage unit and as a motor component. This innovative design enhances overall system integration and efficiency. Notably, the outer-rotor high-speed permanent magnet synchronous motor operates at high speed within a vacuum environment. According to its specific structure, working environment, and key issues, the motor design method, rotor strength, loss, and temperature-rise performance must be systematically researched and analyzed.
From the early 1970s to the mid-1980s, analog control was the main method, mainly using constant voltage-frequency ratio control, open-loop estimated speed control, and closed-loop speed feedback control [3]. The control performance was poor, with problems such as steady-state error and slow dynamic response. From the mid-1980s to the early 1990s, digital control dominated, mainly using vector control, direct torque control, and sliding mode variable structure control. Control performance improved, but problems such as parameter dependence, nonlinear compensation, and complex debugging remained. From the early 1990s to the early 21st century, intelligent control was the main focus, mainly using fuzzy control, neural network control, genetic algorithm optimization, and adaptive control, though problems such as insufficient robustness remained. Since the beginning of the 21st century, integrated control has dominated, mainly using multi-objective optimization, multi-sensor fusion, multi-level coordination, and multi-mode switching, comprehensively applying various advanced control theories and technologies to achieve high performance, high reliability, and high intelligence in intelligent control systems for high-speed permanent magnet motors; however, problems remain in the control of the magnetic suspension bearing and the motor speed. Building on this foundation, this paper proposes the PSNN neural network structure, crafted through the integration of convolutional neural networks and the Particle Swarm Optimization (PSO) algorithm. The objective is to predict motor speed, enabling precise control over the motor's velocity.
The input of the PSNN is the voltage, current, and other parameters of the motor, and the output is the motor speed at the next moment. The output of the PSNN is used as the reference value of the model predictive controller (MPC) [4], which generates the optimal voltage vector to drive the inverter and control the motor. Finally, the reliability of the model is verified by experiments.

Convolutional neural network
The convolutional neural network, a sophisticated evolution of the perceptron, is distinguished in the realm of neural networks by its ability to reduce the number of network weights significantly. This reduction streamlines the network's training and learning processes, making them more efficient and rapid compared to other neural network architectures. One of the notable advantages of CNNs is their relatively simpler model establishment process, which inherently lowers the risk of network overfitting, a common challenge in more complex networks.
A typical CNN architecture encompasses several layers: the input layer, output layer, convolutional layers, pooling layers, and fully connected layers. The convolutional layer is responsible for feature extraction, where the convolutional kernel, or filter, plays a crucial role. These kernels are adept at isolating various features from the trained objects. The idea is to apply multiple filters, each designed to extract a different feature set, to the input image. By doing so, superfluous information is eliminated, leaving behind the essential feature data needed for the task at hand.
The depth of a layer within the CNN has a direct correlation with the type and scope of the features extracted. In simpler terms, shallower network layers tend to extract lower-level features, which represent more local, immediate information. Conversely, as one delves deeper into the network, the extracted features become increasingly complex and abstract, encapsulating a broader range of information and offering a more comprehensive understanding of the input data.
The pooling layer follows the convolutional layer and is instrumental in reducing the number of parameters. This reduction is achieved through pooling operations, which effectively condense the feature data while retaining critical information. This step not only decreases the computational load but also helps mitigate the risk of overfitting.
In the final stages of a CNN, the fully connected layer comes into play. Here, the features refined through the convolutional and pooling layers are utilized for classification purposes. This layer essentially interprets the processed features and translates them into meaningful outputs relevant to the specific task the network is designed for.
The convolution layer mainly uses the convolution kernel to extract the features of the trained object. The convolution kernel here can also be called a filter [5]. Generally speaking, a convolutional neural network establishes multiple filters that extract different features, uses these filters to filter the input image, removes useless information, and obtains the required feature information [6]. The shallower the network layer, the lower-level the extracted features and the more local the information they represent; the deeper the layer, the higher-level the extracted features and the more global and abstract the information they represent. Equation 1 is the output expression of the convolutional layer:

y_j = f(Σ_i x_i ∗ k_ij + b_j)    (Equation 1)

where x_i is the i-th input feature map, k_ij is the convolution kernel connecting input i to output j, b_j is the bias term, ∗ denotes the convolution operation, and f is the activation function.
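As a concrete illustration of the sliding-window computation behind the convolutional layer, the following Python sketch (using NumPy) applies a single filter to a small input; the image and the vertical-edge filter are made-up examples, not data from the paper:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image and take
    the weighted sum at each position (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hypothetical vertical-edge filter applied to a simple test image
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
features = conv2d(image, edge_filter)
print(features)  # responds only where the 0-to-1 edge lies
```

Each filter like this one picks out one feature (here, vertical edges); a real convolutional layer applies many such filters in parallel and passes the results through an activation function.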
After the convolutional layer comes the pooling layer. Its function is to extract the main features of a region, compress the number of training parameters, and alleviate the over-fitting phenomenon.
The pooling operation is a very simple process. The original data is divided into multiple small areas, and the data in each area is then compressed. The output value can be the maximum value of the small area or its average value. In this way, the amount of data is greatly reduced, only useful data is retained, and the amount of calculation is greatly decreased, as shown in Figure 1.

Figure 1. Pooling calculation process
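The pooling calculation illustrated above can be sketched as follows: a minimal max-pooling example in Python with NumPy, using a made-up 4×4 input rather than data from the paper:

```python
import numpy as np

def max_pool(x, size=2):
    """Non-overlapping max pooling: split the input into size x size
    regions and keep only the maximum value of each region."""
    h, w = x.shape
    out = x[:h - h % size, :w - w % size]          # drop ragged edges
    out = out.reshape(h // size, size, w // size, size)
    return out.max(axis=(1, 3))                     # max over each block

x = np.array([[1, 3, 2, 0],
              [4, 2, 1, 5],
              [0, 1, 3, 2],
              [6, 0, 1, 1]], dtype=float)
pooled = max_pool(x)
print(pooled)  # 4x4 input compressed to a 2x2 feature map
```

Replacing `max` with `mean` over the same blocks yields average pooling; either way, the data volume shrinks by a factor of `size * size`.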
The final layer in the neural network is the fully connected layer, primarily tasked with classifying input information. When the layer preceding the fully connected layer is a convolutional layer, the fully connected layer can be conceptualized as a convolutional layer whose kernel dimensions match the height and width of the preceding layer. Conversely, if the layer preceding the fully connected layer is itself a fully connected layer, the fully connected layer can be viewed as a convolutional layer with a 1×1 convolution kernel. Equation 2 outlines the output expression of the fully connected layer:

y_q = f(Σ_p w_pq·x_p + b_q)    (Equation 2)

where x_p is the p-th input to the layer, w_pq is the weight between input p and output node q, b_q is the bias term, and f is the activation function.

PSO algorithm
American social psychologist Kennedy and electrical engineer Eberhart proposed the PSO algorithm [7] in 1995, inspired by observing the foraging behavior of birds. It envisions a scenario where a flock of birds seeks a single food source. Although the exact location of the food is unknown to the birds, they possess information about the distance between their current position and the food source. Birds exchange distance information with nearby companions, identify the closest one, and iteratively compare distances with other areas until the optimal choice is determined. This iterative process is commonly known as the optimization process.
In PSO, a random group of particles is initially set, and the optimal solution is sought iteratively. During each iteration, particles update themselves by considering two key values: the individual best solution, called p_best, which each particle has found itself, and the global best solution, called g_best, which is the best among the entire population.
The formula for velocity update is:

v_i^(k+1) = ω·v_i^k + c_1·r_1·(p_best,i − x_i^k) + c_2·r_2·(g_best − x_i^k)    (Formula 3)

The formula for position update is:

x_i^(k+1) = x_i^k + v_i^(k+1)    (Formula 4)

where ω is the inertia factor; k is the current number of iterations; c_1 and c_2 are acceleration factors and are non-negative; r_1 and r_2 are random values in the interval [0, 1]; v_i is the velocity of the i-th particle and x_i its position; p_best,i is the individual extremum of the i-th particle, and g_best is the group extremum.
The specific optimization process of the PSO algorithm is as follows. First, determine the network parameters of the PSO algorithm according to actual needs, and calculate the error of the current position of every particle through the objective function. The initial position of each particle is recorded as its p_best, and the initial position of the particle with the smallest error is recorded as g_best. Then update the current positions and velocities of all particles according to Formula 3 and Formula 4, and evaluate the objective function value and error of each particle after the update. If a particle's current error is smaller than the error of its p_best, its current position becomes the new p_best; if the minimum error over all particles is smaller than the error of g_best, that particle's position becomes the new g_best. If the termination condition is not satisfied, return to the update step; otherwise the iteration ends, and g_best is the global optimal solution.
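The optimization process described above can be sketched as a minimal PSO implementation in Python with NumPy. The sphere objective, swarm size, and coefficient values below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: each particle is pulled toward its own best
    position (p_best) and the swarm's best position (g_best),
    following the velocity and position update rules in the text."""
    x = rng.uniform(-5, 5, (n_particles, dim))   # initial positions
    v = np.zeros((n_particles, dim))             # initial velocities
    p_best = x.copy()
    p_err = np.array([objective(p) for p in x])
    g_best = p_best[p_err.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update (Formula 3) and position update (Formula 4)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v
        err = np.array([objective(p) for p in x])
        improved = err < p_err                   # update personal bests
        p_best[improved] = x[improved]
        p_err[improved] = err[improved]
        g_best = p_best[p_err.argmin()].copy()   # update global best
    return g_best, p_err.min()

# Minimize the sphere function f(x) = sum(x_i^2); optimum at the origin
best, best_err = pso(lambda p: np.sum(p**2))
print(best, best_err)
```

With these settings the swarm converges close to the origin; in the paper's setting the objective function would instead measure the prediction error of the network being tuned.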
The PSO algorithm is essentially a random search for the global optimal solution, and it can converge toward that solution quickly. Practice has shown that its advantages are most apparent when multiple objectives must be optimized and the objectives change over time. Compared with earlier optimization methods, the particle swarm optimization algorithm computes faster and finds the global optimum more easily.

PSNN neural network structure
In this paper, a neural network structure called PSNN is introduced, which combines the principles of convolutional neural networks (CNNs) and the Particle Swarm Optimization (PSO) algorithm. The core idea is to use radial basis functions as the foundation for the hidden layer units [8]. These radial basis functions create a hidden layer space in which input data is transformed. The goal is to map input data from a lower-dimensional space to a higher-dimensional space [9]. This mapping effectively converts problems that are linearly inseparable in the lower-dimensional space into linearly separable ones in the higher-dimensional space. Figure 2 shows the basic structure of the PSNN neural network. It can be seen from the figure that the network is a typical feedforward three-layer network system.

Figure 2. PSNN neural network structure
The initial layer of the PSNN network is the input layer, consisting of nodes representing the input signals. It directly transmits the input sample to the subsequent hidden layer. In this context, the connection between the input layer and the hidden layer can be viewed as having a fixed weight of 1.
The second layer is the hidden layer, whose number of nodes is determined by the specific problem. Compared with a general multi-layer feedforward network, its transformation function differs from the traditional response function: the function value is positive, nonlinear, and radially symmetric about its center point [10]. The most commonly used transformation function is the Gaussian function, whose expression is as follows:

φ(x) = exp(−‖x − c‖² / (2σ²))    (Formula 5)

where c is the center of the basis function and σ is its width.

Figure 3. Gaussian Radial Basis Function
It can be seen from Figure 3 that the function follows a normal (bell-shaped) curve and is symmetric about the output axis. A radial basis neuron produces a value determined by the distance between the input and its weight vector: the farther the distance, the closer the output is to 0, approaching 0 beyond a certain distance; conversely, the closer the distance, the closer the output is to 1, and when the distance is 0 the output is exactly 1. That is, the hidden layer only responds to input vectors close to the center of the neuron, which is the so-called local response.
The last layer is the output layer, which mainly responds to the input vector and takes the weighted sum of the outputs of the previous layer as its input, that is:

y_q = Σ_p w_pq·φ_p    (Formula 6)

In formula 6, y_q is the output of the q-th node of the output layer, q = 1, 2, …; w_pq is the weight between the p-th hidden layer node and the q-th output layer node, and φ_p is the output of the p-th hidden layer node.
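A minimal sketch of the three-layer forward pass described above, with a Gaussian hidden layer followed by a weighted-sum output layer; the centers, widths, and weights below are made-up values for illustration, not learned parameters from the paper:

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    """Forward pass of a three-layer RBF network: the hidden layer
    applies Gaussian radial basis functions (Formula 5) and the
    output layer takes their weighted sum (Formula 6)."""
    # Hidden layer: one Gaussian response per center
    dists = np.linalg.norm(x - centers, axis=1)
    phi = np.exp(-dists**2 / (2 * sigmas**2))
    # Output layer: weighted sum of the hidden responses
    return weights.T @ phi

centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # hidden-node centers
sigmas = np.array([1.0, 1.0])                 # hidden-node widths
weights = np.array([[2.0], [3.0]])            # hidden-to-output weights

y = rbf_forward(np.array([0.0, 0.0]), centers, sigmas, weights)
print(y)
```

Note the local response in action: the input coincides with the first center (contribution 2·exp(0)) while the second, more distant center contributes only 3·exp(−1).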

Experiment settings
Parameter initialization: select an appropriate clustering algorithm and number of clusters to determine the centers and widths of the radial basis functions, and select an appropriate method (such as random initialization or orthogonal initialization) to determine the network weights and biases. Parameter adjustment: select an appropriate optimization algorithm (such as the gradient descent method or Newton's method) and learning rate to update the network weights and biases, and select an appropriate stopping condition (such as a maximum number of iterations or a minimum error threshold) to terminate the training process. Performance evaluation: select appropriate evaluation indicators (such as accuracy or mean square error) to evaluate the performance of the network on the training and test sets, and select appropriate methods (such as cross-validation or the bootstrap method) to avoid overfitting or underfitting.
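To make the stopping conditions concrete, the following sketch trains a hypothetical linear model by gradient descent with the two termination criteria mentioned above, a maximum iteration count and a minimum error threshold; the model, data, and learning rate are illustrative assumptions, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up regression data: y = X @ true_w with no noise
X = rng.uniform(-1, 1, (100, 2))
true_w = np.array([2.0, -3.0])
y = X @ true_w

w = np.zeros(2)                      # weight initialization
lr = 0.1                             # learning rate
max_iters, min_error = 10_000, 1e-6  # the two stopping conditions

for it in range(max_iters):          # condition 1: iteration cap
    err = X @ w - y
    mse = np.mean(err**2)
    if mse < min_error:              # condition 2: error threshold
        break
    w -= lr * (2 / len(X)) * X.T @ err  # gradient-descent update

print(w, mse)
```

Whichever condition fires first ends training; here the error threshold is reached long before the iteration cap, and `w` approaches the true coefficients.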
2000 sets of motor parameters are collected under the influence of motor current and motor voltage; 80% of the sample data are used for network training, and the remaining 20% for network verification. The training and learning of the neural network are carried out in MATLAB. It should be noted that, in order for the neural network to train on and learn the sample data well, the samples must be normalized; the mapminmax function is used here.
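MATLAB's mapminmax linearly rescales each row of the data to a fixed range (by default [−1, 1]). A Python equivalent, shown here for illustration with made-up sample values, might look like:

```python
import numpy as np

def mapminmax(x, ymin=-1.0, ymax=1.0):
    """Row-wise min-max normalization in the style of MATLAB's
    mapminmax: rescale each row of x to the range [ymin, ymax]."""
    xmin = x.min(axis=1, keepdims=True)
    xmax = x.max(axis=1, keepdims=True)
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

# Two hypothetical parameter channels on very different scales
samples = np.array([[10.0, 20.0, 30.0, 40.0],
                    [0.5, 1.0, 1.5, 2.0]])
scaled = mapminmax(samples)
print(scaled)  # every row now spans [-1, 1]
```

Normalizing each channel this way keeps large-magnitude parameters (such as voltage) from dominating small-magnitude ones during training.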

Figure 4. Simulation results of PSNN neural network when the current is unstable
During neural network training, the essential parameters of the hidden layer undergo fine-tuning through iterative procedures. This continues until the node weights of the hidden layer sufficiently reduce the output error, with the designated threshold set at E = 0.0001. To validate the effectiveness of the training, test samples are employed for simulation. The training results for the two operating conditions, motor current and motor voltage, are depicted in Figure 4 and Figure 5, respectively.
It can be seen from the figures that as the number of iterations increases, the RMSE gradually decreases, finally reaching the accuracy required by the experiment; the prediction accuracy of the rotational speed exceeds 90%, meeting the design requirements.
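For reference, the RMSE used as the training criterion can be computed as follows; this is a minimal sketch with made-up values, not the paper's measurements:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root-mean-square error: the square root of the mean squared
    difference between predictions and targets."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# Hypothetical predicted vs. actual speed samples
e = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
print(e)
```

Training stops once this quantity falls below the threshold E = 0.0001 mentioned above.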

Summary
This paper introduces the technical scheme of an intelligent control system for the permanent magnet motor of a high-speed flywheel energy storage system based on deep learning. The scheme uses a deep neural network to model, optimize, and control the flywheel energy storage system, improving the efficiency of flywheel energy storage and reducing costs and risks. In this paper, the convolutional neural network and the PSO algorithm are used to obtain the PSNN neural network structure to predict the speed of the motor, so as to achieve its control.

Figure 5. Simulation results of the PSNN network under the influence of motor voltage

Table 1 presents the prediction accuracy achieved by each model on the dataset. Compared to the other models, the proposed PSNN model exhibits superior prediction accuracy: its accuracy surpasses CNN by 0.046 and exceeds LSTM by 0.031, outperforming the remaining three models as well. This highlights the high accuracy and effectiveness of the PSNN model as an optimization algorithm.

Table 1. The accuracy of PSNN and other models in the data set prediction