Stabilization of first integrals in optimal control problem

The paper considers methods for solving the variational problem of constructing a control that stabilizes the optimal motion of a rocket in gravitational fields, finding the most accurate trajectories, and applying the results to various practical problems of flight dynamics. The relevance of these problems stems from the fact that guidance and tracking of an object's trajectory throughout the entire flight are the essential basis of a successful space maneuver for spacecraft, satellites, and missiles. Constructing and implementing autonomous guidance laws for modern propulsion technology remains an acute unsolved problem, which depends on the ability of aircraft engines to produce the thrust required for flight. It must be taken into account that in reality the object does not fly along the specified trajectory but deviates from it because of inaccuracies in the parameters of the flight model and the propulsion system. Therefore, a stabilization problem must be posed in order to find the conditions for optimal control. Problems of stabilization and motion control are important both from a theoretical point of view and because of their numerous technical applications. Theoretically, they matter primarily because they involve complex problems of mechanics and each time require new approaches and solution methods. Moreover, the nature of the motion stabilization problem depends significantly on the additional conditions imposed on the dynamic system.


Introduction
When implementing optimal trajectories in various problems, we inevitably encounter substantial difficulties: first, the inability to ascertain exactly the initial state of the real system (the control object, a rocket); second, the inability to implement the optimal control exactly; third, the inability to predict accurately the external conditions in which the system operates (the original mathematical model is an approximation). All this leads to the need to correct the law of optimal control during the operation of any dynamic system (or object) and to determine the most accurate trajectory of a particular body, namely a rocket. Thus, the purpose of this work is to build a model of optimal control of a dynamic system (a rocket) that takes the stabilization of the constraints into account. The methods described in this paper allow us to find the optimal control for problems of stabilization of a dynamic system.

Problem statement
Traditionally, classical control theory considers two main problems: determining the program motion of a dynamic system, and designing controllers that implement a given program motion of the control object (the stabilization problem). The main focus of this work is on solving the stabilization problem, which is usually treated with linear dynamic models. In contrast to static systems, in dynamic systems the process evolves over time, and the control is in general also a function of time. When solving the stabilization problem, various methods that form the basis of modern control theory can be used. In control theory, the behaviour of a system is described in the state space, and controlling the system reduces to determining control actions on the system, optimal in some sense, at each moment of time. Mathematical models of continuous dynamic systems are usually systems of ordinary differential equations in which time is the independent variable. In the stabilization problem, optimality of the control is understood in the sense of the minimum of a certain optimality criterion, which is written as an integral. The optimality criterion can characterize various aspects of control quality: control costs (energy, fuel, etc.), control errors (for various state variables), and so on.
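As an illustration of such an integral optimality criterion, the following sketch computes an optimal linear feedback for a hypothetical double-integrator model by minimizing the quadratic cost ∫(xᵀQx + uᵀRu)dt — the standard linear-quadratic (LQR) formulation of the stabilization problem. The model, the weights Q and R, and the use of SciPy's Riccati solver are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator model: state x = (position, velocity).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic optimality criterion J = integral of (x'Qx + u'Ru) dt,
# weighing control errors (Q) against control costs (R).
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the
# optimal linear feedback u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - B K must have all eigenvalues in the left
# half-plane, so that deviations from the program motion decay.
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))
```

Printing `True` confirms that the feedback stabilizes the system, i.e. small deviations from the program motion are driven back to zero.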
Mathematical models of dynamic systems can be constructed in various forms. A distinctive feature of the mathematical description of any dynamic system is that its behaviour evolves over time and is characterized by functions x₁(t), . . . , xₙ(t), which are called the state variables (phase coordinates) of the system. The motion of a dynamic system can be controlled or uncontrolled. In controlled motion, the behaviour of the dynamic system also depends on the control functions u₁(t), . . . , uₘ(t). In our case, we consider four controls, acting on the velocity and the mass of the object: u⃗ = (u₁, u₂, u₃) and u₄.

Let us formulate the optimal control problem. As a mathematical model of the dynamic system, we consider a system of ordinary differential equations written in normal Cauchy form,

ẋ = f(t, x),

where f(t, x) is a given function. It is necessary to determine a control that ensures the optimal motion of the rocket near a certain trajectory, allowing only small deviations. Adding the control to this system gives

ẋ = f(t, x) + u.

Let us also consider a mechanical constraint g(x) = 0. Differentiating it and introducing an arbitrary constant matrix K, we obtain the constraint equations in the form

ġ = −K g [1–4].

Since the real motion of the system inevitably differs from the prescribed one, this fact led to A. M. Lyapunov's concepts of unperturbed and perturbed motion. Any motion of the system, whether optimal or merely admissible, is called the unperturbed motion; the perturbed motion is characterized by deviations from it,

x(t) = x*(t) + Δx(t),

where x*(t) is the unperturbed motion and Δx(t) is the deviation. The deviation must be minimized: Δx must remain small and must not grow beyond a given bound. Equations written in the deviations are of great importance in control theory; on their basis, a large number of optimization problems of practical interest are formulated.
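The role of the deviation Δx can be illustrated with a minimal numerical sketch: an Euler integration of a linearized scalar deviation equation with a corrective feedback u = −k·Δx, which drives the deviation toward zero. The nominal dynamics, the gain k, the step size, and the tolerance are all hypothetical illustrations:

```python
# Minimal sketch of perturbed motion: the unperturbed trajectory x*(t)
# satisfies dx/dt = f(t, x); the perturbed motion starts with a
# deviation dx(0) != 0, and a corrective feedback u = -k*dx keeps it
# small. All numbers here are hypothetical.

a = -0.1   # linearized nominal dynamics coefficient
k = 2.0    # corrective feedback gain
dt = 0.01  # Euler integration step
dx = 1.0   # initial deviation from the unperturbed motion

for _ in range(1000):
    # Linearized deviation equation: d(dx)/dt = a*dx + u.
    u = -k * dx
    dx += dt * (a * dx + u)

print(abs(dx) < 1e-3)
```

Because the closed-loop coefficient a − k = −2.1 is negative, the deviation decays exponentially and the final check prints `True`.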
One of these problems is the stabilization problem stated above. In solving it, one must determine how to choose the corrective control actions so as to reduce the deviations in the best possible way. Most often, when solving the problem of stabilizing the motion of a system or control object, linear dynamic equations in the deviations are used, which are obtained below. We then add the control functions u⃗ = (u₁, u₂, u₃) and u₄ to these equations, which without control are known in advance. Since all four control functions can be found from the first three integrals, the last one need not be used. Expressing K in (4) as a diagonal matrix, K = diag(k₁, k₂, k₃, k₄), a possible solution of this system yields the expressions for the velocity control of the rocket, and the third integral shows that the corresponding time derivative vanishes.
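A diagonal gain matrix K = diag(k₁, k₂, k₃, k₄) of the kind used above decouples the channels, so each component of the constraint deviation decays at a rate set by its own gain. The following sketch integrates ġ = −K g under assumed gains and initial constraint violations (all values hypothetical):

```python
import numpy as np

# Diagonal gain matrix: one gain per control channel (three velocity
# components and the mass). The gains are hypothetical illustrations.
K = np.diag([1.0, 2.0, 3.0, 4.0])

g = np.array([0.5, -0.3, 0.2, -0.1])  # initial constraint violations
dt = 0.01                             # Euler integration step

for _ in range(2000):
    # dg/dt = -K g drives each constraint component to zero
    # independently, at the rate set by its diagonal gain.
    g = g + dt * (-K @ g)

print(np.all(np.abs(g) < 1e-4))
```

With a diagonal K, each channel can be tuned separately, which is why such a choice is convenient when the controls for the velocity components and the mass are found independently from the first integrals.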

Conclusion
Thus, this paper has provided an example of determining the optimal control for a specific rocket system. A method for determining an approximate optimal control in the problem of stabilizing a dynamic system has been considered, and the possible influence of nonlinear perturbations on the solution of stabilization problems has been analysed.
As a result of this study, a method for solving the optimal stabilization problem for controlled dynamic systems has been specified. These results are applicable to the study of the dynamics of spacecraft flight.