A new pragmatic approach to solving vector optimization problems with uncertain parameters

In this paper, we consider a method for solving multi-criteria optimization problems whose mathematical models contain many variables whose values are not regulated by the decision-maker. We introduce a new concept, the 'tolerance of a decision variant', which is related to such concepts as stability, survivability, etc.


Introduction
Any decision-making situation is characterized by the following common elements: a) The set of variables whose values are chosen by a decision-maker (hereinafter, DM). Such variables are called variants of a solution, or simply variants.
b) The set of variables that are not regulated by the DM. We call such variables the conditions. c) A method for evaluating the quality of a solution under each of the conditions. Usually this is one or more functions depending on the variants and the conditions.

Now let us turn to the mathematical formulation of the problem. This paper assumes that solutions are described by n-dimensional vectors $x = (x_1, \dots, x_n)$ and conditions by m-dimensional vectors $p = (p_1, \dots, p_m)$. We also assume that the parametric restrictions are as follows: $a_i \le x_i \le b_i$, $1 \le i \le n$, and $c_j \le p_j \le d_j$, $1 \le j \le m$; the resulting parallelepiped we call the original block. Block X is the block of variants and block P is the block of parameters. In addition to the parametric constraints we consider functional limitations $g_i(x, p) \le g_i^0$, $1 \le i \le r$. In the original block they cut out a curved region G, which is called the range of possible situations. The projection of G onto the space of the design parameters is denoted by G|X. The set G|X is the set of alternative solutions from which a choice can be made.

Finally, the quality of a situation 'variant + conditions' is evaluated using objective functions (criteria) $f_i(x, p)$, $1 \le i \le k$, which, for definiteness, we assume are to be minimized. A situation $(x', p)$ dominates a situation $(x'', p)$ if $f_i(x', p) \le f_i(x'', p)$ for all i, and for at least one of the objective functions this inequality is strict. We need to find a variant of the solution that generates an optimal situation for a wide variety of conditions.

So, we have the following problem. In region G, we need to select the 'best' vector x defining the choice of a variant whose quality is in some way consistent with the set of objective functions. We have to understand what a 'best' vector is. Let us first consider a simplified version of the problem in which the external parameters are absent.
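The dominance relation between criteria vectors can be sketched as a small predicate; this is a minimal illustration assuming all criteria are minimized (the function name is ours, not the paper's):

```python
def dominates(f_a, f_b):
    """True if criteria vector f_a Pareto-dominates f_b (minimization):
    f_a is no worse in every criterion and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f_a, f_b)) and
            any(a < b for a, b in zip(f_a, f_b)))
```

A point is Pareto-optimal in a finite sample if no other sampled point dominates it.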

The existing approaches
The existing methods for solving multi-criteria optimization problems can be divided into automatic and interactive ones.

Automatic methods
These methods yield a single compromise point (a point of the Pareto set), which is considered to be optimal. To obtain this point, automatic methods of multi-criteria optimization convert the problem into a problem with a single efficiency criterion. The most common methods of this type are the methods of the main criterion, the generalized criterion, and goal programming [1][2][3][4].

A method of the main (primary) criterion.
The original multi-criteria optimization problem is reduced to the problem of optimizing one of the criteria, $f_k(x)$, which is considered to be the most important one, provided that the values of the other criteria are no worse than established threshold values $f_i^*$.
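Over a finite set of candidate points, the main-criterion reduction can be sketched as follows (a minimal sketch assuming minimization; the function and its arguments are illustrative, not from the paper):

```python
def main_criterion(points, criteria, k, thresholds):
    """Among points whose criteria f_i(x) <= thresholds[i] for all i != k,
    pick the one minimizing the main criterion f_k (None if infeasible)."""
    feasible = [x for x in points
                if all(f(x) <= t
                       for i, (f, t) in enumerate(zip(criteria, thresholds))
                       if i != k)]
    return min(feasible, key=criteria[k], default=None)
```

Returning None when the thresholds cut off every point corresponds to the 'ideal point' pitfall discussed below.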

Methods of a generalized criterion and goal programming.
These methods consist in the convolution of all criteria $f_i(x)$ into a single function, which is called a generalized criterion [5]. In these methods the DM chooses not a particular solution but the generalized criterion; based on this criterion, the optimal solution is selected. The obtained solution is considered to be optimal in the sense of the selected generalized criterion. Here the DM steps away from the immediate values of the criteria and deals only with their normalized values, using weighting factors that determine the importance of each criterion.

Automatic methods are good because they give one unique solution. However, these methods have significant drawbacks. For the method of the main criterion it is necessary to: a) identify the most important criterion, which is not possible in all practical tasks; b) set limits for all criteria except the principal one, and setting these limitations is difficult for the DM if they are not objectively related to the problem statement. In addition, when the DM chooses large boundary values of the criteria, this often yields an ideal point, i.e. a non-existent one. On the other hand, small values for the constraints reduce the multi-objective optimization problem to a trivial problem of maximizing a single criterion, which contradicts the formulation of the multi-objective optimization problem.
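A common convolution is the weighted sum of normalized criteria. The sketch below illustrates the idea over a finite candidate set (min-max normalization and the names are our illustrative choices; the paper does not fix a particular convolution):

```python
def generalized_criterion(points, criteria, weights):
    """Convolve normalized criteria into one weighted sum and minimize it."""
    def norm(f):
        vals = [f(x) for x in points]
        lo, hi = min(vals), max(vals)
        # Map criterion values to [0, 1]; constant criteria contribute 0.
        return lambda x: (f(x) - lo) / (hi - lo) if hi > lo else 0.0
    normed = [norm(f) for f in criteria]
    return min(points, key=lambda x: sum(w * g(x) for w, g in zip(weights, normed)))
```

Note how the answer is driven entirely by the weights, which is exactly the arbitrariness criticized below.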

An illusion of automatic methods.
The methods of the generalized criterion and goal programming create an illusion of obtaining a single optimal solution, even though the DM does not operate directly on the values of the objective functions but only on normalized ones. It is not possible to prove the correctness of the choice of the generalized criterion, or to justify why the obtained solution is better than any other, apart from appealing to the generalized criterion itself. Automatic methods emerged from disbelief in the DM's ability to choose the desired compromise. The choice of the main or generalized criterion is, generally speaking, arbitrary. The disadvantages of automatic methods have increasingly convinced researchers of the need for human involvement in decision-making. Therefore, interactive methods have recently been widely adopted.

Interactive methods
The essence of interactive methods is the same: the DM carries out a directed search of compromise points to select the point that best meets the DM's preferences. The compromise points at each step of the procedure are obtained by automatic methods. The procedure ends when one of the points satisfies the DM's requirements; this point is considered to be optimal. However, the analysis of existing interactive methods reveals a paradox associated with the assumptions about the DM's knowledge: on the one hand, the DM decides which compromise point is best, i.e. she or he has sufficient knowledge to select it; on the other hand, it is assumed that the DM is not able to clearly indicate what solution she or he wishes to obtain [6].
In [6,7], methods are proposed for solving multi-criteria optimization problems that require the DM to set the desired values of the objective functions. If no compromise point satisfying the specified requirements exists, the methods require the DM to revise the specified criterion values. On the basis of such revisions the compromise region narrows until it turns into a point. The proposed procedures are quite flexible, but they demand the solution of a very large number of single-criterion extremal problems and, therefore, are effective only for problems with a small number of criteria.
In [8], an interactive method based on probing the parameter space was proposed, that is, on replacing the original continuous region by its discrete representation. Moreover, in this method it is assumed that the DM has an active dialog with the computer. This method has significant advantages, and it can be extended to the solution of problems with external parameters.
Let us now consider the basic techniques used in practice for removing uncertainty in problems with external parameters. In fact, there are only two.

A minimax approach
A selection is considered to be optimal if it is optimal for the worst case. In other words, the external parameters are excluded from the problem by replacing each objective function $f_i(x, p)$ with its worst-case value $\max_{p} f_i(x, p)$. The advantage of this approach is that the obtained result is guaranteed, but only if a solution exists. This approach is very inflexible; it requires a large amount of computation and, besides, it is hardly reasonable to plan for the worst case.
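On finite samples of variants and conditions, the minimax replacement can be sketched as follows (a minimal illustration assuming a single criterion to be minimized; the names are ours):

```python
def minimax_choice(variants, conditions, f):
    """Replace f(x, p) by its worst case over the sampled conditions p
    and pick the variant minimizing that worst case."""
    def worst(x):
        return max(f(x, p) for p in conditions)
    return min(variants, key=worst)
```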

A statistical approach
A selection is optimal if it is optimal on average. This approach rests on the idea of treating the external parameters as random variables with certain probability characteristics. The uncertainty is removed by replacing the objective functions $f_i(x, p)$ with their mathematical expectations $E_p[f_i(x, p)]$, calculated using the known distributions of the external parameters. However, the implementation of the statistical approach requires constructing a probability distribution of the external parameters, which is often impossible to do.
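When the distribution of the external parameters is known (or assumed), the expectation can be estimated by Monte Carlo sampling; the following sketch assumes a single criterion to be minimized and an illustrative sampler:

```python
import random

def statistical_choice(variants, sample_p, f, n=1000, seed=0):
    """Replace f(x, p) by a Monte Carlo estimate of its expectation over p
    and pick the variant minimizing that estimate.
    sample_p(rng) draws one condition p from its assumed distribution."""
    rng = random.Random(seed)
    ps = [sample_p(rng) for _ in range(n)]
    return min(variants, key=lambda x: sum(f(x, p) for p in ps) / n)
```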

A pragmatic approach
In this paper, we propose a pragmatic approach. Multi-objective optimization problems are solved not as purely mathematical constructions but to satisfy very specific needs of design, planning, etc. To achieve practical results it is often enough to find an alternative solution which is perhaps not the best, but of quite acceptable quality, and which remains acceptable over the widest possible set of external factors. Therefore, in each particular situation, the following constructions are really meaningful.
The subset of situations in which all criteria satisfy the chosen thresholds, $F = \{(x, p) \in G : f_i(x, p) \le f_i^*,\ 1 \le i \le k\}$, will be called an effective region, or a region of efficiency.
Let us introduce a new concept, somewhat related to such concepts as durability, stability, insensitivity, etc. Because these terms are already taken and widely used, we will use the word 'tolerance', drawing on its meanings of 'patience', 'ability to accept', 'indifference'.
Let $T(x, F)$ denote the relative measure of the set of conditions $p$ for which the situation $(x, p)$ belongs to the effective region F. Keeping our problem in mind, we can regard the tolerance $T(x, F)$ as a characteristic of the ability of variant x to retain the values of the objective functions, within the initial region G, inside the limits specified by the vector $f^* = (f_1^*, \dots, f_k^*)$. The variant with the greatest tolerance is the pragmatic solution of the optimization problem.
Obviously, a decision variant with tolerance 1 is approximately optimal in the minimax sense. From a statistical point of view, tolerance can be interpreted as the probability that the variant keeps the situation within the effective region. Thus, the concept of tolerance makes it possible to generalize the minimax and statistical approaches to solving optimization problems with uncertain parameters.
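Over a finite sample of conditions, tolerance reduces to a counting estimate; a minimal sketch (the signature is illustrative) is:

```python
def tolerance(x, conditions, criteria, thresholds, constraints=(), bounds=()):
    """Fraction of sampled conditions p under which variant x stays effective:
    every criterion f_i(x, p) <= f_i* and every constraint g_i(x, p) <= g_i0."""
    def effective(p):
        return (all(f(x, p) <= t for f, t in zip(criteria, thresholds)) and
                all(g(x, p) <= g0 for g, g0 in zip(constraints, bounds)))
    return sum(effective(p) for p in conditions) / len(conditions)
```

A tolerance of 1.0 recovers the minimax guarantee on the sample; intermediate values are the probabilistic reading.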
Let us note that since set F is a part of region G, the tolerance $T(x, F)$ is a function not only of vector x, but also of the vectors $f^*$ and $g^0$, which define the criteria and functional limitations, respectively. If the functions $f_i$ and $g_i$ are continuous, then $T(x, f^*, g^0)$ is a continuous function. Unfortunately, the analytical calculation of tolerance for different variants of solutions is possible only for regions with a simple geometric structure, defined by algebraic equations of the first or second order. But we also need to find the variant with the greatest tolerance. From a formal point of view, we have two different problems: the problem of calculating the measure of a set and the problem of finding an optimum. We can try to solve both of these problems using uniformly distributed sequences [9][10][11].
We assume that we know how to construct the initial portions of uniformly distributed sequences $\{X_k\}$ and $\{P_k\}$ in blocks X and P; the points of these sequences give a discrete approximation of region G. We will use this approximation to calculate tolerance.
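The sequences of [9][10][11] are specific low-discrepancy constructions; purely as an illustration of the idea of uniformly distributed points in a block (not the sequences of the cited works themselves), here is a radical-inverse (van der Corput/Halton) sketch:

```python
def van_der_corput(k, base=2):
    """k-th element of the van der Corput sequence in the given base:
    reflect the base-b digits of k about the radix point."""
    q, denom = 0.0, 1.0
    while k > 0:
        denom *= base
        k, digit = divmod(k, base)
        q += digit / denom
    return q

def halton_point(k, bases):
    """One quasi-uniform point in the unit cube, one (coprime) base per coordinate;
    rescale coordinates to map the unit cube onto block X or P."""
    return [van_der_corput(k, b) for b in bases]
```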

Schemes for problem solving
There are three fundamentally different cases of the general formulation of the multi-objective optimization problem:
1. Neither the objective functions nor the restrictions depend on the conditions.
2. A part of the objective functions depends on the conditions and the other part does not.
3. All objective functions depend on the conditions.
Let us consider each case in detail and offer a scheme for solving the corresponding problem.

Case 1
This is the classical problem of optimization under certainty. We apply the method of solution proposed in [7].
In set G, which in this case coincides with the region of possible variants, we select N probe points and compute the values of all criteria at them, forming a table of tests. Looking through the table of tests, the DM assigns threshold values of the criteria, i.e. specifies the vector $f^*$. Once the thresholds are assigned, the effective points are selected, that is, the points satisfying the criteria constraints. It may happen that for a given N there is not a single effective point. This means that if the set of effective variants is not empty, its volume is of the order of $|G|/N$. In this situation, we can proceed in two ways: either change the vector $f^*$, easing some of the restrictions, or, if the restrictions are not to be changed, increase the number of probe points. If, after a repeated increase of N, the set of effective variants is still empty, there is reason to believe that the restrictions are incompatible. Of course, one cannot exclude that an effective point exists, but if this is the case, its neighborhood in which all the restrictions hold has a very small volume. In that case the variant of the solution corresponding to this point will be unstable, i.e. small violations of the tolerances will lead to a loss of efficiency.
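The probe-and-refine step of this scheme can be sketched as follows (the doubling policy and names are our illustrative choices; `sample(n)` stands for the first n points of a uniformly distributed sequence in G):

```python
def find_effective(sample, criteria, thresholds, n=128, max_n=1 << 16):
    """Case 1 scheme: probe with n points; if no point passes the thresholds f*,
    double n up to max_n before concluding the restrictions are incompatible."""
    while n <= max_n:
        eff = [x for x in sample(n)
               if all(f(x) <= t for f, t in zip(criteria, thresholds))]
        if eff:
            return eff
        n *= 2
    return []  # likely incompatible restrictions (or a vanishingly small effective set)
```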
Having found the set of effective variants, the DM can be satisfied with the progress, considering that an alternative solution of the problem has been found. However, the DM can go further by constructing the set of Pareto-optimal points among the effective ones and choosing from it.

Case 2
This is the most general case of multi-objective optimization problems. The vector of objective functions consists of two parts: criteria that do not depend on the conditions and criteria that do. In blocks X and P we construct sets $X_N$ and $P_K$ of sampling points. We form tables of tests for all f and g functions, computing them at the points of the set $X_N \times P_K$. We then assign the criteria and functional limitations. For the criteria that do not depend on the conditions we find the effective variants and choose the Pareto-optimal ones among them; for these we calculate the tolerance. The variant with the greatest tolerance is the solution.

Case 3
This is another extreme case, in some sense opposite to the first one. Here, all objective functions depend on the conditions.
In blocks X and P, sets $X_N$ and $P_K$ of sampling points are constructed. We form tables of tests for all f and g functions, calculating them at the points of the set $X_N \times P_K$. Then we assign the criteria and functional limitations. For each variant from the region of effective variants we calculate the tolerance. The variant with the greatest tolerance is the solution.
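The Case 3 scheme can be sketched as follows (the names are illustrative; in the paper the variants and conditions come from the uniformly distributed sequences in blocks X and P):

```python
def best_by_tolerance(variants, conditions, criteria, thresholds):
    """Case 3 scheme: for each variant, count the fraction of sampled conditions
    under which all criteria stay within the thresholds f*; return the most
    tolerant variant together with its tolerance."""
    def tol(x):
        ok = sum(all(f(x, p) <= t for f, t in zip(criteria, thresholds))
                 for p in conditions)
        return ok / len(conditions)
    best = max(variants, key=tol)
    return best, tol(best)
```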

Conclusion
So, in this paper we have presented a method for solving multi-objective optimization problems with mathematical models that contain many variables not regulated by the decision-maker. The technique is based on the newly introduced concept of the tolerance of a solution variant. Schemes for three different types of problems are presented. These procedures can be applied to a rather wide range of problems, but for problems with a large number of criteria a huge amount of computation may be required. To remove this restriction, it is possible to apply the technique of parallel computing presented in [12].