A robust algorithm for global optimization problems

In this paper, a global optimization algorithm, namely Kerk and Rohanin's Trusted Region (KRTR), is used to find global minimizers by employing an interval technique. With this technique, the algorithm can identify the regions where minimizers are located and will not get trapped at a local one; it is able to find the convex parts within a non-convex feasible region. The algorithm possesses the descent property and global convergence. The numerical results show that the algorithm has an outstanding capability in locating global minimizers.


Introduction
The ultimate goal of optimization is to obtain the best result under given circumstances. Many global optimization methods exist because no single method can solve all optimization problems efficiently [1]. However, some of these methods offer no guarantee that the solution obtained is a global one; they may get stuck at a local point [2].
In 2005, a global optimization algorithm called Homotopy Optimization with Perturbations and Ensembles (HOPE) was introduced by Dunlavy and O'Leary [3]. HOPE has shown excellent ability in locating the global minimizer, outperforming simulated annealing, the quasi-Newton method, and the Homotopy Optimization Method (HOM).
HOPE constructs an auxiliary function whose minimizer is known. That minimizer is perturbed in several directions with different lengths to generate new points, which are then taken as the initial points of local searches. All minimizers found are stored in an ensemble.
The steps mentioned above are repeated as the homotopy function deforms from the auxiliary function to the target function. Duplicate minimizers are not excluded unless they exceed the maximum number of points allowed in the ensemble. Hence, the ensemble keeps expanding throughout the computation, which makes HOPE computationally expensive.
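The perturb-and-search loop described above can be sketched as follows. This is an illustrative sketch only: the test function, the perturbation geometry (evenly spaced directions), and the plain gradient-descent local search are my own stand-ins, not the implementation of [3].

```python
import math

def local_search(f, grad, x0, step=0.05, iters=200):
    # Plain gradient descent as a stand-in for the local search used in HOPE.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return tuple(round(xi, 3) for xi in x)

def perturb_and_search(f, grad, minimizer, n_dirs=8, lengths=(0.4, 0.9)):
    # Perturb a known minimizer in several directions with different
    # lengths, run a local search from each generated point, and store
    # every minimizer found in an ensemble.
    ensemble = set()
    for k in range(n_dirs):
        angle = 2 * math.pi * k / n_dirs
        for r in lengths:
            start = (minimizer[0] + r * math.cos(angle),
                     minimizer[1] + r * math.sin(angle))
            ensemble.add(local_search(f, grad, start))
    return ensemble

# Illustrative test function with minimizers at (1, 0) and (-1, 0).
f = lambda x: (x[0]**2 - 1)**2 + x[1]**2
grad = lambda x: [4 * x[0] * (x[0]**2 - 1), 2 * x[1]]
ensemble = perturb_and_search(f, grad, (1.0, 0.0))
```

Note that storing results in a set already removes exact duplicates; in HOPE the ensemble is allowed to keep duplicates, which is the source of its growth.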
Besides that, the success rate in locating the global minimizer is highly dependent on the step size and the number of perturbations. A small step size and a large number of perturbations increase the chance of locating the global minimizer correctly, but they also increase the number of computational steps taken. However, increasing the number of computational steps does not promise a significantly better success rate [4]. The HSPM algorithm was later introduced in [5]. HSPM is a trusted-interval-based algorithm: it locates a minimizer within a trusted interval, and such an interval can be determined by the Intermediate Value Theorem (IVT). For details of HSPM, refer to [6].
Interval analysis is very useful in optimization. Alefeld and Herzberger [7] claimed that global optimization problems could be solved by using interval analysis with a guarantee that the computed bounds on the location and value of a solution are numerically correct. Solving optimization problems within intervals will produce the global optimum within the bounding region.
Kerk and Rohanin's Trusted Interval (KRTI) was introduced to improve the time complexity of HSPM [15]. KRTI was found to be able to convert a hard optimization problem into an easier one [16]. With the IVT as its interval technique, KRTI can reduce a large non-convex optimization problem into several convex optimization problems. The IVT also enables HSPM and KRTI to overcome the repetition problem of HOPE.
However, HSPM and KRTI are designed only for single-variable optimization problems, and hence an extended algorithm is needed to solve multi-variable optimization problems. In this paper, Kerk and Rohanin's Trusted Region (KRTR) is established to improve HOPE; it can also be considered an extension of KRTI.

Kerk and Rohanin's Trusted Region (KRTR)
For unconstrained optimization problems, there are three conditions that a minimizer must fulfill: the first-order necessary condition, the second-order necessary condition, and the second-order sufficient condition. These conditions state that a minimizer must have a zero gradient. However, a zero derivative does not guarantee that the solution found is a minimizer; it may instead be a saddle point or an inflection point [17]. KRTR is a gradient-based algorithm. It finds the areas containing the zeroes of the gradient of an objective function; such an area is called a trusted region (TR). In the KRTR algorithm, a TR must have negative derivative function values followed by positive ones. With this condition, KRTR can filter out the maximizers and also the saddle points.
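The negative-then-positive condition can be illustrated in one dimension. The sketch below is my own (the test function and step size are illustrative choices, not from the paper): scanning the derivative of f(x) = x^3 - 3x brackets the minimizer at x = 1, while the maximizer at x = -1 is rejected because the derivative there changes sign the other way.

```python
def trusted_intervals(dfdx, a, b, s):
    # Keep each subinterval [x, x + s] on which the derivative goes from
    # negative to positive; by the IVT it contains a zero of the
    # derivative, and the sign pattern rules out maximizers.
    out, x = [], a
    while x + s <= b:
        if dfdx(x) < 0 and dfdx(x + s) > 0:
            out.append((x, x + s))
        x += s
    return out

dfdx = lambda x: 3 * x**2 - 3   # derivative of f(x) = x**3 - 3x
intervals = trusted_intervals(dfdx, -2.0, 2.0, 0.8)
# A single interval is found, and it brackets the minimizer at x = 1;
# the positive-to-negative sign change around the maximizer is ignored.
```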
KRTR contains two main parts: the identification part and the local search part. KRTR initializes the algorithm with a homotopy loop. The identification part contains the preparation steps to identify the TRs; it consists of the nested loops inside the main homotopy loop. After all the TRs are assembled, a filtration step eliminates the repeated TRs. Next, the local search part locates a minimizer in each TR. The endpoint function values are calculated and stored in an ensemble together with all the minimizers found in the local search step. Then, the global solution is determined.
To deal with cases of more than two variables, say three, another loop needs to be added in the identification part; for four-variable cases, two more loops should be added, and so on. The algorithm of KRTR for two variables is stated below (Algorithm 1). In the identification part, a subinterval_m is kept when {x_i, y_j} gives a negative derivative function value while {x_{i+1}, y_j} gives a positive one, where m = 1, 2, ..., M; the subintervals subinterval_n are identified in the same way along the y direction. The remaining steps of Algorithm 1 are:
13 Filtration: subinterval_k = delete duplicate subintervals among subinterval_m and subinterval_n, where k = 1, 2, ..., K.
14 for k = 1, 2, ..., K do
15   let {x_i, y_j} be the initial point.
16   run a local search method on f(x, y) and store the solution found in an ensemble.
17   calculate the function values of the endpoints and add them to the ensemble.
18 Select the lowest function value as the value of f(x, y).
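The two-variable workflow can be sketched as follows. This is a simplified interpretation, not the authors' Mathematica implementation: the test function is my own illustrative choice, the identification condition is read as a negative-to-positive sign change in each partial derivative, and a plain gradient descent stands in for the local search.

```python
def krtr_2d(f, fx, fy, box, s, iters=300, lr=0.02):
    # Identification: nested loops over a grid of step s; a TR corner is
    # kept where both partial derivatives change sign from negative to
    # positive across the grid cell.
    (ax, bx), (ay, by) = box
    regions = set()          # the set performs the filtration (step 13)
    x = ax
    while x + s <= bx:
        y = ay
        while y + s <= by:
            if (fx(x, y) < 0 and fx(x + s, y) > 0 and
                    fy(x, y) < 0 and fy(x, y + s) > 0):
                regions.add((round(x, 6), round(y, 6)))
            y += s
        x += s
    # Local search (steps 14-17): gradient descent from each TR's corner,
    # collecting every solution in an ensemble.
    ensemble = []
    for x0, y0 in regions:
        px, py = x0, y0
        for _ in range(iters):
            px, py = px - lr * fx(px, py), py - lr * fy(px, py)
        ensemble.append(((round(px, 3), round(py, 3)), f(px, py)))
    # Step 18: the global solution is the lowest value in the ensemble.
    return min(ensemble, key=lambda t: t[1]) if ensemble else None

# Illustrative function with two valleys; the deeper one is near x = -1.
f  = lambda x, y: (x**2 - 1)**2 + 0.1 * x + (y - 0.5)**2
fx = lambda x, y: 4 * x * (x**2 - 1) + 0.1
fy = lambda x, y: 2 * (y - 0.5)
best = krtr_2d(f, fx, fy, box=((-2, 2), (-1, 2)), s=0.6)
```

Both valleys are identified as TRs, a local search is run in each, and the deeper valley (negative function value, near x = -1) is returned as the global solution.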

Convexity of KRTR
Convexity is vital in optimization since the defining line in optimization is not between linearity and nonlinearity, but convexity and non-convexity [18]. A problem formulated as a convex optimization problem can be solved reliably and efficiently [19].
In general, minimizing an arbitrary function is very difficult, but if the objective function to be minimized is convex, then things become considerably simpler since every local optimal solution is global if the objective function is convex [20].
In the search for sufficient conditions to guarantee maxima and minima, we are led to a class of functions called convex functions and a class of sets called convex sets [21]. In fact, many optimization problems are not convex, and it is difficult both to recognize a convex problem and to reformulate a problem so that it becomes one [19]. KRTR is an algorithm that is able to reduce a large non-convex optimization problem into several smaller convex problems.
Theorem 3.1 Let K ⊂ R be an interval, and let f be a real-valued function on K with a continuous second derivative. If f is convex on K, then f'' is nonnegative everywhere on K.
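Theorem 3.1 can be illustrated numerically with a finite-difference estimate of f'' (this check is my own sketch, not part of the paper): the convex function x^2 passes the nonnegativity test on [-2, 2], while x^3 - 3x fails it because its second derivative 6x is negative on part of the interval.

```python
def second_derivative(f, x, h=1e-4):
    # Central finite-difference estimate of f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def has_nonneg_second_derivative(f, a, b, n=200, tol=1e-6):
    # Necessary condition from Theorem 3.1: if f is convex on [a, b],
    # then f'' must be nonnegative throughout the interval.
    return all(second_derivative(f, a + i * (b - a) / n) >= -tol
               for i in range(n + 1))

convex_ok = has_nonneg_second_derivative(lambda x: x * x, -2, 2)
cubic_ok = has_nonneg_second_derivative(lambda x: x**3 - 3 * x, -2, 2)
```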
TR plays a leading role in KRTR: KRTR locates a minimizer within a TR. Each TR produces one local solution, and eventually the global solution can be identified among them. A TR can be determined by the Poincare-Miranda Theorem (PMT). A TR is the intersection of subinterval_n and subinterval_m, where these two subintervals are assumed to be convex due to their restrictions in Algorithm 1. Thus, TR is also a convex region by Theorem 3.2, which is taken from [19].
Theorem 3.2 The intersection of any collection of convex sets in R^n is convex.
Lemma 3.3 shows the convexity of a TR.
Lemma 3.3 Let TR be a trusted region, and let f be a real-valued function on TR. The function f is said to be convex on TR if and only if subinterval_n and subinterval_m are convex.
The conditions on subinterval_m and subinterval_n state that they must have negative derivative function values followed by positive ones. Hence, they satisfy the statement of Lemma 3.4, and thus subinterval_m and subinterval_n are convex.
According to Theorem 3.2, we can conclude that TR is convex since it is an intersection of subinterval_m and subinterval_n.

Global Convergence of KRTR
Convergence is not the only important characteristic of a method; however, it is what qualifies a method as a tool for solving optimization problems. To prove that a method exhibits global convergence, it must hold both the descent property and the closedness property [22].
Based on the statement above, we obtain the following Theorem 4.1. Since a TR is the intersection of subinterval_m and subinterval_n, and the intersection of any collection of closed sets is closed, we can conclude that TR is closed.
KRTR has the convexity property when the function f is defined over a TR. In other words, f on a TR can be illustrated as a valley, due to the restrictions in the identification part of Algorithm 1.
In such a landscape, KRTR selects the lower bound of the valley as the initial point (x_k, y_k), which is usually higher than the local minimizer of the valley. The initial point then moves towards the lowest position, which is the minimizer. The local search part iteratively yields a sequence of function values satisfying f(x_{k+1}, y_{k+1}) < f(x_k, y_k).
Thus, KRTR has the descent property. Therefore, KRTR is globally convergent.
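The descent property can be demonstrated with a small sketch (a generic gradient-descent local search on an illustrative convex valley of my own choosing, not the paper's exact local search): every recorded function value is strictly smaller than the previous one.

```python
def descent_sequence(f, grad, p0, lr=0.1, iters=30):
    # Record f along the local-search iterates; the descent property
    # means f(x_{k+1}, y_{k+1}) < f(x_k, y_k) at every step.
    p, vals = list(p0), []
    for _ in range(iters):
        vals.append(f(p))
        p = [pi - lr * gi for pi, gi in zip(p, grad(p))]
    vals.append(f(p))
    return vals

# Illustrative convex valley with its minimizer at (1, -0.5); the
# initial point (3, 2) sits on the valley's rim, above the minimizer.
f = lambda p: (p[0] - 1)**2 + 2 * (p[1] + 0.5)**2
grad = lambda p: [2 * (p[0] - 1), 4 * (p[1] + 0.5)]
vals = descent_sequence(f, grad, (3.0, 2.0))
```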

Numerical Results
The simulation results of KRTR are presented here. The test functions have diverse properties so that the performance of the tested algorithm can be compared efficiently [25], and some of them have a very low success rate in locating the global solution due to characteristics such as high multimodality [26]. All implementations were carried out in Mathematica version 11.1.1 on a laptop with a 2.5 GHz CPU and 8.00 GB of RAM. The details of each function and its suggested domain are stated below.

TF 15: Zakharov's Function
The global minimizer of this function is x* = (0, 0, ..., 0) with the minimum f* = 0 for the domain -5 <= x_i <= 5. This function can be generalized as
f(x) = sum_{i=1}^{k} x_i^2 + (sum_{i=1}^{k} 0.5 i x_i)^2 + (sum_{i=1}^{k} 0.5 i x_i)^4,
where k = 1, 2, ..., 20.
Table 1 shows the numerical results obtained by KRTR with the test functions (TF) discussed. The number of TRs found does not imply the number of local minimizers located: some TRs intersect within the same basin of attraction, which leads them to the same minimizer. KRTR successfully attained the global minimizer of each function. KRTR is able to locate a region that can be trusted to contain at least one minimizer; this region is called a trusted region (TR), and each TR produces one minimizer. The occurrence of repeated minimizers is caused by the intersection of TRs. Figure 1 presents a picture of the basins of attraction of a function, in which three trusted regions are determined. These three TRs intersect and eventually arrive at the same minimizer since they are located in the same basin of attraction.
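The Zakharov function is straightforward to implement; a minimal sketch, using the standard form of the function given above:

```python
def zakharov(x):
    # Zakharov test function: sum(x_i^2) + S^2 + S^4 with
    # S = sum(0.5 * i * x_i), i = 1..k; global minimum 0 at the origin.
    s1 = sum(xi * xi for xi in x)
    s2 = sum(0.5 * (i + 1) * xi for i, xi in enumerate(x))
    return s1 + s2**2 + s2**4

print(zakharov([0.0] * 10))   # 0.0 at the global minimizer
```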
The success rate in locating the global solution is highly dependent on the parameter s, the step size used to divide the domain into subintervals. A smaller s increases the success rate of locating the global solution, while a larger s reduces the computation time but does not guarantee that the global minimizer found is the exact solution. This is because KRTR determines the global solution only from among the local solutions it detects.
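The effect of s can be seen in a one-dimensional sketch (an illustrative function of my own, not one of the paper's test problems): for f(x) = x^4 - x^2, a step of 0.5 brackets both minimizers at x = +-1/sqrt(2), while a step of 1.5 finds only one of them.

```python
def count_trusted_intervals(dfdx, a, b, s):
    # Count subintervals of width s where the derivative changes sign
    # from negative to positive (candidate trusted intervals).
    n, x = 0, a
    while x + s <= b:
        if dfdx(x) < 0 and dfdx(x + s) > 0:
            n += 1
        x += s
    return n

dfdx = lambda x: 4 * x**3 - 2 * x   # derivative of f(x) = x**4 - x**2
fine = count_trusted_intervals(dfdx, -2.0, 2.0, 0.5)    # smaller step s
coarse = count_trusted_intervals(dfdx, -2.0, 2.0, 1.5)  # larger step s
```

The coarse scan misses one of the two basins entirely, so the global comparison at the end of the algorithm never sees that local solution.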

Conclusion
KRTR has been proven to hold the descent property and global convergence, and the simulations have shown its robustness in locating global minimizers. From the results, KRTR plays its role well as a global optimizer.
KRTR still faces the repetitive-minimizer problem, like HOPE, but KRTR locates the same minimizer more than once if and only if the TRs determined overlap. Hence, more research is needed so that the repetitive-minimizer problem of KRTR can be avoided. KRTR was developed to solve unconstrained optimization problems. We expect it can be extended to solve constrained optimization problems by using conditions on Riemannian manifolds [27].