Performance Comparison of Cuckoo Search and Differential Evolution Algorithm for Constrained Optimization

Cuckoo Search (CS) and Differential Evolution (DE) are robust meta-heuristic algorithms for solving constrained optimization problems. In this study, the performance of CS and DE is compared on constrained optimization problems from selected benchmark functions. The selection of the benchmark functions is based on active or inactive constraints and on the dimensionality of the problem (i.e. the number of solution variables). In addition, a specific constraint-handling technique and stopping criterion are adopted in the optimization algorithm. The results show that CS outperforms DE in terms of repeatability and the quality of the optimum solutions.


Introduction
The Cuckoo Search (CS) algorithm is a meta-heuristic optimization algorithm developed in 2009 [1], while the Differential Evolution (DE) algorithm was introduced in 1995 [2]. DE is considered a robust optimization algorithm, as has been demonstrated in many applications where its performance is compared with other algorithms such as particle swarm optimization [3] and the ant colony algorithm [4]. These optimization algorithms have gained popularity due to their simplicity [5] and their efficiency in solving many difficult problems [6,7].
In this paper, the performance of the CS and DE algorithms in solving constrained optimization problems is compared. Both CS and DE are considered robust optimization algorithms, with DE having been proposed earlier [3]. The constrained optimization problems used in this paper are taken from established benchmark functions and are combined with a specific constraint-handling technique.

The constrained optimization problem
The aim of constrained optimization is to find the optimal values of the solution variables (within the boundary) such that the objective function is optimized and all constraints are satisfied. To achieve this goal, the optimization algorithm evaluates the objective function while searching for solution variables in the bounded search space S. An optimal solution is then found in the feasible space F once the objective (minimum or maximum) is met, where F ⊆ S [8].
Assuming a minimization problem, the general constrained problem is defined as:
minimize f(x), x = (x_1, ..., x_n)
subject to g_i(x) ≤ 0, i = 1, ..., m
h_j(x) = 0, j = 1, ..., p
lb_k ≤ x_k ≤ ub_k, k = 1, ..., n
where x is the vector of solution variables, n is the dimension of the problem, m and p are the numbers of inequality and equality constraints respectively, and lb and ub are the specified lower and upper bounds for the solution variables.
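As a concrete illustration of this general form, the following sketch encodes a small hypothetical minimization problem (the objective, constraint, and bounds below are invented for illustration and are not one of the paper's benchmark functions):

```python
import numpy as np

def f(x):
    # Objective function f(x) to be minimized
    return x[0] ** 2 + x[1] ** 2

def g(x):
    # Inequality constraint g(x) <= 0; the point is feasible when g(x) <= 0
    return 1.0 - x[0] - x[1]

# Lower and upper bounds lb <= x <= ub
lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])

def is_feasible(x):
    # A candidate is feasible if it satisfies the constraint and the bounds
    return g(x) <= 0 and np.all(x >= lb) and np.all(x <= ub)
```

Here the feasible space F is the part of the box S = [-5, 5]^2 where x_1 + x_2 ≥ 1.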

Constraint handling
An efficient and adequate constraint-handling technique is a key element in the design of an optimization algorithm. Although the use of penalty functions is the most common constraint-handling technique, there are many other approaches for dealing with constraints [9]. A comprehensive discussion of constraint handling is presented in [10].
In constraint handling using the penalty approach, a penalty is added to the objective function to penalize an individual for constraint violation, so that the constrained optimization problem is effectively converted into an unconstrained one. The optimization might be inefficient with this technique due to the conversion and the handling of the boundary space.
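For contrast with the dynamic approach adopted below, a static penalty conversion can be sketched as follows (the penalty weight mu is a hypothetical choice; the paper does not use this technique):

```python
def penalized(f, ineq_cons, mu=1e3):
    # Static penalty: add mu times the total constraint violation to f(x).
    # mu is a hypothetical weight; tuning it is one source of inefficiency.
    def fp(x):
        violation = sum(max(0.0, g(x)) for g in ineq_cons)
        return f(x) + mu * violation
    return fp
```

A feasible point is returned unchanged; an infeasible one is pushed up by mu times its violation, which distorts the landscape near the constraint boundary.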
In this study, a dynamic constraint-handling approach is adopted in order to improve the efficiency, i.e. to reduce the computation time. This approach, called the dynamic-objective constraint-handling method (DOCHM), is adopted from the work of Lu and Chen [11].
By defining an auxiliary function Φ(x), the dynamic constraint handling converts the original problem into a bi-objective optimization problem (Φ(x), f(x)), where Φ(x) is treated as the first objective function and f(x) is the second (the main) objective.
The auxiliary function Φ(x) is used merely to determine whether or not an individual (candidate solution) is within the feasible region and how close it is to the feasible region. If an individual lies outside the feasible region, it takes Φ(x) as its optimization objective. Otherwise, the individual instead optimizes the real objective function f(x). If an individual leaves the feasible region during the optimization process, it once again optimizes Φ(x). Therefore, the optimizer has the ability to dynamically drive the individuals into the feasible region. The dynamic constraint handling can be summarized in the following pseudo-code: if Φ(x) > 0, optimize Φ(x) (the auxiliary objective function); otherwise, optimize f(x). The auxiliary objective function Φ(x) represents the distance of an individual (candidate solution), represented by the '*' marks, to the constraint-violation boundary, as illustrated in Figure 1.
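The selection logic above can be sketched in Python. The violation sum used for Φ(x) below is one common choice and is an assumption, since the paper defines Φ(x) only as a distance to the constraint-violation boundary:

```python
def violation(x, ineq_cons, eq_cons, tol=1e-4):
    # One common choice for the auxiliary objective Phi(x): the total
    # constraint violation, which is zero inside the feasible region.
    # (Assumption: the paper's exact distance measure may differ.)
    v = sum(max(0.0, g(x)) for g in ineq_cons)
    v += sum(max(0.0, abs(h(x)) - tol) for h in eq_cons)
    return v

def dynamic_objective(x, f, ineq_cons, eq_cons):
    # Outside the feasible region minimize Phi(x) (drive toward feasibility);
    # inside it, minimize the real objective f(x).
    phi = violation(x, ineq_cons, eq_cons)
    if phi > 0.0:
        return phi
    return f(x)
```

An individual that re-enters the infeasible region automatically switches back to minimizing Φ(x) on its next evaluation.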

Stopping Criterion
The objective of an optimization run (minimization or maximization) is usually clear, i.e. the global optimum should be found. However, it is not easy to decide when the execution of an optimization algorithm should be terminated. For practical applications, the choice of stopping criterion can significantly influence the computational time of an optimization process. With a poor choice, an optimization process might be terminated before the population has converged, or computational resources might be wasted because the optimization run takes longer than necessary [3].
In this study, the number of function evaluations (FE) in the feasible region is used as the stopping criterion, i.e. to terminate the optimization run. This FE counts how many times the objective function has been evaluated in the optimization loop while the constraint conditions are satisfied, i.e. evaluations in the feasible region.
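A minimal sketch of this stopping criterion, assuming a generic optimizer that proposes one candidate at a time (the function names here are hypothetical):

```python
def run_with_feasible_fe_budget(next_candidate, is_feasible, f, max_fe=100):
    # Loop skeleton: only evaluations made inside the feasible region
    # count toward the FE budget used as the stopping criterion.
    fe, best = 0, None
    while fe < max_fe:
        x = next_candidate()          # one candidate from the optimizer
        if not is_feasible(x):
            continue                  # infeasible candidates do not consume FE
        fe += 1
        fx = f(x)
        if best is None or fx < best:
            best = fx
    return best, fe
```

This makes the comparison between algorithms fair: both CS and DE are stopped after the same number of feasible-region evaluations.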

Benchmark functions
Three benchmark functions are selected for the constrained optimization experiments. These benchmark functions are taken from [12]. They vary from the simplest representation to a complicated form in terms of constraints and dimensionality (number of variables).

Results
A series of experiments on the selected benchmark functions has been conducted. The experiments evaluate the performance of the proposed approach for the constrained optimization problem using CS and DE. The program was written and executed in MATLAB 2008 on a laptop with an i3 processor and 4 GB of RAM. The results are presented in the following sections.

Optimization parameters
Prior to the optimization run, a number of algorithm parameters must be specified. In meta-heuristic optimization algorithms, the common parameter that all algorithms share is the population size. The remaining parameters differ from algorithm to algorithm depending on its complexity.
For the DE algorithm, the main parameters are F (the mutation factor) and CR (the crossover constant), in addition to the population size NP. A preliminary study has shown that F = 0.5 and CR = 0.9 give the optimum solution for NP = 50 [3]. Thus, these DE parameters are also used in this optimization for comparison with the CS algorithm.
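For reference, one generation of the classic DE/rand/1/bin scheme with these settings can be sketched as follows (an unconstrained sketch for clarity; the constraint handling described earlier would wrap the objective):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_generation(pop, f, F=0.5, CR=0.9):
    # One generation of DE/rand/1/bin with the paper's settings F=0.5, CR=0.9.
    NP, D = pop.shape
    out = pop.copy()
    for i in range(NP):
        # Pick three distinct individuals, all different from i
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                 # differential mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True            # at least one gene from the mutant
        trial = np.where(cross, mutant, pop[i])  # binomial crossover
        if f(trial) <= f(pop[i]):                # greedy one-to-one selection
            out[i] = trial
    return out
```

Because selection is greedy per individual, the population's best objective value never worsens from one generation to the next.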
For the CS algorithm, in addition to the population size n (the number of nests in this case), only one parameter needs to be set, i.e. the discovery rate pa. It is suggested that n = 15 to 25 and pa = 0.15 to 0.30 are sufficient for most optimization problems [13]. In this study pa = 0.25 is used, and the population size is set to n = 50 in order to have an unbiased comparison with the DE solution.
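The two CS ingredients that these parameters control can be sketched as follows: a Lévy-flight step (via Mantegna's algorithm, the common implementation) and the abandonment of a fraction pa of nests. This is a generic sketch, not the paper's exact implementation:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def levy_step(D, beta=1.5):
    # Levy-flight step via Mantegna's algorithm (beta = 1.5 is the usual choice)
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return u / np.abs(v) ** (1 / beta)

def abandon_nests(pop, pa=0.25):
    # A fraction pa of nest components is replaced (the discovery rate,
    # the only CS parameter besides the number of nests n)
    NP, D = pop.shape
    mask = rng.random((NP, D)) < pa
    step = rng.random((NP, D)) * (pop[rng.permutation(NP)] - pop[rng.permutation(NP)])
    return pop + mask * step
```

The heavy-tailed Lévy steps give occasional long jumps (global exploration), while nest abandonment with pa = 0.25 injects fresh solutions each generation.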

Optimization for the benchmark functions
Experiments on the constrained optimization of the selected benchmark functions have been carried out. The number of objective function evaluations (FE) in the feasible region is used as the stopping criterion of the optimization run, as mentioned earlier. The performance comparison between CS and DE is evaluated based on the optimal results of each objective function (the benchmark functions). This means that, with the same number of FE, the optimum value of f(x) for a number of independent optimization runs is recorded.
The statistical measures of the optimum solution over 20 independent runs (mean, best, worst and standard deviation) are presented in Table 1. It can be seen from the quality of the results that CS outperforms DE for all cases of dimensionality and constraints. CS produces better solutions, i.e. faster convergence. In addition, for the case of f1(x), where the constraints are inactive, CS gives the exact result with 100 evaluations of the function (FE = 100), while DE still could not reach the exact solution. In the next experiment, the number of FE is increased to 1000 in order to further investigate the capability of DE to reach the optimal solution.
The results in Table 2 are consistent with the previous experiment (Table 1), with the addition that DE can reach the exact solution for f1(x) and a nearly exact solution for f2(x). For CS, the solutions of f2(x) and f3(x) are improved. Furthermore, CS produced a slightly more robust solution (slightly better repeatability) for f3(x) compared to the DE approach.
In addition, this result confirms the advantage of CS over DE, as also reported in previous studies such as [14] and [15]. In the study in [15], it is mentioned that both CS and DE are more robust optimization algorithms than PSO (particle swarm optimization) and the ABC (artificial bee colony) algorithm.
However, it is shown that CS performs slightly better than DE. Another advantage of CS over DE, and perhaps over many other algorithms such as PSO and ABC, is that only one parameter needs to be adjusted, i.e. the discovery rate pa.

Conclusions
The performance of CS and DE for constrained optimization problems has been evaluated in several investigations. The analysis cases are adopted from established benchmark functions. The study demonstrates the effectiveness of the CS algorithm combined with the specific constraint-handling technique for constrained optimization problems. From the results, it can be concluded that the CS algorithm outperforms the DE algorithm in terms of convergence speed to the optimum solution for the same number of function evaluations (FE) in the feasible region, i.e. the number of times the objective function is evaluated in the feasible region.
In addition, another advantage of CS over DE, and perhaps over many other algorithms, is that only one parameter needs to be adjusted, i.e. the discovery rate pa. Meanwhile, DE needs at least two parameters to be set, i.e. the mutation factor F and the crossover constant CR.