A three-term derivative-free projection technique for solving nonlinear monotone equations

The derivative-free projection technique is an efficient method for solving nonlinear monotone equations. In this study, a three-term derivative-free projection method with a monotone line search technique is presented. The method is based on an extension of the conjugate gradient descent method and a developed gradient projection method for solving nonlinear systems of monotone equations. The proposed method can be applied to large-scale equations due to its limited memory requirement. We investigate the global convergence of the suggested approach without requiring that the mapping be differentiable or Lipschitz continuous. The numerical results show that the new algorithm is efficient and promising.


Introduction
We consider derivative-free projection techniques, which are among the most effective line-search methods, for solving the following nonlinear system of equations:

F(x) = 0, x ∈ Ω, (1.1)

where F : Ω ⊂ ℝⁿ → ℝⁿ is a continuous, nonlinear monotone function and Ω ≠ ∅ is closed and convex. Monotonicity means that ⟨F(x) − F(y), x − y⟩ ≥ 0 for all x, y ∈ ℝⁿ. Gradient projection techniques are efficient for finding solutions of large-scale unconstrained optimization problems due to their simplicity and limited memory requirement. Many computational methods have been proposed to solve unconstrained nonlinear problems, for example Newton's method, quasi-Newton methods and Levenberg–Marquardt type methods [1,2]. A good property of derivative-free methods for solving monotone equations is that they are competitive with conjugate gradient descent [3,4]. In this work, we develop a derivative-free projection method into a three-term derivative-free method with a monotone line search technique. Also, motivated by the idea of Liu [5], we construct a new three-term derivative-free projection method for solving large-scale systems of equations. The proposed approach can be used to solve large-scale systems of equations because it inherits the nice properties of conjugate gradient descent, such as low memory requirements and high efficiency. The paper is organized as follows: in Section 1 we review the conjugate gradient projection algorithm; in Section 2 we present our algorithm with a new line search; in Section 3 some lemmas and the global convergence are established; and in Section 4 we report the numerical experiments.
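To make the monotonicity condition ⟨F(x) − F(y), x − y⟩ ≥ 0 concrete, the following minimal Python sketch checks it numerically for a hypothetical monotone test mapping (the choice F(x) = x + sin(x), whose Jacobian diag(1 + cos(xᵢ)) is positive semidefinite, is ours, not the paper's):

```python
import numpy as np

def F(x):
    # Hypothetical monotone test mapping: F(x) = x + sin(x) (componentwise).
    # Its Jacobian, diag(1 + cos(x_i)), is positive semidefinite, so F
    # satisfies <F(x) - F(y), x - y> >= 0 for all x, y.
    return x + np.sin(x)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    # Monotonicity inequality, with a tiny floating-point slack.
    assert np.dot(F(x) - F(y), x - y) >= -1e-12
print("monotonicity inequality held on all sampled pairs")
```

Any continuously differentiable mapping with positive semidefinite Jacobian passes this check; a non-monotone mapping such as F(x) = −x would fail it immediately.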
The conjugate gradient descent (CGD) method is one of the important methods for solving unconstrained optimization problems and nonlinear equations, and it is characterized by its search direction. Liu and Li [6] considered a conjugate gradient technique of Hager–Zhang [7] and observed that the conjugate gradient descent method with the parameter set to 1 is more competitive than with the parameter set to 2. Also, Yan et al. [8] applied the spectral technique to analyze the conjugate gradient descent method and showed that the CGD method with the parameter equal to 1 performs better than with 2. In our method, we therefore choose the value 1 for this parameter in the proposed algorithm. The projection operator is a mapping P_Ω : ℝⁿ → Ω assigning to each point its closest point in Ω; for all x, y ∈ ℝⁿ it is nonexpansive, i.e. ‖P_Ω(x) − P_Ω(y)‖ ≤ ‖x − y‖ [9].
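The nonexpansiveness of the projection operator P_Ω can be illustrated with a minimal sketch; the box constraint set Ω = [−1, 1]ⁿ below is a hypothetical choice for which the projection is simply componentwise clipping:

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Projection onto the box Omega = [lo, hi]^n: clip each component.
    # This is a hypothetical instance of the operator P_Omega.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = 3 * rng.standard_normal(4), 3 * rng.standard_normal(4)
    # Nonexpansiveness: ||P(x) - P(y)|| <= ||x - y||.
    assert (np.linalg.norm(project_box(x) - project_box(y))
            <= np.linalg.norm(x - y) + 1e-12)
print("nonexpansiveness verified on all sampled pairs")
```

The same inequality holds for projection onto any nonempty closed convex set, which is what makes projection-type iterations stable.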

Algorithm
Given an initial point x₀, an iterative scheme for (1.1) generates a sequence {xₖ} by xₖ₊₁ = xₖ + αₖdₖ, where dₖ ∈ ℝⁿ is a search direction and a line search procedure along dₖ determines the step size αₖ. Let zₖ = xₖ + αₖdₖ. By the monotonicity of F, the hyperplane Hₖ = {x ∈ ℝⁿ | F(zₖ)ᵀ(x − zₖ) = 0} strictly separates xₖ from the solutions of problem (1.1). Based on this, Solodov and Svaiter [10] advised that the next iterate be constructed by projecting xₖ onto Hₖ, that is, xₖ₊₁ is determined by xₖ₊₁ = xₖ − (F(zₖ)ᵀ(xₖ − zₖ)/‖F(zₖ)‖²) F(zₖ). We assume that the following assumptions hold:
 B1 The solution set of (1.1) is nonempty.
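The hyperplane-projection scheme above can be sketched in a few lines of Python. This is a simplified Solodov–Svaiter style iteration, not the paper's three-term method: the direction d = −F(x), the test mapping F, and the line-search constants sigma and beta are all our hypothetical choices.

```python
import numpy as np

def F(x):
    # Hypothetical monotone test mapping with root x* = 0.
    return x + np.sin(x)

def hyperplane_projection_solve(x, beta=0.5, sigma=1e-4, tol=1e-6, max_iter=10000):
    """Simplified derivative-free hyperplane-projection iteration
    (Solodov-Svaiter style); uses d = -F(x) instead of a three-term direction."""
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        d = -Fx
        # Backtracking line search: find alpha with
        # -F(x + alpha d)^T d >= sigma * alpha * ||d||^2.
        alpha = 1.0
        while -np.dot(F(x + alpha * d), d) < sigma * alpha * np.dot(d, d):
            alpha *= beta
            if alpha < 1e-12:  # safety floor to avoid an endless loop
                break
        z = x + alpha * d
        Fz = F(z)
        # Project x onto the hyperplane H = {u | F(z)^T (u - z) = 0}.
        x = x - (np.dot(Fz, x - z) / np.dot(Fz, Fz)) * Fz
    return x, max_iter

x_star, iters = hyperplane_projection_solve(np.full(50, 2.0))
print(np.linalg.norm(F(x_star)) <= 1e-6)
```

Because the hyperplane strictly separates xₖ from the solution set, each projection step moves the iterate closer to every solution, which is the geometric heart of the global convergence argument.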

Remark (i):
We conclude from the corresponding definitions that the denominator in the direction formula is bounded below by ‖F(xₖ)‖² ≥ 0. This inequality is based on the monotonicity of the mapping F, and the divisors appearing in the direction are always positive before the algorithm terminates. The sufficient descent property of Algorithm (2.1) is shown in the next lemma.

Lemma:
Taking the inner product of (3.1) with F(xₖ) and using (3.2), and then applying the elementary inequality uᵀv ≤ ½(‖u‖² + ‖v‖²), we obtain the required estimate. For k = 0, we have F(x₀)ᵀd₀ = −‖F(x₀)‖². Thus (3.1) holds. □ The next lemma shows that the line search of the proposed algorithm is well-defined.
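The base case of the lemma can be verified directly: when the initial direction is the steepest-descent-like choice d₀ = −F(x₀), the sufficient descent inequality holds with equality. A minimal numerical check (the mapping F below is a hypothetical monotone example, not the paper's test problem):

```python
import numpy as np

# For k = 0, d_0 = -F(x_0), hence F(x_0)^T d_0 = -||F(x_0)||^2 exactly.
F = lambda x: x + np.sin(x)  # hypothetical monotone mapping
x0 = np.array([1.0, -2.0, 0.5])
d0 = -F(x0)
lhs = float(np.dot(F(x0), d0))
rhs = -float(np.linalg.norm(F(x0)) ** 2)
print(abs(lhs - rhs) < 1e-12)
```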

Remark (ii):
By Remark (i) together with (3.9) and (3.10), and then by (3.5) and the definition of the step size, we obtain an estimate that contradicts (3.11). Hence the assumption is not satisfied, and (3.8) holds. □

Numerical Experiments
Numerical results are used to assess the efficiency of the new approach (MOH3). We compare it with the following well-known algorithms: (GC), which was introduced by Yan et al. [8], and (HS), which was introduced by Liu and Li [6].
The parameters of the suggested algorithm are set to 0.8, 0.7, 0.5, 0.001 and 0.5, respectively. The parameters of the other methods come from [10,6,11]. All algorithms are terminated whenever ‖F(xₖ)‖ drops below the stopping tolerance, or when the total number of iterations exceeds 500000. Our computations were carried out using MATLAB R2014a on a PC with a 2.30 GHz CPU and 4 GB of RAM running the Windows 8 operating system. We test the performance of Algorithm (2.1) with different initial starting points [12] and various dimensions. As in [13,14], we test the problems with the number of variables n = 5000, 10000, … and the following starting points.
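The experimental grid can be sketched as follows. Since the paper's exact test problems and starting points ([12]–[14]) are not reproduced here, the mapping F(x) = eˣ − 1 (a classical monotone test function) and the listed starting points are hypothetical stand-ins:

```python
import numpy as np

def F(x):
    # Hypothetical monotone test mapping: F(x) = exp(x) - 1 (componentwise),
    # with unique root x* = 0.
    return np.exp(x) - 1.0

# Hypothetical dimensions and starting points, mimicking the n = 5000, 10000, ...
# grid described in the experiments.
dims = [5000, 10000]
starts = {
    "x1 = (1, 1, ..., 1)":   lambda n: np.ones(n),
    "x2 = (0.1, ..., 0.1)":  lambda n: 0.1 * np.ones(n),
    "x3 = (1/n, 2/n, ..., 1)": lambda n: np.arange(1, n + 1) / n,
}
for n in dims:
    for name, make in starts.items():
        x0 = make(n)
        print(n, name, "||F(x0)|| =", round(float(np.linalg.norm(F(x0))), 3))
```

Each (dimension, starting point) pair yields one run per algorithm; iteration counts, function evaluations and CPU time are then tabulated for comparison.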

Conclusions
We have suggested a new class of three-term derivative-free projection techniques for solving nonlinear monotone equations. The suggested approach is appropriate for large-scale equations because of its low memory requirement. The global convergence of our method is established. The numerical results show that our method is efficient and performs better than the other algorithms with which it was compared.