Using a new line search method with gradient direction to solve nonlinear systems of equations

Line search techniques combined with Newton-type methods are among the most effective approaches for solving nonlinear systems of equations. These methods use gradient directions because they require low storage. In this paper, we suggest a new line search algorithm with a gradient direction for solving nonlinear systems of equations. The purpose of this algorithm is to reduce the number of iterations and function evaluations, thereby increasing the effectiveness of the approach. The global convergence of the algorithm is proved. The numerical results indicate the efficiency of the new algorithm and show that it is promising for solving nonlinear systems of equations.


Introduction
Nonlinear systems arise in various fields of computational science and geometry, in particular in the discretization of nonlinear partial differential equations, in eigenvalue problems, and in related applications.
To solve these nonlinear systems, many researchers have proposed algorithms that provide appropriate ways and means of addressing such problems.
Among the problems to be addressed are optimization problems; since convergence on these problems can be slow, we use a monotone strategy to solve them. Consider the nonlinear system of equations
$$F(x) = 0, \qquad (1)$$
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuous and monotone, i.e. $\langle F(x) - F(y),\, x - y \rangle \ge 0$ for all $x, y \in \mathbb{R}^n$. Nonlinear monotone equations of this type arise in different applications, for example as subproblems in general algorithms [1]; moreover, many monotone variational inequalities can be transformed into monotone nonlinear equations through a fixed-point map or a natural map [2].
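As a concrete illustration of the monotonicity condition above, a mapping built from a positive definite matrix plus a componentwise non-decreasing function is monotone. The specific $F$ below is our own toy example, not one from the paper; the sketch checks $\langle F(x)-F(y),\, x-y\rangle \ge 0$ numerically on random pairs:

```python
import numpy as np

# Hypothetical example: F(x) = A x + tanh(x), where A is symmetric positive
# definite and tanh is componentwise non-decreasing, so F is monotone on R^n.
def F(x):
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])      # symmetric positive definite
    return A @ x + np.tanh(x)

# Numerically check <F(x) - F(y), x - y> >= 0 on random pairs.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # allow a tiny tolerance for floating-point rounding
    assert np.dot(F(x) - F(y), x - y) >= -1e-12
print("monotonicity holds on all sampled pairs")
```

Here $\langle F(x)-F(y), x-y\rangle = (x-y)^{T}A(x-y) + \sum_i(\tanh x_i - \tanh y_i)(x_i - y_i) \ge 0$, so the check always passes.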
Newton's method, quasi-Newton methods, and their variants are important for solving smooth systems of equations because of their rapid local convergence rates [3,4]; line search techniques applied to suitable merit functions are also used to ensure global convergence [5,6].
In 1986, Griewank established global convergence results for Broyden's rank-one method for nonlinear equations [7]. In 1998, Solodov and Svaiter introduced a Newton-type algorithm for solving systems of monotone equations and, by using a hybrid projection method, showed that it converges globally [8]. In 1999, Li and Fukushima presented a Gauss-Newton-based BFGS method for solving symmetric nonlinear equations and proved its global convergence [9]. Because these methods need to compute and store a matrix, they are unsuitable for large-scale nonlinear equations [10]. To overcome this drawback, Nocedal proposed the limited-memory BFGS method (L-BFGS) for unconstrained minimization problems [11]; because of its low storage requirements, numerical results showed that this approach is very competitive [12].
Many methods and algorithms have also been proposed to solve nonsmooth systems of equations, such as the semismooth Newton method [13] and trust-region methods [14,15]; since these methods converge from initial guesses that need not be close to a solution, they are more attractive than others. Despite this attractiveness, they have some drawbacks, which become significant for large values of $n$: they need to solve a linear system of equations at each iteration using the Jacobian matrix or an approximation of it.
In 1988, Barzilai and Borwein introduced the spectral gradient method, a low-cost nonmonotone scheme for finding local minimizers [16]. In 2003, Cruz and Raydan extended this method to solve nonlinear equations; using some merit functions, they also made attempts to convert nonlinear equations into an unconstrained minimization problem [17].
In 2006, in order to address unconstrained nonlinear monotone equations, Zhang and Zhou introduced some important and effective adjustments to the methods listed in [18], adopting a new line search strategy with a new step length that takes the monotonicity of $F$ into account [19].
In this paper, without using any descent method or merit function, we suggest a new algorithm to solve (1); under suitable assumptions we prove that it is well defined and globally convergent to solutions of (1). We then use numerical results to compare the efficiency of the proposed method with the method in [19] and the SCGP method in [20]. We now state our algorithm.
Step 1. Determine the search direction by $d_k = -F(x_k)$.
Step 2. If $F(x_k) = 0$, stop; otherwise, go to Step 3.
Step 3. Determine the step length $\alpha_k = \beta^{w}$, where $w$ is the smallest non-negative integer satisfying
$$-F(x_k + \beta^{w} d_k)^{T} d_k \ge \sigma \beta^{w}\,\|F(x_k + \beta^{w} d_k)\|\,\|d_k\|^{2}, \qquad (3)$$
and find the trial point $z_k = x_k + \alpha_k d_k$.
Step 4. Compute
$$x_{k+1} = x_k - \frac{F(z_k)^{T}(x_k - z_k)}{\|F(z_k)\|^{2}}\,F(z_k).$$
Step 5. Set $k = k + 1$ and go to Step 1.

Assumptions:
($A_1$) The solution set of (1) is non-empty.
($A_2$) The mapping $F$ is monotone and Lipschitz continuous (LC); that is, there is a positive constant $L > 0$ such that
$$\|F(x) - F(y)\| \le L\,\|x - y\|, \qquad \forall x, y \in \mathbb{R}^{n}.$$
Note that the algorithm (KA) generates an infinite sequence $\{x_k\}$ if we assume that $F(x_k) \neq 0$ for all $k \ge 0$. We now introduce two ways of proving that the line search (3) is well defined.
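The steps above can be sketched in code. The parameter names (`beta`, `sigma`, `tol`) and the exact forms of the line search and the projection step are assumptions based on the standard derivative-free projection framework the paper builds on, not a verbatim transcription of (KA):

```python
import numpy as np

def ka_solve(F, x0, beta=0.5, sigma=1e-4, tol=1e-8, max_iter=1000):
    """Sketch of the projection-type algorithm (KA); parameter values and
    the line-search form are assumptions, not the paper's exact choices."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:          # Step 2: stopping test
            return x, k
        d = -Fx                                # Step 1: gradient direction
        # Step 3: backtracking line search for alpha_k = beta**w
        w = 0
        while True:
            alpha = beta ** w
            z = x + alpha * d                  # trial point z_k
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * np.linalg.norm(Fz) * (d @ d):
                break
            w += 1
        if np.linalg.norm(Fz) <= tol:          # trial point already solves (1)
            return z, k + 1
        # Step 4: project x_k onto the hyperplane {y : F(z_k)^T (y - z_k) = 0}
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    return x, max_iter

# Usage on a small monotone system F(x) = A x with A positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
sol, iters = ka_solve(lambda v: A @ v, np.array([5.0, -3.0]))
print(np.linalg.norm(A @ sol))
```

For monotone $F$, the projection in Step 4 guarantees that the distance to any solution never increases, which is the basis of the global convergence argument below.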

First:
Suppose that for some iteration index $k$, the line search (3) is not satisfied for any non-negative integer $w$, i.e.,
$$-F(x_k + \beta^{w} d_k)^{T} d_k < \sigma \beta^{w}\,\|F(x_k + \beta^{w} d_k)\|\,\|d_k\|^{2}, \qquad \forall w \ge 0.$$
Then we have a contradiction, since it is not possible for each of $-F(z_k)^{T} d_k$, $\|F(z_k)\|$ and $\|d_k\|$ to be less than zero; so the line search (3) is well defined.

Second: Suppose again that the line search (3) fails for every $w \ge 0$ at some iteration $k$. From (1) and (2) we have
$$F(x_k)^{T} d_k < 0, \qquad d_k \neq 0, \qquad \forall k \ge 0. \qquad (8)$$
Thus, from (6) and the value of $d_k$, letting $w \to \infty$ we obtain $-F(x_k)^{T} d_k \le 0$. Note that this last inequality contradicts (8), and thus the statement is proved.
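The limiting argument in the second way can be written out explicitly. The display below is a sketch that assumes the line search has the standard form used in the derivative-free projection framework, with $d_k = -F(x_k)$:

```latex
% Negating the line search for every w >= 0:
-F(x_k + \beta^{w} d_k)^{T} d_k
   < \sigma \beta^{w}\,\|F(x_k + \beta^{w} d_k)\|\,\|d_k\|^{2},
   \qquad \forall\, w \ge 0.
% Since F is continuous and \beta \in (0,1), we have \beta^{w} \to 0,
% so letting w \to \infty gives
-F(x_k)^{T} d_k \le 0.
% On the other hand, d_k = -F(x_k) and F(x_k) \ne 0 imply
-F(x_k)^{T} d_k = \|F(x_k)\|^{2} > 0,
% which yields the desired contradiction.
```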

Convergence property
We need the following lemma in order to obtain global convergence.

Lemma. Let $\{x_k\}$ and $\{z_k\}$ be the sequences generated by algorithm (KA), and let $F$ be (LC) and monotone such that the solution set of (1) is nonempty. Then for any $\bar{x}$ satisfying $F(\bar{x}) = 0$, we have
$$\|x_{k+1} - \bar{x}\|^{2} \le \|x_k - \bar{x}\|^{2} - \|x_{k+1} - x_k\|^{2}.$$
In particular, the sequence $\{x_k\}$ is bounded. Moreover, either $\{x_k\}$ is finite and the last iterate is a solution, or $\{x_k\}$ is infinite and $\|x_{k+1} - x_k\| \to 0$. Furthermore, the sequence $\{x_k\}$ converges to some $\bar{x}$ such that $F(\bar{x}) = 0$.

Proof. First, if the algorithm terminates at some iteration $k$, then $d_k = 0$ and hence $F(x_k) = 0$; in this case $x_k$ is a solution of (1). Now suppose that $d_k \neq 0$ and $F(x_k) \neq 0$ for all $k$; then we obtain an infinite sequence $\{x_k\}$. It follows from (3)
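The lemma's key inequality, $\|x_{k+1}-\bar{x}\|^2 \le \|x_k-\bar{x}\|^2 - \|x_{k+1}-x_k\|^2$ (Fejér monotonicity of $\{x_k\}$), can be observed numerically on a toy monotone system. The line search and projection forms below are assumptions following the standard framework, and the linear system is our own example:

```python
import numpy as np

# Toy monotone system F(x) = A x with solution xbar = 0; A positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
F = lambda x: A @ x

beta, sigma = 0.5, 1e-4        # assumed line-search parameters
x = np.array([5.0, -3.0])
xbar = np.zeros(2)

for k in range(30):
    d = -F(x)                  # gradient direction d_k = -F(x_k)
    w = 0
    while True:                # backtracking line search (assumed form)
        z = x + (beta ** w) * d
        Fz = F(z)
        if -Fz @ d >= sigma * (beta ** w) * np.linalg.norm(Fz) * (d @ d):
            break
        w += 1
    # projection step onto the hyperplane {y : F(z)^T (y - z) = 0}
    x_new = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    # Lemma: squared distance to the solution decreases by at least the
    # squared step length.
    lhs = np.linalg.norm(x_new - xbar) ** 2
    rhs = np.linalg.norm(x - xbar) ** 2 - np.linalg.norm(x_new - x) ** 2
    assert lhs <= rhs + 1e-9   # small tolerance for floating-point error
    x = x_new
print("lemma inequality verified for 30 iterations")
```

The inequality holds with no slack assumptions here: $x_{k+1}$ is the exact projection of $x_k$ onto a hyperplane separating $x_k$ from the solution set, which is exactly what the lemma exploits.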

Conclusion
Through the comparison technique and the numerical results presented in the table above for problems with different initial points, we conclude that the performance of the proposed algorithm (KA) is more efficient and effective than that of the algorithms used for comparison, in terms of the number of iterations and of function evaluations. In turn, it is possible to improve the behavior of the proposed new algorithm for solving nonlinear monotone equations, which requires no information about the Jacobian. Also, algorithm (KA) can find the best solution to problem (1), and its global convergence has been established without using any merit functions.