Resolution of Minimal Solutions for Max-Lukasiewicz Fuzzy Relation Equation by Fuzzy Neural Network

The fuzzy relation equation (FRE) was introduced by Sanchez in 1976 and has been extensively applied in fuzzy logic, approximate reasoning, intelligent control, artificial intelligence, and so on. It is well known that finding the minimal solutions of an FRE is an NP-hard problem because of its computational complexity, and many scholars have studied algorithms for this task. Considering that the max-Łukasiewicz composition operator is widely used, in this paper we investigate the resolution of minimal solutions of the max-Łukasiewicz fuzzy relation equation based on a fuzzy neural network and propose a novel algorithm. Through continuous downward iteration under conditional constraints, all minimal solutions of the FRE are finally obtained. Both the mathematical conclusions and a simulation experiment show that our algorithm is effective and valid.


Introduction
The fuzzy relation equation (FRE) has been extensively applied in many fields, such as fuzzy decision making, fuzzy control, transport systems, data compression, fault diagnosis, and medical diagnosis [1,2,3,4].
The FRE based on the max-min composition was introduced by Sanchez [5] in 1976. For the maximum solution of the FRE, some scholars have obtained meaningful conclusions [6,7]; for the minimal solutions, many researchers have investigated the problem and proposed different algorithms [8,9,10,11]. Because of the diversity of the minimal solutions, the resolution of the minimal solutions of an FRE is an NP-hard problem.
In 1994, fuzzy neural networks were introduced to solve FREs [12]. Li and Ruan [13,14] proposed novel neural algorithms to solve the maximum solution of an FRE based on the fuzzy δ rule. Since the max-Łukasiewicz operator has been widely applied in many fields, in this paper, inspired by Li and Ruan's δ rule, we investigate the minimal solutions of the FRE with the max-Łukasiewicz operator by means of a fuzzy neural network and propose a novel algorithm.
The rest of our work is organized as follows. In Section 2, we briefly recall some notions of FREs and fuzzy neural networks. In Section 3, we propose a novel algorithm for solving the FRE. In Section 4, we present some related mathematical conclusions and investigate the convergence of the proposed algorithm. In Section 5, we give a simulation experiment for the algorithm. The conclusion is given in the last section.

Preliminaries
To better understand the introduced algorithm, in this section we give some basic definitions.

Fuzzy Relation Equation
Definition 3 (binary fuzzy relation [5]). Let X and Y be nonempty sets. A binary fuzzy relation R between X and Y is a mapping R: X × Y → [0,1]; thus a binary fuzzy relation is a fuzzy subset of X × Y.

Definition 4 (fuzzy relation equation [5]). Given the deterministic domains X, Y, Z and the binary fuzzy relations A ∈ F(X × Y) and B ∈ F(X × Z), find a fuzzy relation W ∈ F(Y × Z) that satisfies A ∘ W = B, where ∘ in the equation represents the fuzzy composition operator; it is built from the t-norm and t-conorm of Definitions 1 and 2, respectively, that is, b_ik = ⋁_j (a_ij * w_jk).
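As a small self-contained illustration (our own sketch, not taken from the paper; all names are ours), the composition b_ik = ⋁_j (a_ij * w_jk) of Definition 4 can be coded with a pluggable t-norm, here the Łukasiewicz t-norm a * w = max{a + w − 1, 0} used throughout this paper:

```python
def lukasiewicz_tnorm(a, w):
    """Lukasiewicz t-norm: max{a + w - 1, 0}."""
    return max(a + w - 1.0, 0.0)

def compose(A, W, tnorm=lukasiewicz_tnorm):
    """Max-t composition B = A ∘ W: b_ik = max_j tnorm(a_ij, w_jk)."""
    m, p, q = len(A), len(A[0]), len(W[0])
    return [[max(tnorm(A[i][j], W[j][k]) for j in range(p))
             for k in range(q)] for i in range(m)]

A = [[0.75, 0.5], [1.0, 0.25]]
W = [[0.75], [1.0]]
print(compose(A, W))  # → [[0.5], [0.75]]
```

Passing `tnorm=min` instead would give the classical max-min composition of Sanchez.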

Fuzzy Neural Network
Here, we give a common neural network in Figure 1. If we take each sample of A as the input nodes, ignore all hidden layers, take each sample of B as the output layer, and replace the "×" and "+" of the usual intermediate operator by the t-norm "*" and the maximum "∨", respectively, then the weight matrix of this neural network is exactly the W we require.
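As an illustration (our own sketch, with illustrative names, not from the paper), the forward pass of one such fuzzy neuron, with the "×"/"+" of an ordinary neuron replaced by the Łukasiewicz t-norm and "∨", is simply:

```python
def fuzzy_neuron_forward(a, w):
    """Output of one fuzzy neuron: ∨_k max{a_k + w_k - 1, 0}.

    a: one input pattern, w: the connection weights to the output node.
    """
    return max(max(ak + wk - 1.0, 0.0) for ak, wk in zip(a, w))

print(fuzzy_neuron_forward([0.75, 0.5, 1.0], [1.0, 0.25, 0.5]))  # → 0.75
```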

Novel Algorithm
In this section we will present the objective and specific steps of the algorithm.

The Objective
Assume the fuzzy relation equation is Eq.(1).
Here, we regard the relations A and B as n different training samples (a_1, b_1), ..., (a_n, b_n), where a_j = (a_1j, a_2j, ..., a_pj)^T (j = 1, 2, ..., n) are fuzzy vectors and b_1, b_2, ..., b_n are the expected output samples, respectively. Our objective is to train on these samples in turn through the neural network so as to adjust the value of the weight W.

Algorithm
a. Initializing: W(0) = Ŵ, where W(0) represents the 0th set of weights and Ŵ is the maximum solution of Eq.(1).
b. For j = 1 to n:
Input the sample (a_j, b_j): a_j is the input pattern and b_j is the output pattern.
Step 1. Calculating the actual output:
(b_j)' = ⋁_{k=1}^{p} max{w_k(j−1) + a_kj − 1, 0},
where (b_j)' represents the actual output when the data pair (a_j, b_j) is being trained, w_k(j−1) is the connection weight from the kth input node to the output node after the (j−1)th sample, and a_kj is the kth component of the input pattern.
Step 2. Comparing (b_j)' with b_j:
Case 3. If (b_j)' < b_j, then let I_j = {i_1, i_2, ..., i_{p_j}} = {i | a_ij + w_i − 1 ≥ b_j, i = 1, 2, ..., p}; obviously I_j ≠ ∅, and we adjust the weights in B. B. Compare the current weights with the solutions already obtained; if a new branch arises, set w_{i_r}(j) = y_{i_r}(j−1), add it to the solution set, and go to the next branch. Else go to C.
C. Set y_{i_r}(j−1) := y_{i_r}(j−1) + η; if y_{i_r}(j−1) > (b_j)', move on to the next branch, else calculate
(b_j)' = ⋁_{k=1}^{p} max{y_k(j−1) + a_kj − 1, 0}
and return to step A, where ε represents the margin of error that we can accept and η represents the learning rate (step length).
c. When j = n, we obtain the result U(n) = {W_1(n), W_2(n), ..., W_{k_n}(n)}.
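The neural iteration above is only partially legible in our source, so the sketch below is a simplified, non-neural illustration of the same resolution idea under our own assumptions: start from the maximum solution Ŵ of the system ⋁_k max{a_kj + w_k − 1, 0} = b_j (j = 1, ..., n), branch over the candidate index sets I_j, and keep the pointwise-minimal weight vectors. The brute-force enumeration and all names are ours, not the paper's:

```python
from itertools import product

TOL = 1e-9

def tnorm(a, w):
    """Lukasiewicz t-norm: max{a + w - 1, 0}."""
    return max(a + w - 1.0, 0.0)

def output(col, w):
    """Actual output ∨_k max{a_kj + w_k - 1, 0} for one equation."""
    return max(tnorm(a, wk) for a, wk in zip(col, w))

def max_solution(A, b):
    """Component-wise maximum solution: w_hat_k = min_j min(1, b_j - a_kj + 1)."""
    p, n = len(A), len(b)
    return [max(0.0, min(min(1.0, b[j] - A[k][j] + 1.0) for j in range(n)))
            for k in range(p)]

def minimal_solutions(A, b):
    p, n = len(A), len(b)
    w_hat = max_solution(A, b)
    cols = [[A[k][j] for k in range(p)] for j in range(n)]
    if any(abs(output(cols[j], w_hat) - b[j]) > TOL for j in range(n)):
        return []  # the system has no solution at all
    eqs = [j for j in range(n) if b[j] > TOL]
    # I_j: indices whose weight in w_hat can still attain b_j exactly.
    cands = [[k for k in range(p) if A[k][j] + w_hat[k] - 1.0 >= b[j] - TOL]
             for j in eqs]
    found = set()
    for choice in product(*cands):
        w = [0.0] * p
        for j, k in zip(eqs, choice):
            w[k] = max(w[k], b[j] - A[k][j] + 1.0)
        if all(abs(output(cols[j], w) - b[j]) <= TOL for j in range(n)):
            found.add(tuple(w))
    # keep only the pointwise-minimal vectors
    return [list(w) for w in found
            if not any(v != w and all(vi <= wi + TOL for vi, wi in zip(v, w))
                       for v in found)]

A = [[1.0, 1.0], [1.0, 1.0]]   # A[k][j] = a_kj: two inputs, two samples
b = [0.5, 0.5]
print(sorted(minimal_solutions(A, b)))  # → [[0.0, 0.5], [0.5, 0.0]]
```

Both equations here can be satisfied by either weight, so the system has two distinct minimal solutions below the maximum solution [0.5, 0.5], which is exactly the branching the algorithm's Case 3 has to track.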
In addition, if w_i(t+1) > ŵ_i (where ŵ_i is the ith component of the maximum solution Ŵ), we reset w_i(t+1) = w_i(t) for every such i. By (i), (ii) and (iii) we always have W(t) ≤ W(t+1); that is, {W(t)} is a monotone increasing sequence.
Theorem 2. The training sequence {W(t)} in the algorithm of Section 3 is surely convergent.
Proof. By Theorem 1, the training sequence {W(t)} is monotone ascending, and we also know that {W(t)} is a bounded sequence, that is, O ≤ W(t) ≤ Ŵ, where Ŵ is the maximum solution of Eq.(1) and O is a matrix with all components being 0. So {W(t)} is surely convergent.
By step C in Section 3, for every j ∈ {1, 2, ..., n} we have a_j ∘ W ≤ b_j, that is, A ∘ W ≤ B; now we need to prove that the equality holds. If this is not true, then there exists j_0 ∈ {1, 2, ..., n} such that a_{j_0} ∘ W < b_{j_0}. Since Ŵ is the maximum solution of Eq.(1), there must be some i_0 such that a_{i_0 j_0} + ŵ_{i_0} − 1 ≥ b_{j_0}. In the process of the algorithm iteration, there must be a generation of the solution W(t) in which one element satisfies this condition.
By Theorem 1, the training sequence {W(t)} is monotone increasing, so w_{i_0}(t) ≤ w_{i_0}(t+1) for all t.
Combining the above, there exists i_0 such that y_{i_0} ≤ w_{i_0}(t) < b_{j_0} − a_{i_0 j_0} + 1.
Therefore Y does not satisfy the j_0th equation; that is, Y is not a solution of Eq.(1).
Proof. When η = 1, this algorithm is similar to Li's algorithm [16]; summing up, the conclusion is valid. It is easy to verify that every solution in U(n) satisfies the equation A ∘ W = B; furthermore, they are the whole set of minimal solutions. Now suppose t represents the number of iteration steps needed to reach a stable point; then t is related to the size of η. The results are shown in Table 1, where t_1 represents the number of iterations of Example 1. We find that the number of iterations increases as η decreases.

Conclusion
In this paper, we investigate the minimal solutions of the fuzzy relation equation with the max-Łukasiewicz operator and propose a novel algorithm based on a neural network; all minimal solutions of the FRE are obtained through an automatic iteration process. Finally, we give some theoretical mathematical results and a simulation experiment to illustrate that our algorithm is effective and valid.