An algorithm and model for improving deadlock avoidance and increasing resource-allocation efficiency in a cloud environment

Recently, cloud computing has become the most popular of the promising technologies that serve many users at the same time. All users can reach and use the resources provided by Virtual Machines (VMs). A user's processes and data must therefore reach the resources of a distributed system that provides dynamically scaled services and virtualized resources. Because resources are limited, deadlock may occur in such systems. In this paper, we design an algorithm that increases the efficiency of deadlock avoidance. Our technique uses the execution-time attribute of each process to obtain better resource allocation, in addition to a checking step that keeps the system in a safe, deadlock-free state.


Introduction
In recent years we have seen a sharp increase in the amount of data that needs to be processed. This has driven the development of new distributed-computing technologies that reduce the cost of processing such large volumes of data, and a great deal of research has been published in this direction; the resulting interest has pushed markets toward heavy investment in the field, leading to its rapid development [1]. Millions of users now depend on technologies such as cluster computing and cloud computing. This research focuses on cloud computing, which provides many types of resources and services. One type provides hardware such as CPU performance and memory capacity; this is called Infrastructure as a Service (IaaS) and is widely used (e.g., Amazon EC2) [2]. Other types of cloud service provide capacity on top of cloud resources: Platform as a Service (PaaS) and Software as a Service (SaaS) [3]. Furthermore, the resources must be managed by the cloud's resource-allocation management, which allocates VM resources dynamically using strategies that match the required Quality of Service (QoS) and keep the VMs' physical resources working at high performance [4]. Virtualization technology gives VMs the ability to use a cluster system and behave like separate physical hardware even though the hardware is shared [5]. Because resources are shared, deadlock is possible, and many algorithms have been designed to avoid this problem. However, all of these algorithms address the problem from one side only: keeping the system away from a deadlock state [6]. In our work we propose using the execution time of each process to keep the cloud responding quickly, to share and allocate resources efficiently, and to keep the system always in a safe state.

Algorithms for avoiding the deadlock
In the first part we review the deadlock-avoidance concepts already in use, surveying most of the previous studies. These studies concentrate on deadlock avoidance policies (DAPs) [7]; the system that controls this operation is the resource allocation system (RAS) [7]. Although these strategies and algorithms have developed complex methods to detect and deal with the deadlock problem in all types of systems, most of them still depend only on calculating the available resources and the number of requests that must be served. For example, to avoid deadlock a system may use a Petri Net (PN), one of the most effective ways to deal with resource failures [8]. Another popular method is load balancing, which is used at different levels of cloud computing [9]. The primary problem solved by this paper is deadlock avoidance in cloud computing while keeping a high level of Quality of Service (QoS), since all of these protective actions can add considerable delay to job execution times. We propose an improved algorithm that detects and avoids deadlock before it occurs in the VMs; the algorithm aims to eliminate processing delay while reaching a safe system state.

Deadlock model design
The distributed system can be modeled by mathematical logic. However, detecting deadlock directly from equations is not straightforward; instead we minimize the probability of deadlock and represent the VMs as a matrix VM_ij [9]. If we consider n VMs, with the total capacity of each denoted by C_ij, then the availability matrix is

A_ij = C_ij - R_ij,

where A is the available resources and R is the already-allocated resources. A new job request can be performed only if the need matrix of the requesting job is less than or equal to the available resource matrix:

Need_ij <= A_ij,

where the need entries are the amounts of resource the process requires. Graphical methods for detecting and solving the deadlock problem are the most common and efficient. In order to compare the results of existing algorithms with the proposed algorithm, we build a model example of the deadlock problem. Although our work deals with resource allocation in cloud computing, this technology uses the concept of a smart distributed system that provides different types of resource allocation and shares VMs. Cloud computing and the traditional distributed architecture share a similar function: a pool of resources accessed by many jobs requesting processing at the same time. In this case, the quality of the cloud depends on providing the best resource-allocation method for the number of processes submitted to the cloud [10]. In a heterogeneous distributed system there are asynchronous processors P_1, P_2, P_3, ..., P_k, k <= n. These processes have different capacities and different global memory, because the system acts as a shared-memory resource. The important part of such a system is the communication network between the resources themselves and the jobs that request them. This allows jobs to share the physical resources (CPU cores, amount of memory, etc.) and to be completed at minimum cost [5].
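The availability check above can be sketched in a few lines of Python. This is a minimal illustration only; the capacity and allocation values are hypothetical, not taken from any real cloud trace:

```python
# Sketch of the availability check A = C - R described above.
# C[i][j]: total capacity of resource j on VM i; R[i][j]: already allocated.
# All numeric values below are hypothetical illustration data.

def available(C, R):
    """Compute the availability matrix A_ij = C_ij - R_ij."""
    return [[c - r for c, r in zip(crow, rrow)] for crow, rrow in zip(C, R)]

def can_admit(need, A, vm):
    """A new job on VM `vm` may run only if its need vector is
    component-wise <= the available vector of that VM."""
    return all(n <= a for n, a in zip(need, A[vm]))

C = [[8, 4], [6, 2]]   # 2 VMs, 2 resource types (e.g. CPU cores, memory units)
R = [[5, 1], [6, 0]]   # currently allocated
A = available(C, R)    # -> [[3, 3], [0, 2]]

print(can_admit([2, 1], A, 0))  # True: the request fits VM 0's slack
print(can_admit([1, 1], A, 1))  # False: VM 1 has no CPU capacity left
```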
When a job holds a reservation for any resource in a VM, other jobs must wait until that resource becomes free. This can lead to a deadlock, just as in an operating system. To demonstrate the proposed algorithm, the system status and the deadlock problem are presented with simple standard graphs: the Resource Allocation Graph (RAG) and the Wait-For Graph (WFG) [11]. The WFG is a standard tool for deadlock detection. The proposed model assumes a number of jobs that each require a number of resources from a VM. Every job has an ID related to the ID of the VM whose resource it requires; the algorithm must therefore track all the attributes of the job and the VM status. Figure 1 also shows that two resources may be required by the same job. Simply put, deadlock may happen when a job requests a resource already held by another job that is still running: both processes end up waiting, and deadlock in the cloud has the same four conditions as deadlock in an operating system [11]:
1. Mutual exclusion: the resource is not sharable, or does not have enough space or energy to process more than one job at the same time.
2. Hold and wait: a process (job) holds one resource while waiting for another, but the resource it is waiting for is held by another process.
3. No preemption: only the holding process can release the resource; the resource cannot release itself.
4. Circular wait: the most common condition, in which each process waits for a resource held by the next.
The Resource Allocation Graph represents the deadlock problem and is not very different from the WFG, which focuses more directly on the waiting relation. Assume we have a set of processes, each denoted P_i, so the set is represented as

P = {P_1, P_2, ..., P_n}.

Suppose process P_i holds resource R_1 while waiting for resource R_2 to be released by another process, say P_{i+1}; at the same time, P_{i+1} is waiting for P_i to release R_1. This situation generates a never-ending circle of waiting, which is represented by the WFG. In the graph we can see how the circle can be broken by making some processes wait for a period of time for permission before their request: for example, in figure 3, if we delay the request of P_1 until P_2 releases R_1, and do the same with the request of P_3 for resource R_3, this gives P_2 time to release R_3, and we never enter the deadlock circle. However, while the system is avoiding deadlock in this way, the QoS of the cloud is affected; this problem is the main objective of the proposed algorithm.
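The circular wait in the WFG can be detected mechanically by a depth-first search for a cycle. The sketch below is a standard textbook technique, not part of the paper's algorithm, and the example edges are hypothetical:

```python
# Cycle detection in a Wait-For Graph (WFG) via depth-first search.
# An edge u -> v means process u is waiting for a resource held by v.

def has_cycle(wfg):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {p: WHITE for p in wfg}
    def dfs(u):
        color[u] = GRAY
        for v in wfg.get(u, []):
            if color.get(v, WHITE) == GRAY:    # back edge: circular wait
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color[p] == WHITE and dfs(p) for p in wfg)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
# Delaying P3's request removes the last edge and breaks the circle.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```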
To deal with the deadlock problem in virtualized computing there is more than one technique, including ignoring the deadlock, bypassing it, or resolving it after it occurs [12].

Proposed algorithm
In this algorithm, we propose an approach that can improve existing algorithms by reallocating resources efficiently, taking into consideration the execution time of every job request. First, assume a set of resources R = {R_1, R_2, ..., R_m}, where m is the number of resources available in the system. The capacity of each resource is represented by C_R, the number of instances of that resource R; together these form the matrix C_ij of the total system capacity. In other words, how many tasks a resource can hold at a time depends on the cloud capacity [13]. The proposed algorithm performs a requested task without ignoring the execution time t that every process P needs before it releases the resource R. Each process in the job queue must declare its execution time (the time until it releases the resource) before submission to the system. Based on this declaration, our algorithm uses the concept of Shortest Job First (SJF) [11], but not the SJF algorithm itself, because deadlock avoidance must still use a resource-allocation algorithm that keeps the system in a safe state. So if two processes P from the job queue request resources R_1 to R_m ∈ R, our algorithm checks that each request fits the capacity (P <= C_R) and compares the execution times of both to make sure the next job submission will not cause a deadlock; otherwise the job is set to waiting status in a temporary queue, the system submits another request, and performs the same audit. Choosing submission jobs in this way guarantees the QoS, and all requesting processes will be able to use the resources. As a result of our technique, the total execution time for all requests is shorter and free of deadlock, and the technique can be combined with any resource-allocation algorithm the cloud uses, as shown in figure 4.
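The selection policy just described — capacity check first, then shortest declared execution time, with the rest held in a temporary queue — can be sketched as follows. The function name, job tuples, and capacity value are hypothetical illustration choices, not the paper's notation:

```python
# Sketch of the proposed selection step: among the pending requests that
# fit the current capacity, admit the one with the shortest declared
# execution time t(P); all others stay in the temporary queue TQ.

def select_next(pending, available):
    """pending: list of (job_id, need, t) tuples; available: free units.
    Returns (admitted job or None, remaining temporary queue)."""
    fitting = [job for job in pending if job[1] <= available]
    if not fitting:
        return None, list(pending)                  # everything waits in TQ
    chosen = min(fitting, key=lambda job: job[2])   # shortest t(P) first
    tq = [job for job in pending if job[0] != chosen[0]]
    return chosen, tq

pending = [("P1", 3, 9.0), ("P2", 2, 4.0), ("P3", 5, 1.0)]
job, tq = select_next(pending, available=4)
print(job)  # ('P2', 2, 4.0): P3 is shortest but does not fit; P2 beats P1
```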
In this paper we improve the Banker's algorithm [14] and the load-balancing method by adding an attribute of the process itself as an important argument. This attribute plays a significant role in determining the priority of resource allocation and job execution in a cloud environment.

Improved Banker's algorithm
As a practical example of the proposed approach, we apply it to the Banker's algorithm, one of the fastest ways to avoid deadlock; it uses the strategy of banks, keeping resources in reserve [11]. The improved listing (partially recoverable) is:

1. Input: n ← number of processes
2. m ← number of resources in VM
AvelV[ ] ← the available resources in the initial state
TQ[ ] ← temporary queue to save the IDs of waiting job requests
Begin
...
Step 3: if there is more than one resource request, start with the minimum t(P_i); for all P, arrange in ascending order of t(P_i)
...
When a resource becomes free, select the waiting request with the minimum t(P_i)
Go to step 4
Step 6: the system is in a safe state
End
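A runnable sketch of the safety check behind the listing above, with the single modification the paper proposes: when several requests could be granted, processes are tried in ascending order of their declared execution time t(P_i). The vectors below are hypothetical, not the values of table 1:

```python
# Banker's-style safety check: a state is safe iff there exists an order in
# which every process can obtain its remaining need and then terminate.
# Modification sketched here: candidates are tried in ascending t(P_i).

def safe_sequence(available, allocation, need, t):
    work = list(available)
    done = [False] * len(allocation)
    order = []
    by_time = sorted(range(len(allocation)), key=lambda i: t[i])  # SJF order
    progress = True
    while progress:
        progress = False
        for i in by_time:
            if not done[i] and all(n <= w for n, w in zip(need[i], work)):
                # process i can finish: it releases its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                done[i] = True
                order.append(i)
                progress = True
    return order if all(done) else None  # None: no safe sequence exists

available  = [3, 3]
allocation = [[0, 1], [2, 0], [3, 0]]
need       = [[6, 3], [1, 2], [0, 2]]
t          = [5.0, 2.0, 9.0]
print(safe_sequence(available, allocation, need, t))  # [1, 2, 0]
```

Note that ordering candidates by t(P_i) changes only which safe sequence is found first, not whether one exists: the classical safety property is independent of the order in which finishable processes are picked.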

Implementing the proposed algorithm in the load-balancing method
As a second practical example, we apply the approach to load balancing, which is a very efficient method of resource allocation in distributed systems, and obtain an efficient dynamic load-balancing scheduling algorithm.

Existing algorithm using load balancing
1. Input: n ← number of processes
2. m ← number of resources in VM
Step 1: receive request P_i at the Data Control Centre
Step 2: IF VMID = 0 then Begin
Set P_i to execute in the VM
Else the Data Control Centre checks: IF P_i has completed, set VMID = 0 and go to step 2
Step 3: the load balancer receives a new process and allocates it to an available VM (VMID = 0)
Step 4: IF more than one VMID = 0 and more than one P_i, the load balancer chooses the first-arrived process to be served by the first available VM
Step 5: go to step 2 until P_i = NULL

Proposed algorithm using load balancing

Step 2: IF VMID = 0 then Begin
Set P_i to execute in the VM
Else the Data Control Centre checks: IF P_i has completed, set VMID = 0 and go to step 2
Step 3: the load balancer receives a new process and allocates it to an available VM (VMID = 0)
Step 4: set P_i in TQ with its execution time (t) and go to step 1
Step 5: IF more than one P_i is stored in the TQ Begin
Step 6: the load balancer chooses the process with the shortest execution time to be served by the first available VM End IF End IF
Step 7: go to step 2 until P_i = NULL

In simple terms, under the proposed algorithm all requests are served in a shorter overall time, because a resource (for example, the CPU) returns to maximum available capacity faster when the shortest-execution-time request is served first.
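The proposed loop can be sketched as a small event-driven simulation: whenever a VM becomes free (VMID = 0), it pulls the waiting process with the shortest declared execution time from TQ. The VM count and process times are hypothetical:

```python
import heapq

# Event-driven sketch of the proposed load balancer: free VMs serve the
# waiting process with the shortest execution time t first.

def run(processes, num_vms):
    """processes: list of (job_id, t); returns the completion order."""
    tq = sorted(processes, key=lambda p: p[1])  # TQ ordered by t, shortest first
    busy = []                                   # min-heap of (finish_time, job_id)
    clock, finished = 0.0, []
    while tq or busy:
        while tq and len(busy) < num_vms:       # allocate all free VMs (VMID = 0)
            job_id, t = tq.pop(0)
            heapq.heappush(busy, (clock + t, job_id))
        clock, job_id = heapq.heappop(busy)     # advance to next VM release
        finished.append(job_id)
    return finished

jobs = [("P1", 8.0), ("P2", 2.0), ("P3", 5.0)]
print(run(jobs, num_vms=1))  # ['P2', 'P3', 'P1']: shortest job served first
```

With one VM this reproduces the classical result that shortest-job-first minimizes mean waiting time, which is the intuition behind the paper's choice of ordering.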

Discussion of the results
We assume a numerical example of the algorithm with the following configuration. In table 1 we can observe that processes can keep requesting resources continuously, as happens in cloud computing. At the beginning we perform resource allocation using the standard Banker's algorithm to avoid deadlock. After applying the algorithm we get the safe sequence P_0, P_2, P_4, P_3, P_1 for this table; this sequence keeps the system in a safe state, but it ignores the execution time of each process.
In the proposed algorithm, we start with the shortest job first; if it does not pass the Banker's check, it is saved into the queue, and the execution times of all processes, including those waiting in the queue, are then compared. Applying this technique to table 1, process P_1 waits in the queue, a comparison is made between P_1, P_3, and P_4, and the safe sequence becomes P_0, P_2, P_1, P_4, P_3. We notice that the waiting time of some processes is reduced and the total time of the whole system is reduced. This is very useful for a cloud-computing system with an unlimited stream of requests P_0 to P_k, k = ∞: the system serves the shorter request as soon as it arrives, after checking for deadlock, and in this way the total efficiency of the system increases because resources are freed as soon as possible to accept more requests.

Conclusion and future work
The response of resources in cloud computing is important because it increases the capability and speed of task processing; sometimes the efficiency of the whole system is measured only by processing speed. In the structure of a distributed system, resource allocation is one of the main processes; our proposed algorithm therefore provides a new way to allocate resources to requesting processes by finding the optimal way to choose the request to serve first. Since this type of system deals with multitasking, the problem of deadlock is likely to occur. Our work focuses on solving this problem while preserving the QoS of the cloud by taking timing into consideration. In future work, we suggest adding other process attributes, such as the importance of a request, and using parallel processing when the cloud has enough resources for more than one process request.