Resource Sharing and Management: Deadlocks and Deadlock Detection and Prevention Algorithms

Deadlocks

We can understand the notion of a deadlock from the following simple real-life example. To be able to write a letter one needs a letter pad and a pen. Suppose there is one letter pad and one pen on a table, with two persons seated around the table. We shall identify these two persons as Mr. A and Ms. B. Both Mr. A and Ms. B wish to write a letter, so both try to acquire the resources they need. Suppose Mr. A was able to get the letter pad. In the meantime, Ms. B was able to grab the pen. Note that each of them now holds one of the two resources needed to write a letter. If each holds on to the resource in hand and awaits the release of the resource held by the other, then neither of them can proceed. They are deadlocked. We can transcribe this example for processes seeking resources to proceed with their execution.

Consider an example in which process P1 needs three resources r1, r2, and r3 before it can make any further progress. Similarly, process P2 needs two resources, r2 and r3. Also, let us assume that once a resource is granted, permission to use it is not withdrawn until the holding process releases it. The processes proceed to acquire these resources. Suppose process P1 gets resources r1 and r3 and process P2 is able to get resource r2 only. Now we have a situation in which process P1 is waiting for process P2 to release r2 before it can proceed, while process P2 is waiting for process P1 to release resource r3 before it can proceed. Clearly, this situation is a deadlock, as neither process P1 nor process P2 can make progress. Formally, a deadlock is a condition that may involve two or more processes in a state such that each is waiting for the release of a resource currently held by some other process.
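
Either example is easy to reproduce in code. The sketch below is a minimal Python illustration of the letter-pad-and-pen scenario (the names pad, pen and writer are made up for this sketch). Each thread grabs one lock and then tries for the other with a timeout, so the run terminates and reports the wait instead of hanging; depending on scheduling the deadlock may or may not actually form, but typically both threads grab their first resource and then block on the second.

import threading

pad = threading.Lock()   # the letter pad
pen = threading.Lock()   # the pen

def writer(name, first, second):
    # Grab one resource, then try for the other.
    first.acquire()
    print(name, "got the first resource")
    # A timeout lets the demonstration terminate instead of
    # blocking forever once the deadlock has formed.
    if second.acquire(timeout=1):
        print(name, "got both resources and can write")
        second.release()
    else:
        print(name, "waits for a resource held by the other: deadlock")
    first.release()

a = threading.Thread(target=writer, args=("Mr. A", pad, pen))
b = threading.Thread(target=writer, args=("Ms. B", pen, pad))
a.start(); b.start(); a.join(); b.join()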

A graph model: In Figure 6.2 we use a directed graph model to capture the sense of deadlock. The figure uses the following conventions.

  • There are two kinds of nodes - circles and squares. Circles denote processes and squares denote resources.
  • A directed arc from a process node (a circle) to a resource node denotes that the process needs that resource to proceed with its execution.
  • A directed arc from a square (a resource) to a circle denotes that the resource is held by that process.

With the conventions given above, when a process has all the resources it needs, it can execute. This condition corresponds to the following.

  • The process node has no arcs directed out to a resource node.
  • All the arcs incident into this process node are from resource nodes.


Figure 6.2: A directed graph model.

In Figure 6.2, P1 holds r4 but awaits release of r1 to proceed with execution; P2 holds r1 but awaits release of r2 to proceed with execution; P3 holds r2 but awaits release of r3 to proceed with execution; P4 holds r3 but awaits release of r4 to proceed with execution. Clearly, all the four processes are deadlocked.
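
As a concrete illustration, the situation of Figure 6.2 can be written down directly as an adjacency list. The sketch below uses a Python dictionary, one possible representation among many: an arc from a process to a resource means the process requests it, and an arc from a resource to a process means the resource is held by that process.

# Directed graph of Figure 6.2 as an adjacency list.
# Process -> resource : the process requests that resource.
# Resource -> process : the resource is held by that process.
graph = {
    "P1": ["r1"], "r1": ["P2"],
    "P2": ["r2"], "r2": ["P3"],
    "P3": ["r3"], "r3": ["P4"],
    "P4": ["r4"], "r4": ["P1"],
}

Every node lies on the directed cycle P1 -> r1 -> P2 -> r2 -> P3 -> r3 -> P4 -> r4 -> P1, and that cycle is precisely the deadlock.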

Formally, a deadlock occurs when the following four conditions are present simultaneously.

  • Mutual exclusion: Each resource can be assigned to at most one process only.
  • Hold and wait: Processes hold a resource and may seek an additional resource.
  • No pre-emption: Processes that have been given a resource cannot be preempted to release their resources.
  • Circular wait: Every process awaits the release of at least one resource held by some other process.

Deadlock Avoidance: A deadlock requires all four of the above conditions (mutual exclusion, hold and wait, no pre-emption, and circular wait) to occur at the same time. An analysis and evaluation of the first three conditions reveals that these are necessary conditions. Note also that circular wait implies hold and wait. The question is: how does one avoid a deadlock? We shall next examine a few arguments. The first favours having multiple copies of resources. The second argues along preventive lines, i.e. it does not permit the conditions for a deadlock to occur. These arguments also bring out the importance of pre-emption.

The infinite resource argument: One possibility is to have multiple resources of the same kind. In that case, when one copy is taken by some process, there is always another copy available. Sometimes we may be able to break a deadlock by having just a few additional copies of a resource. In Figure 6.3 we show that there are two copies of resource r2. At the moment, processes P1 and P2 are deadlocked. When process P3 terminates, a copy of resource r2 is released. Process P2 can now have all the resources it needs and the deadlock is immediately broken. P1 will get r1 once P2 terminates and releases the resources it holds.

Figure 6.3: Breaking the deadlock with an additional copy of resource r2.

The next pertinent question is: how many copies of each resource do we need? Unfortunately, in theory, we need an infinite number of copies of each resource! Note that even in this example, if P3 is itself deadlocked, then the deadlock between P1 and P2 cannot be broken, and we would need yet another copy of resource r2. That clearly demonstrates the limitation of the multiple-copies argument.

Never let the conditions occur: A deadlock requires specific conditions to hold at the same time. This avoidance strategy simply says: do not let all of these conditions occur together. Let us analyze this a bit more deeply to determine whether we can indeed prevent these conditions from occurring at the same time. The first condition is mutual exclusion. Unfortunately, many resources do require mutual exclusion! So we must live with it and design our systems bearing in mind that mutual exclusion will have to be provided for. Next, consider the hold-and-wait condition. Since hold and wait is implied by circular wait, we may look at the possibility of preventing circular waits altogether; this may be doable by analyzing program structures, for instance by fixing the order in which resources are acquired (see the sketch below). Now let us examine pre-emption. It may not be the most elegant way to break a deadlock, but it works. Pre-emption is clearly enforceable in most, if not all, situations. Pre-emption releases resources, which can help some processes make progress and thereby break the deadlock. In fact, many real-time OSs require pre-emption for their operation. For example, when a certain critical condition arises, alarms must be set or raised. In some other cases an emergency process may even take over by pre-empting the currently running process.
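
One concrete way to rule out circular waits, as hinted above, is to impose a fixed global ordering on resources and require every process to acquire its resources in that order. The sketch below assumes Python threading locks and a made-up helper acquire_in_order; with a total order on resources, no cycle of waits can form.

import threading

# A fixed global ordering of resources: lower rank is always taken first.
RANK = {"r1": 1, "r2": 2, "r3": 3}
LOCKS = {name: threading.Lock() for name in RANK}

def acquire_in_order(needed):
    # Sort the requested resources by global rank and acquire in that order.
    ordered = sorted(needed, key=lambda r: RANK[r])
    for r in ordered:
        LOCKS[r].acquire()
    return ordered

def release_all(ordered):
    for r in reversed(ordered):
        LOCKS[r].release()

# Both processes of the earlier example ask for r2 and r3, but each always
# locks r2 before r3, so the circular wait can no longer arise.
held = acquire_in_order({"r2", "r3"})
# ... use the resources ...
release_all(held)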

6.1.1 A Deadlock Prevention Method

In the general case, we may have multiple copies of each resource, and processes may request multiple copies of a resource. Modeling such a scenario as a graph is difficult. In such a situation it is convenient to use a matrix model. We shall use Figure 6.4 to explain the matrix-based method. In Figure 6.4 we assume n processes and m kinds of resources. We denote the ith resource by ri. We now define two vectors, each of size m.

Vector R = (r1, r2, ..., rm), where ri is the number of resources of type i present in the system.

Vector A = (a1, a2, ..., am), where ai is the number of resources of type i presently available for allocation. Initially, with no allocations made, we have R = A. However, as allocations happen, vector A gets depleted; when processes terminate and release their resources, vector A is updated to show the additional resources that become available.

We also define two matrices to denote the allocations made and the requests for resources. There is a row for each process and a column for each resource. Matrix AM and matrix RM respectively hold the allocation and request entries. An entry ci,j in matrix AM denotes the number of resources of type j currently allocated to process Pi. Similarly, qi,j in matrix RM denotes the number of resources of type j requested by process Pi. This is depicted in Figure 6.4.

Below we state the three conditions which capture the constraints of the model. The first condition always holds. The second condition holds when the requests for resources exceed capacity; in that case not all processes can execute simultaneously.

1. c1,j + c2,j + ... + cn,j + aj ≤ rj for every resource kind j. This condition states that the resources of kind j allocated to all the processes, plus those still available, never exceed the total number of that kind present in the system.

2. q1,j + q2,j + ... + qn,j may exceed rj. This condition states that the requests for resources made by all the processes together may exceed what is available in the system.

3. In addition, we have the physical constraint ci,j ≤ qi,j for every process Pi and every resource kind j. This condition states that the allocation of resource j to a process is usually less than, and at best equal to, the request made by that process. The matrix model thus captures the scenario where n processes compete to acquire one or more copies of the m kinds of resources.
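
To make the notation concrete, the sketch below sets up R, A, AM and RM in plain Python for a hypothetical snapshot with n = 3 processes and m = 2 resource kinds (all numbers are invented) and checks the three conditions stated above.

# Hypothetical snapshot: n = 3 processes, m = 2 resource kinds.
R  = [4, 3]                 # total copies of each resource kind in the system
A  = [1, 1]                 # copies presently available for allocation
AM = [[1, 0],               # AM[i][j]: copies of resource j allocated to Pi
      [2, 1],
      [0, 1]]
RM = [[2, 1],               # RM[i][j]: copies of resource j requested by Pi
      [3, 2],
      [1, 2]]
n, m = len(AM), len(R)

# Condition 1: allocations of kind j plus what is still available
# never exceed the total the system owns.
assert all(sum(AM[i][j] for i in range(n)) + A[j] <= R[j] for j in range(m))

# Condition 2: the total requests may well exceed the system's capacity.
print([sum(RM[i][j] for i in range(n)) > R[j] for j in range(m)])   # [True, True]

# Condition 3: no process is allocated more than it requested.
assert all(AM[i][j] <= RM[i][j] for i in range(n) for j in range(m))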

Figure 6.4: The matrix model.

Deadlock Detection and Prevention Algorithms

In this section we shall discuss a few deadlock detection and prevention algorithms. We shall employ both the graph and the matrix models. We begin with the simple case in which there is one copy of each resource and allocations have to be made to a set of processes.

A graph-based detection algorithm: In our digraph model with a single copy of each resource, detecting a deadlock amounts to detecting a directed cycle in the process-resource digraph. This can be stated simply as follows.

  • Choose a process node as a root node (to initiate a depth first traversal).
  • Traverse the digraph in depth first mode.
  • Mark process nodes as we traverse the graph.
  • If a marked node is revisited then a deadlock exists.

In the above steps we are essentially trying to detect the presence of a directed cycle.
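
The traversal can be sketched in a few lines of Python (the function name has_cycle is made up). Instead of only marking visited nodes, the sketch keeps the set of nodes on the current depth-first path; this is a slightly stricter form of the marking described above and avoids a false alarm when two paths merely share a node.

def has_cycle(graph):
    # Depth-first traversal; a node revisited while still on the
    # current path closes a directed cycle, i.e. a deadlock.
    visited, on_path = set(), set()

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        for succ in graph.get(node, []):
            if succ in on_path:
                return True                  # back edge: cycle found
            if succ not in visited and dfs(succ):
                return True
        on_path.remove(node)
        return False

    return any(dfs(node) for node in graph if node not in visited)

# The process-resource graph of Figure 6.2 is deadlocked:
graph = {"P1": ["r1"], "r1": ["P2"], "P2": ["r2"], "r2": ["P3"],
         "P3": ["r3"], "r3": ["P4"], "P4": ["r4"], "r4": ["P1"]}
print(has_cycle(graph))                      # True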

Banker's Algorithm: This is a simple deadlock prevention algorithm. It is based on a banker's mindset: "offer services at no risk to the bank". It employs a policy of resource denial if it suspects a risk of deadlock. In other words, for a set of processes, resources are allocated only when doing so cannot lead to a deadlock. Recall that a deadlock can arise only when all four of the necessary conditions hold simultaneously; if an allocation risks creating such a state, the algorithm simply does not make that allocation.

The manner of operation is as follows: the request of process Pi is assessed to determine whether it can be met from the available resources of each kind, that is, whether RMi ≤ A. In that case, process Pi may be chosen to execute. In fact, the policy always chooses a subset of the processes which can be scheduled to execute without a deadlock.
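
A minimal sketch of this policy is given below (Python; the name is_safe and the example numbers are made up). It assumes RMi denotes the resources Pi still needs before it can finish: it repeatedly looks for some process whose outstanding need can be met from the available vector, pretends to run it to completion, reclaims its allocation, and continues. A requested allocation is granted only if the resulting state still passes this check.

def is_safe(A, AM, RM):
    # A  : copies of each resource kind currently available
    # AM : AM[i][j] = copies of resource j already held by process Pi
    # RM : RM[i][j] = copies of resource j that Pi still needs to finish
    avail = list(A)
    finished = [False] * len(AM)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(RM[i][j] <= avail[j] for j in range(len(avail))):
                # Pi can be run to completion; it then returns what it holds.
                for j in range(len(avail)):
                    avail[j] += AM[i][j]
                finished[i] = True
                progress = True
    return all(finished)        # True: some deadlock-free schedule exists

print(is_safe([1, 1], [[1, 0], [0, 1]], [[0, 1], [1, 1]]))   # True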

Let us now offer a critique of the algorithm.

1. If there are deadlocked processes, they shall remain deadlocked. The Banker's algorithm does not eliminate an existing deadlock.

2. The Banker's algorithm makes an unrealistic assumption. It stipulates that the resource requirements of processes are known in advance. This may hold in some cases, but there are processes which generate resource requests on the fly. These dynamically generated requirements may change during the lifetime of a process.

3. With multi-programming, the number of live processes at any one time may not be known in advance.

4. The algorithm does not stipulate any specific order in which the processes should be run. So in some situations, it may choose an order different from the desired order. Sometimes we do need processes to follow a specific order. This is true when the processes must communicate in a particular sequence (see synchronization example in Section 6.5).

5. Also, the algorithm assumes a fixed number of resources initially available on a system. This too may vary over time.

A matrix based deadlock detection method: When multiple copies of each kind of resource are available, we use the matrix model shown in Figure 6.4 to detect deadlocks. We analyze the requests of the processes (matrix RM) against the initially available copies of each resource (vector A). This is what the algorithm below indicates. Following the description of the algorithm there is a brief explanation of the same.

Procedure detect deadlock
begin
  ∀i set marked(i) = false. These flags help us identify the processes whose requirements can be satisfied.
  deadlockpresent = false
  All entries in matrix AM are initialized to zero.
  While there are processes yet to be examined do
  {
    Pick a process Pi whose requests have not been examined yet.
    For process Pi check: if RMi ≤ A then
    {
      allocate the resources;
      marked(i) = true
      Add the allocation made to row AMi
      Subtract this allocation from A to update A
    }
  }
  If ∀i marked(i) is true then deadlockpresent = false else deadlockpresent = true
end

We offer an alternative explanation of the above algorithm. Let us assume that processes P1 through Pn are to be allocated m kinds of resources. We begin with some tentative allocation of resources starting with, say, P1, and proceed in sequence. Now let us consider an intermediate step during which we are determining the allocation for process Pi. The row corresponding to process Pi in matrix RM denotes its resource requests, giving the number of each kind of resource requested by Pi. Recall that vector A denotes the resources presently available for allocation. Now, suppose the resource requests of process Pi can be met; in that case RMi ≤ A. This means the process could be scheduled to execute and no deadlock has manifested as yet. The allocation is then reflected in row AMi, and vector A is modified accordingly. Next, with the revised vector A, we try to meet the requirements of the next process, Pi+1, which is yet to receive its resources. If we can run exhaustively through the sequence, then we do not have a deadlock. However, at some stage we may find that a process's requirement exceeds the available resources. Suppose this happens when we attempt the allocation for process Pi+k; in that case process Pi+k remains unmarked. Recall that we mark the processes that obtain their allocation. So if all the processes are marked, there is no deadlock; if some set of processes remains unmarked, we have a deadlock. Notwithstanding the non-deterministic nature of this algorithm, it always detects a deadlock.

Like the Banker's algorithm, this algorithm does not help to eliminate an existing deadlock. Deadlock elimination may require pre-emption or release of resources. This may also result in a roll back in some transaction-oriented systems. This further reinforces pre-emption as an effective deadlock elimination strategy.
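
For completeness, here is a small Python rendering of the detection procedure above (the function name and the example numbers are made up). Following the text, each process is examined once: its full request RMi is granted out of the available vector A if possible, successful processes are marked, and any process left unmarked is reported as deadlocked.

def detect_deadlock(A, RM):
    # A  : copies of each resource kind available for allocation
    # RM : RM[i][j] = copies of resource j requested by process Pi
    n, m = len(RM), len(A)
    avail = list(A)
    marked = [False] * n
    for i in range(n):                            # examine each process once
        if all(RM[i][j] <= avail[j] for j in range(m)):
            for j in range(m):                    # grant the request, deplete A
                avail[j] -= RM[i][j]
            marked[i] = True
    return [i for i in range(n) if not marked[i]] # unmarked processes are deadlocked

# Example: with A = [3, 2], the requests of P0 and P2 can be met but P1's cannot.
print(detect_deadlock([3, 2], [[2, 1], [2, 1], [1, 0]]))   # prints [1]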
