Deadlock Prevention

 For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock. We elaborate on this approach by examining each of the four necessary conditions separately.

Mutual Exclusion

The mutual-exclusion condition must hold for nonsharable resources. For example, a printer cannot be simultaneously shared by several processes. Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good example of a sharable resource. If several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable resource. In general, however, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically nonsharable.
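The contrast between sharable and nonsharable resources can be sketched with Python threads (the data, thread counts, and the use of a mutex to stand in for a printer are all illustrative):

```python
import threading

# A read-only resource (like a read-only file's contents) is sharable:
# any number of threads may access it concurrently without locking.
READ_ONLY_DATA = tuple(range(100))

# A printer is intrinsically nonsharable: model it with a mutex so that
# only one thread at a time may hold it.
printer_lock = threading.Lock()
printed = []

def reader(results, i):
    # No lock needed: reading immutable data cannot conflict.
    results[i] = sum(READ_ONLY_DATA)

def print_job(job_id):
    # Mutual exclusion is required here; the lock serializes access.
    with printer_lock:
        printed.append(job_id)

results = [None] * 4
threads = [threading.Thread(target=reader, args=(results, i)) for i in range(4)]
threads += [threading.Thread(target=print_job, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four readers proceed in parallel with no waiting, while the print jobs are serialized by the lock; only the latter kind of resource can participate in a deadlock.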

Hold and Wait

To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that, whenever a process requests a resource, it does not hold any other resources. One protocol that can be used requires each process to request and be allocated all its resources before it begins execution. We can implement this provision by requiring that system calls requesting resources for a process precede all other system calls. An alternative protocol allows a process to request resources only when it has none. A process may request some resources and use them.

 Before it can request any additional resources, however, it must release all the resources that it is currently allocated. To illustrate the difference between these two protocols, we consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer. If all resources must be requested at the beginning of the process, then the process must initially request the DVD drive, disk file, and printer. It will hold the printer for its entire execution, even though it needs the printer only at the end. The second method allows the process to request initially only the DVD drive and disk file. It copies from the DVD drive to the disk and then releases both the DVD drive and the disk file. The process must then again request the disk file and the printer. After copying the disk file to the printer, it releases these two resources and terminates.

Both these protocols have two main disadvantages. First, resource utilization may be low, since resources may be allocated but unused for a long period. In the example given, for instance, we can release the DVD drive and disk file, and then again request the disk file and printer, only if we can be sure that our data will remain on the disk file. If we cannot be assured that they will, then we must request all resources at the beginning for both protocols. Second, starvation is possible. A process that needs several popular resources may have to wait indefinitely, because at least one of the resources that it needs is always allocated to some other process.
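The two protocols can be sketched as follows. The resource names and the `acquire_all` helper are illustrative; note in particular that a real implementation of the first protocol would grant the whole request atomically, which a simple loop over locks does not:

```python
import threading

dvd, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()
log = []

def acquire_all(*locks):
    # Simplification: a real system grants the request all-or-nothing.
    for lk in locks:
        lk.acquire()

def release_all(*locks):
    for lk in locks:
        lk.release()

def job_protocol1():
    # Protocol 1: request every resource before execution begins.
    # The printer is held for the entire run, even though it is
    # needed only at the end.
    acquire_all(dvd, disk, printer)
    log.append("copy+sort+print")
    release_all(dvd, disk, printer)

def job_protocol2():
    # Protocol 2: a process may request resources only when it holds none.
    acquire_all(dvd, disk)
    log.append("copy")
    release_all(dvd, disk)          # release everything before the next request
    acquire_all(disk, printer)
    log.append("sort+print")
    release_all(disk, printer)

job_protocol1()
job_protocol2()
```

The second variant improves printer utilization, but it is only safe if the data are guaranteed to remain in the disk file between the two requests, as the text notes.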

No Preemption

The third necessary condition for deadlocks is that there be no preemption of resources that have already been allocated. To ensure that this condition does not hold, we can use the following protocol. If a process is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all resources currently being held are preempted. In other words, these resources are implicitly released. The preempted resources are added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.

Alternatively, if a process requests some resources, we first check whether they are available. If they are, we allocate them. If they are not, we check whether they are allocated to some other process that is waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is waiting, some of its resources may be preempted, but only if another process requests them.

 A process can be restarted only when it is allocated the new resources it is requesting and recovers any resources that were preempted while it was waiting. This protocol is often applied to resources whose state can be easily saved and restored later, such as CPU registers and memory space. It cannot generally be applied to such resources as printers and tape drives.
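The first protocol, in which a process preempts its own holdings when a request cannot be satisfied immediately, might be sketched like this (an illustrative simplification: real preemption would also save and restore resource state, and would block the process rather than spin):

```python
import threading

def acquire_all_or_none(locks):
    """All-or-nothing acquisition in the spirit of the preemption
    protocol: if any requested lock is unavailable, release everything
    taken so far (the implicit preemption of our own holdings) and
    start the whole request over."""
    while True:
        taken = []
        for lk in locks:
            if lk.acquire(blocking=False):
                taken.append(lk)
            else:
                # A resource is busy: preempt (release) what we hold...
                for held in taken:
                    held.release()
                break            # ...and retry the entire request
        else:
            return taken         # we proceed only with all resources held
```

Because it retries in a busy loop, this sketch can livelock under contention; a real system would suspend the process and restart it only when its old resources and the new ones can all be granted together, exactly as the protocol above requires.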

Circular Wait

The fourth and final condition for deadlocks is the circular-wait condition. One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration. To illustrate, we let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource type a unique integer number, which allows us to compare two resources and to determine whether one precedes another in our ordering. Formally, we define a one-to-one function F: R → N, where N is the set of natural numbers.

For example, if the set of resource types R includes tape drives, disk drives, and printers, then the function F might be defined as follows: F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12. We can now consider the following protocol to prevent deadlocks: each process can request resources only in an increasing order of enumeration. That is, a process can initially request any number of instances of a resource type, say Ri. After that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri). If several instances of the same resource type are needed, a single request for all of them must be issued. For example, using the function defined previously, a process that wants to use the tape drive and printer at the same time must first request the tape drive and then request the printer.

Alternatively, we can require that, whenever a process requests an instance of resource type Rj, it has released any resources Ri such that F(Ri) ≥ F(Rj). If these two protocols are used, then the circular-wait condition cannot hold. We can demonstrate this fact by assuming that a circular wait exists (proof by contradiction). Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where Pi is waiting for a resource Ri, which is held by process Pi+1. (Modulo arithmetic is used on the indexes, so that Pn is waiting for a resource Rn held by P0.) Then, since process Pi+1 is holding resource Ri while requesting resource Ri+1, we must have F(Ri) < F(Ri+1) for all i. But this condition means that

                        F(R0) < F(R1) < ... < F(Rn) < F(R0).

By transitivity, F(R0) < F(R0), which is impossible. Therefore, there can be no circular wait.

We can accomplish this scheme in an application program by developing an ordering among all synchronization objects in the system. All requests for synchronization objects must be made in increasing order. For example, if the lock ordering in the Pthread program shown in Figure 7.1 was F(first_mutex) = 1 and F(second_mutex) = 5, then thread_two could not request the locks out of order.

Keep in mind that developing an ordering, or hierarchy, in itself does not prevent deadlock. It is up to application developers to write programs that follow the ordering. Also note that the function F should be defined according to the normal order of usage of the resources in a system. For example, because the tape drive is usually needed before the printer, it would be reasonable to define F(tape drive) < F(printer).

A lock-order verifier, such as witness on FreeBSD, can check this ordering at run time. Suppose that witness observes the locks being acquired in the order

(1) first_mutex,

(2) second_mutex.

Witness records the relationship that first_mutex must be acquired before second_mutex. If thread_two later acquires the locks out of order, witness generates a warning message on the system console.
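The ordered-acquisition discipline can be sketched in Python. The F values and the checking wrapper are illustrative, not the actual witness implementation, and the sketch tracks only a per-thread stack of held lock numbers:

```python
import threading

# Assign each lock a position in the total order F (values illustrative,
# matching the text's example).
first_mutex, second_mutex = threading.Lock(), threading.Lock()
F = {id(first_mutex): 1, id(second_mutex): 5}

_held = threading.local()   # per-thread stack of F-values currently held

def ordered_acquire(lock):
    """Acquire `lock` only if it follows every held lock in the F ordering."""
    stack = getattr(_held, "stack", [])
    if stack and F[id(lock)] <= stack[-1]:
        raise RuntimeError(f"lock-order violation: F={F[id(lock)]} "
                           f"requested while holding F={stack[-1]}")
    lock.acquire()
    stack.append(F[id(lock)])
    _held.stack = stack

def ordered_release(lock):
    lock.release()
    _held.stack.remove(F[id(lock)])

# Acquiring in increasing order of F is permitted.
ordered_acquire(first_mutex)
ordered_acquire(second_mutex)
ordered_release(second_mutex)
ordered_release(first_mutex)
```

A thread that tries to take first_mutex while already holding second_mutex triggers the violation check, mirroring the warning that witness would emit.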
