Deadlock Avoidance

Deadlock-prevention algorithms, as discussed in Section 7.4, prevent deadlocks by restraining how requests can be made. The restraints ensure that at least one of the necessary conditions for deadlock cannot occur and, hence, that deadlocks cannot occur. Possible side effects of preventing deadlocks by this method, however, are low device utilization and reduced system throughput. An alternative method for avoiding deadlocks is to require additional information about how resources are to be requested. For example, in a system with one tape drive and one printer, the system might need to know that process P will request first the tape drive and then the printer before releasing both resources, whereas process Q will request first the printer and then the tape drive. With this knowledge of the complete sequence of requests and releases for each process, the system can decide for each request whether or not the process should wait in order to avoid a possible future deadlock.

Each request requires that, in making this decision, the system consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process. The various algorithms that use this approach differ in the amount and type of information required. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. Given this a priori information, it is possible to construct an algorithm that ensures that the system will never enter a deadlocked state. Such an algorithm defines the deadlock-avoidance approach. A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes. In the following sections, we explore two deadlock-avoidance algorithms.

Safe State

A state is safe if the system can allocate resources to each process (up to its maximum) in some order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently available resources plus the resources held by all Pj, with j < i. In this situation, if the resources that Pi needs are not immediately available, then Pi can wait until all Pj have finished. When they have finished, Pi can obtain all of its needed resources, complete its designated task, return its allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources, and so on. If no such sequence exists, then the system state is said to be unsafe.


A safe state is not a deadlocked state. Conversely, a deadlocked state is an unsafe state. Not all unsafe states are deadlocks, however (Figure 7.5). An unsafe state may lead to a deadlock. As long as the state is safe, the operating system can avoid unsafe (and deadlocked) states. In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs: the behavior of the processes controls unsafe states. To illustrate, we consider a system with 12 magnetic tape drives and three processes: P0, P1, and P2. Process P0 requires 10 tape drives, process P1 may need as many as 4 tape drives, and process P2 may need up to 9 tape drives. Suppose that, at time t0, process P0 is holding 5 tape drives, process P1 is holding 2 tape drives, and process P2 is holding 2 tape drives. (Thus, there are 3 free tape drives.)

At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety condition. Process P1 can immediately be allocated all its tape drives and then return them (the system will then have 5 available tape drives); then process P0 can get all its tape drives and return them (the system will then have 10 available tape drives); and finally process P2 can get all its tape drives and return them (the system will then have all 12 tape drives available).


A system can go from a safe state to an unsafe state. Suppose that, at time t1, process P2 requests and is allocated one more tape drive. The system is no longer in a safe state. At this point, only process P1 can be allocated all its tape drives. When it returns them, the system will have only 4 available tape drives. Since process P0 is allocated 5 tape drives but has a maximum of 10, it may request 5 more tape drives. Since they are unavailable, process P0 must wait. Similarly, process P2 may request an additional 6 tape drives and have to wait, resulting in a deadlock. Our mistake was in granting the request from process P2 for one more tape drive. If we had made P2 wait until either of the other processes had finished and released its resources, then we could have avoided the deadlock.

Given the concept of a safe state, we can define avoidance algorithms that ensure that the system will never deadlock. The idea is simply to ensure that the system will always remain in a safe state. Initially, the system is in a safe state. Whenever a process requests a resource that is currently available, the system must decide whether the resource can be allocated immediately or whether the process must wait.
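The tape-drive scenario above can be checked mechanically. The sketch below, for a single resource type, tries to build a safe sequence greedily; the function name and process encoding are illustrative, not from the text:

```python
def is_safe(available, holding, maximum):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    n = len(holding)
    need = [maximum[i] - holding[i] for i in range(n)]
    finished = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and need[i] <= available:
                available += holding[i]   # Pi runs to completion and releases its drives
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None                   # no process can finish: unsafe
    return order

# At time t0: 12 drives total; P0, P1, P2 hold 5, 2, 2 (3 free) -> safe.
print(is_safe(3, [5, 2, 2], [10, 4, 9]))   # [1, 0, 2]
# After granting P2 one more drive (2 free, P2 holds 3) -> unsafe.
print(is_safe(2, [5, 2, 3], [10, 4, 9]))   # None
```

Note that the greedy search is sufficient here: releasing resources only ever enlarges the available pool, so the order in which finishable processes are picked does not affect whether a safe sequence exists.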


The request is granted only if the allocation leaves the system in a safe state. In this scheme, if a process requests a resource that is currently available, it may still have to wait. Thus, resource utilization may be lower than it would otherwise be.

Resource-Allocation-Graph Algorithm

If we have a resource-allocation system with only one instance of each resource type, a variant of the resource-allocation graph defined in Section 7.2.2 can be used for deadlock avoidance. In addition to the request and assignment edges already described, we introduce a new type of edge, called a claim edge. A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future. This edge resembles a request edge in direction but is represented in the graph by a dashed line. When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge. Similarly, when a resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj. We note that the resources must be claimed a priori in the system.


That is, before process Pi starts executing, all its claim edges must already appear in the resource-allocation graph. We can relax this condition by allowing a claim edge Pi → Rj to be added to the graph only if all the edges associated with process Pi are claim edges. Suppose that process Pi requests resource Rj. The request can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the resource-allocation graph.

 

Note that we check for safety by using a cycle-detection algorithm. An algorithm for detecting a cycle in this graph requires an order of n² operations, where n is the number of processes in the system. If no cycle exists, then the allocation of the resource will leave the system in a safe state. If a cycle is found, then the allocation will put the system in an unsafe state. Therefore, process Pi will have to wait for its requests to be satisfied. To illustrate this algorithm, we consider the resource-allocation graph of Figure 7.6. Suppose that P2 requests R2. Although R2 is currently free, we cannot allocate it to P2, since this action will create a cycle in the graph (Figure 7.7). A cycle indicates that the system is in an unsafe state. If P1 requests R2, and P2 requests R1, then a deadlock will occur.
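The grant test can be sketched as a plain depth-first cycle check over the graph, with claim edges kept in the graph while a trial assignment edge is added. The graph encoding and function names below are illustrative, and the example graph only approximates Figure 7.6 (which is not reproduced here):

```python
def has_cycle(edges):
    """Detect a cycle in a directed graph given as {node: set(successors)}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}
    def visit(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            if color.get(m, WHITE) == GRAY:
                return True                          # back edge: cycle found
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in list(edges))

def can_grant(edges, proc, res):
    """Test whether assigning `res` to `proc` keeps the graph acyclic.
    Claim and request edges (proc -> res) stay in the graph for the test."""
    trial = {n: set(s) for n, s in edges.items()}    # work on a copy
    trial[proc].discard(res)                         # request edge becomes...
    trial.setdefault(res, set()).add(proc)           # ...an assignment edge res -> proc
    return not has_cycle(trial)

# Example graph: R1 is assigned to P1; P1 holds a claim on R2;
# P2 requests both R1 and R2 (encoding assumed for illustration).
edges = {"P1": {"R2"}, "P2": {"R1", "R2"}, "R1": {"P1"}, "R2": set()}
print(can_grant(edges, "P2", "R2"))  # False: would create P1->R2->P2->R1->P1
print(can_grant(edges, "P1", "R2"))  # True: the graph stays acyclic
```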

Banker's Algorithm

The resource-allocation-graph algorithm is not applicable to a resource-allocation system with multiple instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable to such a system but is less efficient than the resource-allocation-graph scheme. This algorithm is commonly known as the banker's algorithm. The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocated its available cash in such a way that it could no longer satisfy the needs of all its customers. When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need. This number may not exceed the total number of resources in the system. When a user requests a set of resources, the system must determine whether the allocation of these resources will leave the system in a safe state. If it will, the resources are allocated; otherwise, the process must wait until some other process releases enough resources. Several data structures must be maintained to implement the banker's algorithm.

These data structures encode the state of the resource-allocation system. Let n be the number of processes in the system and m be the number of resource types. We need the following data structures:


• Available. A vector of length m indicates the number of available resources of each type. If Available[j] equals k, there are k instances of resource type Rj available.

• Max. An n × m matrix defines the maximum demand of each process. If Max[i][j] equals k, then process Pi may request at most k instances of resource type Rj.

• Allocation. An n × m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i][j] equals k, then process Pi is currently allocated k instances of resource type Rj.

• Need. An n × m matrix indicates the remaining resource need of each process. If Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to complete its task. Note that Need[i][j] equals Max[i][j] − Allocation[i][j].

These data structures vary over time in both size and value. To simplify the presentation of the banker's algorithm, we next establish some notation. Let X and Y be vectors of length n. We say that X ≤ Y if and only if X[i] ≤ Y[i] for all i = 1, 2, ..., n. For example, if X = (1,7,3,2) and Y = (0,3,2,1), then Y ≤ X. In addition, Y < X if Y ≤ X and Y ≠ X. We can treat each row in the matrices Allocation and Need as vectors and refer to them as Allocation_i and Need_i. The vector Allocation_i specifies the resources currently allocated to process Pi; the vector Need_i specifies the additional resources that process Pi may still request to complete its task.

Safety Algorithm

We can now present the algorithm for finding out whether or not a system is in a safe state. This algorithm can be described as follows:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n−1.

2. Find an i such that both
   a. Finish[i] == false
   b. Need_i ≤ Work
   If no such i exists, go to step 4.

3. Work = Work + Allocation_i
   Finish[i] = true
   Go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state.

This algorithm may require an order of m × n² operations to determine whether a state is safe.
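The four steps above translate almost line for line into code. The following is a minimal sketch with illustrative helper names, representing vectors as Python lists:

```python
def leq(x, y):
    """Vector comparison: X <= Y iff X[i] <= Y[i] for all i."""
    return all(a <= b for a, b in zip(x, y))

def is_safe(available, allocation, need):
    """Safety algorithm: True iff the state admits a safe sequence."""
    n = len(allocation)
    work = list(available)            # step 1: Work = Available
    finish = [False] * n              #         Finish[i] = false
    while True:
        for i in range(n):            # step 2: find i with Finish[i] == false
            if not finish[i] and leq(need[i], work):   # and Need_i <= Work
                # step 3: Work = Work + Allocation_i; Finish[i] = true
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            break                     # no such i: fall through to step 4
    return all(finish)                # step 4
```

The nested scan gives the m × n² worst case noted above; a production implementation would keep candidate processes in a work list to avoid rescanning.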

Resource-Request Algorithm

We now describe the algorithm that determines whether requests can be safely granted. Let Request_i be the request vector for process Pi. If Request_i[j] == k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:

1. If Request_i ≤ Need_i, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.

2. If Request_i ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.

3. Have the system pretend to have allocated the requested resources to process Pi by modifying the state as follows:

   Available = Available − Request_i
   Allocation_i = Allocation_i + Request_i
   Need_i = Need_i − Request_i

If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is allocated its resources. However, if the new state is unsafe, then Pi must wait for Request_i, and the old resource-allocation state is restored.
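The three steps can be sketched as follows. The safety algorithm is restated so the example is self-contained, and the "restore the old state" step is realized by mutating the state only after the trial copy passes the safety check; all names are illustrative:

```python
def leq(x, y):
    return all(a <= b for a, b in zip(x, y))

def is_safe(available, allocation, need):
    n = len(allocation)
    work, finish = list(available), [False] * n
    while True:
        for i in range(n):
            if not finish[i] and leq(need[i], work):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)

def request_resources(i, request, available, allocation, need):
    """Try to grant Request_i for Pi; True if granted, False if Pi must wait."""
    if not leq(request, need[i]):
        raise ValueError("process has exceeded its maximum claim")   # step 1
    if not leq(request, available):
        return False                                                 # step 2: Pi waits
    # Step 3: pretend to allocate on copies, then run the safety algorithm.
    trial_avail = [a - r for a, r in zip(available, request)]
    trial_alloc = [row[:] for row in allocation]
    trial_need = [row[:] for row in need]
    trial_alloc[i] = [a + r for a, r in zip(trial_alloc[i], request)]
    trial_need[i] = [nd - r for nd, r in zip(trial_need[i], request)]
    if is_safe(trial_avail, trial_alloc, trial_need):
        available[:] = trial_avail               # commit the new state
        allocation[i] = trial_alloc[i]
        need[i] = trial_need[i]
        return True
    return False                                 # unsafe: old state is kept, Pi waits
```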

An Illustrative Example

Finally, to illustrate the use of the banker's algorithm, consider a system with five processes P0 through P4 and three resource types A, B, and C. Resource type A has 10 instances, resource type B has 5 instances, and resource type C has 7 instances. Suppose that, at time T0, the following snapshot of the system has been taken: We claim that the system is currently in a safe state.

Indeed, the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria. Suppose now that process P1 requests one additional instance of resource type A and two instances of resource type C, so Request1 = (1,0,2). To decide whether this request can be immediately granted, we first check that Request1 ≤ Available—that is, that (1,0,2) ≤ (3,3,2), which is true. We then pretend that this request has been fulfilled, and we arrive at the following new state:
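This arithmetic can be checked mechanically. Available = (3,3,2) and Request1 = (1,0,2) are given in the text; the Max and Allocation matrices below are the snapshot this example conventionally uses, reconstructed here since the figure is not reproduced, so treat them as assumed inputs:

```python
# Snapshot for the five-process, three-resource-type (A, B, C) example.
# Max and Allocation are assumed (reconstructed) inputs for illustration.
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
alloc      = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
avail      = [3, 3, 2]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(max_demand, alloc)]

def safe_sequence(available, allocation, need):
    """Greedy safety algorithm: return a safe sequence of indices, or None."""
    n = len(allocation)
    work, finish, order = list(available), [False] * n, []
    while len(order) < n:
        for i in range(n):
            if not finish[i] and all(x <= w for x, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                break
        else:
            return None               # no process can proceed: unsafe
    return order

print(safe_sequence(avail, alloc, need) is not None)   # True: state is safe

# Pretend to grant Request1 = (1,0,2) and re-run the safety algorithm.
req = [1, 0, 2]
avail = [a - r for a, r in zip(avail, req)]
alloc[1] = [a + r for a, r in zip(alloc[1], req)]
need[1] = [nd - r for nd, r in zip(need[1], req)]
print(safe_sequence(avail, alloc, need) is not None)   # True: grant the request
```

The greedy search may report a different safe sequence than the one quoted in the text; any safe sequence suffices to show the state is safe.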



We must determine whether this new system state is safe.


To do so, we execute our safety algorithm and find that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement. Hence, we can immediately grant the request of process P1. You should be able to see, however, that when the system is in this state, a request for (3,3,0) by P4 cannot be granted, since the resources are not available. Furthermore, a request for (0,2,0) by P0 cannot be granted, even though the resources are available, since the resulting state is unsafe. We leave it as a programming exercise to implement the banker's algorithm.


