Mutual Exclusion




 In this section, we present a number of different algorithms for implementing mutual exclusion in a distributed environment. We assume that the system consists of n processes, each of which resides at a different processor. To simplify our discussion, we assume that processes are numbered uniquely from 1 to n and that a one-to-one mapping exists between processes and processors (that is, each process has its own processor).

Centralized Approach

 In a centralized approach to providing mutual exclusion, one of the processes in the system is chosen to coordinate the entry to the critical section. Each process that wants to invoke mutual exclusion sends a request message to the coordinator. When the process receives a reply message from the coordinator, it can proceed to enter its critical section. After exiting its critical section, the process sends a release message to the coordinator and proceeds with its execution.

On receiving a request message, the coordinator checks to see whether some other process is in its critical section. If no process is in its critical section, the coordinator immediately sends back a reply message. Otherwise, the request is queued. When the coordinator receives a release message, it removes one of the request messages from the queue (in accordance with some scheduling algorithm) and sends a reply message to the requesting process.


It should be clear that this algorithm ensures mutual exclusion. In addition, if the scheduling policy within the coordinator is fair, such as first-come, first-served (FCFS) scheduling, no starvation can occur. This scheme requires three messages per critical-section entry: a request, a reply, and a release. If the coordinator process fails, then a new process must take its place. In Section 18.6, we describe some algorithms for electing a unique new coordinator. Once a new coordinator has been elected, it must poll all the processes in the system to reconstruct its request queue. Once the queue has been constructed, the computation can resume.
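The coordinator's request/release handling can be sketched in a few lines of Python. This is only an illustrative sketch, not a real message-passing implementation: the class and method names are our own, and messages are modeled as direct method calls rather than network messages.

```python
from collections import deque

class Coordinator:
    """Centralized mutual-exclusion coordinator (sketch).

    request(pid) returns True if a reply is sent immediately and
    False if the request is queued; release() replies to the next
    queued waiter (FCFS) and returns its id, or None if none waits.
    """
    def __init__(self):
        self.holder = None       # process currently in its critical section
        self.waiting = deque()   # queued request messages, FCFS order

    def request(self, pid):
        if self.holder is None:
            self.holder = pid    # no one in the critical section: reply now
            return True
        self.waiting.append(pid) # otherwise the request is queued
        return False

    def release(self):
        if self.waiting:
            self.holder = self.waiting.popleft()  # reply to next waiter
        else:
            self.holder = None
        return self.holder

c = Coordinator()
print(c.request(1))   # True  -- P1 enters immediately
print(c.request(2))   # False -- P2's request is queued
print(c.release())    # 2     -- coordinator replies to P2
```

Note that the three messages per entry (request, reply, release) correspond to the two method calls and the True return above.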

Fully Distributed Approach

If we want to distribute the decision making across the entire system, then the solution is far more complicated. One approach, described next, uses an algorithm based on the event-ordering scheme described in Section 18.1. When a process Pi wants to enter its critical section, it generates a new timestamp, TS, and sends the message request(Pi, TS) to all processes in the system (including itself). On receiving a request message, a process may reply immediately (that is, send a reply message back to Pi), or it may defer sending a reply back (because it is already in its critical section, for example).

A process that has received a reply message from all other processes in the system can enter its critical section, queueing incoming requests and deferring them. After exiting its critical section, the process sends reply messages to all its deferred requests. The decision whether process Pj replies immediately to a request(Pi, TS) message or defers its reply is based on three factors:

1. If process Pj is in its critical section, then it defers its reply to Pi.


2. If process Pj does not want to enter its critical section, then it sends a reply immediately to Pi.

3. If process Pj wants to enter its critical section but has not yet entered it, then it compares its own request timestamp with the timestamp of the incoming request made by process Pi. If its own request timestamp is greater than that of the incoming request, then it sends a reply immediately to Pi (Pi asked first). Otherwise, the reply is deferred.

This algorithm exhibits the following desirable behavior:

• Mutual exclusion is obtained.

• Freedom from deadlock is ensured.

 • Freedom from starvation is ensured, since entry to the critical section is scheduled according to the timestamp ordering. The timestamp ordering ensures that processes are served in FCFS order.


• The number of messages per critical-section entry is 2 × (n − 1). This number is the minimum number of required messages per critical-section entry when processes act independently and concurrently.

To illustrate how the algorithm functions, we consider a system consisting of processes P1, P2, and P3. Suppose that processes P1 and P3 want to enter their critical sections. Process P1 then sends a message request(P1, timestamp = 10) to processes P2 and P3, while process P3 sends a message request(P3, timestamp = 4) to processes P1 and P2.

The timestamps 4 and 10 were obtained from the logical clocks described in Section 18.1. When process P2 receives these request messages, it replies immediately. When process P1 receives the request from process P3, it replies immediately, since the timestamp (10) on its own request message is greater than the timestamp (4) for process P3. When process P3 receives the request message from process P1, it defers its reply, since the timestamp (4) on its request message is less than the timestamp (10) for the message from process P1. On receiving replies from both process P1 and process P2, process P3 can enter its critical section.

After exiting its critical section, process P3 sends a reply to process P1, which can then enter its critical section. Because this scheme requires the participation of all the processes in the system, however, it has three undesirable consequences:

1. The processes need to know the identity of all other processes in the system. When a new process joins the group of processes participating in the mutual-exclusion algorithm, the following actions need to be taken:

a. The process must receive the names of all the other processes in the group.

b. The name of the new process must be distributed to all the other processes in the group.

This task is not as trivial as it may seem, since some request and reply messages may be circulating in the system when the new process joins the group. The interested reader is referred to the Bibliographical Notes for more details.


 2. If one process fails, then the entire scheme collapses. We can resolve this difficulty by continuously monitoring the state of all processes in the system. If one process fails, then all other processes are notified, so that they will no longer send request messages to the failed process. When a process recovers, it must initiate the procedure that allows it to rejoin the group.

 3. Processes that have not entered their critical section must pause frequently to assure other processes that they intend to enter the critical section. Because of these difficulties, this protocol is best suited for small, stable sets of cooperating processes.
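The reply-or-defer decision described by factors 1 through 3 above can be sketched as a single Python function. The state names ('IN_CS', 'WANTED', 'IDLE') and the tie-breaking on process numbers when timestamps are equal are our assumptions for illustration; the text itself leaves ties unspecified.

```python
def should_defer(my_state, my_ts, my_pid, req_ts, req_pid):
    """Decide whether process my_pid defers its reply to an
    incoming request(req_pid, req_ts) message.

    my_state is 'IN_CS' (in the critical section), 'WANTED'
    (requested but not yet entered), or 'IDLE' (no interest).
    """
    if my_state == 'IN_CS':
        return True            # factor 1: defer the reply
    if my_state == 'IDLE':
        return False           # factor 2: reply immediately
    # factor 3: both want the critical section; the request with
    # the smaller timestamp wins (ties broken by process number,
    # an assumption not made explicit in the text)
    return (my_ts, my_pid) < (req_ts, req_pid)

# The worked example: P1 requested with timestamp 10, P3 with timestamp 4
print(should_defer('WANTED', 10, 1, 4, 3))  # False: P1 replies (P3 asked first)
print(should_defer('WANTED', 4, 3, 10, 1))  # True:  P3 defers its reply
```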

Token-Passing Approach

 Another method of providing mutual exclusion is to circulate a token among the processes in the system. A token is a special type of message that is passed around the system. Possession of the token entitles the holder to enter the critical section. Since there is only a single token, only one process can be in its critical section at a time. We assume that the processes in the system are logically organized in a ring structure. The physical communication network need not be a ring. As long as the processes are connected to one another, it is possible to implement a logical ring.

To implement mutual exclusion, we pass the token around the ring. When a process receives the token, it may enter its critical section, keeping the token. After the process exits its critical section, the token is passed around again. If the process receiving the token does not want to enter its critical section, it passes the token to its neighbor. This scheme is similar to algorithm 1 in Chapter 6, but a token is substituted for a shared variable. If the ring is unidirectional, freedom from starvation is ensured.
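The token-circulation idea can be sketched as a short Python simulation. The function name and the representation of "wants to enter" as a set of process ids are assumptions made for illustration; a real implementation would pass the token as a message between processors rather than loop over an index.

```python
def token_ring(n, wants_cs, start=0, max_hops=100):
    """Simulate one circulation of the token around a logical ring
    of n processes (sketch). wants_cs is the set of process ids
    that want to enter their critical sections; returns the order
    in which they enter. max_hops bounds the simulation, since
    under low contention the token would circulate forever.
    """
    entered = []
    pos = start
    pending = set(wants_cs)
    hops = 0
    while pending and hops < max_hops:
        if pos in pending:
            entered.append(pos)   # token holder enters its critical section
            pending.discard(pos)
        pos = (pos + 1) % n       # pass the token to the neighbor
        hops += 1
    return entered

print(token_ring(5, {3, 1}))  # [1, 3] -- the token visits 0, 1, 2, ... in ring order
```

The unidirectional ring order is what guarantees freedom from starvation: each process waits at most one full circulation before receiving the token.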


The number of messages required to implement mutual exclusion may vary from one message per entry, in the case of high contention (that is, every process wants to enter its critical section), to an infinite number of messages, in the case of low contention (that is, no process wants to enter its critical section). Two types of failure must be considered. First, if the token is lost, an election must be called to generate a new token. Second, if a process fails, a new logical ring must be established.
