Programmer Interface

The Win32 API is the fundamental interface to the capabilities of Windows XP. This section describes five main aspects of the Win32 API: access to kernel objects, sharing of objects between processes, process management, interprocess communication, and memory management.

Access to Kernel Objects

The Windows XP kernel provides many services that application programs can use. Application programs obtain these services by manipulating kernel objects. A process gains access to a kernel object named XXX by calling the CreateXXX function to open a handle to XXX. This handle is unique to the process. Depending on which object is being opened, if the Create() function fails, it may return 0, or it may return a special constant named INVALID_HANDLE_VALUE. A process can close any handle by calling the CloseHandle() function, and the system may delete the object if the count of processes using the object drops to 0.
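As a minimal sketch of this pattern (error handling abbreviated; the object name "MyMutex" is illustrative, not part of the original text):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* CreateMutex returns NULL on failure; some other Create functions,
       such as CreateFile, instead return INVALID_HANDLE_VALUE. */
    HANDLE hMutex = CreateMutexA(NULL, FALSE, "MyMutex");
    if (hMutex == NULL) {
        fprintf(stderr, "CreateMutex failed: %lu\n", GetLastError());
        return 1;
    }
    /* ... use the object ... */

    /* The kernel may delete the object once the last handle is closed. */
    CloseHandle(hMutex);
    return 0;
}
```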


Sharing Objects Between Processes

Windows XP provides three ways to share objects between processes. The first way is for a child process to inherit a handle to the object. When the parent calls the CreateXXX function, the parent supplies a SECURITY_ATTRIBUTES structure with the bInheritHandle field set to TRUE. This field creates an inheritable handle. Next, the child process is created, passing a value of TRUE to the CreateProcess() function's bInheritHandles argument. Figure 22.11 shows a code sample that creates a semaphore handle inherited by a child process. Assuming the child process knows which handles are shared, the parent and child can achieve interprocess communication through the shared objects. In the example in Figure 22.11, the child process gets the value of the handle from the first command-line argument and then shares the semaphore with the parent process.
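A sketch of the parent's side of this pattern follows; the child executable name `child.exe` and the convention of passing the handle value as the first command-line argument are assumptions matching the Figure 22.11 description:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa;
    sa.nLength = sizeof(sa);
    sa.lpSecurityDescriptor = NULL;
    sa.bInheritHandle = TRUE;            /* make the semaphore handle inheritable */

    HANDLE hSem = CreateSemaphore(&sa, 1, 1, NULL);

    /* Pass the handle's numeric value to the child on its command line. */
    char cmd[64];
    sprintf(cmd, "child.exe %p", (void *)hSem);

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    /* TRUE for the bInheritHandles argument lets the child inherit hSem. */
    if (CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(hSem);
    return 0;
}
```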


The second way to share objects is for one process to give the object a name when the object is created and for the second process to open the name. This method has two drawbacks: Windows XP does not provide a way to check whether an object with the chosen name already exists, and the object name space is global, without regard to the object type. For instance, two applications may each create an object named "pipe" when two distinct (and possibly different) objects are desired.


Named objects have the advantage that unrelated processes can readily share them. The first process calls one of the CreateXXX functions and supplies a name in the lpszName parameter. The second process gets a handle to share the object by calling OpenXXX() (or CreateXXX) with the same name, as shown in the example of Figure 22.12.

The third way to share objects is via the DuplicateHandle() function. This method requires some other method of interprocess communication to pass the duplicated handle. Given a handle to a process and the value of a handle within that process, a second process can get a handle to the same object and thus share it. An example of this method is shown in Figure 22.13.
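A sketch of the receiving side of this technique; the function name `import_handle` and the idea that `sourcePid` and `remoteHandle` arrive via some other IPC channel are illustrative assumptions:

```c
#include <windows.h>

/* Obtain a local handle to an object owned by another process.
   sourcePid and remoteHandle would be delivered by some other
   IPC mechanism, as the text notes. */
HANDLE import_handle(DWORD sourcePid, HANDLE remoteHandle)
{
    HANDLE hSource = OpenProcess(PROCESS_DUP_HANDLE, FALSE, sourcePid);
    HANDLE hLocal = NULL;
    DuplicateHandle(hSource,              /* process that owns the handle  */
                    remoteHandle,         /* handle value in that process  */
                    GetCurrentProcess(),  /* destination process           */
                    &hLocal,              /* receives the local handle     */
                    0, FALSE, DUPLICATE_SAME_ACCESS);
    CloseHandle(hSource);
    return hLocal;
}
```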

Process Management

In Windows XP, a process is an executing instance of an application, and a thread is a unit of code that can be scheduled by the operating system. Thus, a process contains one or more threads. A process is started when some other process calls the CreateProcess() routine. This routine loads any dynamic link libraries used by the process and creates a primary thread. Additional threads can be created by the CreateThread() function. Each thread is created with its own stack, which defaults to 1 MB unless specified otherwise in an argument to CreateThread(). Because some C run-time functions maintain state in static variables, such as errno, a multithread application needs to guard against unsynchronized access. The wrapper function _beginthreadex() provides appropriate synchronization.
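A minimal sketch of thread creation through the CRT wrapper (assuming the Microsoft C run-time; the worker function is illustrative):

```c
#include <windows.h>
#include <process.h>
#include <stdio.h>

/* _beginthreadex requires this signature rather than CreateThread's. */
unsigned __stdcall worker(void *arg)
{
    printf("thread %lu running\n", GetCurrentThreadId());
    return 0;
}

int main(void)
{
    /* _beginthreadex sets up per-thread CRT state (e.g., for errno)
       and then calls CreateThread internally. */
    HANDLE h = (HANDLE)_beginthreadex(NULL, 0, worker, NULL, 0, NULL);
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```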


Instance Handles

Every dynamic link library or executable file loaded into the address space of a process is identified by an instance handle. The value of the instance handle is actually the virtual address where the file is loaded. An application can get the handle to a module in its address space by passing the name of the module to GetModuleHandle(). If NULL is passed as the name, the base address of the process is returned. The lowest 64 KB of the address space are not used, so a faulty program that tries to dereference a NULL pointer gets an access violation.


Priorities in the Win32 API environment are based on the Windows XP scheduling model, but not all priority values may be chosen. The Win32 API uses four priority classes:

1. IDLE_PRIORITY_CLASS (priority level 4)


2. NORMAL_PRIORITY_CLASS (priority level 8)

3. HIGH_PRIORITY_CLASS (priority level 13)

4. REALTIME_PRIORITY_CLASS (priority level 24)

Processes are typically members of the NORMAL_PRIORITY_CLASS unless the parent of the process was of the IDLE_PRIORITY_CLASS or another class was specified when CreateProcess() was called. The priority class of a process can be changed with the SetPriorityClass() function or by passing an argument to the START command.


For example, the command START /REALTIME cbserver.exe would run the cbserver program in the REALTIME_PRIORITY_CLASS. Only users with the increase scheduling priority privilege can move a process into the REALTIME_PRIORITY_CLASS. Administrators and power users have this privilege by default.

Scheduling Rule

When a user is running an interactive program, the system needs to provide especially good performance for the process. For this reason, Windows XP has a special scheduling rule for processes in the NORMAL_PRIORITY_CLASS. Windows XP distinguishes between the foreground process that is currently selected on the screen and the background processes that are not currently selected. When a process moves into the foreground, Windows XP increases the scheduling quantum by some factor, typically by 3. (This factor can be changed via the performance option in the system section of the control panel.) This increase gives the foreground process three times longer to run before a time-sharing preemption occurs.

Thread Priorities


A thread starts with an initial priority determined by its class. The priority can be altered by the SetThreadPriority() function. This function takes an argument that specifies a priority relative to the base priority of its class:

• THREAD_PRIORITY_LOWEST: base - 2

• THREAD_PRIORITY_BELOW_NORMAL: base - 1

• THREAD_PRIORITY_NORMAL: base + 0


 • THREAD_PRIORITY_ABOVE_NORMAL: base + 1

• THREAD_PRIORITY_HIGHEST: base + 2

Two other designations are also used to adjust the priority. Recall from Section 22.3.2.1 that the kernel has two priority classes: 16-31 for the real-time class and 0-15 for the variable-priority class. THREAD_PRIORITY_IDLE sets the priority to 16 for real-time threads and to 1 for variable-priority threads. THREAD_PRIORITY_TIME_CRITICAL sets the priority to 31 for real-time threads and to 15 for variable-priority threads. As we discussed in Section 22.3.2.1, the kernel adjusts the priority of a thread dynamically depending on whether the thread is I/O bound or CPU bound. The Win32 API provides a method to disable this adjustment via the SetProcessPriorityBoost() and SetThreadPriorityBoost() functions.
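The class and relative-priority calls compose as sketched below (the particular choices of class and level are illustrative):

```c
#include <windows.h>

int main(void)
{
    /* Move the whole process into the high class (base level 13)... */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);

    /* ...then place this thread one level below the class base. */
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);

    /* TRUE disables the kernel's dynamic priority boosting for this thread. */
    SetThreadPriorityBoost(GetCurrentThread(), TRUE);
    return 0;
}
```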

Thread Synchronization

A thread can be created in a suspended state; the thread does not execute until another thread makes it eligible via the ResumeThread() function. The SuspendThread() function does the opposite. These functions set a counter, so if a thread is suspended twice, it must be resumed twice before it can run. To synchronize concurrent access to shared objects by threads, the kernel provides synchronization objects, such as semaphores and mutexes. In addition, synchronization of threads can be achieved by use of the WaitForSingleObject() and WaitForMultipleObjects() functions. Another method of synchronization in the Win32 API is the critical section. A critical section is a synchronized region of code that can be executed by only one thread at a time. A thread establishes a critical section by calling InitializeCriticalSection().


The application must call EnterCriticalSection() before entering the critical section and LeaveCriticalSection() after exiting it. These two routines guarantee that, if multiple threads attempt to enter the critical section concurrently, only one thread at a time will be permitted to proceed; the others will wait in the EnterCriticalSection() routine. The critical-section mechanism is faster than using kernel synchronization objects because it does not allocate kernel objects until it first encounters contention for the critical section.
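The full critical-section life cycle can be sketched as follows (the shared counter is an illustrative example):

```c
#include <windows.h>

CRITICAL_SECTION cs;       /* lives in user mode until contention occurs */
long shared_counter = 0;

void init(void)
{
    InitializeCriticalSection(&cs);
}

void increment(void)
{
    EnterCriticalSection(&cs);   /* blocks if another thread is inside */
    shared_counter++;            /* only one thread executes this at a time */
    LeaveCriticalSection(&cs);
}

void cleanup(void)
{
    DeleteCriticalSection(&cs);
}
```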

Fibers

A fiber is user-mode code that is scheduled according to a user-defined scheduling algorithm. A process may have multiple fibers in it, just as it may have multiple threads. A major difference between threads and fibers is that whereas threads can execute concurrently, only one fiber at a time is permitted to execute, even on multiprocessor hardware. This mechanism is included in Windows XP to facilitate the porting of those legacy UNIX applications that were written for a fiber-execution model. The system creates a fiber by calling either ConvertThreadToFiber() or CreateFiber(). The primary difference between these functions is that CreateFiber() does not begin executing the fiber that was created. To begin execution, the application must call SwitchToFiber(). The application can terminate a fiber by calling DeleteFiber().
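The create/switch/delete sequence might look like this minimal sketch (the fiber procedure and its argument are illustrative):

```c
#include <windows.h>
#include <stdio.h>

PVOID mainFiber;   /* so the worker fiber can switch back */

VOID CALLBACK fiberProc(PVOID param)
{
    printf("in fiber: %s\n", (char *)param);
    SwitchToFiber(mainFiber);   /* fibers are never preempted; they yield */
}

int main(void)
{
    /* The calling thread itself must become a fiber first. */
    mainFiber = ConvertThreadToFiber(NULL);

    /* CreateFiber builds the fiber but does not run it... */
    PVOID f = CreateFiber(0, fiberProc, "hello");

    /* ...execution begins only at the explicit switch. */
    SwitchToFiber(f);

    DeleteFiber(f);
    return 0;
}
```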

Thread Pool


Repeated creation and deletion of threads can be expensive for applications and services that perform small amounts of work in each. The thread pool provides user-mode programs with three services: a queue to which work requests may be submitted (via the QueueUserWorkItem() API), an API that can be used to bind callbacks to waitable handles (RegisterWaitForSingleObject()), and APIs to bind callbacks to timeouts (CreateTimerQueue() and CreateTimerQueueTimer()).

The thread pool's goal is to increase performance. Threads are relatively expensive, and a processor can only be executing one thing at a time no matter how many threads are used. The thread pool attempts to reduce the number of outstanding threads by slightly delaying work requests (reusing each thread for many requests) while providing enough threads to effectively utilize the machine's CPUs. The wait and timer-callback APIs allow the thread pool to further reduce the number of threads in a process, using far fewer threads than would be necessary if a process were to devote one thread to servicing each waitable handle or timeout.
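Queuing work to the pool can be sketched as below; the Sleep() at the end is a crude stand-in for real completion signaling, which a production program would do with an event object:

```c
#include <windows.h>
#include <stdio.h>

/* Work callback; the pool reuses a small set of threads to run these. */
DWORD WINAPI work(LPVOID arg)
{
    printf("request %d serviced by pool thread %lu\n",
           (int)(INT_PTR)arg, GetCurrentThreadId());
    return 0;
}

int main(void)
{
    /* Four requests, but typically far fewer than four threads are created. */
    for (int i = 0; i < 4; i++)
        QueueUserWorkItem(work, (LPVOID)(INT_PTR)i, WT_EXECUTEDEFAULT);

    Sleep(1000);   /* demo only: wait for the queued work to drain */
    return 0;
}
```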

Interprocess Communication

Win32 API applications handle interprocess communication in several ways. One way is by sharing kernel objects. Another is by passing messages, an approach that is particularly popular for Windows GUI applications. One thread can send a message to another thread or to a window by calling PostMessage(), PostThreadMessage(), SendMessage(), SendThreadMessage(), or SendMessageCallback(). The difference between posting a message and sending a message is that the post routines are asynchronous: They return immediately, and the calling thread does not know when the message is actually delivered. The send routines are synchronous: They block the caller until the message has been delivered and processed.


In addition to sending a message, a thread can send data with the message. Since processes have separate address spaces, the data must be copied. The system copies data by calling SendMessage() to send a message of type WM_COPYDATA with a COPYDATASTRUCT data structure that contains the length and address of the data to be transferred. When the message is sent, Windows XP copies the data to a new block of memory and gives the virtual address of the new block to the receiving process. Unlike threads in the 16-bit Windows environment, every Win32 API thread has its own input queue from which it receives messages. (All input is received via messages.) This structure is more reliable than the shared input queue of 16-bit Windows, because, with separate queues, it is no longer possible for one stuck application to block input to the other applications. If a Win32 API application does not call GetMessage() to handle events on its input queue, the queue fills up; and after about five seconds, the system marks the application as "Not Responding".
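The sending side of a WM_COPYDATA transfer can be sketched as follows; the function name and the assumption that `hTargetWnd` is a window handle in the receiving process are illustrative:

```c
#include <windows.h>
#include <string.h>

/* Send a string to a window owned by another process. */
void send_text(HWND hTargetWnd, const char *text)
{
    COPYDATASTRUCT cds;
    cds.dwData = 1;                        /* application-defined tag      */
    cds.cbData = (DWORD)strlen(text) + 1;  /* length of the data to copy   */
    cds.lpData = (PVOID)text;              /* address of the data          */

    /* SendMessage is synchronous, so the buffer remains valid while the
       system copies it into the receiver's address space. */
    SendMessage(hTargetWnd, WM_COPYDATA, 0, (LPARAM)&cds);
}
```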

Memory Management

The Win32 API provides several ways for an application to use memory: virtual memory, memory-mapped files, heaps, and thread-local storage.

Virtual Memory


An application calls VirtualAlloc() to reserve or commit virtual memory and VirtualFree() to decommit or release the memory. These functions enable the application to specify the virtual address at which the memory is allocated. They operate on multiples of the memory page size, and the starting address of an allocated region must be greater than 0x10000. Examples of these functions appear in Figure 22.14. A process may lock some of its committed pages into physical memory by calling VirtualLock(). The maximum number of pages a process can lock is 30, unless the process first calls SetProcessWorkingSetSize() to increase the maximum working-set size.
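A minimal sketch of the reserve/commit/release cycle (the 64-KB size is an arbitrary illustration):

```c
#include <windows.h>

int main(void)
{
    /* Reserve and commit 64 KB in one call; letting the system pick the
       address (NULL) yields a region above the unused lowest 64 KB. */
    char *p = VirtualAlloc(NULL, 65536,
                           MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p != NULL) {
        p[0] = 'x';                        /* the committed pages are usable */
        VirtualFree(p, 0, MEM_RELEASE);    /* release the entire region      */
    }
    return 0;
}
```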


Memory-Mapping Files

Another way for an application to use memory is by memory-mapping a file into its address space. Memory mapping is also a convenient way for two processes to share memory: Both processes map the same file into their virtual memory. Memory mapping is a multistage process, as you can see in the example in Figure 22.15. If a process wants to map some address space just to share a memory region with another process, no file is needed. The process calls CreateFileMapping() with a file handle of 0xffffffff and a particular size. The resulting file-mapping object can be shared by inheritance, by name lookup, or by duplication.
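A sketch of a file-less shared region; the name "SharedRegion" is an illustrative choice that a second process could pass to OpenFileMapping():

```c
#include <windows.h>
#include <string.h>

int main(void)
{
    /* INVALID_HANDLE_VALUE (0xffffffff) requests a mapping backed by
       the paging file rather than by a named file on disk. */
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, 4096,
                                     "SharedRegion");
    char *view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (view != NULL) {
        strcpy(view, "visible to every process that maps this object");
        UnmapViewOfFile(view);
    }
    CloseHandle(hMap);
    return 0;
}
```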


Heaps

Heaps provide a third way for applications to use memory. A heap in the Win32 environment is a region of reserved address space. When a Win32 API process is initialized, it is created with a 1-MB default heap. Since many Win32 API functions use the default heap, access to the heap is synchronized to protect the heap's space-allocation data structures from being damaged by concurrent updates by multiple threads. The Win32 API provides several heap-management functions so that a process can allocate and manage a private heap. These functions are HeapCreate(), HeapAlloc(), HeapRealloc(), HeapSize(), HeapFree(), and HeapDestroy(). The Win32 API also provides the HeapLock() and HeapUnlock() functions to enable a thread to gain exclusive access to a heap. Unlike VirtualLock(), these functions perform only synchronization; they do not lock pages into physical memory.
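The private-heap life cycle can be sketched as below (the sizes are arbitrary illustrations):

```c
#include <windows.h>

int main(void)
{
    /* A growable private heap: 4 KB initial size, no maximum (0). */
    HANDLE heap = HeapCreate(0, 4096, 0);

    int *arr = HeapAlloc(heap, HEAP_ZERO_MEMORY, 100 * sizeof(int));
    arr = HeapRealloc(heap, HEAP_ZERO_MEMORY, arr, 200 * sizeof(int));

    SIZE_T n = HeapSize(heap, 0, arr);   /* size of the live allocation */
    (void)n;

    HeapFree(heap, 0, arr);
    HeapDestroy(heap);                   /* releases the whole region */
    return 0;
}
```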


Thread-Local Storage

The fourth way for applications to use memory is through a thread-local storage mechanism. Functions that rely on global or static data typically fail to work properly in a multithreaded environment. For instance, the C run-time function strtok() uses a static variable to keep track of its current position while parsing a string. For two concurrent threads to execute strtok() correctly, they need separate current-position variables. The thread-local storage mechanism allocates global storage on a per-thread basis. It provides both dynamic and static methods of creating thread-local storage. The dynamic method is illustrated in Figure 22.16. To use a thread-local static variable, the application declares the variable as follows to ensure that every thread has its own private copy:

__declspec(thread) DWORD cur_pos = 0;
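The dynamic method shown in Figure 22.16 follows this general shape (the accessor names are illustrative):

```c
#include <windows.h>

DWORD tlsIndex;   /* one index, but each thread sees its own slot */

void tls_init(void)
{
    tlsIndex = TlsAlloc();             /* reserve a slot in every thread */
}

/* Store and fetch a per-thread "current position" value. */
void set_pos(DWORD pos)
{
    TlsSetValue(tlsIndex, (LPVOID)(DWORD_PTR)pos);
}

DWORD get_pos(void)
{
    return (DWORD)(DWORD_PTR)TlsGetValue(tlsIndex);
}

void tls_cleanup(void)
{
    TlsFree(tlsIndex);
}
```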






