Architectural elements




 

To understand the fundamental building blocks of a distributed system, it is necessary to consider four key questions:

• What are the entities that are communicating in the distributed system?
• How do they communicate, or, more specifically, what communication paradigm is used?
• What (potentially changing) roles and responsibilities do they have in the overall architecture?
• How are they mapped on to the physical distributed infrastructure (what is their placement)?
Communicating entities • The first two questions above are absolutely central to an understanding of distributed systems; what is communicating and how those entities communicate together define a rich design space for the distributed systems developer to consider. It is helpful to address the first question from a system-oriented and a
problem-oriented perspective. From a system perspective, the answer is normally very clear in that the entities that communicate in a distributed system are typically processes, leading to the prevailing view of a distributed system as processes coupled with appropriate
interprocess communication paradigms (as discussed, for example, in Chapter 4), with two caveats:
• In some primitive environments, such as sensor networks, the underlying operating systems may not support process abstractions (or indeed any form of isolation), and hence the entities that communicate in such systems are nodes.
• In most distributed system environments, processes are supplemented by threads, so, strictly speaking, it is threads that are the endpoints of communication.

At one level, this is sufficient to model a distributed system, and indeed the fundamental models considered in Section 2.4 adopt this view. From a programming perspective, however, this is not enough, and more problem-oriented abstractions have been proposed:
Objects: Objects have been introduced to enable and encourage the use of object-oriented approaches in distributed systems (including both object-oriented design and object-oriented programming languages). In distributed object-based approaches, a computation consists of a number of interacting objects representing natural units of decomposition for the given problem domain. Objects are accessed
via interfaces, with an associated interface definition language (or IDL) providing a specification of the methods defined on an object. Distributed objects have become a major area of study in distributed systems.
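The separation between an object's interface and its implementation can be sketched in Python using an abstract base class in place of an IDL; the `Account` interface and its methods below are illustrative, not drawn from any particular distributed object system:

```python
from abc import ABC, abstractmethod

# Hypothetical interface, playing the role an IDL specification plays in a
# distributed object system: it names the methods an object offers while
# saying nothing about the implementation behind them.
class Account(ABC):
    @abstractmethod
    def deposit(self, amount: int) -> None: ...

    @abstractmethod
    def balance(self) -> int: ...

# One possible implementation accessed only through that interface.
class SimpleAccount(Account):
    def __init__(self) -> None:
        self._balance = 0

    def deposit(self, amount: int) -> None:
        self._balance += amount

    def balance(self) -> int:
        return self._balance

acct: Account = SimpleAccount()
acct.deposit(100)
```

In a real distributed object system the client would hold only the interface type, with the implementation residing in a remote process.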
Components: Since their introduction, a number of significant problems have been identified with distributed objects, and the use of component technology has emerged as a direct response to such weaknesses. Components resemble objects in that they offer problem-oriented abstractions for building distributed systems and are also accessed through interfaces. The key difference is that components specify not only their (provided) interfaces but also the assumptions they make in terms of other components/interfaces that must be present for a component to fulfill its function – in other words, making all dependencies explicit and providing a more complete contract for system construction. This more contractual approach encourages and enables third-party development of components and also promotes a purer compositional approach to constructing distributed systems by removing hidden dependencies. Component-based middleware often provides additional support for key areas such as deployment and support for server-side programming [Heineman and Councill 2001].

Web services: Web services represent the third important paradigm for the development of distributed systems [Alonso et al. 2004]. Web services are closely related to objects and components, again taking an approach based on encapsulation of behavior and access through interfaces. In contrast, however, web services are intrinsically integrated into the World Wide Web, using web standards to represent and discover services. The World Wide Web Consortium (W3C) defines a web service as:


... a software application identified by a URI, whose interfaces and
bindings are capable of being defined, described and discovered as XML 
artefacts. A Web service supports direct interactions with other software agents using XML-based message exchanges via Internet-based protocols.
In other words, web services are partially defined by the web-based technologies they adopt. A further important distinction stems from the style of use of the technology. Whereas objects and components are often used within an organization to develop tightly coupled applications, web services are generally viewed as complete services
in their own right that can be combined to achieve value-added services, often crossing organizational boundaries and hence achieving business to business integration. Web services may be implemented by different providers and using different underlying technologies.
Communication paradigms • We now turn our attention to how entities communicate in a distributed system, and consider three types of communication paradigm:
interprocess communication;
remote invocation;
indirect communication.

Interprocess communication refers to the relatively low-level support for communication between processes in distributed systems, including message-passing primitives, direct access to the API offered by Internet protocols (socket programming) and support for multicast communication.
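The message-passing primitives underpinning this level can be sketched with the sockets API; here a connected socket pair within one process stands in for two communicating processes, since the send and receive calls are the same ones used across a network:

```python
import socket

# Minimal message passing over the sockets API. socket.socketpair() gives
# two connected endpoints, standing in for sockets at two processes.
left, right = socket.socketpair()

left.sendall(b"hello")        # the "send" primitive
message = right.recv(1024)    # the "receive" primitive

left.close()
right.close()
```

Real interprocess communication would create the endpoints in separate processes and connect them via addresses (host, port) rather than a local pair.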
Remote invocation represents the most common communication paradigm in distributed systems, covering a range of techniques based on a two-way exchange between communicating entities in a distributed system and resulting in the calling of a remote operation, procedure or method:
Request-reply protocols: Request-reply protocols are effectively a pattern imposed on an underlying message-passing service to support client-server computing. In 
particular, such protocols typically involve a pairwise exchange of messages from client to server and then from server back to client, with the first message containing an encoding of the operation to be executed at the server and also an array of bytes holding associated arguments and the second message containing any results of the operation, again encoded as an array of bytes. This paradigm is rather primitive and only really used in embedded systems where performance is paramount. Most distributed systems will elect to use remote procedure calls or remote method invocation, as discussed below, but note that both approaches are supported by underlying request-reply exchanges.

Remote procedure calls: The concept of a remote procedure call (RPC), initially attributed to Birrell and Nelson [1984], represents a major intellectual breakthrough in distributed computing. In RPC, procedures in processes on remote computers can be called as if they were procedures in the local address space. The underlying RPC
system then hides important aspects of distribution, including the encoding and decoding of parameters and results, the passing of messages and the preserving of the required semantics for the procedure call. This approach directly and elegantly supports client-server computing with servers offering a set of operations through a
service interface and clients calling these operations directly as if they were available locally. RPC systems
therefore offer (at a minimum) access and location transparency.
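The stub machinery that achieves this transparency can be sketched as follows; the transport here is an in-process call and the JSON marshalling is an illustrative choice, whereas a real RPC system would generate the stubs, ship the messages over a network and handle failures:

```python
import json

def add(a, b):                     # the "remote" procedure
    return a + b

SERVER_TABLE = {"add": add}        # operations offered by the server

def server_dispatch(request: str) -> str:
    # Server-side stub: unmarshal the request, call the real procedure,
    # marshal the result into a reply message.
    call = json.loads(request)
    result = SERVER_TABLE[call["proc"]](*call["args"])
    return json.dumps({"result": result})

class Stub:
    # Client-side stub: any attribute access becomes a remote call whose
    # name and arguments are marshalled into a request message.
    def __getattr__(self, name):
        def call(*args):
            reply = server_dispatch(json.dumps({"proc": name, "args": args}))
            return json.loads(reply)["result"]
        return call

remote = Stub()
total = remote.add(2, 3)   # looks exactly like a local call
```

The client never sees the marshalling or message exchange, which is the access transparency the text describes.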
Remote method invocation: Remote method invocation (RMI) strongly resembles remote procedure calls but in a world of distributed objects. With this approach, a calling object can invoke a method
in a remote object. As with RPC, the underlying details are generally hidden from the user. RMI implementations may, though, go
further by supporting object identity and the associated ability to pass object identifiers as parameters in remote calls. 
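What RMI adds over RPC, the ability to name objects and pass those names in remote calls, can be sketched with a registry mapping identifiers to objects; the registry and identifiers below are illustrative:

```python
# Hypothetical registry of remote objects, keyed by object identifier.
REGISTRY = {}

def register(obj_id, obj):
    REGISTRY[obj_id] = obj

def invoke(obj_id, method, *args):
    # Invoke a method on a remote object named by its identifier.
    return getattr(REGISTRY[obj_id], method)(*args)

class Printer:
    def render(self, text):
        return f"[{text}]"

class Document:
    def show(self, printer_ref):
        # A remote reference received as a parameter can itself be invoked.
        return invoke(printer_ref, "render", "report")

register("printer-1", Printer())
register("doc-1", Document())
output = invoke("doc-1", "show", "printer-1")
```

Passing `"printer-1"` rather than the object itself mirrors how RMI systems pass object references, not object state, between address spaces.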
The above set of techniques all have one thing in common: communication represents a two-way relationship between a sender and a receiver with senders explicitly directing messages/invocations to the associated receivers. Receivers are also generally aware of
the identity of senders, and in most
cases both parties must exist at the same time. In contrast, a number of techniques have emerged whereby communication is indirect, through a third entity, allowing a strong degree of decoupling between senders and receivers. In particular:
Senders do not need to know who they are sending to (space uncoupling).
Senders and receivers do not need to exist at the same time (time uncoupling).
Indirect communication is discussed in more detail in Chapter 6.
Key techniques for indirect communication include:
Group communication: Group communication is concerned with the delivery of messages to a set of recipients and hence is a multiparty communication paradigm supporting one-to-many communication. Group communication relies on the abstraction of a group which is represented in the system by a group identifier.

 

Recipients elect to receive messages sent to a group by joining the group. Senders then send messages to the group via the group identifier and hence do not need to know the recipients of the message. Groups typically also maintain group membership and include mechanisms to deal with failure of group members.
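The group abstraction can be sketched as follows, with lists standing in for member inboxes; real group communication systems additionally track membership changes and member failures:

```python
from collections import defaultdict

groups = defaultdict(list)            # group identifier -> member inboxes

def join(group_id, inbox):
    # A recipient elects to receive messages by joining the group.
    groups[group_id].append(inbox)

def multicast(group_id, message):
    # The sender names only the group, never the individual recipients.
    for inbox in groups[group_id]:
        inbox.append(message)

a, b = [], []
join("news", a)
join("news", b)
multicast("news", "update 1")   # delivered to both a and b
```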
Publish-subscribe systems: Many systems, such as financial trading systems, can be classified as information-dissemination systems wherein a large number of producers (or publishers) distribute information items of interest (events) to a similarly large number of consumers (or subscribers). It would be complicated and inefficient to employ any of the core communication paradigms discussed above
for this purpose and hence publish-subscribe systems (sometimes also called distributed event-based systems) have emerged to meet this important need [Muhl et al. 2006]. Publish-subscribe systems all share the crucial feature of providing an intermediary service that efficiently ensures information generated by producers is routed to consumers who desire this information. 
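The intermediary role can be sketched as a topic-based broker; the topic names and callback-based delivery below are illustrative, and real systems often also support content-based filtering over event attributes:

```python
class Broker:
    """Intermediary routing published events to interested subscribers."""

    def __init__(self):
        self.subscriptions = {}        # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscriptions.setdefault(topic, []).append(callback)

    def publish(self, topic, event):
        # Publishers never name subscribers; the broker does the routing.
        for callback in self.subscriptions.get(topic, []):
            callback(event)

received = []
broker = Broker()
broker.subscribe("prices/ACME", received.append)
broker.publish("prices/ACME", 101.5)   # routed to the subscriber
broker.publish("prices/OTHER", 99.0)   # no subscriber; dropped
```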

Message queues: Whereas publish-subscribe systems offer a one-to-many style of communication, message queues offer a point-to-point service whereby producer processes can send messages to a specified queue and consumer processes can receive messages from the queue or be notified of the arrival of new messages in the queue. Queues therefore offer an indirection between the producer and consumer processes.
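The point-to-point pattern can be sketched with a local queue acting as the intermediary; in a distributed setting the queue would live in a separate message-queue service, which is what decouples producer and consumer:

```python
import queue

orders = queue.Queue()          # the named queue acting as intermediary

orders.put("order-1")           # a producer sends to the queue
orders.put("order-2")

first = orders.get()            # a consumer receives, in arrival order
```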
Tuple spaces: Tuple spaces offer a further indirect communication service by supporting a model whereby processes can place arbitrary items of structured data, called tuples, in a persistent tuple space and other processes can either read or remove such tuples from the tuple space by specifying patterns of interest. Since the tuple space is persistent, readers and writers do not need to exist at the same time. This style of programming, otherwise known as generative communication, was introduced by Gelernter [1985] as a paradigm for parallel programming. A number of distributed implementations have also been developed, adopting either a client-server-style implementation or a more decentralized peer-to-peer approach.
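A tuple space can be sketched as below, where `None` in a pattern matches any field; the matching rule and the `read`/`take` method names are illustrative of the generative communication model rather than any specific implementation:

```python
class TupleSpace:
    """Persistent store of tuples, retrieved by pattern matching."""

    def __init__(self):
        self.tuples = []

    def write(self, tup):
        self.tuples.append(tup)

    def _matches(self, pattern, tup):
        # None in a pattern field matches any value in that position.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def read(self, pattern):
        # Non-destructive read of the first matching tuple.
        return next(t for t in self.tuples if self._matches(pattern, t))

    def take(self, pattern):
        # Destructive read: the matching tuple is removed from the space.
        t = self.read(pattern)
        self.tuples.remove(t)
        return t

space = TupleSpace()
space.write(("temperature", "room-1", 21))
reading = space.read(("temperature", "room-1", None))
```

Because the space itself holds the tuples, the writer need not still exist when the reader performs the `read`, which is the time uncoupling described above.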
Distributed shared memory: Distributed shared memory (DSM) systems provide an abstraction for sharing data between processes that do not share physical memory.
Programmers are nevertheless presented with a familiar abstraction of reading or writing (shared) data structures as if they were in their own local address spaces, thus presenting a high level of distribution transparency. The underlying infrastructure must ensure a copy is provided in a timely manner and also deal with issues relating
to synchronization and consistency of data.
The architectural choices discussed so far are summarized in Figure 2.2.
Roles and responsibilities • In a distributed system processes – or indeed objects, components or services, including web services (but for the sake of simplicity we use the term process throughout this section) – interact with each other to perform a useful activity, for example, to support a chat session. In doing so, the processes take on given 
roles, and these roles are fundamental in establishing the overall architecture to be adopted. In this section, we examine two architectural styles stemming from the role of

individual processes: client-server and peer-to-peer.
Client-server: This is the architecture that is most often cited when distributed systems are discussed. It is historically the most important and remains the most widely employed. Figure 2.3 illustrates the simple structure in which processes take on the roles
of being clients or servers. In particular, client processes interact with individual server processes in potentially separate host computers in order to access the shared resources that they manage.
Servers may
in turn be clients of other servers, as the figure indicates. For example, a web server is often a client of a local file server that manages the files in which the web pages are stored. Web servers and most other Internet services are clients of the DNS service, which translates Internet domain names to network addresses.
Another web-related example concerns search engines, which enable users to look up summaries of information available on web pages at sites throughout the Internet. These summaries are made by programs called web crawlers, which run in the background at
a search engine site using HTTP requests to access web servers throughout the Internet.
Thus a search engine is both a server and a client: it responds to queries from browser clients and it runs web crawlers that act as clients of other web servers. In this example, the server tasks (responding to user queries) and the crawler tasks (making requests to other web servers) are entirely independent; there is little need to synchronize them and they may run concurrently. In fact, a typical search engine would normally include many concurrent threads of execution, some serving its clients and others running web crawlers. In Exercise 2.5, the reader is invited to consider the only synchronization issue that does arise for a concurrent search engine of the type outlined here.

Peer-to-peer: In this architecture all of the processes involved in a task or activity play similar roles, interacting cooperatively as peers without any distinction between client and server processes or the computers on which they run. In practical terms, all participating processes run the same program and offer the same set of interfaces to each other. While the client-server model offers a direct and relatively simple approach to the sharing of data and other resources, it scales poorly. The centralization of service provision and management implied by placing a service at a single address does not scale well beyond the capacity of the computer that hosts the service and the bandwidth of its network connections.
A number of placement strategies have evolved in response to this problem (see the discussion of placement below), but none of them addresses the fundamental issue – the need to distribute shared resources much more widely in order to share
the computing and communication loads incurred in accessing them amongst a much larger number of computers and network links. The key insight that led to the development of peer-to-peer systems is that the network and computing resources owned by the users of a service could also be put to use to support that service. This has the useful consequence that the resources available to run the service grow with the number of users.
The hardware capacity and operating system functionality of today’s desktop computers exceeds that of yesterday’s servers, and the majority are equipped with always-on broadband network connections. The aim of the peer-to-peer architecture is
to exploit the resources (both data and hardware) in a large number of participating computers for the
fulfilment of a given task or activity. Peer-to-peer applications and systems have been successfully constructed that enable tens or hundreds of thousands of
computers to provide access to data and other resources that they collectively store and manage. One of the earliest instances was the Napster application for sharing digital music files. Although Napster was not a pure peer-to-peer architecture (and also gained
notoriety for reasons beyond its architecture), its demonstration of feasibility has resulted in the development of the architectural model in many valuable directions. A more recent and widely used instance is the BitTorrent file-sharing system.

Figure 2.4a illustrates the form of a peer-to-peer application. Applications are composed of large numbers of peer processes running on separate computers and the pattern of communication between them depends entirely on application requirements.
A large number of data objects are shared, an individual computer holds only a small part of the application database, and the storage, processing
and communication loads for access to objects are distributed across many computers and network links. Each
object is replicated in several computers to further distribute the load and to provide resilience in the event of disconnection of individual computers (as is inevitable in the large, heterogeneous networks at which peer-to-peer systems are aimed). The need to place individual objects and retrieve them and to maintain replicas amongst many
computers renders this architecture substantially more complex than the client-server architecture.
Placement • The final issue to be considered is how entities such as objects or services 
map on to the underlying physical distributed infrastructure which will consist of a potentially large number of machines interconnected by a network of arbitrary complexity. Placement is crucial in terms of determining the properties of the distributed system, most obviously with regard to performance but also to other aspects, such as reliability and security.
The question of where to place a given client or server in terms of machines and processes within machines is a matter of careful design. Placement needs to take into account the patterns of communication between entities, the reliability of given
machines and their current loading, the quality of communication between different machines and so on. Placement must be determined with strong application knowledge, and there are few universal guidelines
to obtaining an optimal solution. We therefore
focus mainly on the following placement strategies, which can significantly alter the characteristics of a given design (although we return to the key issue of mapping to physical infrastructure in Section 2.3.2, where we look at tiered architecture):

• mapping of services to multiple servers;
• caching;
• mobile code;
• mobile agents.
Mapping of services to multiple servers: Services may be implemented as several server processes in separate host computers interacting as necessary to provide a service to client processes (Figure 2.4b). The servers may partition the set of objects on which the service is based and distribute those objects between themselves, or they may maintain replicated copies of them on several hosts. These two options are illustrated by the following examples.
The Web provides a common example of partitioned data in which each web server manages its own set of resources. A user can employ a browser to access a resource at any one of the servers.
An example of a service based on replicated data is the Sun Network Information Service (NIS), which is used to enable all the computers on a LAN to access the same user authentication data when users log in. Each NIS server has its own replica of a common password file containing a list of users’ login names and encrypted passwords.
Chapter 18 discusses techniques for replication in detail.
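The partitioning option can be sketched with hash-based placement, so that any client can compute which server manages a given object; the server names and the use of the first digest byte are illustrative choices:

```python
import hashlib

# Hypothetical set of servers that partition the service's objects.
SERVERS = ["server-a", "server-b", "server-c"]

def placement(key: str) -> str:
    # Hash the object key and map it deterministically to one server,
    # so every client independently computes the same placement.
    digest = hashlib.sha256(key.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

home = placement("/index.html")   # the server managing this object
```

Production systems typically use consistent hashing instead of a simple modulus, so that adding or removing a server relocates only a fraction of the objects.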
A more closely coupled type of multiple-server architecture is the cluster, as introduced in Chapter 1. A cluster is constructed from up to thousands of commodity processing boards, and service processing can be partitioned or replicated between them.
Caching: A cache is a store of recently used data objects that is closer to one client or a particular set of clients than the objects themselves. When a new object is received from a server it is added to the local cache store, replacing some existing objects if necessary.
When an object is needed by a client process, the caching service first checks the cache and supplies the object from there if an up-to-date copy is available. If not, an up-to-date 
copy is fetched. Caches may be co-located with each client or they may be located in a proxy server that can be shared by several clients.
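The check-then-fetch behavior just described can be sketched as follows; the version tags stand in for whatever freshness check the protocol uses (for the Web, HTTP validation against the origin server), and the server table is illustrative:

```python
# Hypothetical origin server: object key -> (version, content).
SERVER = {"page": ("v2", "<html>new</html>")}

cache = {}        # local cache: key -> (version, content)
fetches = 0       # how many times we went to the server

def get(key):
    global fetches
    version, _ = SERVER[key]
    if key in cache and cache[key][0] == version:
        return cache[key][1]            # up-to-date copy: serve from cache
    fetches += 1
    cache[key] = SERVER[key]            # otherwise fetch and cache it
    return cache[key][1]

get("page")   # first access: fetched from the server
get("page")   # second access: served from the cache
```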
Caches are used extensively in practice. Web browsers maintain a cache of recently visited web pages and other web resources in the client’s local file system, using a special HTTP request to check with the original server that cached pages are up-to-date before displaying them. Web proxy servers (Figure 2.5) provide a shared cache of

web resources for the client machines at a site or across several sites. The purpose of proxy servers is to increase the availability and performance of the service by reducing the load on the wide area network and web servers. Proxy servers can take on other roles;
for example, they may be used to access remote web servers through a firewall.
Mobile code: Chapter 1 introduced mobile code. Applets are a well-known and widely used example of mobile code – the user running a browser selects a link to an applet whose code is stored on a web server; the code is downloaded to the browser and runs there, as shown in Figure 2.6. An advantage of running the downloaded code locally is that it can give
good interactive response since it does not suffer from the delays or variability of bandwidth associated with network communication.
Accessing services means running code that can invoke their operations. Some services are likely to be so standardized that we can access them with an existing and well-known application – the Web is the most common example of this, but even there, some
web sites use functionality not found in standard browsers and require the
downloading of additional code. The additional code may, for example, communicate with the server. Consider an application that requires that users be kept up-to-date with changes as they occur at an information source
in the server. This cannot be achieved by
normal interactions with the web server, which are always initiated by the client. The 
solution is to use additional software that operates in a manner often referred to as a push model – one in which the server instead of the client initiates interactions. For example,
a stockbroker might provide a customized service to notify customers of changes in the prices of shares; to use the service, each customer would have to download a special applet that receives updates from the broker’s server, displays them to the user and perhaps performs automatic buy and sell operations triggered by conditions set up by the customer and stored locally
in the customer’s computer.
Mobile code is a potential security threat to the local resources in the destination computer. Therefore browsers give applets limited access to local resources, using a scheme discussed in Section 11.1.1.
Mobile agents: A mobile agent is a running program (including both code and data) that travels from one computer to another in a network carrying out a task on someone’s behalf, such as collecting information, and eventually returning with the results. A

mobile agent may make many invocations to local resources at each site it visits. Compared with a static client making remote invocations to some resources, possibly transferring large amounts of data, this offers a reduction in communication cost and time through the replacement of remote invocations with local ones. Mobile agents might be used to install and maintain software on the computers within an organization or to compare the prices of products from a number of vendors by visiting each vendor’s site and performing a series of database operations. An early example of a similar idea is the so-called worm program developed at Xerox PARC

[Shoch and Hupp 1982], which was designed to make use of idle computers in order to carry out intensive computations.
Mobile agents (like mobile code) are a potential security threat to the resources in computers that they visit. The environment receiving a mobile agent should decide which of the local resources it should be allowed to use, based on the identity of the user on whose behalf the agent is acting – their identity must be included in a secure way with
the code and data of the mobile agent. In addition, mobile agents can themselves be vulnerable – they may not be able to complete their task if they are refused access to the information they need. The tasks performed by mobile agents can be performed by other
means. For example, web crawlers that need to access resources at web servers throughout the Internet work quite successfully by making remote invocations to server processes. For these reasons, the applicability of mobile agents may be limited.


 

