Architectural patterns



Architectural patterns build on the more primitive architectural elements discussed above and provide composite recurring structures that have been shown to work well in given circumstances. They are not necessarily complete solutions in themselves, but rather offer partial insights that, when combined with other patterns, lead the designer to a solution for a given problem domain.
This is a large topic, and many architectural patterns have been identified for distributed systems. In this section, we present several key architectural patterns in distributed systems, including layering, tiered architectures and the related concept of thin clients (including the specific mechanism of virtual network computing). We also examine web services as an architectural pattern and give pointers to others that may be applicable in distributed systems.
Layering • The concept of layering is a familiar one and is closely related to abstraction. In a layered approach, a complex system is partitioned into a number of layers, with a given layer making use of the services offered by the layer below. A given layer, therefore, offers a software abstraction, with higher layers being unaware of
implementation details, or indeed of any other layers beneath them.
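To make the idea concrete, the following minimal sketch shows three layers in which each layer calls only the services of the layer directly beneath it. The layer names and the in-memory "network" are illustrative assumptions, not part of any particular system.

```javascript
// A minimal sketch of layering: each layer exposes a small service
// interface and calls only the layer directly beneath it.

// Lowest layer: raw message transport (stubbed here as an in-memory echo).
const networkLayer = {
  send(bytes) { return bytes; }            // pretend round trip
};

// Middle layer: adds an abstraction (request/reply with serialization),
// using only the services of the layer below.
const middlewareLayer = {
  call(name, arg) {
    const reply = networkLayer.send(JSON.stringify({ name, arg }));
    return JSON.parse(reply);              // higher layers never see raw bytes
  }
};

// Top layer: application code, unaware of anything beneath the middleware.
const applicationLayer = {
  greet(who) { return middlewareLayer.call("greet", who).arg; }
};

console.log(applicationLayer.greet("world")); // world
```

Note that the application layer could be moved onto a different transport by replacing only the lowest layer; nothing above it would change.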
In terms of distributed systems, this equates to a vertical organization of services into service layers. A distributed service can be provided by one or more server processes, interacting with each other and with client processes in order to maintain a consistent system-wide view of the service’s resources. For example, a network time service is implemented on the Internet based on the Network Time Protocol (NTP) by server processes running on hosts throughout the Internet that supply the current time to any client that requests it and adjust their version of the current time as a result of

interactions with each other. Given the complexity of distributed systems, it is often helpful to organize such services into layers. We present a common view of a layered architecture in Figure.
Figure introduces the important terms platform and middleware, which we define as follows:
A platform for distributed systems and applications consists of the lowest-level hardware and software layers. These low-level layers provide services to the layers above them, which are implemented independently in each computer, bringing the system’s programming interface up to a level that facilitates communication and coordination between processes. 

Intel x86/Windows, Intel x86/Solaris, Intel x86/Mac OS X, Intel x86/Linux and ARM/Symbian are major examples.
Middleware was defined in Section 1.5.1 as a layer of software whose purpose is to mask heterogeneity and to provide a convenient programming model to application programmers. Middleware is represented by processes or objects in a set of computers that interact with each other to implement communication and
resource-sharing support for distributed applications. It is concerned with providing useful building blocks for the construction of software components that can work with one another in a distributed system. In particular, it raises the level of the communication activities of application programs through the support of abstractions such as remote method invocation; communication between a group of processes; notification of events; the partitioning, placement and retrieval of shared data objects amongst cooperating computers; the replication of shared data objects; and the transmission of multimedia data in real time. 
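One of the middleware abstractions listed above, notification of events, can be sketched as follows. In real middleware the subscribers would live in different processes; here they are plain callbacks within one process, and all names are invented for illustration.

```javascript
// Hedged sketch of one middleware building block: event notification
// between otherwise decoupled components.
class EventNotifier {
  constructor() { this.subscribers = new Map(); }  // topic -> callbacks
  subscribe(topic, callback) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push(callback);
  }
  publish(topic, event) {
    // Deliver the event to every subscriber of the topic; publisher and
    // subscribers never refer to each other directly.
    for (const cb of this.subscribers.get(topic) || []) cb(event);
  }
}

const notifier = new EventNotifier();
const received = [];
notifier.subscribe("score-update", e => received.push(e));
notifier.publish("score-update", { match: "A v B", score: "1-0" });
console.log(received.length); // 1
```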
Tiered architecture • Tiered architectures are complementary to layering. Whereas layering deals with the vertical organization of services into layers of abstraction, tiering is a technique to organize functionality of a given layer and place this functionality into 
appropriate servers and, as a secondary consideration, on to physical nodes. This technique is most commonly associated with the organization of applications and services, as in the figure above, but it also applies to all layers of a distributed systems architecture.

Let us first examine the concepts of two- and three-tiered architecture. To illustrate this, consider the functional decomposition of a given application, as follows:

• the presentation logic, which is concerned with handling user interaction and updating the view of the application as presented to the user;
• the application logic, which is concerned with the detailed application-specific processing associated with the application (also referred to as the business logic, although the concept is not limited only to business applications);
• the data logic, which is concerned with the persistent storage of the application, typically in a database management system.

Now, let us consider the implementation of such an application using client-server technology. The associated two-tier and three-tier solutions are presented together, for comparison, in Figure 2.8 (a) and (b), respectively.

In the two-tier solution, the three aspects mentioned above must be partitioned into two processes, the client and the server. This is most commonly done by splitting the application logic, with some residing in the client and the remainder in the server (although other solutions are also possible). The advantage of this scheme is low latency in terms of interaction, with only one exchange of messages to invoke an operation. The disadvantage is the splitting of application logic across a process boundary, with the consequent restriction on which parts of the logic can be directly invoked from which other parts.

In the three-tier solution, there is a one-to-one mapping from logical elements to physical servers and hence, for example, the application logic is held in one place, which in turn can enhance maintainability of the software. Each tier also has a well-defined role; for example, the third tier is simply a database offering a (potentially standardized)
relational service interface. The first tier can also be a simple user interface allowing intrinsic support for thin clients (as discussed below). The drawbacks are the added complexity of managing three servers and also the added network traffic and latency associated with each operation.
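The three-way functional decomposition behind these tiers can be sketched as three separate modules, each depending only on the tier below it. The function names and the in-memory "database" here are assumptions made purely for the sake of a small runnable example.

```javascript
// Tier 3: data logic -- persistent storage (stubbed as an in-memory map).
const dataLogic = {
  store: new Map([["account-1", 100]]),
  read(key) { return this.store.get(key); },
  write(key, value) { this.store.set(key, value); }
};

// Tier 2: application (business) logic -- application-specific processing.
const applicationLogic = {
  deposit(account, amount) {
    const balance = dataLogic.read(account);
    dataLogic.write(account, balance + amount);
    return dataLogic.read(account);
  }
};

// Tier 1: presentation logic -- formats results for display to the user.
const presentationLogic = {
  showDeposit(account, amount) {
    return `New balance: ${applicationLogic.deposit(account, amount)}`;
  }
};

console.log(presentationLogic.showDeposit("account-1", 50)); // New balance: 150
```

In a three-tier deployment each of these modules would run on its own server; in a two-tier deployment the application logic would be split between the client and server processes.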
Note that this approach generalizes to n-tiered (or multi-tier) solutions where a given application domain is partitioned into n logical elements, each mapped to a given server element. As an example, Wikipedia, the web-based publicly editable 
encyclopedia, adopts a multi-tier architecture to deal with the high volume of web requests (up to 60,000 page requests per second).

The role of AJAX • In Section 1.6 we introduced AJAX (Asynchronous Javascript And XML) as an extension to the standard client-server style of interaction used in the World Wide Web. AJAX meets the need for fine-grained communication between a Javascript front-end program running in a web browser and a server-based back-end program holding data describing the state of the application. To recapitulate, in the standard web style of interaction a browser sends an HTTP request to a server for a page, image or other resource with a given URL. The server replies by sending an entire page that is either read from a file on the server or generated by a program, depending on which type


of the resource is identified in the URL. When the resultant content is received at the client, the browser presents it according to the relevant display method for its MIME type (text/html, image/jpg, etc.). Although a web page may be composed of several items of
content of different types, the entire page is composed and presented by the browser in the manner specified in its HTML page definition. This standard style of interaction constrains the development of web applications in several significant ways:

• Once the browser has issued an HTTP request for a new web page, the user is unable to interact with the page until the new HTML content is received and presented by the browser. This time interval is indeterminate, because it is subject to network and server delays.
• In order to update even a small part of the current page with additional data from the server, an entire new page must be requested and displayed. This results in a delayed response to the user, additional processing at both the client and the server and redundant network traffic.
• The contents of a page displayed at a client cannot be updated in response to changes in the application data held at the server.
The introduction of Javascript, a cross-platform and cross-browser programming language that is downloaded and executed in the browser, constituted a first step towards the removal of those constraints. Javascript is a general-purpose language enabling both
user interface and application logic to be programmed and executed in the context of a browser window.
AJAX is the second innovative step that was needed to enable major interactive web applications to be developed and deployed. It enables Javascript front-end programs to request new data directly from server programs. Any data items can be requested and the current page updated selectively to show the new values. Indeed, the
front end can react to the new data in any way that is useful for the application. Many web applications allow users to access and update substantially shared datasets that may be subject to change in response to input from other clients or data feeds received by a server. They require a responsive front-end component running in
each client browser to perform user interface actions such as menu selection, but they 
also require access to a dataset that must be held at the server to enable sharing. Such datasets are generally too large and too dynamic to allow the use of an architecture based on the downloading of a copy of the entire application state to the client at the start of a user’s session for manipulation by the client.
AJAX is the ‘glue’ that supports the construction of such applications; it provides a communication mechanism enabling front-end components running in a browser to issue requests and
receive results from back-end components running on a server.
Clients issue requests through the Javascript XmlHttpRequest object, which manages an HTTP exchange (see Section 1.6) with a server process. Because XmlHttpRequest has a complex API that is also somewhat browser-dependent, it is usually accessed through one of the many Javascript libraries that are available to support the development of web applications.
In Figure, we illustrate its use in the Prototype.js Javascript library. The example is an excerpt from a web application that displays a page listing up-to-date scores for soccer matches. Users may request updates of scores for individual games by clicking on the relevant line of the page, which executes the first line of the


example. The Ajax.Request object sends an HTTP request to a scores.php program located at the same server as the web page. The Ajax.Request object then returns control, allowing the browser to continue to respond to other user actions in the same window or other windows. When the scores.php program has obtained the latest score, it returns it in an HTTP response. The Ajax.Request object is then reactivated; it invokes the updateScore function (because it is the onSuccess action), which parses the result and inserts the score at the relevant position in the current page. The remainder of the page remains unaffected and is not reloaded.
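Since the figure itself is not reproduced here, the following hedged sketch shows roughly what such code looks like. The response format ("TeamA 2 - 1 TeamB"), the URL and the element into which the score is inserted are all assumptions; only the pure parsing helper is exercised directly.

```javascript
// Pure helper: parse the text returned by scores.php into a score object.
// The "Home 2 - 1 Away" format is an assumed example, not a real API.
function parseScore(responseText) {
  const m = responseText.match(/^(.+?) (\d+) - (\d+) (.+)$/);
  return { home: m[1], homeGoals: Number(m[2]),
           away: m[4], awayGoals: Number(m[3]) };
}

// In a browser with Prototype.js loaded, the request would look roughly like:
//   new Ajax.Request('scores.php', {
//     method: 'get',
//     onSuccess: transport => updateScore(transport.responseText)
//   });
// updateScore would then insert the parsed score into the relevant element
// of the current page, without reloading anything else.
function updateScore(responseText) {
  const score = parseScore(responseText);
  return `${score.home} ${score.homeGoals}, ${score.away} ${score.awayGoals}`;
}

console.log(updateScore("Arsenal 2 - 1 Chelsea")); // Arsenal 2, Chelsea 1
```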
This illustrates the type of communication used between Tier 1 and Tier 2 components. Although Ajax.Request (and the underlying
XmlHttpRequest object) offers both synchronous and asynchronous communication, the asynchronous version is almost always used because the effect on the user interface of delayed server responses
is unacceptable.
Our simple example illustrates the use of AJAX in a two-tier application. In a 
three-tier application, the server component (scores.php in our example) would send a request to a data manager component (typically an SQL query to a database server) for the required data. That request would be synchronous, since there is no reason to return control to the server component until the request is satisfied.
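That synchronous tier-2 to tier-3 interaction can be sketched as follows. The query function below is a stub standing in for a real database driver, and the table, column and identifier names are invented for illustration.

```javascript
// Tier 3 stub: a "database" answering one fixed query shape synchronously.
const scoresTable = new Map([["match-42", "2 - 1"]]);
function queryDatabase(sql, param) {
  // A real server component would block here until the DBMS replies;
  // there is no reason to return control before the request is satisfied.
  if (sql === "SELECT score FROM scores WHERE id = ?") {
    return scoresTable.get(param);
  }
  throw new Error("unknown query");
}

// Tier 2: the server component (the scores.php role in the example above).
function getScore(matchId) {
  return queryDatabase("SELECT score FROM scores WHERE id = ?", matchId);
}

console.log(getScore("match-42")); // 2 - 1
```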
The AJAX mechanism constitutes an effective technique for the construction of responsive web applications in the context of the indeterminate latency of the Internet, and it has been very widely deployed. The Google Maps application [
II] is an outstanding example. Maps are displayed as an array of contiguous 256 x 256-pixel images (called tiles). When the map is moved the visible tiles are repositioned by Javascript code in the browser and additional tiles needed to fill the visible area are
requested with an AJAX call to a Google server. They are displayed as soon as they are received, but the browser continues to respond to user interaction while they are awaited.
Thin clients • The trend in distributed computing is towards moving complexity away from the end-user device towards services
in the Internet. This is most apparent in the move towards cloud computing (discussed in Chapter 1) but can also be seen in tiered architectures, as discussed above. This trend has given rise to an interest in the concept of a thin client, enabling access to sophisticated networked services, provided for example by a cloud solution, with few assumptions or demands on the client device. More
specifically, the term thin client refers to a software layer that supports a window-based 
user interface that is local to the user while executing application programs or, more generally, accessing services on a remote computer. For example, the figure 'Thin clients and compute servers' shows a thin client on a networked device accessing a
compute server over the Internet. The advantage of this approach is that potentially simple local devices (including, for example, smart phones and other resource-constrained devices) can be significantly enhanced with a plethora of networked services and capabilities. The main drawback of the thin client architecture is in highly interactive graphical activities such as CAD and image processing, where the delays experienced by users are increased to unacceptable levels by the need to transfer image and vector information between the thin client and the application process, due to both network and operating system latencies.

This concept has led to the emergence of virtual network computing (VNC). This technology was first introduced by researchers at the Olivetti and Oracle Research Laboratory [Richardson et al. 1998]; the initial concept has now evolved into implementations such as RealVNC [], which is a software solution,
and Adventiq [], which is a hardware-based solution supporting the transmission of keyboard, video and mouse events over IP (KVM-over-IP). Other VNC 
implementations include Apple Remote Desktop, TightVNC and Aqua Connect. The concept is straightforward, providing remote access to graphical user interfaces. In this solution, a VNC client (or viewer) interacts with a VNC server through a VNC protocol. The protocol operates at a primitive level in terms of graphics support, based on framebuffers and featuring one operation: the placement of a rectangle of pixel data at a given position on the screen (some solutions, such as XenApp from Citrix
operate at a higher level in terms of window operations []). This low-level approach ensures that the protocol will work with any operating system or application.
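The single protocol operation described above can be sketched directly. The framebuffer representation below (a flat, row-major array of pixel values) is an assumption made for the sake of a small runnable example; real VNC implementations add pixel formats and encodings on top of this idea.

```javascript
// Hedged sketch of the one VNC-style operation: place a rectangle of
// pixel data at a given (x, y) position in the framebuffer.
function makeFramebuffer(width, height) {
  return { width, height, pixels: new Array(width * height).fill(0) };
}

function putRect(fb, x, y, rectWidth, rectHeight, pixelData) {
  // Copy the rectangle row by row into the flat, row-major pixel array.
  for (let row = 0; row < rectHeight; row++) {
    for (let col = 0; col < rectWidth; col++) {
      fb.pixels[(y + row) * fb.width + (x + col)] =
        pixelData[row * rectWidth + col];
    }
  }
}

const fb = makeFramebuffer(4, 4);
putRect(fb, 1, 1, 2, 2, [9, 9, 9, 9]);   // a 2x2 update at position (1,1)
console.log(fb.pixels[5], fb.pixels[0]); // 9 0
```

Because the operation knows nothing about windows, widgets or fonts, it works unchanged for any operating system or application drawing into the screen.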
Although the protocol is simple, the implication is significant: users are able to access their computer facilities from anywhere, on a wide range of devices, representing a significant step forward in mobile computing.
Virtual network computing has superseded network computers, a previous attempt to realise thin client solutions through simple and inexpensive hardware devices that are completely reliant on networked services, downloading their operating system
and any application software needed by the user from a remote file server. Since all the application data and code is stored by a file server, the users may migrate from one network computer to another. In practice, virtual network computing has proved to be a more flexible solution and now dominates the marketplace.
Other commonly occurring patterns • As mentioned above, a large number of architectural patterns have now been identified and documented (see, for example, Buschmann et al. [2007]). Here are a few key examples:
The proxy pattern is a commonly recurring pattern in distributed systems designed particularly to support location transparency in remote procedure calls or remote method invocation. With this approach, a proxy is created in the local address space to represent the remote object. This proxy offers exactly the same interface
as the remote object, and the programmer makes calls on this proxy object and hence does not need to be aware of the distributed nature of the interaction. The role of proxies in supporting such location transparency in RPC and RMI is discussed further in Chapter 5. Note that proxies can also be used to encapsulate other functionality, such as the placement policies of replication or caching.
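A minimal sketch of the proxy pattern follows. The "remote" object and the transport are stubbed in-process here, and all names are illustrative; in a real RMI system the invoke step would marshal the call over the network to the remote address space.

```javascript
// The remote object, living (conceptually) in another address space.
const remoteAccount = {
  balance: 100,
  getBalance() { return this.balance; }
};

// Stand-in for the middleware transport: forwards a named invocation.
function invokeRemotely(target, method, args) {
  return target[method](...args);   // real middleware would marshal this
}

// The local proxy: offers exactly the same interface, but every call is
// routed through the transport, so callers never see the distribution.
function makeProxy(target) {
  return new Proxy({}, {
    get: (_, method) => (...args) => invokeRemotely(target, method, args)
  });
}

const account = makeProxy(remoteAccount);
console.log(account.getBalance()); // 100 -- caller is unaware of distribution
```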
The use of brokerage in web services can usefully be viewed as an architectural pattern supporting interoperability in potentially complex distributed infrastructures. In particular, this pattern consists of the trio of service provider, service requester and service broker (a service that matches services provided to those requested), as shown in Figure 2.11. This brokerage pattern is replicated in many areas of distributed systems, for example with the registry in Java RMI and the naming service in CORBA (as discussed in Chapters 5 and 8, respectively).
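The provider/requester/broker trio can be sketched with the broker as a simple in-memory registry, much as the Java RMI registry matches names to remote object references. The service names and interfaces below are invented for illustration.

```javascript
// Hedged sketch of the brokerage pattern.
class Broker {
  constructor() { this.services = new Map(); }
  register(name, service) {            // used by the service provider
    this.services.set(name, service);
  }
  lookup(name) {                       // used by the service requester
    const s = this.services.get(name);
    if (!s) throw new Error(`no provider for ${name}`);
    return s;
  }
}

const broker = new Broker();
broker.register("time", { now: () => "12:00" });  // service provider
const timeService = broker.lookup("time");        // service requester
console.log(timeService.now()); // 12:00
```

The requester depends only on the service name and interface, never on the provider's location, which is the interoperability property the pattern exists to support.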
Reflection is a pattern that is increasingly being used in distributed systems as a means of supporting both introspection (the dynamic discovery of properties of the system) and intercession (the ability to dynamically modify structure or behavior). For example, the introspection capabilities of Java are used effectively in the implementation of RMI to provide generic dispatching (as
discussed in Section 5.4.2). In a reflective system, standard service interfaces are available at the base level, but a meta-level interface is also available providing access to the components and their parameters involved in the realization of the services. A variety of techniques are generally available at the meta-level, including the ability to intercept incoming messages or invocations, to dynamically discover the interface offered by a given object and to discover and
adapt the underlying architecture of the system. Reflection has been applied in a variety of areas in distributed systems, particularly within the field of reflective middleware, for example to support more configurable and reconfigurable middleware architectures [Kon et al. 2002]. 
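Both aspects of reflection can be sketched using Javascript's own reflective facilities, which are simpler than but analogous to the Java reflection used by RMI for generic dispatching. The service object and the call-counting meta-level below are invented examples.

```javascript
const service = {
  add(a, b) { return a + b; },
  mul(a, b) { return a * b; }
};

// Introspection: dynamically discover the interface offered by the object.
const operations = Object.keys(service);            // ["add", "mul"]

// Intercession: intercept incoming invocations at a meta-level, here to
// count calls without changing the base-level service code at all.
const callCounts = {};
const reflective = new Proxy(service, {
  get(target, name) {
    return (...args) => {
      callCounts[name] = (callCounts[name] || 0) + 1;
      return target[name](...args);                 // generic dispatch
    };
  }
});

console.log(operations.join(","), reflective.add(2, 3)); // add,mul 5
```

Reflective middleware applies the same two moves at a larger scale, exposing the components that realize a service so they can be inspected and reconfigured at runtime.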

