UNIT-II
Course Contents:
Unit 2. Software architecture models: structural models, framework models, dynamic models, process models.
Architecture styles: dataflow architecture, pipes and filters architecture, call-and-return architecture, data-centered architecture, layered architecture, agent-based architecture, micro-services architecture, reactive architecture, representational state transfer architecture, etc.
------------------------------------------------------------------------------------------------
Software Architecture Models:
Structural Models: Structural models of software display the organization of a system in terms of the components that make up that system and their relationships. Structural models may be static models, which show the structure of the system design. Structural models of a system are required when discussing and designing the system architecture. Architectural design is a particularly important topic in software engineering, and UML component, package, and deployment diagrams may all be used when presenting architectural models. The structural model represents the framework for the system, and this framework is the place where all other components exist.
Structural modelling captures the static features of a system. It consists of the following −
• Class diagrams
• Object diagrams
• Deployment diagrams
• Package diagrams
• Composite structure diagrams
• Component diagrams
Figure 2.1 (a): Structural Modelling Core Elements
Figure 2.1 (b): Structural Modelling Core Relationships
Dataflow Architecture: In dataflow architecture, the software system is viewed as a series of transformations applied to successive pieces of input data.
• It is a part of the Von-Neumann model of computation, which consists of a single program counter, sequential execution, and a control flow that determines the fetch, execute, and commit order.
• Its main objective is to achieve the qualities of reuse and modifiability. It is suitable for applications that involve a well-defined series of independent data transformations or computations on orderly defined input and output, such as compilers and business data processing applications.
There are three types of execution sequences between modules−
1. Batch sequential
2. Pipe and filter or non-sequential pipeline mode
3. Process control
1. Batch Sequential:
• Batch sequential is a classical data processing model; in the 1970s, compilation itself was regarded as such a sequential batch process.
• In batch sequential, separate programs are executed in order and the data is passed as an aggregate from one program to the next.
• It provides simpler divisions into subsystems, and each subsystem can be an independent program working on input data and producing output data.
• The main disadvantage of batch sequential architecture is that it does not provide concurrency or an interactive interface; it has high latency and low throughput.
Advantages
• Provides simpler divisions into subsystems.
• Each subsystem can be an independent program working on input data and producing output data.
Disadvantages
• High latency and low throughput.
• Does not provide concurrency or an interactive interface.
• External control is required for implementation.
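A minimal sketch of the batch sequential style in Python is given below (the stages and record fields are hypothetical); each stage runs to completion on the whole data set and hands its entire output to the next stage, so there is no concurrency or interactivity.

def validate(records):
    # Stage 1: keep only well-formed records.
    return [r for r in records if "amount" in r]

def enrich(records):
    # Stage 2: add a derived field to every record.
    return [{**r, "tax": r["amount"] * 0.1} for r in records]

def report(records):
    # Stage 3: aggregate the whole batch into a summary.
    return {"count": len(records), "total": sum(r["amount"] for r in records)}

if __name__ == "__main__":
    batch = [{"amount": 100}, {"amount": 250}, {"bad": True}]
    # Each stage must finish completely before the next one starts.
    print(report(enrich(validate(batch))))   # {'count': 2, 'total': 350}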
2. Pipe and Filter Architecture:
This approach lays emphasis on the incremental transformation of data by successive components. In this approach, the flow of the system is driven by data, and the whole system is decomposed into components of data sources, filters, pipes, and data sinks. The connections between modules are data streams, which are first-in/first-out buffers that can carry streams of bytes, characters, or any other type of such kind. The main feature of this architecture is its concurrent and incremental execution.
Pipe represents −
• A pipe is a connector which passes the data from one filter to the next.
• A pipe is a directional stream of data, implemented by a data buffer that stores the data until the next filter has time to process it.
• It transfers the data from one data source to one data sink.
• Pipes are stateless data streams.
Filter represents −
• A filter is a component: an independent entity that acts as a data stream transformer or stream transducer.
• It transforms and refines the input data stream, processes it, and writes the transformed data stream over a pipe for the next filter to process.
• It works in an incremental mode, in which it starts working as soon as data arrives through a connected pipe.
• It has interfaces from which a set of inputs can flow in and a set of outputs can flow out.
There are two types of filters − active filters and passive filters.
Active filter: An active filter lets connected pipes pull data in and push out the transformed data. It operates with a passive pipe, which provides read/write mechanisms for pulling and pushing. This mode is used in the UNIX pipe and filter mechanism.
Passive filter: A passive filter lets connected pipes push data in and pull data out. It operates with an active pipe, which pulls data from a filter and pushes data into the next filter. It must provide a read/write mechanism.
Figure 2.5: Pipes and Filter mechanism
All filters are processes that run concurrently, which means they can run as different threads or coroutines, or be located on different machines entirely. Each pipe is connected to a filter and has its own role in the function of the filter. The filters are robust in that pipes can be added and removed at runtime. A filter reads the data from its input pipes, performs its function on this data, and places the result on all of its output pipes. If there is insufficient data in the input pipes, the filter simply waits.
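As an illustration, here is a minimal sketch of pipes and filters using Python generators (the filter names and data are hypothetical); each filter consumes items from its input pipe incrementally and yields transformed items downstream as soon as they are ready.

def source(lines):
    # Data source: emits items one at a time into the first pipe.
    for line in lines:
        yield line

def strip_blanks(pipe):
    # Filter 1: drop empty lines as they arrive.
    for line in pipe:
        if line.strip():
            yield line

def to_upper(pipe):
    # Filter 2: transform each item incrementally.
    for line in pipe:
        yield line.upper()

def sink(pipe):
    # Data sink: consumes the final stream.
    for line in pipe:
        print(line)

if __name__ == "__main__":
    data = ["hello", "", "pipes and filters"]
    # Pipes are formed by connecting one filter's output to the next one's input.
    sink(to_upper(strip_blanks(source(data))))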
Advantages:
• Provides concurrency and high throughput for extensive data processing.
• Provides reusability and simplifies system maintenance.
• Provides modifiability and low coupling between filters.
• Provides flexibility by supporting both sequential and parallel execution.
Disadvantages:
• Not suitable for dynamic interactions.
• Overhead of data transformation between filters.
• Does not provide a way for filters to cooperatively interact to solve a problem.
• Difficult to configure this architecture dynamically.
3. Process Control: Process control architecture is a type of dataflow architecture in which the data is neither a sequential batch nor a pipelined stream. In process control architecture, the flow of data comes from a set of variables which control the execution of the process.
This architecture decomposes the entire system into subsystems or modules and connects them. Types of subsystems − a process control architecture has a processing unit for changing the process control variables and a controller unit for calculating the amount of change.
A controller unit must have the following elements −
• Controlled Variable − The controlled variable provides values for the underlying system and should be measured by sensors; for example, the speed in a cruise-control system.
• Input Variable − Measures an input to the process; for example, the temperature of return air in a temperature control system.
• Manipulated Variable − Manipulated Variable value is adjusted or changed by the controller.
• Process Definition − It includes mechanisms for manipulating some process variables.
• Sensor − Obtains values of process variables pertinent to control and can be used as a feedback
reference to recalculate manipulated variables.
• Set Point − It is the desired value for a controlled variable.
• Control Algorithm − It is used for deciding how to manipulate process variables.
Application Areas:
Process control architecture is suitable in the following domains −
• Embedded system software design, where the system is manipulated by process control variable data.
• Applications whose aim is to maintain specified properties of the outputs of the process at given reference values.
• Applicable for car-cruise control and building temperature control systems.
• Real-time system software to control automobile anti-lock brakes, nuclear power plants, etc.
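To make the controller elements above concrete, here is a minimal sketch of a feedback control loop in Python (the cruise-control numbers and the simple proportional rule are hypothetical); the control algorithm compares the sensed controlled variable with the set point and adjusts the manipulated variable.

set_point = 100.0   # desired speed (km/h) for the controlled variable
speed = 80.0        # controlled variable, as read back by a sensor
throttle = 0.0      # manipulated variable adjusted by the controller

for step in range(5):
    error = set_point - speed        # control algorithm: simple proportional control
    throttle += 0.02 * error         # adjust the manipulated variable
    speed += 5.0 * throttle - 1.0    # crude process model: throttle accelerates, drag slows
    print(f"step={step} speed={speed:.1f} throttle={throttle:.2f}")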
Call-and-Return Architecture: A call-and-return architecture enables software designers to achieve a program structure which can be easily modified. This style consists of the following two sub-styles.
1. Main program/subprogram architecture: In this, the function is decomposed into a control hierarchy where the main program invokes a number of program components, which in turn may invoke other components.
2. Remote procedure call architecture: In this, the components of the main program/subprogram architecture are distributed over a network across multiple computers.
Call and Return (Functional):
• Routines correspond to units of the task to be performed.
• Routines are combined through control structures.
• Routines are known through their interfaces (argument lists).
Advantages:
• Architecture based on well-identified parts of the task.
• Change implementation of a routine without affecting clients.
• Reuse of individual operations.
Disadvantages:
• Must know which exact routine to change.
• Hides role of data structure.
• Bad support for extendibility.
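Below is a minimal sketch of the main program/subprogram idea in Python (the routines are hypothetical); the main program sits at the top of the control hierarchy, calls subprograms, and control always returns to the caller.

def read_input():
    # Leaf routine: supplies the data to be processed.
    return [4, 8, 15]

def average(values):
    # Lower-level routine invoked by process().
    return sum(values) / len(values)

def process(values):
    # Subprogram invoked by the main program; it delegates further downward.
    return average(values)

def print_result(result):
    print(f"average = {result}")

def main():
    # Main program: the top of the control hierarchy.
    values = read_input()
    result = process(values)
    print_result(result)

if __name__ == "__main__":
    main()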
Call and Return (Object-Oriented):
• A class describes a type of resource and all accesses to it (encapsulation).
• Representation hidden from client classes.
Advantages:
• Change implementation without affecting clients.
• Can break problems into interacting agents.
• Can distribute across multiple machines or networks.
Disadvantages:
• Objects must know their interaction partners; when a partner changes, clients must change.
• Side effects: if A uses B and C uses B, then C's effects on B can be unexpected to A.
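A minimal sketch of the object-oriented variant follows (the Account class is hypothetical); the representation is hidden behind the class interface, so its implementation can change without affecting client code.

class Account:
    # The resource and all access to it are encapsulated in one class.
    def __init__(self):
        self._balance = 0            # representation hidden from client classes

    def deposit(self, amount):
        # Clients interact only through the public interface.
        self._balance += amount

    def balance(self):
        return self._balance

if __name__ == "__main__":
    acc = Account()
    acc.deposit(500)
    print(acc.balance())             # 500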
Data-Centered Architecture: Data-centered architecture is also known as database-centric architecture. In this style, a shared data store (such as a database or repository) sits at the center of the architecture, and the other components are built around it.
• In data-centred architecture, the data is centralized and accessed frequently by other components,
which modify data.
• It consists of different components that communicate through shared data repositories.
• The components access a shared data structure and are relatively independent, in that, they interact
only through the data store.
• This architecture specifies how the components are connected to the central data store and how access to the shared data is coordinated.
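As a minimal sketch (with hypothetical components), the example below shows two components that never call each other directly and interact only through a shared data store.

class DataStore:
    # Central shared repository; all communication goes through it.
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key):
        return self._data.get(key)

def order_entry(store):
    # One component writes new data into the repository.
    store.write("orders", [101, 102, 103])

def billing(store):
    # Another component reads the shared data; it never calls order_entry directly.
    orders = store.read("orders")
    print(f"billing {len(orders)} orders")

if __name__ == "__main__":
    store = DataStore()
    order_entry(store)
    billing(store)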
Micro-services Architecture: "Microservices" - yet another new term on the crowded streets of software
architecture. The microservice architectural style is an approach to developing a single application as a suite of
small services, each running in its own process and communicating with lightweight mechanisms.
• The microservice architecture enables the rapid, frequent and reliable delivery of large, complex
applications. It also enables an organization to evolve its technology stack.
• A microservices architecture consists of a collection of small, autonomous services. Each service is self-
contained and should implement a single business capability.
• Microservices are small, independent, and loosely coupled. A single small team of developers can write
and maintain a service.
• Each service is a separate codebase, which can be managed by a small development team.
• Services can be deployed independently. A team can update an existing service without rebuilding and
redeploying the entire application.
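A minimal sketch of one such service follows, using only Python's standard library (the "catalog" service, its port, and its data are hypothetical); it runs in its own process and exposes a single business capability over a lightweight HTTP interface.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRODUCTS = {"1": {"name": "notebook", "price": 40}}   # this service's own data

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Single capability: look up a product by id, e.g. GET /products/1
        product_id = self.path.rsplit("/", 1)[-1]
        product = PRODUCTS.get(product_id)
        body = json.dumps(product if product else {"error": "not found"}).encode()
        self.send_response(200 if product else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice is deployed and scaled independently of the others.
    HTTPServer(("localhost", 8080), CatalogHandler).serve_forever()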
Benefits:
• Agility. Because microservices are deployed independently, it's easier to manage bug fixes and feature
releases. You can update a service without redeploying the entire application and roll back an update if
something goes wrong. In many traditional applications, if a bug is found in one part of the application, it
can block the entire release process.
• Small, focused teams. A microservice should be small enough that a single feature team can build, test, and deploy it. Small team sizes promote greater agility. Large teams tend to be less productive, because communication is slower, management overhead goes up, and agility diminishes.
• Small code base. In a monolithic application, there is a tendency over time for code dependencies to become tangled, so adding a new feature requires touching code in many places. By not sharing code or data stores, a microservices architecture minimizes dependencies, and that makes it easier to add new features.
• Fault isolation. If an individual microservice becomes unavailable, it won't disrupt the entire application.
• Scalability. Services can be scaled independently, letting you scale out subsystems that require more
resources, without scaling out the entire application.
• Data isolation. It is much easier to perform schema updates, because only a single microservice is affected.
Challenges: The benefits of microservices don't come for free. Here are some of the challenges to consider
before embarking on a microservices architecture.
• Complexity: A microservices application has more moving parts than the equivalent monolithic
application. Each service is simpler, but the entire system as a whole is more complex.
• Development and testing: Writing a small service that relies on other dependent services requires a different approach than writing a traditional monolithic or layered application. Existing tools are not always designed to work with service dependencies.
• Network congestion and latency: The use of many small, granular services can result in more interservice
communication. Also, if the chain of service dependencies gets too long (service A calls B, which calls C...),
the additional latency can become a problem.
• Data integrity: Each microservice is responsible for its own data persistence. As a result, data consistency across services can be a challenge. Embrace eventual consistency where possible.
• Versioning: Updates to a service must not break services that depend on it. Multiple services could be
updated at any given time, so design carefully.
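As a small illustration of this point, the sketch below (with hypothetical endpoints and response shapes) keeps the old contract available side by side with the new one, so existing consumers are not broken while newer consumers opt in to v2.

def get_customer_v1(customer_id):
    # Original contract: a flat name field that existing consumers rely on.
    return {"id": customer_id, "name": "Asha Verma"}

def get_customer_v2(customer_id):
    # New contract: structured name; v1 remains available until callers migrate.
    return {"id": customer_id, "name": {"first": "Asha", "last": "Verma"}}

ROUTES = {
    "/v1/customers": get_customer_v1,
    "/v2/customers": get_customer_v2,
}

if __name__ == "__main__":
    print(ROUTES["/v1/customers"](7))
    print(ROUTES["/v2/customers"](7))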