OOAD Notes - BTIT604 - Object Oriented Analysis and Design


Object Oriented Analysis and Design Notes

Unit-1

Object-oriented technology (OOT) is a software design model in which objects contain both data and the
instructions that operate on that data. It is increasingly deployed in distributed computing. Because data and
behaviour travel together in an object, developers build applications and programs by assembling and
specializing objects rather than writing every procedure from scratch.

What is an example of object-oriented technology?


Object-oriented programming builds on the concept of reusing code through the development and maintenance
of object libraries. These objects are then available for building and maintaining other applications. Some
examples of OOP languages are C++, Smalltalk, and Eiffel.

Basic Principles of Object-Oriented Technology

Today computer hardware continues to increase in power and speed at a phenomenal rate. Software, on the
other hand, remains difficult to develop and maintain. The job of a systems analyst is to talk to clients to
define a new system, or to define the nature of updates to an existing computer system. Usually structured
methods such as functional decomposition, top-down design, and data flow diagrams are used in the analysis.
The analyst writes the specification and then asks the client to sign off on it. Essentially, the end-user approves
the analysis ("this is what I want") while the analyst commits to it ("this is what I will deliver"). The
specification is then cast in stone and is never supposed to change during development of the system. Yeah, right.

The business world today is changing at an unprecedented rate, and computer systems need to be able to change
as quickly as the technology. Unfortunately, most computer system projects are seldom completed on time or
within budget, and when a project is finished it can be extremely difficult and expensive (if not impossible)
to make updates to the system. What is needed is a new approach to developing computer systems, one that
can handle both small and large systems. Maintenance costs must be kept to a minimum, the system must be
flexible, and the specification must be able to evolve to meet changing business needs.

To achieve this goal, we need to break out of the software paradigm that has been around for the last fifty
years. Object-oriented technology (OT) will help to break the existing software paradigm and meet the
changing business needs of today.

Object-oriented technology

The next logical step in database design is the object-oriented technology method. Object-oriented databases
store not only data and the relationships between data, but also the behaviour of the data. Once data behaviours
are added to a database management system, these "intelligent" databases dramatically change the
development of database systems.

The OT (Object-Oriented Technology) method not only uses the intelligent database concept but enhances it by
adding additional features. Instead of simple triggers that are associated with a physical event, object-oriented
behaviors may contain operations that affect hundreds of database objects. Objects are "encapsulated" with
their methods, and as a result no data item may be accessed or updated except through these methods. Of
course, encapsulation violates the relational database concept of data independence, and any kind of ad-hoc
data access is prohibited in the OT model. This is a major problem for coexistence strategies between relational
databases and object-oriented databases.

OT suffered for a long time because there were no real standards and no one stepped up to develop them.
Each object vendor seemed to be doing its own thing. Facilitated by Chris Stone, some of the
object vendors got together and formed the OMG (Object Management Group).

One of the most confusing things about OT is all of the new terms, jargon, and acronyms. Every new
technology has its share of buzzwords that you need to learn and understand, but OT has a much higher share
than most. The two letters "OO" together seem to encourage more and more "OO" acronyms. To put an end
to this plethora of acronyms, the OMG decided to call the paradigm OT, or Object Technology.

The following is a list of commonly used OOT acronyms:

Standards committee terms

OMG -Object Management Group

ODMG -Object Database Management Group

CORBA -Common Object Request Broker Architecture

OMT -Object Modeling Technique

General Object-Oriented Terms

OOA -Object-oriented Analysis

OOD -Object-oriented Design

OOPL -Object-oriented Programming Language

OOPS -Object-oriented Programming Systems

OODBMS -Object-oriented Database management system

OO -Object-oriented

OT -Object-oriented Technology

Object-Oriented Programming Languages

Beginning with modeling languages such as SIMULA, many object-oriented programming languages (OOPLs)
have evolved and been incorporated into object-oriented database systems. In its 1993 specification, the ODMG
endorsed C++ and Smalltalk. For programming to be considered object-oriented it must use encapsulation,
inheritance, and polymorphism.

C++ has emerged as the most dominant object-oriented language. C++ is really an extension of the C language,
which is not object-oriented in nature. Object technology purists are quick to point out that C++ is not a
pure object-oriented language, and as anyone who uses C++ will tell you, it is difficult to learn and
master. C++ got its name from the C programming language, where the double plus sign (++) is the operator
that increments a variable by one; since C++ is the next step up from C, the ++ was appended to the name.

Incrementing a counter

Cobol: ADD 1 TO COUNTER.


Basic: counter = counter + 1

C: counter++;

Smalltalk, on the other hand, is a pure object-oriented language that makes the programmer follow the OT
methodology. Smalltalk is easier to learn than C++, and because of this many colleges and universities have
adopted it as their standard teaching language. Many students learn Smalltalk as their first object language and
then move on to learn other OOPLs.

Development and Object Oriented Modeling History.

The history of OOA/D has many branches, and this brief synopsis can't do justice to all the contributors. The
1960s and 1970s saw the emergence of OO programming languages, such as Simula and Smalltalk, with key
contributors such as Kristen Nygaard and especially Alan Kay, the visionary computer scientist who founded
Smalltalk. Kay coined the terms object-oriented programming and personal computing, and helped pull
together the ideas of the modern PC while at Xerox PARC.

But OOA/D was informal through that period, and it wasn't until 1982 that OOD emerged as a topic in its own
right. This milestone came when Grady Booch (also a UML founder) wrote the first paper titled Object-Oriented
Design, probably coining the term [Booch82]. Many other well-known OOA/D pioneers developed their ideas
during the 1980s: Kent Beck, Peter Coad, Don Firesmith, Ivar Jacobson (a UML founder), Steve Mellor,
Bertrand Meyer, Jim Rumbaugh (a UML founder), and Rebecca Wirfs-Brock, among others. Meyer published
one of the early influential books, Object-Oriented Software Construction, in 1988. And Shlaer and Mellor
published Object-Oriented Systems Analysis, coining the term object-oriented analysis, in the same year. Peter
Coad created a complete OOA/D method in the late 1980s and published, in 1990 and 1991, the twin
volumes Object-Oriented Analysis and Object-Oriented Design. Also in 1990, Wirfs-Brock and others
described the responsibility-driven design approach to OOD in their popular Designing Object-Oriented
Software. In 1991 two very popular OOA/D books were published. One described the OMT method, Object-
Oriented Modeling and Design, by Rumbaugh et al. The other described the Booch method, Object-Oriented
Design with Applications. In 1992, Jacobson published the popular Object-Oriented Software Engineering,
which promoted not only OOA/D, but use cases for requirements.

The UML started as an effort by Booch and Rumbaugh in 1994 not only to create a common notation, but to
combine their two methods—the Booch and OMT methods. Thus, the first public draft of what today is
the UML was presented as the Unified Method. They were soon joined at Rational Corporation by Ivar
Jacobson, the creator of the Objectory method, and as a group came to be known as the three amigos. It was at
this point that they decided to reduce the scope of their effort, and focus on a common diagramming notation—
the UML—rather than a common method. This was not only a de-scoping effort; the Object Management Group
(OMG, an industry standards body for OO-related standards) was convinced by various tool vendors that an
open standard was needed. Thus, the process opened up, and an OMG task force chaired by Mary Loomis and
Jim Odell organized the initial effort leading to UML 1.0 in 1997. Many others contributed to the UML, perhaps
most notably Cris Kobryn, a leader in its ongoing refinement.
Object Modeling Technique (OMT) is a real-world-based modeling approach for software modeling and
design. It was developed primarily as a method to develop object-oriented systems and to support object-
oriented programming, and it describes the static, dynamic, and functional structure of a system.
Object Modeling Technique diagrams are easy to draw and use. OMT has been applied in many domains such as
telecommunications, transportation, and compilers, as well as many other real-world problems. OMT is one of
the most popular object-oriented development techniques in use. OMT was developed by James Rumbaugh.
Purpose of Object Modeling Technique:
• To test a physical entity before constructing it.
• To make communication with customers easier.
• To present information in an alternative way, i.e. visualization.
• To reduce the complexity of software.
• To solve real-world problems.
Object Modeling Technique’s Models:
There are three main types of models proposed by OMT:
1. Object Model:
The object model encompasses the principles of abstraction, encapsulation, modularity, hierarchy,
typing, concurrency and persistence. The object model basically emphasizes objects and classes.
The main concepts related to the object model are classes and their association with attributes.
Predefined relationships in the object model are aggregation and generalization (including
multiple inheritance).
2. Dynamic Model:
The dynamic model involves states, events and the state diagram (transition diagram) of the model.
The main concepts related to the dynamic model are states, transitions between states, and the
events that trigger the transitions. Predefined relationships in the dynamic model are aggregation
(concurrency) and generalization.
3. Functional Model:
The functional model focuses on how data flows, where data is stored, and the different
processes. The main concepts involved in the functional model are data, data flow, data stores,
processes and actors. The functional model in OMT describes the processes and actions with the
help of data flow diagrams (DFDs).
Phases of Object Modeling Technique:
OMT has the following phases:
1. Analysis:
This is the first phase of the object modeling technique. It involves the preparation of a
precise and correct model of the real-world problem. The analysis phase starts with setting a
goal, i.e. finding the problem statement. The problem statement is then elaborated into the
three models discussed above, i.e. the object, dynamic and functional models.
2. System Design:
This is the second phase of the object modeling technique, and it comes after the analysis phase.
It determines the overall system architecture, concurrent tasks and data storage. The high-level
architecture of the system is designed during this phase.
3. Object Design:
Object design is the third phase of the object modeling technique; it comes after system design
is over. The object design phase is concerned with the classification of objects into different
classes, and with the attributes and operations each class needs. Different issues related to
generalization and aggregation are also checked.
4. Implementation:
This is the last phase of the object modeling technique. It is all about converting the prepared
design into software; the design is translated into an implementation.

Three Models: Class Model, State Model and Interaction Model

The intention of object-oriented modeling and design is to learn how to apply object-oriented concepts to all
the stages of the software development life cycle. Object-oriented modeling and design is a way of thinking
about problems using models organized around real-world concepts. The fundamental construct is the
object, which combines both data structure and behavior.
Purpose of Models:
1. Testing a physical entity before building it
2. Communication with customers
3. Visualization
4. Reduction of complexity
Types of Models:
There are three types of models in object-oriented modeling and design: the Class Model, the State Model,
and the Interaction Model. These are explained below.
1. Class Model:
The class model shows all the classes present in the system, along with the attributes and the
behavior associated with the objects. The class diagram is used to show the class model; it shows
the class name followed by the attributes, followed by the functions or methods associated with
objects of the class. The goal in constructing the class model is to capture those concepts from
the real world that are important to the application.

2. State Model:
The state model describes those aspects of objects concerned with time and the sequencing of
operations: events that mark changes, states that define the context for events, and the
organization of events and states. Actions and events in a state diagram become operations on
objects in the class model. The state diagram describes the state model.
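A state model can be sketched in code as a small state machine. The Phone class below is a hypothetical illustration; the states, events, and transition table are assumptions, not taken from these notes:

```python
class Phone:
    """Minimal state machine: events trigger transitions between states."""
    # transition table: (current_state, event) -> next_state
    TRANSITIONS = {
        ("idle", "lift_receiver"): "dial_tone",
        ("dial_tone", "dial"): "connecting",
        ("connecting", "answered"): "connected",
        ("connected", "hang_up"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # an event causes a transition only if it is valid in the current state
        self.state = self.TRANSITIONS.get((self.state, event), self.state)

phone = Phone()
phone.handle("lift_receiver")   # idle -> dial_tone
phone.handle("dial")            # dial_tone -> connecting
print(phone.state)              # connecting
```

Each row of the transition table corresponds to one arrow in a state diagram, and each `handle` call is one event.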

3. Interaction Model:
The interaction model shows the various interactions between objects: how the objects
collaborate to achieve the behavior of the system as a whole.
The following diagrams are used to show the interaction model:
1. Use Case Diagram
2. Sequence Diagram
3. Activity Diagram
Unit-2
Class Modeling: Object and class concepts:
The class model defines the structure of the entire system by identifying the static structure of the objects in
that system. A class model defines attributes and operations for the objects of each class and also the
relationships between the objects in the system.
The class model prepares the platform for the other two models, i.e. the state model and the interaction model.
The class model is considered important because it is the graphical representation of the entire system and
helps in analysis while communicating with customers.

The motive behind constructing the class model is to concentrate on those concepts of real world that are
important from the application point of view. In this section, we will discuss the class model along with its
elements like the objects, classes, links and association between the classes.

Objects in Class Model

An object is an abstracted thing that comprises data structure and behaviour. Each object can be identified
distinctly from the application perspective. Objects that share the same data structure and behaviour are
classified into a single class.

We can classify objects into two types: concrete objects and conceptual objects. A concrete object is one
which has a physical existence in the real world, for example John, the IBM company, or a file in a file system.
A conceptual object is one which exists in the problem domain but cannot be physically touched or sensed, for
example an insurance policy, a formula to solve a quadratic equation, or the scheduling policy of an operating
system.

The objects in the system are identified from the problem statement provided for developing the software.
Generally, objects correspond to proper nouns, and they are identified such that they are important from the
application's point of view.

Every object is distinctly identified in the system, even when objects have the same attribute values and share
the same functions. Consider two bikes of the same company, same model, same features and even the same
colour. Still, the two bikes are two different instances of the same class, and each has an individual identity.

Classes in Class Model

We can define a class as a group of objects possessing the same attributes and the same operations. An object
is said to be an instance, or an occurrence, of its class. To understand this better, consider a class Person
with two objects, Joe and Smith. They share the same attributes, such as name and city, and the same
operations, such as change-city and change-job.
Generally, a class corresponds to a common noun or noun phrase derived from the problem statement provided
to develop the system. Objects in a class can usually be distinguished by their attribute values, although two
objects of the same class may have all the same attribute values and still be distinct.
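The Person example above can be sketched in Python. The attribute and operation names follow the text (name, city, change-city, change-job); the concrete values and the job attribute are made up for illustration:

```python
class Person:
    """A class groups objects with the same attributes and operations."""
    def __init__(self, name, city, job):
        self.name = name
        self.city = city
        self.job = job

    def change_city(self, new_city):
        self.city = new_city

    def change_job(self, new_job):
        self.job = new_job

# Joe and Smith are two distinct objects (instances) of the same class
joe = Person("Joe", "Sydney", "analyst")
smith = Person("Smith", "Boston", "doctor")
smith.change_city("Chicago")

# even if every attribute value were equal, the two objects keep separate identities
print(joe is smith)  # False
```

The identity check at the end mirrors the bike example: equal attribute values do not make two instances the same object.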

When a class is derived from the problem statement, the choice of its properties (attributes) and behaviour
(operations) depends upon the purpose of the application; the remaining properties and functions are ignored.

Suppose a company has several people: they will all be classified into a single class, Employee. Now consider
a shopping centre: there the people will be classified into two different classes, Employee and Customer. So
the interpretation of the classes depends on the purpose of the application.

The class model is represented using the class diagram which is the graphical representation of the classes and
their relationship with each other. It also describes the possible objects for each class.

Relationship Between Objects and Classes

To define the relationship between the objects and classes two terms are used, link and association.
A link defines the connection between the objects. A link can relate two or more objects.

Association can be defined as a group of links, and a link is said to be an instance of an association. Just as a
class describes all possible objects, an association describes all possible links. In a problem statement, a
link or association can be identified from the verbs in the problem statement.
When some classes have common features, those features are factored out into a superclass. The
superclass holds the general information shared by all of its subclasses. This relation between a superclass
and one or more of its subclasses is called generalization.

Generalization is referred to as the ‘is-a’ relationship. It organizes the classes into a hierarchy in which the
subclasses inherit the features of the superclass.
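Generalization can be sketched with inheritance. Reusing the shopping-centre example from earlier, Employee and Customer below are subclasses of a Person superclass; the extra attributes (salary, loyalty_points) are illustrative assumptions:

```python
class Person:
    """Superclass: holds the general information shared by all subclasses."""
    def __init__(self, name):
        self.name = name

class Employee(Person):          # Employee "is-a" Person
    def __init__(self, name, salary):
        super().__init__(name)   # inherit the name attribute from the superclass
        self.salary = salary

class Customer(Person):          # Customer "is-a" Person
    def __init__(self, name, loyalty_points=0):
        super().__init__(name)
        self.loyalty_points = loyalty_points

e = Employee("Asha", 50000)
c = Customer("Ravi")
print(isinstance(e, Person), isinstance(c, Person))  # True True
```

Both subclasses inherit `name` from Person while adding their own features, which is exactly the factoring-out described above.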

Key Takeaways

• The class model is the graphical representation of the structure of the system and of the relations
between the objects and classes in the system.
• Each object in the system has data structure and behaviour.
• Objects sharing the same features are grouped into a class.
• Objects correspond to the proper nouns and classes to the common nouns identified in the problem
statement provided for the development of the application.
• The relationship between objects is a link, and a group of links with the same structure is
termed an association.
• The class model focuses on the factors that are essential from the application's point of view.

So, this is all about the class model which is the graphical representation that incorporates objects, classes, the
relation between objects and classes.

Link and association :

Link and association in UML represent relations between objects and between classes. A link establishes
a relation between objects in an object diagram; an association represents a relation between classes.

In the problem statement provided to develop the software, links and associations are indicated by verbs. In
this section, we will discuss link and association in the context of object-oriented analysis and design.

Definition of Link and Association

A link defines the relationship between two or more objects, and a link is considered an instance of an
association. This means an association is a group of links that relate objects from the same classes. This can
be easily understood with the help of an example.

Let us take two classes, Person and Company, with an association between them: a person may own stock in
zero or more companies, and, in reverse, a company may have several persons owning its stock.

In an object diagram, each link between a person object and a company object is drawn as a line; in the class
diagram, the association between the Person and Company classes is likewise drawn as a line. Both link and
association are represented with a line in UML notation.
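The "owns stock in" association above can be sketched as objects holding references to each other, where each related pair is one link. This is a minimal sketch; the helper `add_link` and the attribute names are assumptions:

```python
class Person:
    def __init__(self, name):
        self.name = name
        self.companies = []      # "many" end: a person may own stock in 0..* companies

class Company:
    def __init__(self, name):
        self.name = name
        self.stockholders = []   # "many" end: a company may have 0..* stockholders

def add_link(person, company):
    """One call = one link, i.e. one instance of the OwnsStock association."""
    person.companies.append(company)       # keep the link navigable
    company.stockholders.append(person)    # in both directions

john, mary = Person("John"), Person("Mary")
ibm = Company("IBM")
add_link(john, ibm)
add_link(mary, ibm)
print([p.name for p in ibm.stockholders])  # ['John', 'Mary']
```

Storing the link on both sides mirrors the bidirectionality of associations discussed below.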

Association Name

Association name is the name that reveals the kind of relation between the classes. While modelling, it is not
mandatory to name an association as long as it is unambiguous; however, multiple associations between the
same classes can create confusion.

Such ambiguity in the model compels us to name the associations between the classes. For example, a person
owns stock in a company and a person works for a company; to resolve the ambiguity you must name each
association.

Association Direction

Associations are bidirectional: when an association is established between two classes we can traverse it in
both directions. The direction of traversal depends on the name of the association. For example, if the
association between the Person and Company classes is named WorksFor, we navigate it as "a person works
for a company", i.e. the association connects a person to a company.

If we instead name the association Employs, we navigate it as "a company employs a person", i.e. the
association connects a company to a person. Naming is optional while modelling, but when applied it prevents
confusion.

Association End Names

Association end names, allotted to the ends of an association, ease traversal of the association. Take the
example above of the Person and Company classes and their participation in the WorksFor association.
If we name the association ends, the end at the Person side would be named ‘employee’ and the end at the
Company side would be ‘employer’; this eases traversing the association.

Considering a binary association, each end of the association refers to an object, or a set of objects,
associated with the respective source object.

As we have seen, there can be multiple associations between two classes, and association end names also
help in identifying which association is which. We will now discuss the properties of association ends.
1. Multiplicity

Multiplicity is defined as the number of occurrences of one class that may be associated with a single
occurrence of its associated class. Accordingly, multiplicity defines the number of associated objects of
each class in the model. For example, the person–company stock association above is many-to-many, while
an association such as country-to-capital-city is one-to-one.

2. Ordering

When an association end has ‘many’ multiplicity and there is no order among the objects, we can consider
them a set. But sometimes the occurrences of objects have an explicit order. For example, on one screen
there can be multiple overlapped windows; each window occurs on the screen at most once, and the window
at the top is the one visible to the user.

The ordering property is part of the association. To state that a set of objects is ordered, you add
“{ordered}” at the association end for that set of objects.

3. Bags and Sequence

As we saw earlier, in a binary association the relation between a given pair of objects is normally represented
by a single link. But when a pair of objects from the associated classes may be linked more than once, you
can represent this with multiple links between the same pair of objects by writing “{bag}” or “{sequence}”
at the association end.

A bag is a collection of elements that permits duplicates, whereas a sequence is an ordered collection of
elements that permits duplicates. For example, an airline route may involve visits to multiple airports,
possibly the same airport more than once, so this association end should be annotated with {sequence}.

The only difference between {sequence} and {ordered} is that a sequence allows duplicates whereas an
ordered set does not.

4. Qualified Association

As discussed above, an association end can have ‘many’ multiplicity; a qualified association helps avoid
ambiguity about which object at that end is meant. A qualified association introduces an attribute, called the
qualifier, that disambiguates the objects at the ‘many’ end.

The qualifier can be defined for one-to-many and many-to-many associations, and a qualified association
adds information to the class model. For example, a bank has several accounts, and each account has a
unique identity due to its account number; here the account number acts as the qualifier.
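The bank example can be sketched with a dictionary, where the qualifier (the account number) selects at most one account. The class and method names are illustrative assumptions:

```python
class Account:
    def __init__(self, number, balance):
        self.number = number
        self.balance = balance

class Bank:
    def __init__(self, name):
        self.name = name
        # the qualifier (account number) maps to at most one Account
        self.accounts = {}

    def open_account(self, number, balance=0):
        self.accounts[number] = Account(number, balance)

    def lookup(self, number):
        # bank + qualifier -> a single account, not "many"
        return self.accounts.get(number)

bank = Bank("First National")
bank.open_account("1001", 500)
print(bank.lookup("1001").balance)  # 500
```

The dictionary key plays exactly the role of the qualifier: it reduces the effective multiplicity at the Account end from "many" to at most one.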

Association Class

A class's objects are described by the attributes of the class. Similarly, the links of an association can be
described with attributes; a class used for this purpose is called an association class. Just as ordinary classes
have attributes and operations, an association class has attributes and operations.
The instances (occurrences) of an association class derive their identity from the instances of the
constituent classes. You can often identify association classes from adverbs in the problem statement.
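An association class can be sketched by reusing the earlier stock example: the number of shares belongs to neither Person nor Company but to each individual link, so a hypothetical Ownership association class carries it:

```python
class Person:
    def __init__(self, name):
        self.name = name

class Company:
    def __init__(self, name):
        self.name = name

class Ownership:
    """Association class: one instance per link, with its own attributes."""
    def __init__(self, person, company, shares):
        self.person = person      # identity comes from the constituent instances
        self.company = company
        self.shares = shares      # attribute of the link, not of either class

own = Ownership(Person("John"), Company("IBM"), shares=120)
print(own.shares)  # 120
```

Putting `shares` on Person or Company would be wrong, since its value depends on the particular person–company pair.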

Key Takeaways

• A link is a relation between objects.
• An association is a group of links, where each link is an instance of the association.
• If there are multiple associations between the same classes, the associations can be named to
avoid ambiguity.
• Associations are bidirectional; the direction of navigation depends on the name of the association.
• An association has two ends, which can also be named for conveniently traversing the association.
• Association end names are less confusing than association names.
• Multiplicity refers to the ends of an association and describes the number of instances of one class
associated with an instance of the other class.
• Association ends can be ordered by writing “{ordered}” at the end of the association where the
objects have an explicit order, without duplicates.
• A pair of objects with multiple links can be represented by annotating the association end with
{bag} or {sequence}.
• Bag and sequence are both collections of elements; the latter is an ordered collection of elements
permitting duplicates.
• A qualified association removes the ambiguity at an association end with ‘many’ multiplicity.
• The qualifier of a qualified association is an attribute that disambiguates the association between
two classes.
• An association class is a class whose attributes describe the links of an association.
• An association class adds more information to the class model.

Aggregation in object orientation is a kind of association that relates two objects by a ‘part-of-whole’
relationship, which is why it is possible only in a binary association. Aggregation is a ‘has-a’ relationship
between two classes: a car has a wheel, an engine has a gearbox, and so on.

Discovering aggregation in a class model is a matter of judgement. In this section, we will discuss what
aggregation is in object orientation and how we can identify aggregation in a class model.

What is Aggregation?

Aggregation is a kind of association that is used to establish a relationship between an assembly
class and one constituent part class. Aggregation is a binary association where one end of the association is
the aggregate (whole) and the other end is the constituent (part).
Aggregation is the ‘has-a’ relationship. An aggregate object is composed of several lesser constituent parts,
where each constituent is a part of the aggregate. We can refer to an aggregate object as an extended object
that is operated on as a single unit even though it is made up of several constituent objects.

In UML, aggregation is denoted by a straight line with a hollow (empty) diamond at the assembly-class end.

Let us understand the theory above with an example. Say we have an assembly class Car and the constituent
classes Wheel, Engine, and Seat.

A car has wheels, an engine, and seats. Here the class Car is the assembly class and the others are the
constituent part classes. Each individual pairing is one aggregation relation: Car to Wheel is one aggregation,
Car to Engine is another, and Car to Seat a third.

This confirms that aggregation is a binary association. Let us now discuss some interesting properties of
aggregation.

• Aggregation is transitive, which means if A is part of B and B is part of C then A is part of C. For
example, the gearbox is a part of the engine and the engine is a part of the car, so the gearbox is a
part of the car.
• Aggregation is antisymmetric, i.e. when we say that A is part of B it does not follow that B is also a
part of A. For example, a car has an engine, but the engine does not have the car as a part.
Put another way, aggregation is directed: only one end of the association can be marked as the
aggregate.
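The Car aggregation above can be sketched as an assembly object holding references to its constituent parts, with one binary aggregation per pairing. The part counts are illustrative assumptions:

```python
class Wheel: ...
class Engine: ...
class Seat: ...

class Car:
    """Assembly ('whole') class: aggregates its constituent parts."""
    def __init__(self, wheels, engine, seats):
        self.wheels = wheels   # Car-to-Wheel aggregation (multiplicity 4 here)
        self.engine = engine   # Car-to-Engine aggregation (multiplicity 1)
        self.seats = seats     # Car-to-Seat aggregation

engine = Engine()
car = Car([Wheel() for _ in range(4)], engine, [Seat() for _ in range(5)])
print(len(car.wheels))  # 4
```

Each attribute of Car corresponds to one binary aggregation between the assembly class and one constituent part class.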

So far we have learnt that aggregation is a kind of association. Let us now discuss what is special about
aggregation compared with plain association.

Comparison with Association

Association simply defines a relationship between objects and classes in the class model. We discussed
association at length in the previous section, which you can review. If two objects are related by the
part-whole relationship, the relation is aggregation.

If two objects are related but remain independent of each other, it is a plain association. To decide whether
the relation between two classes should be aggregation or association, ask the following questions about the
classes in the model.

• Is one class a part of the other class?
• Does an operation performed on the whole class automatically apply to the constituent classes?
• Do the attribute values of the whole class propagate to the constituent classes?
• Is there an asymmetry to the association?

When you answer these questions you will be able to decide whether aggregation applies. Aggregation
implies a part-of relationship, with an aggregate object incorporating its constituent parts.
While modelling, one must not force an association to be modelled as aggregation; it is a matter of judgement.
If you exercise that judgement carefully, distinguishing aggregation from plain association is not a
problem.

Comparison with Composition

Aggregation and composition are both part-of relationships, but on closer inspection aggregation is a weak
‘part-of’ relationship while composition is a strong, restrictive ‘part-of’ relationship.

Composition is aggregation with two restrictions. The first restriction is that a constituent part may belong
to at most one assembly at a time. The second restriction is that once the constituent part is bound to the
assembly, its existence depends on the assembly's lifetime.

Thus, composition is ownership of the constituent part by the assembly class. This eases programming,
because the deletion of the assembly object implies the deletion of its constituent objects as well.

In the example given below, note that a composite object of the Company class is built from constituent
objects of the Division and Department part classes. If the object of the Company class is deleted, the
constituent objects of the Division and Department classes are deleted along with it.

In a model, composition is drawn like aggregation, as a straight line with a diamond at the assembly-class end;
the difference is that aggregation uses a hollow diamond while composition uses a solid, filled diamond.

In aggregation, by contrast, the deletion of an assembly class object does not affect the objects of the
constituent part class. For example, if an object of the Car class is deleted, it does not affect the existence of
the Wheel class objects.
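The lifetime difference between composition and aggregation can be sketched in Java. This is only an illustration; the Company/Division and Car/Wheel classes below are our own minimal stand-ins for the examples above. A composite creates and owns its parts, while an aggregate merely references parts created elsewhere:

```java
import java.util.ArrayList;
import java.util.List;

// Composition: Division parts are created inside Company and are
// reachable only through it, so they live and die with the Company.
class Division {
    final String name;
    Division(String name) { this.name = name; }
}

class Company {
    private final List<Division> divisions = new ArrayList<>();

    Company() {
        // Parts constructed inside the whole: at most one owner,
        // and their lifetime is bound to the Company's lifetime.
        divisions.add(new Division("Sales"));
        divisions.add(new Division("Production"));
    }

    int divisionCount() { return divisions.size(); }
}

// Aggregation: a Wheel is created independently and merely referenced
// by a Car, so it survives when the Car is discarded.
class Wheel {
    final int diameterInches;
    Wheel(int diameterInches) { this.diameterInches = diameterInches; }
}

class Car {
    private final Wheel wheel; // has-a, without owning the lifetime
    Car(Wheel wheel) { this.wheel = wheel; }
}
```

In a garbage-collected language "deletion" amounts to reachability: discarding a Car leaves its Wheel usable through other references, whereas discarding a Company makes its Division objects unreachable as well.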

Propagation of Operation

Propagation of operation means that whenever an operation is performed on an object of the assembly class,
it triggers the same operation on the objects of the constituent part classes. This propagation of an operation
from the starting object to the part objects signals aggregation.

In a class model, the propagation of an operation can be represented by a small arrow showing the direction of
propagation.
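As a rough Java sketch of propagation (the Document and Paragraph classes are invented for illustration), an operation invoked on the assembly object forwards the call to each constituent part:

```java
import java.util.ArrayList;
import java.util.List;

// Constituent part class
class Paragraph {
    private final String text;
    Paragraph(String text) { this.text = text; }
    String render() { return text; }
}

// Assembly class
class Document {
    private final List<Paragraph> paragraphs = new ArrayList<>();

    void add(Paragraph p) { paragraphs.add(p); }

    // The operation on the whole propagates to every part.
    String render() {
        StringBuilder sb = new StringBuilder();
        for (Paragraph p : paragraphs) {
            sb.append(p.render()).append('\n'); // forward render() to the part
        }
        return sb.toString();
    }
}
```

Calling render() on a Document with two paragraphs renders both parts; the caller never has to touch the Paragraph objects directly.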

Key Takeaways

• Aggregation is a kind of association i.e. relationship between the objects and classes.
• Aggregation is the ‘has-a’ relationship.
• Aggregation can only occur in binary association.
• Multiplicity can be specified in aggregation by defining the individual pairing between a constituent
class and assembly class.
• An aggregate object is derived from several constituent objects.
• Aggregation is transitive and asymmetric.
• If two objects are bound by a 'part-of' relationship then it is aggregation.
• Aggregation is denoted by a straight line with a hollow diamond at the side of the assembly class.
• Deletion of an assembly class object does not affect the objects of the constituent part class.
• Propagation of operations from the assembly class object to the constituent class objects denotes
aggregation.
Generalization and Inheritance

Generalization: Generalization and inheritance are powerful abstractions for sharing the structure and/or
behaviour of one or more classes.
Generalization is a relationship between classes that defines a hierarchy of abstraction in which one or more
subclasses inherit from one or more superclasses.
Generalization and inheritance are transitive across an arbitrary number of levels in the hierarchy.
Generalization is an “is-a-kind of” relationship, for example, Saving Account is a kind of Account, PG student
is kind of Student, etc.
The notation for generalization is a triangle connecting a superclass to its subclasses. The superclass is
connected by a line to the top of the triangle. The subclasses are connected by lines to a horizontal bar attached
to the base of the triangle. Generalization is a very useful construct for both abstract modeling and
implementation. You can see in Figure 8, a generalization of Account class.

Inheritance: Inheritance is taken in the sense of code reuse within the object oriented development. During
modeling, we look at the resulting classes, and try to group similar classes together so that code reuse can be
enforced. Generalization, specialization, and inheritance are closely associated. Generalization refers to the
relationship among classes, while inheritance means sharing attributes and operations along the
generalization relationship. With respect to inheritance, generalization and specialization are two sides of the
same coin: seen from the superclass, a subclass is a specialized version of the superclass; in reverse, the
superclass is a general form of its subclasses.
During inheritance, a subclass may override a superclass feature by defining a feature with the same name.
The overriding feature (the subclass feature with the same name as the superclass feature) refines and
replaces the overridden feature (the superclass feature).
Now let us look at the diagram given in Figure 9. In this diagram, Circle, Triangle, and Square classes are
inherited from Shape class. This is a case of single inheritance because here, one class inherits from only one
class.
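A minimal Java sketch of the Figure 9 hierarchy (the describe() method is our own addition, chosen to show overriding): each subclass redefines the superclass feature with the same name, and the subclass version replaces the inherited one:

```java
// Superclass with a feature shared by all subclasses
abstract class Shape {
    String describe() { return "a shape"; }
}

// Single inheritance: each class inherits from exactly one class.
class Circle extends Shape {
    @Override
    String describe() { return "a circle"; } // overrides the superclass feature
}

class Square extends Shape {
    @Override
    String describe() { return "a square"; }
}
```

Because of dynamic dispatch, a variable of static type Shape that holds a Circle answers with the overriding version: `new Circle().describe()` yields "a circle".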
Messages, aggregation and abstract classes in OOPS

Object-Oriented Programming System (OOPS) is the basic concept of many programming languages. It
is a paradigm based on the object which contains methods and data. This concept is used to make
programming a class and object model which behaves like a real-world scenario. In this article, we’ll learn
about messages, aggregation, composition and abstract classes in the OOPS paradigm.
Message Passing: Message passing, in terms of computers, is communication between processes. It is a form
of communication used in object-oriented programming as well as in parallel programming. Message passing in
Java is like sending an object, i.e. a message, from one thread to another thread. It is used when threads do not
have shared memory and are unable to use monitors, semaphores, or any other shared variables to
communicate. The following are the main advantages of the message passing technique:
1. This model is much easier to implement than the shared memory model.
2. Implementing this model in order to build parallel hardware is much easier because it is quite
tolerant of higher communication latencies.
In OOPs, there are many ways to implement the message passing technique like message passing through
constructors, message passing through methods or by passing different values. The following is a simple
implementation of the message passing technique by the values:
Java

// Java program to demonstrate
// message passing by value

import java.io.*;

// Implementing a message passing class
public class MessagePassing {

    // A method to add two integers
    void displayInt(int x, int y)
    {
        int z = x + y;
        System.out.println("Int Value is : " + z);
    }

    // A method to multiply two floating point numbers
    void displayFloat(float x, float y)
    {
        float z = x * y;
        System.out.println("Float Value is : " + z);
    }
}

class GFG {

    // Driver code
    public static void main(String[] args)
    {
        // Creating a new object
        MessagePassing mp = new MessagePassing();

        // Passing the values to compute the answer
        mp.displayInt(1, 100);
        mp.displayFloat((float)3, (float)6.9);
    }
}

Output:
Int Value is : 101
Float Value is : 20.7
Aggregation: This is a special form of association. Aggregation is a strictly directional, i.e. one-way,
association and represents a HAS-A relationship between classes. In aggregation, ending the life of one of the
two classes does not affect the other. Aggregation is often described as a weak association, whereas
composition is described as a strong one. In composition, the parent owns the child entity: the child entity
cannot exist without the parent entity and cannot be accessed directly. In aggregation, the parent and child
entities can exist independently. Let's take an example to understand this concept: a Student class and an
Address class, where the Address class represents the student's address. Since every student has an address,
this is a has-a relationship. However, the address is completely independent of the student: if a student leaves
the school or the college, he or she still has an address. The address can survive independently of the student,
so this is an aggregation. The following is an implementation of the above example:
Java

// Java program to demonstrate an aggregation

// Implementing the address class
class Address {
    int strNum;
    String city;
    String state;
    String country;

    // Constructor of the address class
    Address(int street, String c, String st, String count)
    {
        this.strNum = street;
        this.city = c;
        this.state = st;
        this.country = count;
    }
}

// Creating a student class
class Student {
    int rno;
    String stName;

    // HAS-A relationship with the Address class
    Address stAddr;

    Student(int roll, String name, Address addr)
    {
        this.rno = roll;
        this.stName = name;
        this.stAddr = addr;
    }
}

class GFG {

    // Driver code
    public static void main(String args[])
    {
        // Creating an address object
        Address ad = new Address(10, "Delhi", "Delhi", "India");

        // Creating a new student
        Student st = new Student(96, "Utkarsh", ad);

        // Printing the details of the student
        System.out.println("Roll no: " + st.rno);
        System.out.println("Name: " + st.stName);
        System.out.println("Street: " + st.stAddr.strNum);
        System.out.println("City: " + st.stAddr.city);
        System.out.println("State: " + st.stAddr.state);
        System.out.println("Country: " + st.stAddr.country);
    }
}

Output:
Roll no: 96
Name: Utkarsh
Street: 10
City: Delhi
State: Delhi
Country: India
Abstract Classes: Abstraction is a technique used in the OOPS paradigm that shows the user only relevant
details rather than unnecessary information, reducing program complexity and the effort needed to understand
it. Every OOP language implements this somewhat differently, but the concept of hiding irrelevant detail is the
same. Abstract classes are one way to implement abstraction in Java. In Java, abstract classes are declared
using the abstract keyword and can contain both abstract and normal methods, whereas normal classes cannot
declare abstract methods. Abstract methods either have no body in the abstract class or are implemented by
the classes that extend it. Let's take an example to understand why abstraction is used. In this example, we
model cars manufactured in a specific year. There can be many cars manufactured in that year, and those
shared properties do not change; only the name varies from one car to another. So the name of the car here is
abstract, while the remaining properties are constant. The following is an implementation of the above
example:
Java

// Java program to demonstrate the abstract class

// Implementing the abstract class Car
abstract class Car {

    // A normal method which contains the details of the car
    public void details()
    {
        System.out.println("Manufacturing Year: 123");
    }

    // The name of the car might change from one car to another,
    // so it is declared as an abstract method
    abstract public void name();
}

// A class Maserati which extends Car
public class Maserati extends Car {

    // Naming the car
    public void name()
    {
        System.out.print("Maserati!");
    }

    // Driver code
    public static void main(String args[])
    {
        Maserati car = new Maserati();
        car.name();
    }
}

Output:
Maserati!

Transition and conditions, state diagram, state diagram behavior,

A state diagram is used to represent the condition of the system or part of the system at finite instances of
time. It’s a behavioral diagram and it represents the behavior using finite state transitions. State diagrams are
also referred to as State machines and State-chart Diagrams. These terms are often used interchangeably.
So simply, a state diagram is used to model the dynamic behavior of a class in response to time and changing
external stimuli. We can say that each and every class has a state, but we don't model every class using state
diagrams; we prefer to model only classes with three or more states.
Uses of statechart diagram –
• We use it to state the events responsible for a change in state (we do not show what processes
cause those events).
• We use it to model the dynamic behavior of the system.
• To understand the reaction of objects/classes to internal or external stimuli.
First, let us understand what behavior diagrams are. There are two types of diagrams in UML:
1. Structure diagrams – used to model the static structure of a system, for example class
diagrams, package diagrams, object diagrams, deployment diagrams, etc.
2. Behavior diagrams – used to model the dynamic change in the system over time. They are used
to model and construct the functionality of a system. So, a behavior diagram simply guides us
through the functionality of the system using use case diagrams, interaction diagrams, activity
diagrams and state diagrams.
Difference between state diagram and flowchart –
The basic purpose of a state diagram is to portray the various changes in state of the class, not the
processes or commands causing those changes. A flowchart, on the other hand, portrays the
processes or commands that, on execution, change the state of the class or of an object of the class.

Figure – a state diagram for user verification


The state diagram above shows the different states in which the verification sub-system or class exist for a
particular system.

Basic components of a statechart diagram –

1. Initial state – We use a black filled circle to represent the initial state of a system or a class.

Figure – initial state notation


2. Transition – We use a solid arrow to represent the transition or change of control from one state
to another. The arrow is labelled with the event which causes the change in state.

Figure – transition
3. State – We use a rounded rectangle to represent a state. A state represents the conditions or
circumstances of an object of a class at an instant of time.

Figure – state notation


4. Fork – We use a rounded solid rectangular bar to represent a Fork notation with incoming arrow
from the parent state and outgoing arrows towards the newly created states. We use the fork
notation to represent a state splitting into two or more concurrent states.
Figure – a diagram using the fork notation
5. Join – We use a rounded solid rectangular bar to represent a Join notation with incoming arrows
from the joining states and outgoing arrow towards the common goal state. We use the join
notation when two or more states concurrently converge into one on the occurrence of an event
or events.

Figure – a diagram using join notation


6. Self transition – We use a solid arrow pointing back to the state itself to represent a self
transition. There might be scenarios when the state of the object does not change upon the
occurrence of an event. We use self transitions to represent such cases.

Figure – self transition notation


7. Composite state – We use a rounded rectangle to represent a composite state as well. We represent a
state with internal activities using a composite state.

Figure – a state with internal activities


8. Final state – We use a filled circle within a circle notation to represent the final state in a state
machine diagram.

Figure – final state notation

Steps to draw a state diagram –

1. Identify the initial state and the final terminating states.


2. Identify the possible states in which the object can exist (boundary values corresponding to
different attributes guide us in identifying different states).
3. Identify the transitions between the states, and label the events which trigger these transitions.
Example – state diagram for an online order –

Figure – state diagram for an online order


The UML diagrams we draw depend on the system we aim to represent. Here is just an example of how an
online ordering system might look:
1. On the event of an order being received, we transit from our initial state to Unprocessed order
state.
2. The unprocessed order is then checked.
3. If the order is rejected, we transit to the Rejected Order state.
4. If the order is accepted and we have the items available we transit to the fulfilled order state.
5. However if the items are not available we transit to the Pending Order state.
6. After the order is fulfilled, we transit to the final state. In this example, we merge the two states
i.e. Fulfilled order and Rejected order into one final state.
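The walkthrough above can be encoded compactly as a transition function. This is only one possible sketch; the event labels below are our own names for the arrows in the diagram:

```java
// States of the online order, with the two end states merged into CLOSED.
enum OrderState {
    UNPROCESSED, REJECTED, PENDING, FULFILLED, CLOSED;

    // Returns the next state for a given event; unrecognised events
    // leave the state unchanged (a self transition).
    OrderState on(String event) {
        switch (this) {
            case UNPROCESSED:
                if (event.equals("reject")) return REJECTED;
                if (event.equals("accept-in-stock")) return FULFILLED;
                if (event.equals("accept-out-of-stock")) return PENDING;
                break;
            case PENDING:
                if (event.equals("items-available")) return FULFILLED;
                break;
            case FULFILLED:
            case REJECTED:
                if (event.equals("done")) return CLOSED;
                break;
            default:
                break;
        }
        return this;
    }
}
```

An accepted but out-of-stock order moves from UNPROCESSED to PENDING and then to FULFILLED as the events arrive; any other event leaves the current state untouched.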

Use case Models:

UML behavioral diagrams visualize, specify, construct, and document the dynamic aspects of a system. The
behavioral diagrams are categorized as follows: use case diagrams, interaction diagrams, state–chart diagrams,
and activity diagrams.
Use Case Model
Use case
A use case describes the sequence of actions a system performs yielding visible results. It shows the interaction
of things outside the system with the system itself. Use cases may be applied to the whole system as well as a
part of the system.
Actor
An actor represents the roles that the users of the use cases play. An actor may be a person (e.g. student,
customer), a device (e.g. workstation), or another system (e.g. bank, institution).
The following figure shows the notations of an actor named Student and a use case called Generate Performance
Report.

Use case diagrams


Use case diagrams present an outside view of the manner in which the elements of a system behave and how
they can be used in context.
Use case diagrams comprise −

• Use cases
• Actors
• Relationships like dependency, generalization, and association
Use case diagrams are used −
• To model the context of a system by enclosing all the activities of the system within a rectangle
and focusing on the actors outside the system that interact with it.
• To model the requirements of a system from the outside point of view.
Example
Let us consider an Automated Trading House System. We assume the following features of the system −
• The trading house has transactions with two types of customers, individual customers and
corporate customers.
• Once the customer places an order, it is processed by the sales department and the customer is
given the bill.
• The system allows the manager to manage customer accounts and answer any queries posted by
the customer.

Interaction Diagrams
Interaction diagrams depict interactions of objects and their relationships. They also include the messages
passed between them. There are two types of interaction diagrams −

• Sequence Diagrams
• Collaboration Diagrams
Interaction diagrams are used for modeling −
• the control flow by time ordering using sequence diagrams.
• the control flow of organization using collaboration diagrams.
Sequence Diagrams
Sequence diagrams are interaction diagrams that illustrate the ordering of messages according to time.
Notations − These diagrams are in the form of two-dimensional charts. The objects that initiate the interaction
are placed on the x–axis. The messages that these objects send and receive are placed along the y–axis, in the
order of increasing time from top to bottom.
Example − A sequence diagram for the Automated Trading House System is shown in the following figure.
Collaboration Diagrams
Collaboration diagrams are interaction diagrams that illustrate the structure of the objects that send and receive
messages.
Notations − In these diagrams, the objects that participate in the interaction are shown using vertices. The links
that connect the objects are used to send and receive messages. The message is shown as a labeled arrow.
Example − Collaboration diagram for the Automated Trading House System is illustrated in the figure below.

State–Chart Diagrams
A state–chart diagram shows a state machine that depicts the control flow of an object from one state to another.
A state machine portrays the sequences of states which an object undergoes due to events and their responses
to events.
State–Chart Diagrams comprise −

• States: Simple or Composite


• Transitions between states
• Events causing transitions
• Actions due to the events
State-chart diagrams are used for modeling objects which are reactive in nature.
Example
In the Automated Trading House System, let us model Order as an object and trace its sequence. The following
figure shows the corresponding state–chart diagram.

Activity Diagrams
An activity diagram depicts the flow of activities which are ongoing non-atomic operations in a state machine.
Activities result in actions which are atomic operations.
Activity diagrams comprise −

• Activity states and action states


• Transitions
• Objects
Activity diagrams are used for modeling −

• workflows as viewed by actors, interacting with the system.


• details of operations or computations using flowcharts.
Example
The following figure shows an activity diagram of a portion of the Automated Trading House System.
State Model

A state model describes the behaviour of class objects over a period of time. A state model has
multiple state diagrams, where each state diagram describes one class in the model.

State model shows these changes in the object with the help of states, events, transitions and conditions.
Events are the incidents that occur to the object at a particular time whereas the state shows the value of the
object at a particular time. In this section, we will discuss state model along with its elements and importance.

What is State Modelling?

A state model describes the life of objects over a period of time, and it also defines the sequence of operations
that are applied to an object over that period. Further, the state model also explains what kinds of
operations are performed, on what they are performed, and how they are performed.

All in all, the state model represents the behaviour and control aspects of the entire system. The state
model is composed of several state diagrams. Each state diagram represents a class, with the aspects important
from the application's point of view. The figure below shows a sample state diagram illustrating the
relation of events to states, which we will discuss in the section below.

Elements of State Model

As we have learnt above, the state model consists of several state diagrams, where each state diagram
represents a class. A state diagram describes the relation between events and states, which are the significant
elements of the state model. Let us discuss them one by one.

Events

Events are incidents that take place at a particular time. For example, a train departs from the station, or a
person switches off a button. In the problem statement, you can identify events as verbs in the past
tense, or as the beginning of some condition.

Two unrelated events are said to be concurrent if they have no impact on each other. Events can
also be error conditions, like a transaction being aborted or a timeout. There are several types of events,
but here we will discuss some common types, as given below:

1. Signal Event

The signal event describes, sending and receiving of information from one object to another at a particular
time. This event specifies one-way transmission.

2. Change Event

A change event is an event that occurs whenever a boolean expression becomes satisfied. Conceptually this
boolean expression is checked continuously, and the change event occurs at the moment the expression's result
changes from false to true.

In practice the change event is not checked continuously, but it should be checked so frequently that
it appears continuous from the application's point of view. The UML representation of the change event is as
follows:

when(room temperature < heating point)

In UML the change event is expressed using ‘when’ keyword followed by the boolean expression.

3. Time Event

The time event is an event that occurs at a specified time or after a specified time has elapsed. An absolute time
event is represented in UML using the 'when' keyword followed by parentheses containing a time expression.
A time interval event is represented by the 'after' keyword followed by parentheses containing an expression
that evaluates to a time duration.
when(time = 18:30)
after(10 seconds)
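To make the change event concrete, here is a hedged Java sketch; the Thermostat class is invented for illustration. The watched expression is re-evaluated on every update, and the event fires only when the result flips from false to true:

```java
// A sketch of a change event: the boolean expression
// "room temperature < heating point" is re-evaluated on every update,
// and the event fires only on a false-to-true flip.
class Thermostat {
    private final double heatingPoint;
    private boolean below = false;  // last value of the watched expression
    private int changeEvents = 0;   // how many times the event has fired

    Thermostat(double heatingPoint) { this.heatingPoint = heatingPoint; }

    void update(double roomTemperature) {
        boolean nowBelow = roomTemperature < heatingPoint;
        if (nowBelow && !below) {
            changeEvents++;         // when(roomTemperature < heatingPoint)
        }
        below = nowBelow;
    }

    int firedCount() { return changeEvents; }
}
```

Note that staying below the heating point does not fire the event again; only a fresh transition of the expression does.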

States

The state represents the values of the attributes of an object at a particular time. Thus, the state defines the
behaviour of an object at a point in time. For example, water at a point in time can be in a liquid, solid,
or gaseous state.

A state can often be identified as a verb with the suffix 'ing', for example eating, waiting, sitting. In UML, the
state of an object is represented by a rounded box containing the state name.

While defining the state of an object, attributes whose values have no impact on the state of the object are
ignored. An object has a finite number of states, and at any time it can be in only one state. As an event occurs,
the object changes its current state depending on the type of event that occurred.

The difference between the event and the state is that the event represents a particular point in the time scale
whereas state represents an interval in the time scale.

Transitions and Condition

When an object changes its current state to another state, this is termed a transition. The source state and the
target state of a transition are usually different, but in some cases both may be the same.

Transition triggers the change in the original state of an object on the occurrence of an event. Thereby the target
state or the next state of the object depends on the object’s original state and the event to which the object’s
original state has responded.

The occurrence of a transition can also depend on a guard condition, which is a boolean expression. The guard
condition is checked only once, when the event occurs, and if it evaluates to true the transition takes place.
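A guarded transition can be sketched in Java as follows (the Door class, its states, and the [unlocked] guard are our own example). The guard is evaluated once, at the moment the event arrives, and the transition fires only if it holds:

```java
// A sketch of a guarded transition: the event "open" only moves the
// object from CLOSED to OPEN when the guard [unlocked] is true.
class Door {
    enum State { OPEN, CLOSED }

    private State state = State.CLOSED;
    private boolean locked = true;

    void unlock() { locked = false; }

    // The guard is evaluated once, when the event occurs.
    void open() {
        if (state == State.CLOSED && !locked) { // guard: [unlocked]
            state = State.OPEN;                 // transition fires
        }                                       // otherwise the event is ignored
    }

    State state() { return state; }
}
```

While the door is locked, the open event arrives but the guard fails, so the object stays in its current state; after unlock(), the same event causes the transition.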

State Diagram

The state diagram in a state model represents the order of the events which cause the changes in states,
sequentially. A state diagram is basically a graph, where every node denotes a state and every edge
connecting the nodes is a transition that responds to an event and causes a change of state.

There are multiple state diagrams in a state model, and each state diagram represents a class and its behaviour
over time, as far as it is important from the application's point of view. The diagram below represents the state
diagram for a chess game.
In the above diagram, the rounded boxes with names are the states. The arrowed lines are the transitions, with
the arrowhead pointing to the target state. The labels over the transition lines are the events that trigger the
transitions and the changes of state.

Each state diagram in the state model is presented in a rectangular frame, and the name of the corresponding
state diagram is written in the pentagonal tag at the left corner of the frame, as you can see in the
figure above. Guard conditions are optional and, if required, are written in square brackets just beside the
events.

Key Takeaways

• State model shows the behavioural aspect of the object over a period of time.
• A state model has several state diagrams, one for each class.
• Elements of state diagram are events, states, transitions and conditions.
• Events are the incidents that occur at a point in time.
• The state is the value of attributes of an object at a particular point of time.
• Transition is the change of state from one to another.
• The guard condition is a test; when it evaluates to true, the transition occurs.
• The state diagram shows the sequence of events and the sequence of changes occurred in the states due
to events.

So, this is all about state modelling, which represents the changes that occur in the states of objects over
a period of time. We have discussed the elements of state modelling along with examples.
Unit-3
In software engineering, a software development process is a process of planning and managing software
development. It typically involves dividing software development work into smaller, parallel, or sequential
steps or sub-processes to improve design and/or product management. It is also known as a software
development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and
artifacts that are created and completed by a project team to develop or maintain an application.
Most modern development processes can be vaguely described as agile. Other methodologies
include waterfall, prototyping, iterative and incremental development, spiral development, rapid application
development, and extreme programming.
A life-cycle "model" is sometimes considered a more general term for a category of methodologies and a
software development "process" a more specific term to refer to a specific process chosen by a specific
organization. For example, there are many specific software development processes that fit the spiral life-cycle
model. The field is often considered a subset of the systems development life cycle.

History
The software development methodology (also known as SDM) framework didn't emerge until the 1960s.
According to Elliott (2004), the systems development life cycle (SDLC) can be considered to be the oldest
formalized methodology framework for building information systems. The main idea of the SDLC has been "to
pursue the development of information systems in a very deliberate, structured and methodical way, requiring
each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out
rigidly and sequentially" within the context of the framework being applied. The main target of this
methodology framework in the 1960s was "to develop large scale functional business systems in an age of large
scale business conglomerates. Information systems activities revolved around heavy data
processing and number crunching routines".

Requirements Gathering and Analysis: The first phase of the
custom software development process involves understanding the client's requirements and objectives. This
stage typically involves engaging in thorough discussions and conducting interviews with stakeholders to
identify the desired features, functionalities, and overall scope of the software. The development team works
closely with the client to analyze existing systems and workflows, determine technical feasibility, and define
project milestones.
Planning and Design: Once the requirements are understood, the custom software development team proceeds
to create a comprehensive project plan. This plan outlines the development roadmap, including timelines,
resource allocation, and deliverables. The software architecture and design are also established during this
phase. User interface (UI) and user experience (UX) design elements are considered to ensure the software's
usability, intuitiveness, and visual appeal.
Development: With the planning and design in place, the development team begins the coding process. This
phase involves writing, testing, and debugging the software code. Agile methodologies, such as Scrum or
Kanban, are often employed to promote flexibility, collaboration, and iterative development. Regular
communication between the development team and the client ensures transparency and enables quick feedback
and adjustments.
Testing and Quality Assurance: To ensure the software's reliability, performance, and security, rigorous testing
and quality assurance (QA) processes are carried out. Different testing techniques, including unit testing,
integration testing, system testing, and user acceptance testing, are employed to identify and rectify any issues
or bugs. QA activities aim to validate the software against the predefined requirements, ensuring that it functions
as intended.
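As a rough illustration of the unit-testing step (the PriceCalculator discount rule is invented here, and real projects would normally use a framework such as JUnit), a unit test exercises one small unit of code against its expected behaviour:

```java
// Unit under test: a pricing rule (invented for this example).
class PriceCalculator {
    // Applies a 10% discount to subtotals of 100.0 or more.
    static double total(double subtotal) {
        return subtotal >= 100.0 ? subtotal * 0.9 : subtotal;
    }
}

// A hand-rolled unit test: each check covers one branch or boundary.
class PriceCalculatorTest {
    public static void main(String[] args) {
        check(Math.abs(PriceCalculator.total(50.0) - 50.0) < 1e-9,
              "no discount below 100");
        check(Math.abs(PriceCalculator.total(100.0) - 90.0) < 1e-9,
              "discount at exactly 100");
        check(Math.abs(PriceCalculator.total(200.0) - 180.0) < 1e-9,
              "discount above 100");
        System.out.println("All tests passed");
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("FAILED: " + name);
    }
}
```

Integration, system, and acceptance testing then combine such verified units into ever larger assemblies, checking them against the predefined requirements.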
Deployment and Implementation: Once the software passes the testing phase, it is ready for deployment and
implementation. The development team assists the client in setting up the software environment, migrating data
if necessary, and configuring the system. User training and documentation are also provided to ensure a smooth
transition and enable users to maximize the software's potential.
Maintenance and Support: After the software is deployed, ongoing maintenance and support become crucial to
address any issues, enhance performance, and incorporate future enhancements. Regular updates, bug fixes, and
security patches are released to keep the software up-to-date and secure. This phase also involves providing
technical support to end-users and addressing their queries or concerns.

Methodologies, processes, and
frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day
work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of
a specific project or group. In some cases, a "sponsor" or "maintenance" organization distributes an official set
of documents that describe the process. Specific examples include:
1970s

• Structured programming since 1969


• Cap Gemini SDM, originally from PANDATA, the first English translation was published in 1974.
SDM stands for System Development Methodology
1980s

• Structured systems analysis and design method (SSADM) from 1980 onwards
• Information Requirement Analysis/Soft systems methodology
1990s

• Object-oriented programming (OOP) developed in the early 1960s and became a dominant
programming approach during the mid-1990s
• Rapid application development (RAD), since 1991
• Dynamic systems development method (DSDM), since 1994
• Scrum, since 1995
• Team software process, since 1998
• Rational Unified Process (RUP), maintained by IBM since 1998
• Extreme programming, since 1999
2000s

• Agile Unified Process (AUP) maintained since 2005 by Scott Ambler


• Disciplined agile delivery (DAD) Supersedes AUP
2010s

• Scaled Agile Framework (SAFe)


• Large-Scale Scrum (LeSS)
• DevOps
It is notable that since DSDM in 1994, all of the methodologies on the above list except RUP have been agile
methodologies - yet many organizations, especially governments, still use pre-agile processes (often waterfall
or similar). Software process and software quality are closely interrelated; some unexpected facets and effects
have been observed in practice.
In addition, another software development process has been established in open source. The adoption of these
best practices and established processes within the confines of a company is called inner source.

Prototyping
Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being
developed.
The basic principles are:

• Prototyping is not a standalone, complete development methodology, but rather an approach to try
out particular features in the context of a full methodology (such as incremental, spiral, or rapid
application development (RAD)).
• Attempts to reduce inherent project risk by breaking a project into smaller segments and providing
more ease-of-change during the development process.
• The client is involved throughout the development process, which increases the likelihood of client
acceptance of the final implementation.
• While some prototypes are developed with the expectation that they will be discarded, it is possible
in some cases to evolve from prototype to working system.
A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems,
but this is true for all software methodologies.

Methodologies
Agile development
"Agile software development" refers to a group of software development frameworks based on iterative
development, where requirements and solutions evolve via collaboration between self-organizing cross-
functional teams. The term was coined in the year 2001 when the Agile Manifesto was formulated.
Agile software development uses iterative development as a basis but advocates a lighter and more people-
centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the
continuous feedback that it provides to successively refine and deliver a software system.
The Agile model also includes the following software development processes:[4]

• Dynamic systems development method (DSDM)


• Kanban
• Scrum
• Crystal
• Atern
• Lean software development
Continuous integration
Continuous integration is the practice of merging all developer working copies to a shared mainline several
times a day. Grady Booch first named and proposed CI in his 1991 method, although he did not advocate
integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate
integrating more than once per day – perhaps as many as tens of times per day.
Incremental development
Various methods are acceptable for combining linear and iterative systems development methodologies, with
the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments
and providing more ease-of-change during the development process.
There are three main variants of incremental development:

1. A series of mini-Waterfalls are performed, where all phases of the Waterfall are completed for a
small part of a system, before proceeding to the next increment, or
2. Overall requirements are defined before proceeding to evolutionary, mini-Waterfall
development of individual increments of a system, or
3. The initial software concept, requirements analysis, and design of architecture and system core
are defined via Waterfall, followed by incremental implementation, which culminates in
installing the final version, a working system.
Rapid application development

Rapid Application Development (RAD) Model


Rapid application development (RAD) is a software development methodology, which favors iterative
development and the rapid construction of prototypes instead of large amounts of up-front planning. The
"planning" of software developed using RAD is interleaved with writing the software itself. The lack of
extensive pre-planning generally allows software to be written much faster, and makes it easier to change
requirements.
The rapid development process starts with the development of preliminary data models and business process
models using structured techniques. In the next stage, requirements are verified using prototyping, eventually
to refine the data and process models. These stages are repeated iteratively; further development results in "a
combined business requirements and technical design statement to be used for constructing new systems".
The term was first used to describe a software development process introduced by James Martin in 1991.
According to Whitten (2003), it is a merger of various structured techniques, especially data-driven information
technology engineering, with prototyping techniques to accelerate software systems development.[7]
The basic principles of rapid application development are:

• Key objective is for fast development and delivery of a high quality system at a relatively low
investment cost.
• Attempts to reduce inherent project risk by breaking a project into smaller segments and providing
more ease-of-change during the development process.
• Aims to produce high quality systems quickly, primarily via iterative Prototyping (at any stage of
development), active user involvement, and computerized development tools. These tools may
include Graphical User Interface (GUI) builders, Computer Aided Software Engineering (CASE)
tools, Database Management Systems (DBMS), fourth-generation programming languages, code
generators, and object-oriented techniques.
• Key emphasis is on fulfilling the business need, while technological or engineering excellence is of
lesser importance.
• Project control involves prioritizing development and defining delivery deadlines or “timeboxes”.
If the project starts to slip, emphasis is on reducing requirements to fit the timebox, not in increasing
the deadline.
• Generally includes joint application design (JAD), where users are intensely involved in system
design, via consensus building in either structured workshops, or electronically facilitated
interaction.
• Active user involvement is imperative.
• Iteratively produces production software, as opposed to a throwaway prototype.
• Produces documentation necessary to facilitate future development and maintenance.
• Standard systems analysis and design methods can be fitted into this framework.
Waterfall development

The activities of the software development process represented in the waterfall model. There are several other
models to represent this process.
The waterfall model is a sequential development approach, in which development is seen as flowing steadily
downwards (like a waterfall) through several phases, typically:

• Requirements analysis resulting in a software requirements specification


• Software design
• Implementation
• Testing
• Integration, if there are multiple subsystems
• Deployment (or Installation)
• Maintenance
The first formal description of the method is often cited as an article published by Winston W. Royce in 1970,
although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a
flawed, non-working model.
The basic principles are:

• The Project is divided into sequential phases, with some overlap and splash back acceptable between
phases.
• Emphasis is on planning, time schedules, target dates, budgets, and implementation of an entire
system at one time.
• Tight control is maintained over the life of the project via extensive written documentation, formal
reviews, and approval/signoff by the user and information technology management occurring at the
end of most phases before beginning the next phase. Written documentation is an explicit deliverable
of each phase.
The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall
approach discourages revisiting and revising any prior phase once it is complete. This "inflexibility" in a pure
waterfall model has been a source of criticism by supporters of other more "flexible" models. It has been widely
blamed for several large-scale government projects running over budget, over time and sometimes failing to
deliver on requirements due to the Big Design Up Front approach. Except when contractually required, the
waterfall model has been largely superseded by more flexible and versatile methodologies developed
specifically for software development.
Spiral development

Spiral model (Boehm, 1988)


In 1988, Barry Boehm published a formal software system development "spiral model," which combines some
key aspects of the waterfall model and rapid prototyping methodologies, in an effort to combine advantages
of top-down and bottom-up concepts. It provided emphasis in a key area many felt had been neglected by other
methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.
The basic principles are:[1]

• Focus is on risk assessment and on minimizing project risk by breaking a project into smaller
segments and providing more ease-of-change during the development process, as well as providing
the opportunity to evaluate risks and weigh consideration of project continuation throughout the life
cycle.
• "Each cycle involves a progression through the same sequence of steps, for each part of the product
and for each of its levels of elaboration, from an overall concept-of-operation document down to the
coding of each individual program."
• Each trip around the spiral traverses four basic quadrants: (1) determine objectives, alternatives, and
constraints of the iteration; (2) evaluate alternatives and identify and resolve risks; (3) develop and
verify deliverables from the iteration; and (4) plan the next iteration.
• Begin each cycle with an identification of stakeholders and their "win conditions", and end each
cycle with review and commitment.
Shape Up
Shape Up is a software development approach introduced by Basecamp in 2018. It is a set of principles and
techniques that Basecamp developed internally to overcome the problem of projects dragging on with no clear
end. Its primary target audience is remote teams. Shape Up has no estimation and velocity tracking, backlogs,
or sprints, unlike Waterfall, Agile, or Scrum. Instead, those concepts are replaced with appetite, betting, and
cycles. As of 2022, besides Basecamp, notable organizations that have adopted Shape Up include UserVoice
and Block.
Cycles
Through trial and error, Basecamp found that the ideal cycle length is six weeks. This period is long
enough to build a meaningful feature and still short enough to induce a sense of urgency.
Shaping
Shaping is the process of preparing work before being handed over to designers and engineers. Shaped work
spells out the solution's main UI elements, identifies rabbit holes, and outlines clear scope boundaries. It is
meant to be rough and to leave finer details for builders (designers and engineers) to solve, allowing the builders
to exercise their creativity and make trade-offs.[15] Shaped work is documented in the form of a pitch using an
online document solution that supports commenting, allowing team members to contribute technical
information asynchronously. Such comments are crucial for uncovering hidden surprises that may derail the
project.
Before a cycle begins, stakeholders hold a betting table, where pitches are reviewed. For each pitch, a decision
is made to either bet on it or drop it.
Building
Shape Up is a two-track system where shapers and builders work in parallel. Work that is being shaped in the
current cycle may be given to designers and engineers to build in a future cycle.
Recognizing the technical uncertainties that come with building, progress is tracked using a chart that visualizes
the metaphor of the hill, aptly named the hill chart. The uphill phase is where builders are still working out their
approach, while the downhill is where unknowns have been eliminated. Builders proactively and
asynchronously self-report progress using an interactive online hill chart on Basecamp or Jira, shifting focus
from done or not-done statuses to unknown or solved problems. The use of the hill chart replaces the process of
reporting linear statuses in Scrum or Kanban standups.
Advanced methodologies
Other high-level software project methodologies include:

• Behavior-driven development and business process management.


• Chaos model - the main rule is to always resolve the most important issue first.
• Incremental funding methodology - an iterative approach
• Lightweight methodology - a general term for methods that only have a few rules and practices
• Structured systems analysis and design method - a specific version of waterfall
• Slow programming, as part of the larger Slow Movement, emphasizes careful and gradual work
without (or minimal) time pressures. Slow programming aims to avoid bugs and overly quick release
schedules.
• V-Model (software development) - an extension of the waterfall model
• Unified Process (UP) is an iterative software development methodology framework, based
on the Unified Modeling Language (UML). UP organizes the development of software into four phases,
each consisting of one or more executable iterations of the software at that stage of development:
inception, elaboration, construction, and transition. Many tools and products exist to facilitate UP
implementation. One of the more popular versions of UP is the Rational Unified Process (RUP).
• Big Bang methodology - an approach for small or undefined projects, generally consisting of little
to no planning with high risk.

Process meta-models
Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process
adopted by an organization.

• ISO/IEC 12207 is the international standard describing the method to select, implement, and monitor
the life cycle for software.
• The Capability Maturity Model Integration (CMMI) is one of the leading models and is based on
best practices. Independent assessments grade organizations on how well they follow their defined
processes, not on the quality of those processes or the software produced. CMMI has replaced CMM.
• ISO 9000 describes standards for a formally organized process to manufacture a product and the
methods of managing and monitoring progress. Although the standard was originally created for the
manufacturing sector, ISO 9000 standards have been applied to software development as well. Like
CMMI, certification with ISO 9000 does not guarantee the quality of the end result, only that
formalized business processes have been followed.
• ISO/IEC 15504 Information technology—Process assessment, also known as Software Process
Improvement and Capability Determination (SPICE), is a "framework for the assessment of software
processes". This standard is aimed at setting out a clear model for process comparison. SPICE is
used much like CMMI. It models processes to manage, control, guide, and monitor software
development. This model is then used to measure what a development organization or project team
actually does during software development. This information is analyzed to identify weaknesses and
drive improvement. It also identifies strengths that can be continued or integrated into common
practice for that organization or team.
• ISO/IEC 24744 Software Engineering—Metamodel for Development Methodologies is a powertype-based
metamodel for software development methodologies.
• SPEM 2.0 by the Object Management Group.
• Soft systems methodology - a general method for improving management processes.
• Method engineering - a general method for improving information system processes.

The three basic approaches applied to software development methodology frameworks


A variety of such frameworks have evolved over the years, each with its own recognized strengths and
weaknesses. One software development methodology framework is not necessarily suitable for use by all
projects. Each of the available methodology frameworks is best suited to specific kinds of projects, based on
various technical, organizational, project, and team considerations.
Software development organizations implement process methodologies to ease the process of development.
Sometimes contractors may require specific methodologies to be employed; an example is the U.S. defense
industry, which requires a rating based on process models to obtain contracts. The international standard for
describing the method of selecting, implementing, and monitoring the life cycle for software is ISO/IEC 12207.
A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality.
Some try to systematize or formalize the seemingly unruly task of designing software. Others apply project
management techniques to designing software. Large numbers of software projects do not meet their
expectations in terms of functionality, cost, or delivery schedule - see List of failed and overbudget custom
software projects for some notable examples.
Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process
improvement. Composed of line practitioners who have varied skills, the group is at the center of the
collaborative effort of everyone in the organization who is involved with software engineering process
improvement.
A particular development team may also agree on programming environment details, such as which integrated
development environment is used, one or more dominant programming paradigms, programming style rules, or the
choice of specific software libraries or software frameworks. These details are generally not dictated by the
choice of model or general methodology.
Software development life cycle (SDLC)

Domain Class Model

The domain class model defines the real-world classes and the relationships between them. As discussed
earlier, domain modelling occurs during the analysis phase of the software development process.

Domain analysis identifies the real-world objects in the problem statement that are important from the
application's point of view and captures them in the domain model.

The domain model captures three aspects of the real-world objects:

1. The static structure of the objects, by creating a domain class model.
2. The interactions between the objects, by creating a domain interaction model.
3. The life cycle histories of the objects, by creating a domain state model.

In this section, we will discuss the domain class model and its modelling elements in brief.

What is Domain Class Model?

The domain class model is the first model created during analysis because it is easier and more convenient to
define static entities, which are independent of the application and more stable over the course of software
development.

To create the domain class model, information is gathered from the problem statement, related applications,
experts' knowledge, business interviews, documents, and so on. You must not rely entirely on the provided
problem statement, as not even a business expert can provide a perfectly precise one.

Now, the problem is how to create a domain class model?

To create a domain class model, you have to perform the following steps:

1. Identify real-world classes.


2. Compose a data dictionary.
3. Identify the association.
4. Identify attributes.
5. Optimize classes using inheritance.
6. Test access paths to verify that they produce sensible results.
7. Iterate the model to resolve ambiguities.
8. Revise the level of abstraction.
9. Group classes to form packages.

It is impossible to construct the entire software at once and develop every aspect uniformly. The first
constructed model always has flaws, and it is refined over several iterations. We will discuss each step of
constructing a domain class model in detail.
1. Identifying Classes

Find classes for the real-world objects. An object could be a physical real-world entity such as a person, a
piece of furniture, or a country, or a conceptual entity such as a route, a transaction, or a cost. Be clear
that you must identify the classes that are relevant to the application.

You cannot find all the classes from the problem statement alone; you have to exercise your general knowledge
to extract further classes. The figure below shows the standard way of choosing classes.

Don't overthink when identifying classes; simply note down the classes that come to mind. At this initial stage,
don't indulge in practising inheritance.

Now, from your class collection, you have to eliminate the unnecessary and incorrect classes. For this, you
can use the following guidelines.

• Eliminate redundant classes, retaining the one class with the most expressive name.
• Eliminate irrelevant classes that have nothing to do with the problem. You need an expert's
judgment for this, as it would cause problems if an eliminated class were actually important.
• Eliminate vague classes that have ill-defined boundaries or are too broad in scope.
• Names describing a property of an object should be listed as attributes. But if the property is
independently important, then list it as a class, not as an attribute. The same applies to operations:
if an operation has properties of its own, it should be modelled as a class.
• The name of a class should reflect its intrinsic nature, not the role it plays in association with
other classes.
• Eliminate names that are implementation constructs.
• Ignore classes that can be derived from other classes.

2. Preparing Data Dictionary

The name of a class can be interpreted in many ways, so prepare a dictionary entry for every name. You can
write the information in a paragraph that describes the class, its attributes, operations, and associations.
You can also note down the scope of the class in the current scenario, as well as its limitations.

3. Identifying Associations

Now that we have a set of classes, it's time to identify the structural relationships between them. Associations
can be identified as verbs or verb phrases in the problem statement.

They may represent communication (DealsWith), ownership (Has, PartOf), physical location (PartOf,
ContainedIn, NextTo), or a condition (WorksFor, Manages). Just note down all the relationships and don't try to
refine them too early.

After collecting all kinds of relations, it is time to eliminate and discard the unnecessary associations. For
this, follow the guidelines below.

• If you have eliminated a class, then eliminate its associations or redefine them for other classes.
• Eliminate associations that are beyond the scope of the application.
• An association should be a perpetual relationship, not a transient event. In the problem statement, an
association is sometimes expressed as an action.
• Decompose associations between three or more classes into binary associations, retaining all the
information.
• Eliminate associations that can be defined in terms of other associations.
• Derived associations may not add information, but they are still important for designing the class model.
• Associations must be named carefully, as the name defines the situation being modelled.
• Association end names are not required if there is a single, well-defined association between a pair
of classes. When there are multiple associations between the classes, association end names must be
described precisely.
• Add a qualified association when an object needs to be identified in a specific context.
• Add multiplicity to the associations, but don't try to refine it too much, as multiplicity changes during
analysis.
• If you find any missing associations, add them.
• Don't spend much time distinguishing between association and aggregation; aggregation is just an
association with some extra conditions.
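As a sketch of how a refined association might look in code, the example below models a one-to-many
association between a library and its books, qualified by a catalog number. The names Library, Book, and
catalog_number are illustrative assumptions, not part of any given problem statement.

```python
# Sketch: a one-to-many association implemented as a qualified association.
# All names here are illustrative assumptions.
from typing import Optional

class Book:
    def __init__(self, title: str):
        self.title = title

class Library:
    def __init__(self):
        # one-to-many association: a Library holds many Books,
        # qualified by catalog number
        self._catalog = {}  # maps catalog number -> Book

    def add(self, catalog_number: str, book: Book) -> None:
        self._catalog[catalog_number] = book

    def find(self, catalog_number: str) -> Optional[Book]:
        # within a given Library, the qualifier identifies at most one Book
        return self._catalog.get(catalog_number)

lib = Library()
lib.add("QA76.9", Book("OOAD Notes"))
print(lib.find("QA76.9").title)  # OOAD Notes
```

The qualifier turns the one-to-many Library-Book association into a one-to-one lookup in a given context, which
is exactly what a qualified association expresses.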

4. Identifying Attributes

For every class, identify the common nouns that describe properties of the object in the application
domain. Attributes are not described as fully in problem statements as classes and associations are.

Do not spend too much time identifying attributes; only consider attributes important from the application's
point of view, and name them meaningfully. Also figure out attributes on associations: such an attribute
describes the relationship between two objects rather than a property of an individual object.

After listing the attributes, it's time to eliminate the unnecessary and incorrect ones.

• If an attribute's independent existence is important from the application's point of view, restate
that attribute as an object.
• Restate an attribute as a qualifier if its value depends on a particular context.
• A name is considered an attribute if its value does not depend on a particular context and need not
be unique in a set.
• Eliminate attributes whose only purpose is to identify an object, as a class model implicitly includes
an identifier.
• If an attribute carries a value for the links between objects, then it is an attribute of the association.
• Eliminate attributes that describe the internal state of an object and are not visible to the outside
world.
• Eliminate minor attributes that do not affect the behavior of an object.
• Restate boolean attributes as enumerations.
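The last guideline, restating a boolean attribute as an enumeration, can be sketched as follows. The
MembershipStatus values are hypothetical, chosen only to show why an enumeration is more extensible than a flag.

```python
# Sketch: restating a boolean attribute as an enumeration.
# MembershipStatus and its values are illustrative assumptions.
from enum import Enum

class MembershipStatus(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"
    EXPIRED = "expired"   # a bare 'is_active' boolean could not express this

class LibraryMember:
    def __init__(self, name: str, status: MembershipStatus):
        self.name = name
        self.status = status   # enumeration instead of a boolean flag

m = LibraryMember("Asha", MembershipStatus.SUSPENDED)
print(m.status is MembershipStatus.SUSPENDED)  # True
```

A boolean forces every state into two values; the enumeration keeps the model open to new states without
changing the attribute's type.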

5. Refining inheritance

While identifying classes, we deferred the task of generalizing and reorganizing them. Generalizing the classes
helps in reusing the class models.

We can implement inheritance among the classes in two directions:

• Bottom-up: generalizing the classes by discovering common attributes, associations, and operations
and defining them in a superclass.
• Top-down: specializing the classes by refining features of a superclass in its subclasses.
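The bottom-up direction can be sketched as follows: the common attributes of two classes are moved into a
superclass discovered from them. The names LibraryItem, Book, and Journal are illustrative assumptions.

```python
# Sketch of bottom-up generalization: shared features of Book and Journal
# are factored into a LibraryItem superclass. Names are illustrative.

class LibraryItem:
    """Superclass discovered bottom-up from the shared features."""
    def __init__(self, title: str, accession_no: str):
        self.title = title            # common attribute
        self.accession_no = accession_no  # common attribute

class Book(LibraryItem):
    def __init__(self, title: str, accession_no: str, isbn: str):
        super().__init__(title, accession_no)
        self.isbn = isbn              # feature specific to Book

class Journal(LibraryItem):
    def __init__(self, title: str, accession_no: str, issue: int):
        super().__init__(title, accession_no)
        self.issue = issue            # feature specific to Journal

b = Book("OOAD", "A-001", "978-0")
print(isinstance(b, LibraryItem))  # True
```

Going top-down would run the same structure in reverse: start from LibraryItem and refine its features into the
Book and Journal subclasses.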

6. Testing Access Paths

Now it's time to trace the domain class model to see that it can answer every possible user question. If you
find that the model does not provide answers to some useful questions, that indicates some information is
missing from the model.
7. Iterating Class Model

Software is never developed in one pass; it goes through many iterations to resolve its ambiguities. Similarly,
the class model does not become perfect in a single pass. While iterating the class model, you have to fill in
the answers that were found missing while testing the access paths.

8. Shift Level of Abstraction

In the first pass, you focused on the problem statement to figure out classes and associations. But this is not
enough; you have to raise the level of abstraction with each iteration.

9. Grouping Classes into Packages

After discovering the classes, identifying the associations between them, and listing the attributes of the
classes, we now group the classes that are most closely related to each other into packages.

Classes in different packages are less closely related to each other, though classes may be repeated across
packages. Packages can be reused if a previous problem is similar to the current one.

If your new problem statement is similar to a previous one but more advanced, you can extend the packages. It
is up to your judgment whether to reuse the packages or build a new model.

Domain Modeling :
Domain Modeling is understood as abstract modeling. a site model could be an illustration of the ideas or
objects shown within the drawback domain. It additionally captures the apparent relationships among these
objects. samples of such abstract objects area unit the Book, BookRegister, member register, LibraryMember,
etc. The counseled strategy is to quickly produce a rough abstract model wherever the stress is finding the
apparent ideas expressed within the needs whereas deferring an in-depth investigation. Later throughout the
event method, the abstract model is incrementally refined and extended. The 3 sorts of objects are known
throughout domain analysis. The objects known throughout domain analysis are classified into 3 types:
1. Boundary objects
2. Controller objects
3. Entity objects
The boundary and controller objects are consistently known from the employment case diagram whereas the
identification of entity objects needs to apply. So, the crux of the domain modeling activity is to spot the
entity models.
The purpose of various sorts of objects known throughout domain analysis and the way these objects move
among one another.
The different styles of objects known throughout domain analysis and the area of their relationship unit as
follows:
• Boundary objects: The boundary objects area unit those with that the actors move. These
embrace screens, menus, forms, dialogs, etc. The boundary objects area unit is chiefly answerable
for user interaction. Therefore, they ordinarily don’t embrace any process logic. However, they
will be answerable for confirming inputs, formatting, outputs, etc. The boundary objects were
earlier known as because of the interface objects. However, the term interface category is getting
used for Java, COM/DCOM, and UML with completely different means. A recommendation for
the initial identification of the boundary categories is to outline one boundary category per
actor/use case try.
• Entity objects: These ordinarily hold info like information tables and files that require to outlast
use case execution, e.g. Book, BookRegister, LibraryMember, etc. several of the entity objects
area unit “dumb servers”. they’re ordinarily answerable for storing information, winning
information, and performing some elementary styles of operation that don’t amendment usually.
• Controller objects: The controller objects coordinate the activities of a set of entity
objects and interface with the boundary objects to provide the overall behavior of the system. The
responsibilities assigned to a controller object are closely related to the realization of
a specific use case. The controller objects effectively decouple the boundary and entity objects
from each other, making the system tolerant to changes in the user interface and processing logic.
The controller objects embody most of the complex logic involved in the use case realization
(this logic may change from time to time). A typical interaction of a controller object with
boundary and entity objects is shown below (figure). Normally, each use case is realized
using one controller object. However, some use cases are realized without
using any controller object, i.e. through boundary and entity objects only. This is often
true for use cases that achieve only some simple manipulation of the stored information.
For example, let us consider the "query book availability" use case of the Library Information
System (LIS). Realization of this use case involves only matching the given book name
against the books available in the catalog. More complex use cases may require more than
one controller object to realize the use case. A complex use case can have several
controller objects, such as a transaction manager, a resource coordinator, and an error handler. There is another
situation where a use case can have more than one controller object: sometimes the use
cases require the controller object to transit through a number of states. In such cases, a new controller
object may need to be created for each execution of the use case.
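The decoupling described above can be sketched in code. This is a minimal illustration, not a prescribed LIS design; the class and method names below are assumptions chosen for the example. A boundary object validates and formats, a controller object realizes the use case, and an entity object simply stores data.

```python
# Sketch of boundary/controller/entity collaboration for the
# "query book availability" use case (all names are illustrative).

class BookRegister:  # entity object: a "dumb server" that stores data
    def __init__(self, titles):
        self._titles = set(titles)

    def has_title(self, title):
        return title in self._titles


class QueryBookController:  # controller object: realizes the use case
    def __init__(self, register):
        self._register = register

    def check_availability(self, title):
        return self._register.has_title(title)


class QueryBookBoundary:  # boundary object: validates input, formats output
    def __init__(self, controller):
        self._controller = controller

    def query(self, raw_title):
        title = raw_title.strip()  # input validation/cleanup
        if not title:
            return "Please enter a book name."
        if self._controller.check_availability(title):
            return f"'{title}' is available."
        return f"'{title}' is not in the catalog."
```

Because the boundary talks only to the controller and the controller talks only to the entity, the user interface and the storage scheme can change independently of each other.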

Iteration in Object-Oriented Analysis & Design


Successful software development requires more than powerful concepts and notation; it also requires rigorous
software processes. The software process associated with OO development is inherently different from
traditional software life cycles. For example, whereas the traditional “waterfall”
analysis/design/implementation/test life cycle is sequential, the OO software lifecycle is iterative and
incremental. It is sometimes caricatured as “analyze a little, design a little, implement a little, test a little.”

Why Iterate

The advantage of an iterative software process is that it allows us, when there is reason to do so, to return to a
previous step, introduce a change, and propagate the ramifications of the change forward in the life
cycle. Consequently, as we learn more about the problem from prototypes and user feedback we can
continuously refine and extend our design. The advantage of an incremental software process is that it allows
the system to evolve in a series of phased releases, beginning with a skeleton architecture and culminating in a
deliverable software product. As a result, we can systematically extend the architecture as we learn by
experience with previous releases.

Disadvantages
The main disadvantage of the iterative and incremental lifecycle is that, for large projects, it is frequently a
challenge to manage. Since the various analysis/design/implementation/testing activities typically proceed in
parallel, coordinating activities within and across the releases requires sophisticated project management
skills. To successfully orchestrate these concurrent activities for a large project, a solid understanding of the
core OOAD activities and deliverables is essential.

OO Analysis and Design

At a high level of abstraction, the iterative, incremental activities of the OO software lifecycle seem similar to
those associated with traditional software life cycles. As is the case with traditional software processes, we
distinguish between analysis and design activities:
Object-Oriented Analysis (OOA): A software analysis process that specifies system requirements using an
object model as the unifying paradigm. The primary activity of OOA is object modeling, although it is
sometimes extended to include other activities, such as use-case modeling.

Object-Oriented Design (OOD): A software design process that translates the logical object model from object-
oriented analysis into a physical design specification. This OOD process typically includes refining and
extending the object model and defining the application architecture.

Iteration in Analysis and Design


However, unlike in the traditional software life cycle, in the OO life cycle the separation of concerns between
analysis and design is not strictly enforced. Developers are not only allowed, they are encouraged, to move
back and forth between analysis and design activities. The object model facilitates this fluidity, since it is central
to both OOA and OOD activities. For this reason it is sometimes convenient to view the OOA object model as
a logical object model, and the OOD object model as a physical object model.

SDLC Development stages:

What is Software Development Life Cycle (SDLC)? Learn SDLC Phases, Process, and Models:
Software Development Life Cycle (SDLC) is a framework that defines the steps involved in the development
of software at each phase. It covers the detailed plan for building, deploying and maintaining the software.

SDLC defines the complete cycle of development i.e. all the tasks involved in planning, creating, testing, and
deploying a Software Product.

What You Will Learn:


• Software Development Life Cycle Process
• SDLC Cycle
• SDLC Phases
o #1) Requirement Gathering and Analysis
o #2) Design
o #3) Implementation or Coding
o #4) Testing
o #5) Deployment
o #6) Maintenance
• Software Development Life Cycle Models
o #1) Waterfall Model
o #2) V-Shaped Model
o #3) Prototype Model
o #4) Spiral Model
o #5) Iterative Incremental Model
o #6) Big Bang Model
o #7) Agile Model
• Conclusion
Software Development Life Cycle Process
SDLC is a process that defines the various stages involved in the development of software for delivering a
high-quality product. SDLC stages cover the complete life cycle of a software i.e. from inception to retirement
of the product.

Adhering to the SDLC process leads to the development of the software in a systematic and disciplined
manner.

Purpose:
Purpose of SDLC is to deliver a high-quality product which is as per the customer’s requirement.

SDLC has defined its phases as, Requirement gathering, Designing, Coding, Testing, and Maintenance. It is
important to adhere to the phases to provide the Product in a systematic manner.

For Example, suppose a software product has to be developed, a team is divided to work on features of the
product, and the members are allowed to work however they want. One developer decides to design first,
another decides to code first, and another starts with the documentation.
This will lead to project failure, which is why good knowledge and a shared understanding among the team
members are necessary to deliver the expected product.

SDLC Cycle
SDLC Cycle represents the process of developing software.

Below is the diagrammatic representation of the SDLC cycle:

SDLC Phases
Given below are the various phases:
• Requirement gathering and analysis
• Design
• Implementation or coding
• Testing
• Deployment
• Maintenance
#1) Requirement Gathering and Analysis
During this phase, all the relevant information is collected from the customer to develop a product as per their
expectation. Any ambiguities must be resolved in this phase only.

Business analyst and Project Manager set up a meeting with the customer to gather all the information like
what the customer wants to build, who will be the end-user, what is the purpose of the product. Before
building a product a core understanding or knowledge of the product is very important.

For Example, A customer wants to have an application which involves money transactions. In this case, the
requirement has to be clear like what kind of transactions will be done, how it will be done, in which currency
it will be done, etc.
Once the requirement gathering is done, an analysis is done to check the feasibility of the development of a
product. In case of any ambiguity, a call is set up for further discussion.

Once the requirement is clearly understood, the SRS (Software Requirement Specification) document is
created. This document should be thoroughly understood by the developers and also should be reviewed by
the customer for future reference.

#2) Design
In this phase, the requirements gathered in the SRS document are used as input, and the software
architecture that will be used for implementing the system is derived.

#3) Implementation or Coding


Implementation/Coding starts once the developer gets the Design document. The Software design is translated
into source code. All the components of the software are implemented in this phase.

#4) Testing
Testing starts once the coding is complete and the modules are released for testing. In this phase, the
developed software is tested thoroughly and any defects found are assigned to developers to get them fixed.

Retesting and regression testing are done until the point at which the software meets the customer’s expectation.
Testers refer to the SRS document to make sure that the software conforms to the customer’s standard.

#5) Deployment
Once the product is tested, it is deployed in the production environment or first UAT (User Acceptance
testing) is done depending on the customer expectation.
In the case of UAT, a replica of the production environment is created and the customer along with the
developers does the testing. If the customer finds the application as expected, then sign off is provided by the
customer to go live.

#6) Maintenance
After the deployment of a product on the production environment, maintenance of the product i.e. if any issue
comes up and needs to be fixed or any enhancement is to be done is taken care by the developers.

Software Development Life Cycle Models


A software life cycle model is a descriptive representation of the software development cycle. SDLC models
might have a different approach but the basic phases and activity remain the same for all the models.
#1) Waterfall Model
Waterfall model is the very first model that is used in SDLC. It is also known as the linear sequential model.
In this model, the outcome of one phase is the input for the next phase. Development of the next phase starts
only when the previous phase is complete.

• First, Requirement gathering and analysis is done. Only once the requirements are frozen can the
System Design start. Herein, the SRS document created is the output of the Requirement
phase and acts as an input for the System Design.
• In System Design, the software architecture and design documents, which act as input for the
next phase, i.e. Implementation and Coding, are created.
• In the Implementation phase, coding is done and the software developed is the input for the
next phase i.e. testing.
• In the testing phase, the developed code is tested thoroughly to detect the defects in the
software. Defects are logged into the defect tracking tool and are retested once fixed. Bug
logging, Retest, Regression testing goes on until the time the software is in go-live state.
• In the Deployment phase, the developed code is moved into production after the sign off is
given by the customer.
• Any issues in the production environment are resolved by the developers which come under
maintenance.

Advantages of the Waterfall Model:


• Waterfall model is a simple model which can be easily understood, and one in which all
the phases are done step by step.
• Deliverables of each phase are well defined, and this leads to no complexity and makes the
project easily manageable.
Disadvantages of Waterfall model:
• Waterfall model is time-consuming & cannot be used in the short duration projects as in this
model a new phase cannot be started until the ongoing phase is completed.
• Waterfall model cannot be used for the projects which have uncertain requirement or wherein
the requirement keeps on changing as this model expects the requirement to be clear in the
requirement gathering and analysis phase itself and any change in the later stages would lead to
cost higher as the changes would be required in all the phases.
#2) V-Shaped Model
V-Model is also known as the Verification and Validation Model. In this model Verification & Validation go
hand in hand, i.e. development and testing proceed in parallel. The V-model and the waterfall model are the
same except that test planning and testing start at an early stage in the V-Model.

a) Verification Phase:
(i) Requirement Analysis:
In this phase, all the required information is gathered & analyzed. Verification activities include reviewing the
requirements.

(ii) System Design:


Once the requirement is clear, a system is designed i.e. architecture, components of the product are created
and documented in a design document.

(iii) High-Level Design:


High-level design defines the architecture/design of modules. It defines the functionality between the two
modules.

(iv) Low-Level Design:


Low-level Design defines the architecture/design of individual components.

(v) Coding:
Code development is done in this phase.

b) Validation Phase:
(i) Unit Testing:
Unit testing is performed using the unit test cases that are designed during the Low-Level Design phase.
Unit testing is performed by the developers themselves. It is performed on individual components, which
leads to early defect detection.
(ii) Integration Testing:
Integration testing is performed using the integration test cases designed during the High-Level Design
phase. Integration testing is the testing that is done on integrated modules. It is performed by testers.
(iii) System Testing:
System testing is performed using the test cases designed during the System Design phase. In this phase,
the complete system is tested, i.e. the entire system functionality is tested.
(iv) Acceptance Testing:
Acceptance testing is associated with the Requirement Analysis phase and is done in the customer’s
environment.
Advantages of V – Model:
• It is a simple and easily understandable model.
• The V-model approach is good for smaller projects wherein the requirements are defined and
frozen at an early stage.
• It is a systematic and disciplined model which results in a high-quality product.
Disadvantages of V-Model:
• V-shaped model is not good for ongoing projects.
• Requirement change at the later stage would cost too high.
#3) Prototype Model
The prototype model is a model in which the prototype is developed prior to the actual software.

Prototype models have limited functional capabilities and inefficient performance when compared to the
actual software. Dummy functions are used to create prototypes. This is a valuable mechanism for
understanding the customers’ needs.

Software prototypes are built prior to the actual software to get valuable feedback from the customer.
Feedbacks are implemented and the prototype is again reviewed by the customer for any change. This process
goes on until the model is accepted by the customer.

Once the requirement gathering is done, a quick design is created and a prototype is built and presented to
the customer for evaluation.

Customer feedback and the refined requirements are used to modify the prototype, which is again presented
to the customer for evaluation. Once the customer approves the prototype, it is used as the requirement for
building the actual software. The actual software is built using the Waterfall model approach.

Advantages of Prototype Model:


• Prototype model reduces the cost and time of development as the defects are found much
earlier.
• Missing feature or functionality or a change in requirement can be identified in the evaluation
phase and can be implemented in the refined prototype.
• Involvement of a customer from the initial stage reduces any confusion in the requirement or
understanding of any functionality.
Disadvantages of Prototype Model:
• Since the customer is involved in every phase, the customer can change the requirement of the
end product which increases the complexity of the scope and may increase the delivery time of
the product.
#4) Spiral Model
The Spiral Model combines the iterative and prototype approaches.
The spiral model phases are followed in iterations. The loops in the model represent the phases of the SDLC
process: the innermost loop is requirement gathering & analysis, which is followed by Planning, Risk
Analysis, Development, and Evaluation. The next loop is Designing, followed by Implementation and then Testing.
Spiral Model has four phases:
• Planning
• Risk Analysis
• Engineering
• Evaluation

(i) Planning:
The planning phase includes requirement gathering wherein all the required information is gathered from the
customer and is documented. Software requirement specification document is created for the next phase.

(ii) Risk Analysis:


In this phase, the best solution is selected for the risks involved and analysis is done by building the prototype.

For Example, the risk involved in accessing the data from a remote database can be that the data access rate
might be too slow. The risk can be resolved by building a prototype of the data access subsystem.
(iii) Engineering:
Once the risk analysis is done, coding and testing are done.

(iv) Evaluation:
Customer evaluates the developed system and plans for the next iteration.

Advantages of Spiral Model:


• Risk Analysis is done extensively using the prototype models.
• Any enhancement or change in the functionality can be done in the next iteration.
Disadvantages of Spiral Model:
• The spiral model is best suited for large projects only.
• The cost can be high as it might take a large number of iterations which can lead to high time
to reach the final product.
#5) Iterative Incremental Model
The iterative incremental model divides the product into small chunks.

For Example, Feature to be developed in the iteration is decided and implemented. Each iteration goes
through the phases namely Requirement Analysis, Designing, Coding, and Testing. Detailed planning is not
required in iterations.
Once the iteration is completed, a product is verified and is delivered to the customer for their evaluation and
feedback. Customer’s feedback is implemented in the next iteration along with the newly added feature.
Hence, the product increments in terms of features and once the iterations are completed the final build holds
all the features of the product.

Phases of Iterative & Incremental Development Model:


• Inception phase
• Elaboration Phase
• Construction Phase
• Transition Phase
(i) Inception Phase:
Inception phase includes the requirement and scope of the Project.

(ii) Elaboration Phase:


In the elaboration phase, the working architecture of a product is delivered which covers the risk identified in
the inception phase and also fulfills the non-functional requirements.

(iii) Construction Phase:


In the Construction phase, the architecture is filled in with the code which is ready to be deployed and is
created through analysis, designing, implementation, and testing of the functional requirement.

(iv) Transition Phase:


In the Transition Phase, the product is deployed in the Production environment.

Advantages of Iterative & Incremental Model:


• Any change in the requirement can be done easily and at little cost, as there is scope for
incorporating the new requirement in the next iteration.
• Risk is analyzed & identified in the iterations.
• Defects are detected at an early stage.
• As the product is divided into smaller chunks it is easy to manage the product.
Disadvantages of Iterative & Incremental Model:
• Complete requirement and understanding of a product are required to break down and build
incrementally.
#6) Big Bang Model
Big Bang Model does not have any defined process. Money and effort are put in as the input, and the
output is a developed product which may or may not be what the customer needs.

Big Bang Model does not require much planning and scheduling. The developer does the requirement analysis
& coding and develops the product as per his understanding. This model is used for small projects only. There
is no testing team and no formal testing is done, and this could be a cause for the failure of the project.

Advantages of Big Bang Model:


• It’s a very simple Model.
• Less Planning and scheduling is required.
• The developer has the flexibility to build the software in their own way.
Disadvantages of the Big Bang Model:
• Big Bang models cannot be used for large, ongoing & complex projects.
• High risk and uncertainty.
#7) Agile Model
Agile Model is a combination of the Iterative and incremental model. This model focuses more on flexibility
while developing a product rather than on the requirement.

In Agile, a product is broken into small incremental builds. It is not developed as a complete product in one
go. Each build increments in terms of features. The next build is built on previous functionality.

In Agile, iterations are termed sprints. Each sprint lasts for 2-4 weeks. At the end of each sprint, the product
owner verifies the product, and after his approval, it is delivered to the customer.
Customer feedback is taken for improvement and his suggestions and enhancement are worked on in the next
sprint. Testing is done in each sprint to minimize the risk of any failures.

Advantages of Agile Model:


• It allows more flexibility to adapt to the changes.
• The new feature can be added easily.
• Customer satisfaction as the feedback and suggestions are taken at every stage.
Disadvantages:
• Lack of documentation.
• Agile needs experienced and highly skilled resources.
• If a customer is not clear about how exactly they want the product to be, then the project would
fail.
Conclusion
Adherence to a suitable life cycle is very important, for the successful completion of the Project. This, in turn,
makes the management easier.

Different Software Development Life Cycle models have their own Pros and Cons. The best model for any
Project can be determined by the factors like Requirement (whether it is clear or unclear), System Complexity,
Size of the Project, Cost, Skill limitation, etc.

Example, in case of an unclear requirement, Spiral and Agile models are best to be used as the required
change can be accommodated easily at any stage.
Waterfall model is a basic model, and all the other SDLC models are based on it.
Unit-4

System Design is the process of designing the architecture, components, and interfaces for a system so that
it meets the end-user requirements. It involves concepts such as scalability, load balancing, and caching.
System design refers to the process of defining the architecture, modules, interfaces, data for a system to
satisfy specified requirements. It is a multi-disciplinary field that involves trade-off analysis, balancing
conflicting requirements, and making decisions about design choices that will impact the overall system.

Here are some steps for approaching a system design tutorial:

1. Understand the requirements: Before starting the design process, it is important to understand the
requirements and constraints of the system. This includes gathering information about the
problem space, performance requirements, scalability needs, and security concerns.
2. Identify the major components: Identify the major components of the system and how they
interact with each other. This includes determining the relationships between different
components and how they contribute to the overall functionality of the system.
3. Choose appropriate technology: Based on the requirements and components, choose the
appropriate technology to implement the system. This may involve choosing hardware and
software platforms, databases, programming languages, and tools.
4. Define the interface: Define the interface between different components of the system, including
APIs, protocols, and data formats.
5. Design the data model: Design the data model for the system, including the schema for the
database, the structure of data files, and the data flow between components.
6. Consider scalability and performance: Consider scalability and performance implications of the
design, including factors such as load balancing, caching, and database optimization.
7. Test and validate the design: Validate the design by testing the system with realistic data and use
cases, and make changes as needed to address any issues that arise.
8. Deploy and maintain the system: Finally, deploy the system and maintain it over time, including
fixing bugs, updating components, and adding new features as needed.
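Step 4 above, defining the interface between components, can be sketched briefly. This is a hypothetical example: the component names (`UserStore`, `UserService`) and their methods are assumptions, chosen only to show components depending on an abstract interface rather than on each other's concrete classes.

```python
# Step 4 sketch: components communicate through a defined interface.
from abc import ABC, abstractmethod

class UserStore(ABC):
    """Interface between the service component and the storage component."""
    @abstractmethod
    def save(self, user_id, name): ...
    @abstractmethod
    def load(self, user_id): ...

class InMemoryUserStore(UserStore):
    """One possible implementation; a database-backed one could replace it."""
    def __init__(self):
        self._data = {}
    def save(self, user_id, name):
        self._data[user_id] = name
    def load(self, user_id):
        return self._data.get(user_id)

class UserService:
    """Depends only on the UserStore interface, not on any implementation."""
    def __init__(self, store):
        self._store = store
    def register(self, user_id, name):
        self._store.save(user_id, name)
    def greet(self, user_id):
        name = self._store.load(user_id)
        return f"Hello, {name}!" if name else "Unknown user."
```

Because only the interface is shared, the storage technology chosen in step 3 can change later without touching the service component.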
It’s important to keep in mind that system design is an iterative process, and the design may change as new
information is gathered and requirements evolve. Additionally, it’s important to communicate the design
effectively to all stakeholders, including developers, users, and stakeholders, to ensure that the system
meets their needs and expectations.

Performance estimation
Performance estimation denotes the task of estimating the loss that a predictive model will incur on unseen
data. These procedures are part of the pipeline in every machine learning task and are used for model
selection, meta-parameter tuning, and assessing the overall generalisation ability of models. Obtaining
reliable estimates of the performance of models is therefore a critical issue in predictive analytics tasks.

Choosing a performance estimation method often depends on the data one is trying to model. When one can
assume that the observations are independent and identically distributed, cross-validation is typically the
most appropriate method, mainly because of its efficient use of data.

However, there are problems in which the observations in the data are dependent, such as time series. The
dependency among observations raises some caveats about using standard cross-validation on such datasets,
and there is currently no settled way to estimate performance for them. Notwithstanding, there are particular
time series settings in which variants of cross-validation can be used, such as stationary or small-sized
datasets, where the efficient use of all the data by cross-validation is beneficial.
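The caveat above can be made concrete with a small sketch. One common alternative for time-ordered data (an assumption here, not the only option) is forward-chaining, or "rolling origin", splitting: the training window always precedes the test window, so the model is never evaluated on observations older than its training data.

```python
# Forward-chaining ("rolling origin") splits for time series:
# each training set ends strictly before its test set begins.

def forward_chaining_splits(n, n_folds):
    """Yield (train_indices, test_indices) pairs over range(n).

    The series is cut into n_folds + 1 equal blocks; fold k trains on
    the first k blocks and tests on block k + 1, preserving time order.
    """
    fold = n // (n_folds + 1)
    for k in range(1, n_folds + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, min((k + 1) * fold, n)))
        yield train, test
```

For a series of 12 observations and 3 folds, this trains on points 0-2 and tests on 3-5, then trains on 0-5 and tests on 6-8, and so on, unlike shuffled k-fold cross-validation, which would mix future and past observations.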

Reuse Oriented Model


Reuse Oriented Model (ROM), also known as reuse-oriented development (ROD), is an approach in which
software is developed over a specific duration by creating a sequence of prototypes known as models, every
system being derived from the previous one through a constant series of defined rules.
The reuse-oriented model is not always practical in its pure form, because the entire repertoire of reusable
components it assumes might not be available. In such cases, several new system components need to be
designed. If this is not done, ROM has to compromise on the perceived requirements, leading to a product
that does not meet the exact requirements of the user. This model rests on the perception that maintenance
might be viewed as an activity involving reuse of existing system components.
Design reuse is the process of building new software applications and tools by reusing previously developed
designs. New features and functionalities may be added by incorporating minor changes. Design reuse involves
the use of designed modules, such as logic and data, to build a new and improved product. The reusable
components, including code segments, structures, plans and reports, minimize implementation time and are less
expensive. This avoids reinventing existing software, by using techniques that have already been developed
to create and test the software.

Design reuse is used in a variety of fields, from software and hardware to manufacturing and aeronautics.

The reuse model has 4 fundamental steps which are followed :


1. To identify components of old system that are most suitable for reuse.
2. To understand all system components.
3. To modify old system components to achieve new requirements.
4. To integrate all of modified parts into new system.
A specific framework is required for the categorization of components and for the consequent required
modifications. Unlike other models, the complete reuse model may begin from any segment of the life
cycle – need, planning, code, design, or data analysis.
Advantages :
• It can reduce total cost of software development.
• The risk factor is very low.
• It can save lots of time and effort.
• It is very efficient in nature.
Disadvantages :
• The reuse-oriented model does not always work in practice in its pure form.
• Compromises in requirements may lead to a system that does not fulfill requirement of user.
• Sometimes an old system component is not compatible with the new version of another
component, which may adversely affect system evolution.

Breaking a system into sub systems

In system design, the first step for all except the smallest applications is to break the system into subsystems.
Within a subsystem, classes share common properties, have similar functionality, have the same physical
location, or execute on the same hardware. A subsystem is a package of classes, associations, operations,
events and constraints that are interrelated and have a reasonably well defined interface to the rest of the
system. The interface specifies all interactions with the subsystem, allowing the subsystem to be designed
independently. Relationships between subsystems may be client-supplier or peer-to-peer. A system may be
broken into layers and partitions. Layers define an abstract world: each layer acts as a client of services for
the layers below it and as a supplier of services for the layers above it. A layered system may have an open
or a closed architecture. In an open architecture, a layer knows of all layers below it; in a closed architecture,
a layer knows only about the layer immediately below it. Layers do not know of the layers above them.
Partitions vertically divide a system into independent or weakly coupled subsystems, each of which provides
a particular service. Simple system architectures such as pipelines and stars are used to reduce
the complexity of the system.
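The closed-architecture rule, where each layer calls only the layer immediately below it, can be sketched as follows. The three layers shown (pixels, screen, window) are illustrative assumptions, not a prescribed design:

```python
# Closed layered architecture: each layer knows only the layer
# directly below it, and no layer knows about the layers above it.

class PixelLayer:                      # lowest layer
    def draw_pixel(self, x, y):
        return f"pixel({x},{y})"

class ScreenLayer:                     # knows only PixelLayer
    def __init__(self):
        self._pixels = PixelLayer()
    def draw_line(self, x1, y1, x2, y2):
        # deliberately simplistic: only the two endpoints, for illustration
        return [self._pixels.draw_pixel(x1, y1),
                self._pixels.draw_pixel(x2, y2)]

class WindowLayer:                     # knows only ScreenLayer
    def __init__(self):
        self._screen = ScreenLayer()
    def draw_border(self):
        return self._screen.draw_line(0, 0, 10, 0)
```

In an open architecture, WindowLayer would also be allowed to call PixelLayer directly; the closed form trades some efficiency for stronger decoupling between layers.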

Allocation of Subsystems, Management of Data Storage, and Identifying Concurrency


System design is all about planning the system’s architecture, where the developers have to take some high-level
decisions. The developers organize the whole system into subsystems and further allocate these subsystems to
hardware and software.

During the development of software, the system design stage occurs after the analysis stage. The analysis stage
concentrates on what should be done, whereas the system design stage concentrates on how it should be done.

Before structuring the architecture of a system, you can survey several systems with a similar architecture
and draw up their corresponding problems, then structure your architecture by combining the information
received from the survey. You may not get a solution to all the problems by surveying previous systems,
but you can resolve many of them.

Estimate System Performance

Before going into design details, you must draw up a rough estimate of the performance of the system.
You do not have to be accurate; just check whether the system is feasible or not.

Do not get into too many details and do not be overly calculative. Apply your common sense and be quick
to make simplifying assumptions.

Designing a Reuse Plan

Reuse is an important feature of object orientation, and it is often easier to reuse things that already exist
rather than create them from scratch.

Following object-oriented technology does not mean you have to develop reusable things yourself, as that
would require a lot of experience. Very few developers create new reusable things; the rest reuse the existing ones.

While developing software, what can we reuse? We can reuse libraries, frameworks and patterns. Reusing
models is the most practical form of reuse.

1. Libraries

A library is a collection of classes that can be used in many contexts. In a library, classes must be organized
so well that it is easy for the user to find them. Classes must also be thoroughly described so that the user
gets what he is searching for.

A good reusable library also satisfies the following points:

• Classes in the library should be arranged around a theme, and each class should show complete
behavior.
• Names and signatures of polymorphic operations must be consistent across all the classes in the library.
• Library classes must be extensible so that users can easily define subclasses.
2. Frameworks

The framework of an application is an outline of the program which, when elaborated, helps in constructing the
complete application. During elaboration, the users specialize the program by setting behavior specific
to an individual application.

Generally, a class library accompanies a framework and is specific to that category of application.

3. Pattern
A pattern encompasses a few classes and relationships, whereas a framework is a view of an entire subsystem.
Different patterns apply to different phases of the software development lifecycle.

Organizing System into Subsystem

It is common sense that one cannot design the entire system at once, so the developers have to break the
entire system into subsystems. A subsystem is not a class, function, or object; it is a group of classes,
associations, operations, and events. Every subsystem shares a small interface with the other subsystems.

Each subsystem has to provide a service to other subsystems for the proper functioning of the application.
When breaking a system into subsystems, it can be organized in two ways: as a sequence of
horizontal layers or as a sequence of vertical partitions.

1. Layers

In a layered system, the layers are designed such that each layer is built in terms of the layer below it and
provides the basis for implementing the layer above it.

For example, consider designing a graphical interface: an application window is implemented
using screen operations, which in turn are implemented using pixel operations. A layered system can take one
of two forms: closed layer architecture and open layer architecture.

In a closed layer architecture, each layer is implemented in terms of the layer immediately below it. If any
changes are made to a layer, they affect only the layer next to it, which reduces dependencies between the
layers and makes changes easy to make.

In an open layer architecture, a layer can be implemented in terms of any lower layer, i.e. you can go to any depth.
One advantage is that you don't need to redefine operations for each layer; a disadvantage is
that a change to one layer can affect several layers.
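The closed form can be sketched in a few lines of Python. The layer names (pixel, screen, window) follow the example above; everything else is illustrative:

```python
# A minimal sketch of a closed layered architecture: each layer is
# implemented only in terms of the layer immediately below it.

class PixelLayer:
    """Lowest layer: raw pixel operations."""
    def __init__(self):
        self.pixels = {}

    def set_pixel(self, x, y, value):
        self.pixels[(x, y)] = value

class ScreenLayer:
    """Middle layer: built only on PixelLayer."""
    def __init__(self, pixel_layer):
        self._pixels = pixel_layer

    def draw_line(self, y, x_start, x_end, value):
        for x in range(x_start, x_end):
            self._pixels.set_pixel(x, y, value)

class WindowLayer:
    """Top layer: built only on ScreenLayer, never touches pixels directly."""
    def __init__(self, screen_layer):
        self._screen = screen_layer

    def draw_box(self, top, bottom, left, right, value):
        for y in range(top, bottom):
            self._screen.draw_line(y, left, right, value)

pixels = PixelLayer()
window = WindowLayer(ScreenLayer(pixels))
window.draw_box(0, 2, 0, 3, "#")   # fills a 2x3 box through the layers below
```

Because the architecture is closed, `WindowLayer` could be re-targeted to a different `ScreenLayer` implementation without touching the pixel layer at all.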

2. Partitions
Breaking the system into subsystems with vertical partitions provides several independent subsystems that are
weakly coupled, each providing one type of service.

For example, in an operating system you have a file manager, memory management, and process control. Each
serves a different purpose while avoiding unnecessary dependencies.

Well, you can divide your system into subsystems by combining the layer and partition technique.

Identify Concurrency

In the real world, all objects are concurrent. But when we implement objects in a system, a
single processor can support many objects, provided they are not all active at the same time.

Objects that are mutually exclusive (never active at the same time) can be folded onto a single thread of control.
Identify Inherent Concurrency

Inherent concurrency occurs when two distinct objects can receive events at the same point in time without any
interaction. Two inherently concurrent objects cannot be folded onto a single thread of control; they must be
assigned to separate threads or hardware units.

Allocate Subsystem to Hardware

Now, after decomposing the system into subsystems, you have to allocate the subsystems to hardware units. A
hardware unit can be a general-purpose processor or a specialized functional unit.

First, prepare a rough estimate of what hardware resources would be required for good
performance. Then decide whether each subsystem should be allocated to hardware or software.

Allocate tasks for the various software subsystems to processors, and check the connectivity between the
physical units on which the subsystems are implemented.

Managing Data Store

Data structures, databases, and files are the kinds of data stores that you can use individually or in combination.
Each has its own access time, cost, reliability, and other trade-offs, which are considered when choosing the
appropriate data store for a subsystem.

Managing Global Resources

It is the responsibility of the system designer to discover the global resources of the system. The system
designer must also build a mechanism to access and control global resources.

Global resources may be physical units such as processors, space such as disk space, or logical names such as
class names or file names. They may also be access to shared data such as a database.

Determine a Software Control Strategy

It is not necessary to implement the same control flow in every subsystem, but it is better to choose
a single kind of control flow for the entire system.

There are two kinds of control flow: external control flow and internal control flow.

1. External Control Flow

External control flow is further classified into three types: procedure-driven, event-driven,
and concurrent. In procedure-driven control flow, a procedure makes a call for external input and waits for
the input to arrive. When the input arrives, control resumes in the procedure that requested it.
Here the control lies within the program code.

In event-driven control, the control lies in a dispatcher. The system developer attaches procedures of the
application to events, and whenever an event occurs the dispatcher calls the corresponding procedure (a callback). A
procedure asks the dispatcher to send it input and transfers control to the dispatcher rather than holding
control until the input arrives.

In concurrent control, the control lies with several independent objects concurrently. When one object's
task waits for some input, the tasks of the other objects keep executing.

2. Internal Control Flow: In internal control flow, the control lies inside a single process.
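The event-driven case can be sketched with a toy dispatcher. All names here are illustrative, not taken from any particular GUI toolkit:

```python
# Minimal sketch of event-driven control: the dispatcher owns the control
# loop and calls back application procedures attached to named events.

class Dispatcher:
    def __init__(self):
        self._handlers = {}

    def attach(self, event_name, procedure):
        # The developer attaches application procedures to events.
        self._handlers.setdefault(event_name, []).append(procedure)

    def dispatch(self, event_name, payload=None):
        # When an event occurs, the dispatcher calls the procedures back;
        # control returns to the dispatcher after each call.
        for procedure in self._handlers.get(event_name, []):
            procedure(payload)

log = []
dispatcher = Dispatcher()
dispatcher.attach("button_click", lambda payload: log.append(f"clicked {payload}"))
dispatcher.attach("key_press", lambda payload: log.append(f"pressed {payload}"))

# Events can arrive in any order; no procedure holds control while waiting.
dispatcher.dispatch("key_press", "Enter")
dispatcher.dispatch("button_click", "OK")
```

Contrast this with procedure-driven control, where a procedure would call for input and block until it arrived.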

Handling Boundary Conditions

Apart from the steady-state behavior of the system, the developers must also address the boundary conditions
of the system: initialization, termination, and failure.

A system must be able to initialize its global variables, guardian objects, constant data, and class hierarchy by
itself. Termination of the system is easier: the system must release the reserved resources; however, a
system with concurrent tasks must inform the other tasks about the termination of the system.

An unplanned termination of a system is termed a failure. Failure may arise due to errors, exhaustion
of resources, or a system breakdown. So, a developer should design the system with a clean exit plan for the
occurrence of any error.

Set Architectural Style

You can choose the architectural style based on the features of your application. Using an architectural
style as a starting point for designing the system reduces your effort. Some common
architectural styles are discussed below:

• Batch transformation architecture executes a transformation on the entire input set at once.
• Continuous transformation architecture performs continuous execution on inputs arriving one after
another.
• Interactive interface architecture is one dominated by external interactions.
• Dynamic simulation architecture simulates evolving real-world objects.
• Real-time system architecture is one which has constraints over timing.
• Transaction manager architecture involves storing and updating concurrent transactions from
physically distinct locations.

Handling Global Resources


The system designer must identify global resources and determine mechanisms for controlling access to them.
There are several kinds of global resources:

• Physical units: Examples include processors, tape drives, and communication channels.
• Space: Examples include the keyboard, buttons on a mouse, and the display screen.
• Logical names: Examples include object IDs, filenames, and class names.
• Access to shared data: Examples include databases.

Physical resources such as processors and tape drives can control their own access by establishing a protocol for
obtaining access. For a logical resource like an object ID or a database, there arises a need to provide access in a
shared environment without any conflicts. One strategy to avoid conflicts is to employ a guardian object
which controls access to the resource; any request to access the resource has to pass through the guardian
object.

Another strategy is to partition a resource logically and assign the subsets to different guardian objects for
independent control. In a critical real-time application, passing all access requests through a
guardian object may not be desirable, and it may become necessary to provide direct access to resources. In this
case, to avoid conflicts, a lock can be placed on subsets of the resource. A lock is a logical object associated
with a shared resource which gives the lock holder the right to access the resource. A guardian object can
still exist to allocate the locks. However, direct access to resources must not be implemented unless it is
absolutely necessary.
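The guardian-object strategy can be sketched in Python. The guarded resource here (a hypothetical name-allocation table) and the `GuardianObject` class are illustrative:

```python
import threading

# Sketch of the guardian-object strategy: all access to a shared resource
# goes through one guardian, which serializes requests with a lock.

class GuardianObject:
    def __init__(self, resource):
        self._resource = resource      # shared resource, e.g. a name table
        self._lock = threading.Lock()  # right of access belongs to lock holder

    def allocate_name(self, prefix):
        with self._lock:               # every request passes through the guardian
            n = self._resource["next_id"]
            self._resource["next_id"] += 1
            return f"{prefix}{n}"

guardian = GuardianObject({"next_id": 0})
names = []

def worker():
    for _ in range(100):
        names.append(guardian.allocate_name("obj"))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, two concurrent clients could read the same `next_id` and be handed the same logical name; with the guardian, every allocated name is unique.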

Choosing a Software Control Strategy

It is best to choose a single control style for the whole system. There are two kinds of control flow in a software
system: external control and internal control. External control concerns the flow of externally visible events
among the objects in the system. There are three kinds of external control: procedure-driven sequential, event-
driven sequential, and concurrent.

Procedure-driven Control: In a procedure-driven system, the control lies within the program code.
Procedures request external input and then wait for it; when input arrives, control resumes within the procedure
that made the call. The major advantage of procedure-driven control is that it is easy to implement with
conventional languages; the disadvantage is that it requires the concurrency inherent in objects to be mapped
into a sequential flow of control.

Event-driven Control: In this sequential model, the control resides within a dispatcher or monitor that the
language, subsystem, or operating system provides. The developers attach application procedures
to events, and the dispatcher calls the procedures when the corresponding events occur. Event-driven
systems are usually preferred to procedure-driven systems for external control, because the mapping from
events to program constructs is simpler and more powerful. Event-driven systems are also more modular and can
handle error conditions better than procedure-driven systems. For example, a GUI consists of many built-in
objects (like text boxes, tool icons, menus, etc.). The user interacts with these GUI objects either through mouse
clicks or through pressing keys on the keyboard. These user interactions with GUI objects are called events, and
notifications of these events are given to the program the user is interacting with by the windowing (e.g. Windows)
operating system. The programmer's task is to write the code that executes automatically on the occurrence of
these events; this type of control is called event-driven control.

Concurrent Control: Here control resides concurrently in several independent objects, each running as a separate
task. A task can wait for input, but other tasks continue execution. The operating system keeps track of the
events raised while a task is being performed so that events are not lost. Scheduling conflicts among tasks are
also resolved by the operating system.

Internal control refers to the flow of control within a process. It exists only in the implementation and therefore
is neither inherently concurrent nor sequential.
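Concurrent control can be sketched with two threads: one task blocks waiting for input while the other keeps executing. In this illustrative sketch, the queue plays the role of the operating system's event bookkeeping, so the raised event is not lost:

```python
import queue
import threading

# Minimal sketch of concurrent control: each task is an independent thread;
# one task can block waiting for input while the other keeps executing.

results = []
inbox = queue.Queue()

def waiting_task():
    item = inbox.get()          # blocks until input arrives
    results.append(f"got {item}")

def working_task():
    total = sum(range(100))     # keeps executing while the other task waits
    results.append(f"sum {total}")
    inbox.put("event")          # the raised event is queued, never lost

t1 = threading.Thread(target=waiting_task)
t2 = threading.Thread(target=working_task)
t1.start(); t2.start()
t1.join(); t2.join()
```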

Handling boundary Conditions


Although most of system design concerns steady-state behavior, the system designer must consider boundary
conditions as well and address issues like initialization, termination, and failure (the unplanned termination of
the system).

• Initialization: This refers to the initialization of constant data, parameters, global variables, tasks, guardian
objects, and classes as per their hierarchy. Initialization of a system containing concurrent tasks must
be done in a manner that allows tasks to start without prolonged delays. It is quite possible
that one object is initialized at an early stage while another object on which it depends is
not initialized until considerably later; this may lead to the halting of system tasks.
• Termination: Termination requires that objects release their reserved resources. In a
concurrent system, a task must inform the other tasks about its termination.
• Failure: Failure is the unplanned termination of the system, which can occur due to a system fault,
user errors, exhaustion of system resources, or breakdowns and bugs in external systems. A good
design must not corrupt the remaining environment in case of failure, and must provide a
mechanism for recording details of system activities and error logs.
Choosing a Software Control Implementation

There are two kinds of control: internal and external. External control is the flow of externally
visible events among the objects in the system. It might be implemented in three ways: procedure-driven
sequential, event-driven sequential, and concurrent. In procedure-driven control, control resides within the
program code. The procedure issues a request for external input and waits for it. It is easy to implement, but it
requires the concurrency inherent in objects to be mapped into a sequential flow of control. In an event-driven
sequential implementation, control resides in a dispatcher or a monitor provided by the operating system,
language, or subsystem. Application procedures are attached to events and are called by the dispatcher when
they occur; this provides greater flexibility than a procedure-driven sequential implementation. In concurrent
systems, control resides concurrently within several independent objects.

Internal control can be implemented the same way as external control, but there is one important difference:
there is no waiting for events from external independent objects. Internal events are generated by objects as
part of the implementation algorithm, so their response patterns are predictable.

Architectural Design in Software Engineering


Requirements of the software should be transformed into an architecture that describes the software’s top-level
structure and identifies its components. This is accomplished through architectural design (also called system
design), which acts as a preliminary ‘blueprint’ from which software can be developed. IEEE defines
architectural design as ‘the process of defining a collection of hardware and software components and their
interfaces to establish the framework for the development of a computer system.’ This framework is
established by examining the software requirements document and designing a model for providing
implementation details. These details are used to specify the components of the system along with their inputs,
outputs, functions, and the interaction between them. An architectural design performs the following functions.
1. It defines an abstraction level at which the designers can specify the functional and performance behaviour
of the system.
2. It acts as a guideline for enhancing the system (whenever required) by describing those features of the
system that can be modified easily without affecting the system integrity.
3. It evaluates all top-level designs.

4. It develops and documents top-level design for the external and internal interfaces.
5. It develops preliminary versions of user documentation.
6. It defines and documents preliminary test requirements and the schedule for software integration.
The sources of architectural design are listed below.
1. Information regarding the application domain for the software to be developed
2. Data-flow diagrams
3. Availability of architectural patterns and architectural styles.
Architectural design is of crucial importance in software engineering during which the essential requirements
like reliability, cost, and performance are dealt with. This task is cumbersome as the software engineering
paradigm is shifting from monolithic, stand-alone, built-from-scratch systems to componentized, evolvable,
standards-based, and product line-oriented systems. Also, a key challenge for designers is to know precisely
how to proceed from requirements to architectural design. To avoid these problems, designers adopt strategies
such as reusability, componentization, platform-based, standards-based, and so on.
Though the architectural design is the responsibility of developers, some other people like user representatives,
systems engineers, hardware engineers, and operations personnel are also involved. All these stakeholders must
also be consulted while reviewing the architectural design in order to minimize the risks and errors.
Architectural Design Representation
Architectural design can be represented using the following models.

▪ Structural model: Illustrates architecture as an ordered collection of program components


▪ Dynamic model: Specifies the behavioral aspect of the software architecture and indicates how
the structure or system configuration changes as the function changes due to change in the external
environment
▪ Process model: Focuses on the design of the business or technical process, which must be
implemented in the system
▪ Functional model: Represents the functional hierarchy of a system
▪ Framework model: Attempts to identify repeatable architectural design patterns encountered in
similar types of application. This leads to an increase in the level of abstraction.

Architectural Design Output


The architectural design process results in an Architectural Design Document (ADD). This document
consists of a number of graphical representations that comprise software models along with associated
descriptive text. The software models include the static model, interface model, relationship model, and dynamic
process model. They show how the system is organized into processes at run-time.
Architectural design document gives the developers a solution to the problem stated in the Software
Requirements Specification (SRS). Note that it considers only those requirements in detail that affect the
program structure. In addition to ADD, other outputs of the architectural design are listed below.

▪ Various reports including audit report, progress report, and configuration status accounts report
▪ Various plans for detailed design phase, which include the following
▪ Software verification and validation plan
▪ Software configuration management plan
▪ Software quality assurance plan
▪ Software project management plan.

Architectural Styles

Architectural styles define a group of interlinked systems that share structural and semantic properties. In short,
the objective of using architectural styles is to establish a structure for all the components present in a system.
If an existing architecture is to be re-engineered, then imposition of an architectural style results in fundamental
changes in the structure of the system. This change also includes re-assignment of the functionality performed
by the components.
By applying certain constraints on the design space, we can make different style-specific analyses of an
architectural style. In addition, if conventional structures are used for an architectural style, the other
stakeholders can easily understand the organization of the system.
A computer-based system (software is part of this system) exhibits one of the many available architectural
styles. Every architectural style describes a system category that includes the following.

▪ Computational components such as clients, server, filter, and database to execute the desired
system function
▪ A set of connectors such as procedure call, events broadcast, database protocols, and pipes to
provide communication among the computational components
▪ Constraints to define integration of components to form a system
▪ A semantic model, which enables the software designer to identify the characteristics of the system
as a whole by studying the characteristics of its components.

Some of the commonly used architectural styles are data-flow architecture, object oriented architecture, layered
system architecture, data-centered architecture, and call and return architecture. Note that the use of an
appropriate architectural style promotes design reuse, leads to code reuse, and supports interoperability.
Data-flow Architecture

Data-flow architecture is mainly used in systems that accept some input and transform it into the desired
output by applying a series of transformations. Each component, known as a filter, transforms the data and
sends the transformed data to other filters for further processing using a connector known as a pipe. Each
filter works as an independent entity; that is, it is not concerned with which filter is producing or consuming
its data. A pipe is a unidirectional channel which transports the data received on one end to the other end. It
does not change the data in any way; it merely supplies the data to the filter on the receiving end.
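The pipe-and-filter idea can be sketched with Python generators, which behave like unidirectional pipes; the filters here are illustrative:

```python
# Sketch of the pipe-and-filter style: each filter is an independent
# transformation, and generator chaining acts as a unidirectional pipe.

def source(lines):
    for line in lines:
        yield line

def strip_filter(pipe):
    # This filter does not care which filter produced its input.
    for line in pipe:
        yield line.strip()

def upper_filter(pipe):
    for line in pipe:
        yield line.upper()

def sink(pipe):
    return list(pipe)

# Filters compose like a UNIX pipeline: source | strip | upper | sink
pipeline = sink(upper_filter(strip_filter(source(["  hello \n", " world "]))))
```

Each filter could be replaced or reordered without changing the others, which is exactly the reusability advantage listed below.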

Most of the time, the data-flow architecture degenerates into a batch sequential system. In this system, a batch
of data is accepted as input and then a series of sequential filters are applied to transform this data. One
common example of this architecture is UNIX shell programs, in which UNIX processes act as filters
and the file system through which the processes interact acts as the pipes. Other well-known examples of this
architecture are compilers, signal processing systems, parallel programming, functional programming, and
distributed systems. Some advantages associated with the data-flow architecture are listed below.

▪ It supports reusability.
▪ It is maintainable and modifiable.
▪ It supports concurrent execution.
Some disadvantages associated with the data-flow architecture are listed below.
▪ It often degenerates into a batch sequential system.
▪ It does not provide enough support for applications that require user interaction.
▪ It is difficult to synchronize two different but related streams.

Object-oriented Architecture

In object-oriented architectural style, components of a system encapsulate data and operations, which are
applied to manipulate the data. In this style, components are represented as objects and they interact with each
other through methods (connectors). This architectural style has two important characteristics, which are listed
below.

▪ Objects maintain the integrity of the system.
▪ An object is not aware of the representation of other objects.
Some of the advantages associated with the object-oriented architecture are listed below.
▪ It allows designers to decompose a problem into a collection of independent objects.
▪ The implementation detail of objects is hidden from other objects and hence can be changed
without affecting them.
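These two characteristics can be shown in a minimal sketch; the `Account` and `ATM` classes are purely illustrative:

```python
# Sketch of the object-oriented style: components encapsulate data and
# interact only through methods (the connectors), so one object never
# depends on another object's internal representation.

class Account:
    def __init__(self, balance):
        self._balance = balance          # representation hidden from others

    def withdraw(self, amount):
        # The object itself maintains integrity: it refuses bad operations.
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance

class ATM:
    """Interacts with Account only via its methods, never its fields."""
    def dispense(self, account, amount):
        account.withdraw(amount)
        return amount

acct = Account(100)
dispensed = ATM().dispense(acct, 30)
```

Because `ATM` only calls methods, `Account` could switch its balance to a different internal representation without affecting `ATM` at all.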

Layered Architecture

In layered architecture, several layers (components) are defined with each layer performing a well-defined set
of operations. These layers are arranged in a hierarchical manner, each one built upon the one below it. Each
layer provides a set of services to the layer above it and acts as a client to the layer below it. The interaction
between layers is provided through protocols (connectors) that define a set of rules to be followed during
interaction. One common example of this architectural style is OSI-ISO (Open Systems Interconnection-
International Organization for Standardization) communication system.

Data-centered Architecture

A data-centered architecture has two distinct components: a central data structure or data store (central
repository) and a collection of client software. The data store (for example, a database or a file) represents the
current state of the data, and the client software performs several operations like add, delete, and update on the
data stored in the data store. In some cases, the data store allows the client software to access the data
independent of any changes or the actions of other client software.
In this architectural style, new components corresponding to clients can be added and existing components can
be modified easily without taking into account other clients. This is because client components operate
independently of one another.
A variation of this architectural style is blackboard system in which the data store is transformed into a
blackboard that notifies the client software when the data (of their interest) changes. In addition,
the information can be transferred among the clients through the blackboard component.
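The blackboard variation can be sketched as a data store that calls back subscribed clients when data of their interest changes (all names illustrative):

```python
# Sketch of a blackboard: clients register interest in a key, and the
# data store notifies them when that key's data changes.

class Blackboard:
    def __init__(self):
        self._data = {}
        self._subscribers = {}   # key -> list of client callbacks

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)

    def write(self, key, value):
        self._data[key] = value
        for callback in self._subscribers.get(key, []):
            callback(value)      # push the change to interested clients

notified = []
board = Blackboard()
board.subscribe("temperature", lambda v: notified.append(v))
board.write("temperature", 21.5)   # the interested client is notified
board.write("pressure", 1013)      # no subscriber, so nobody is notified
```

Clients remain independent of one another: adding a new subscriber never requires changing the existing ones.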
Some advantages of the data-centered architecture are listed below.

▪ Clients operate independently of one another.


▪ Data repository is independent of the clients.
▪ It adds scalability (that is, new clients can be added easily).
▪ It supports modifiability.
▪ It achieves data integration in component-based development using blackboard.
Call and Return Architecture

A call and return architecture enables software designers to achieve a program structure, which can be easily
modified. This style consists of the following two substyles.

▪ Main program/subprogram architecture: In this, the function is decomposed into a control
hierarchy where the main program invokes a number of program components, which in turn may
invoke other components.

▪ Remote procedure call architecture: In this, components of the main or subprogram architecture
are distributed over a network across multiple computers.
Unit-5
Design Classes
The requirements model defines a set of analysis classes. Each describes some element of the problem
domain, focusing on an aspect of the problem that is visible. The level of abstraction of an analysis class is
comparatively high. The set of design classes refines the analysis classes, providing the design detail that enables
the classes to execute within a software infrastructure that supports the business solution.

Types:

There are five different types of design classes, each representing a different layer of the design architecture:
1. User interface classes define the abstractions mandatory for human-computer interaction (HCI). In
many cases, HCI occurs within the context of a metaphor, and the design classes for the interface may be
visible representations of elements of that metaphor.
2. Business domain classes are often refinements of the analysis classes defined earlier. Each class
identifies the attributes and services that are required to implement some element of the business domain.
3. Process classes implement the lower-level business abstractions needed to manage the business domain
classes.
4. Persistent classes represent data stores that will persist beyond the execution of the software.
5. System classes implement the software management and control functions that permit the system to
operate and communicate within its computing environment and with the outside world.
As the architecture forms, the level of abstraction is reduced as each analysis class is transformed into a design
representation. That is, analysis classes represent data objects using the jargon of the business domain, while
design classes present notably more technical detail as a guide for implementation.

Arlow and Neustadt suggest that each design class be reviewed to ensure that it is "well-formed". They define
four characteristics of a well-formed design class:

Characteristics:

1. Complete and sufficient: A design class should be a complete encapsulation of all attributes and
methods that can reasonably be expected to exist for the class. For example, a class Scene defined
for video-editing software is complete only if it contains all attributes and methods that can
reasonably be associated with the creation of a video scene. Sufficiency ensures that the design class
contains only those methods that are sufficient to achieve the intent of the class, no more and no less.
2. Primitiveness: Each method associated with a design class should be focused on accomplishing one
service for the class. Once a service is implemented with a method, the class should not provide another
way to accomplish the same thing. For example, the class VideoClip for video-editing software
might have attributes start point and end point to specify the start and end points of the clip.
3. High Cohesion: A cohesive design class has a small, focused set of responsibilities
and single-mindedly applies its attributes and methods to implement those responsibilities. For
example, the class VideoClip might contain a set of methods for editing the video clip. As long as
each method focuses solely on attributes associated with the video clip, cohesion is maintained.
4. Low Coupling: Within the design model, design classes must collaborate with
one another. However, collaboration should be kept to an acceptable minimum. If the design
model is highly coupled, the system is difficult to implement, test, and maintain over
time. In general, design classes within a subsystem should have only limited knowledge of other
classes. This restriction, called the Law of Demeter, suggests that a method should only send
messages to methods in neighboring classes.
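Continuing the VideoClip example above, here is a sketch of a class that keeps these characteristics; the `trim` method is an illustrative addition, not part of the original example:

```python
# Sketch of a "well-formed" design class: one focused responsibility,
# primitive methods, and state that only its own methods manipulate.

class VideoClip:
    def __init__(self, start_point, end_point):
        if end_point < start_point:
            raise ValueError("end point before start point")
        self.start_point = start_point
        self.end_point = end_point

    def duration(self):
        # Primitive: exactly one service, and only one way to get it.
        return self.end_point - self.start_point

    def trim(self, new_start, new_end):
        # Cohesive: operates only on this clip's own attributes.
        if not (self.start_point <= new_start <= new_end <= self.end_point):
            raise ValueError("trim range outside clip")
        self.start_point, self.end_point = new_start, new_end

clip = VideoClip(0.0, 12.0)
clip.trim(2.0, 10.0)
```

Because the class exposes no second way to change its endpoints and touches no other class's state, it stays primitive, cohesive, and loosely coupled.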
What is recursing downwards, and what are its ways?

The design process generally works top-down; you start with the higher level operations and proceed to describe
lower level operations. You can also work bottom-up; but you risk designing operations that are never needed.
Downward recursion proceeds in two ways:

o By functionality
o By mechanism

What is an algorithm?
An algorithm is a procedure to solve a particular problem in a finite number of steps for a finite-sized
input.
The algorithms can be classified in various ways. They are:

1. Implementation Method
2. Design Method
3. Design Approaches
4. Other Classifications
The different algorithms in each classification method are discussed below.
The classification of algorithms is important for several reasons:
• Organization: Algorithms can be very complex, and classifying them makes it easier to organize,
understand, and compare different algorithms.
• Problem Solving: Different problems require different algorithms, and a classification helps
identify the best algorithm for a particular problem.
• Performance Comparison: Classifying algorithms makes it possible to compare their performance in terms
of time and space complexity, making it easier to choose the best algorithm for a particular use case.
• Reusability: Classification makes it easier to reuse existing algorithms for similar
problems, reducing development time and improving efficiency.
• Research: Classifying algorithms is essential for research and development in computer science, as it helps
to identify new algorithms and improve existing ones.
Overall, the classification of algorithms plays a crucial role in computer science and helps to improve the
efficiency and effectiveness of solving problems.
Classification by Implementation Method: There are primarily three main categories into which an
algorithm can be placed in this type of classification. They are:

1. Recursion or Iteration: A recursive algorithm is an algorithm which calls itself again and again
until a base condition is achieved whereas iterative algorithms use loops and/or data
structures like stacks, queues to solve any problem. Every recursive solution can be implemented
as an iterative solution and vice versa.
Example: The Tower of Hanoi is implemented in a recursive fashion while Stock Span problem
is implemented iteratively.
2. Exact or Approximate: Algorithms that are capable of finding an optimal solution for a
problem are known as exact algorithms. For problems where it is not feasible to find
the most optimized solution, an approximation algorithm is used: it produces a solution
that is close to the optimal one, usually with a provable bound on how far off it can be.
Example: For NP-Hard problems, approximation algorithms are used. Sorting algorithms are
exact algorithms.
3. Serial or Parallel or Distributed Algorithms: In serial algorithms, one instruction is executed
at a time while parallel algorithms are those in which we divide the problem into subproblems
and execute them on different processors. If parallel algorithms are distributed on different
machines, then they are known as distributed algorithms.
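To make the recursion-versus-iteration distinction concrete, here is a small illustrative sketch in Python (the factorial example is ours, not from the text): the same computation written once with self-calls and once with a loop.

```python
def factorial_recursive(n):
    # The base condition stops the self-calls.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # A loop and an accumulator replace the call stack.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Both versions return the same value; as the text notes, every recursive solution can be rewritten iteratively and vice versa.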
Classification by Design Method: There are several main categories into which an algorithm can
be placed in this type of classification. They are:

1. Greedy Method: In the greedy method, at each step, a decision is made to choose the local
optimum, without thinking about the future consequences.
Example: Fractional Knapsack, Activity Selection.
2. Divide and Conquer: The Divide and Conquer strategy involves dividing the problem into sub-
problem, recursively solving them, and then recombining them for the final answer.
Example: Merge sort, Quicksort.
3. Dynamic Programming: The approach of Dynamic Programming is similar to divide and
conquer. The difference is that whenever recursive calls recur with the same inputs,
instead of recomputing them we store the result in a data structure in the form of a table
and retrieve it from the table when needed. Thus, the overall time complexity is reduced.
"Dynamic" means that at run time we decide whether to call the function or retrieve the
value from the table.
Example: 0-1 Knapsack, subset-sum problem.
4. Linear Programming: In Linear Programming, there are inequalities in terms of inputs and
maximizing or minimizing some linear functions of inputs.
Example: Maximum flow of Directed Graph
5. Reduction(Transform and Conquer): In this method, we solve a difficult problem by
transforming it into a known problem for which we have an optimal solution. Basically, the goal
is to find a reducing algorithm whose complexity is not dominated by the resulting reduced
algorithms.
Example: Selection algorithm for finding the median in a list involves first sorting the list and
then finding out the middle element in the sorted list. These techniques are also called transform
and conquer.
6. Backtracking: This technique is very useful in solving combinatorial problems in which
we have to find the correct combination of steps that leads to fulfilment of the task. Such
problems have multiple stages, with multiple options at each stage. The approach explores
each available option at every stage one by one. While exploring an option, if a point is
reached that cannot lead to the solution, program control backtracks one step and starts
exploring the next option. In this way, the program explores all possible courses of action
and finds the route that leads to the solution.
Example: N-queens problem, maze problem.
7. Branch and Bound: This technique is very useful in solving combinatorial optimization
problems that have multiple feasible solutions, where we are interested in finding the most
optimal one. In this approach, the entire solution space is represented in the form of a
state-space tree. As the program progresses, each state combination is explored, and the
best solution found so far is replaced whenever a better one is discovered.
Example: Job sequencing, Travelling salesman problem.
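To illustrate the Dynamic Programming idea of storing results of repeated recursive calls in a table, here is a minimal Python sketch (the Fibonacci example is ours, chosen for brevity):

```python
def fib(n, table=None):
    # The table caches results of earlier calls, so each
    # subproblem is computed only once instead of repeatedly.
    if table is None:
        table = {}
    if n in table:
        return table[n]
    if n <= 1:
        table[n] = n
    else:
        table[n] = fib(n - 1, table) + fib(n - 2, table)
    return table[n]
```

Without the table the naive recursion takes exponential time; with it, each value of n is computed once, giving linear time.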
Classification by Design Approaches: There are two approaches for designing an algorithm. These
approaches are:
1. Top-Down Approach
2. Bottom-Up Approach
• Top-Down Approach: In the top-down approach, a large problem is divided into small sub-
problems, and the process of decomposing problems is repeated until the complex
problem is solved.
• Bottom-Up Approach: The bottom-up approach is the reverse of the top-down approach.
In this approach, different parts of a complex program are solved first, and these parts
are then combined into a complete program.
Top-Down Approach:
Breaking down a complex problem into smaller, more manageable sub-problems and solving each sub-
problem individually.
Designing a system starting from the highest level of abstraction and moving towards the lower levels.
Bottom-Up Approach:
Building a system by starting with the individual components and gradually integrating them to form a
larger system.
Solving sub-problems first and then using the solutions to build up to a solution of a larger problem.
Note: Both approaches have their own advantages and disadvantages and the choice between them often
depends on the specific problem being solved.
Here are examples of the Top-Down and Bottom-Up approaches in code:
Top-Down Approach (in Python):

• Python

def solve_problem(problem):
    if problem == "simple":
        return "solved"
    elif problem == "complex":
        # break_down_complex_problem and combine_sub_solutions
        # are assumed to be defined elsewhere
        sub_problems = break_down_complex_problem(problem)
        sub_solutions = [solve_problem(sub) for sub in sub_problems]
        return combine_sub_solutions(sub_solutions)

Bottom-Up Approach (in Python):

• Python

def solve_sub_problems(sub_problems):
    # solve_sub_problem (singular) is assumed to be defined elsewhere
    return [solve_sub_problem(sub) for sub in sub_problems]

def combine_sub_solutions(sub_solutions):
    # implementation to combine sub-solutions to solve the larger problem
    pass

def solve_problem(problem):
    if problem == "simple":
        return "solved"
    elif problem == "complex":
        sub_problems = break_down_complex_problem(problem)
        sub_solutions = solve_sub_problems(sub_problems)
        return combine_sub_solutions(sub_solutions)

Other Classifications: Apart from classifying the algorithms into the above broad categories, the algorithm
can be classified into other broad categories like:

1. Randomized Algorithms: Algorithms that make random choices for faster solutions are known
as randomized algorithms.
Example: Randomized Quicksort Algorithm.
2. Classification by complexity: Algorithms can be classified on the basis of the time taken
to produce a solution as a function of the input size. This analysis is known as time
complexity analysis.
Example: Some algorithms take O(n) time, while some take exponential time.
3. Classification by Research Area: In CS each field has its own problems and needs efficient
algorithms.
Example: Sorting Algorithm, Searching Algorithm, Machine Learning etc.
4. Branch and Bound Enumeration and Backtracking: These are mostly used in Artificial
Intelligence.
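As a sketch of a randomized algorithm, here is a simple randomized quicksort in Python (an illustrative out-of-place version, not an in-place implementation): choosing the pivot at random avoids the deterministic worst case on already-sorted inputs.

```python
import random

def randomized_quicksort(arr):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    # Random pivot choice gives expected O(n log n) time
    # regardless of the input order.
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

The output is deterministic (a sorted list) even though the pivot choices are random; only the running time varies from run to run.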
What is refactoring?
Refactoring is the process of restructuring code, while not changing its original functionality. The goal of
refactoring is to improve internal code by making many small changes without altering the code's external
behavior.
Computer programmers and software developers refactor code to improve the design, structure and
implementation of software. Refactoring improves code readability and reduces complexities. Refactoring can
also help software developers find bugs or vulnerabilities hidden in their software.
The refactoring process features many small changes to a program's source code. One approach to refactoring,
for example, is to improve the structure of source code at one point and then extend the same changes
systematically to all applicable references throughout the program. The thought process is that all the small,
behavior-preserving changes to a body of code have a cumulative effect, while the software's externally
observable behavior remains unchanged.
Martin Fowler, considered the father of refactoring, consolidated many best practices from across the software
development industry into a specific list of refactorings and described methods to implement them in his
book Refactoring: Improving the Design of Existing Code.
What is the purpose of refactoring?

Refactoring improves code by making it:

• More efficient by addressing dependencies and complexities.

• More maintainable or reusable by increasing efficiency and readability.

• Cleaner so it is easier to read and understand.

• Easier for software developers to find and fix bugs or vulnerabilities in the code.

Code modification is done without changing any functions of the program itself. Many basic editing
environments support simple refactorings like renaming a function or variable across an entire code base.

[Image: the process of making several small code refactorings.]
When should code be refactored?

Refactoring can be performed after a product has been deployed, before adding updates and new features to
existing code, or as a part of day-to-day programming.

When the process is performed after deployment, it is normally done before developers move on to the next
project. An organization may be able to refactor more code at this point in the software delivery lifecycle,
where the developers have increased availability and more time to work on the source code changes needed.

A better time to perform refactoring, though, is before adding updates or new features to existing code. When
performed at this point, refactoring makes it easier for developers to build onto the existing code because they
are going back and simplifying the code, making it easier to read and understand.

When an organization has a strong grasp on the refactoring process, it can make it a regular process.
Whenever a developer needs to add something to a code base, they can look at the existing code to see if it is
structured in a way that would make the process of adding new code straightforward. If it is not, then the
developer can refactor the existing code. Once the new code is added, the developer can refactor the same
code again to make it clearer.

What are the benefits of refactoring?

Refactoring can provide the following benefits:

• Makes the code easier to understand and read because the goal is to simplify code and reduce
complexities.

• Improves maintainability and makes it easier to spot bugs or make further changes.

• Encourages a more in-depth understanding of code. Developers have to think further about how
their code will mix with code already in the code base.

• Focus remains only on functionality. Not changing the code's original functionality ensures the
original project does not lose scope.

What are the challenges of refactoring?

Challenges do come with the process, however. Some of these include:

• The process will take extra time if a development team is in a rush and refactoring is not planned
for.

• Without clear objectives, refactoring can lead to delays and extra work.

• Refactoring cannot address software flaws by itself, as it is made to clean code and make it less
complex.
Techniques to perform code refactoring

Organizations can use different refactoring techniques in different instances. Some examples include:

• Red, green. This widely used refactoring method in Agile development involves three steps. First,
the developers determine what needs to be developed; second, they get their project to pass testing;
and third, they refactor that code to make improvements.

• Inline. This technique focuses on simplifying code by eliminating unnecessary elements.

• Moving features between objects. This technique creates new classes, while moving functionality
between new and old data classes.

• Extract. This technique breaks down code into smaller pieces and then moves those pieces to a
different method. Fragmented code is replaced with a call to the new method.

• Refactoring by abstraction. This technique reduces the amount of duplicate code. This is done
when there is a large amount of code to be refactored.

• Compose. This technique streamlines code to reduce duplications using multiple refactoring
methods, including extraction and inline.
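As an illustration of the Extract technique, here is a minimal Python sketch (the function names are invented for this example): a fragment is moved into its own method, and the original fragment is replaced with a call to it.

```python
def invoice_lines_before(name, amount):
    # Before extraction: the banner fragment is built inline.
    return ["*" * 10, f"Customer: {name}, due: {amount}"]

def banner():
    # The extracted fragment, now a method of its own.
    return "*" * 10

def invoice_lines(name, amount):
    # After extraction: the fragment is replaced with a call
    # to the new method; the output is unchanged.
    return [banner(), f"Customer: {name}, due: {amount}"]
```

Note that the refactored version produces exactly the same output as the original, which is the defining property of a refactoring.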

Code refactoring best practices

Best practices to follow for refactoring include:

• Plan for refactoring. It may be difficult to make time for the time-consuming practice otherwise.

• Refactor first. Developers should do this before adding updates or new features to existing code
to reduce technical debt.

• Refactor in small steps. This gives developers feedback early in the process so they can find
possible bugs, as well as include business requests.

• Set clear objectives. Developers should determine the project scope and goals early in the code
refactoring process. This helps to avoid delays and extra work, as refactoring is meant to be a form
of housekeeping, not an opportunity to change functions or features.

• Test often. This helps to ensure refactored changes do not introduce new bugs.

• Automate wherever possible. Automation tools make refactoring easier and faster, thus,
improving efficiency.

• Fix software defects separately. Refactoring is not meant to address software flaws.
Troubleshooting and debugging should be done separately.

• Understand the code. Review the code to understand its processes, methods, objects, variables
and other elements.
• Refactor, patch and update regularly. Refactoring generates the most return on investment when
it can address a significant issue without taking too much time and effort.

• Focus on code deduplication. Duplication adds complexities to code, expanding the software's
footprint and wasting system resources.

Code Optimization in Compiler Design

The code optimization in the synthesis phase is a program transformation technique which tries to improve
the intermediate code by making it consume fewer resources (i.e. CPU, memory) so that faster-running
machine code will result. The compiler optimizing process should meet the following objectives :
• The optimization must be correct, it must not, in any way, change the meaning of the program.
• Optimization should increase the speed and performance of the program.
• The compilation time must be kept reasonable.
• The optimization process should not delay the overall compiling process.

When to Optimize?

Optimization of the code is often performed at the end of the development stage since it reduces readability
and adds code that is used to increase the performance.

Why Optimize?

Optimizing the algorithm itself is beyond the scope of the code optimization phase; instead, the program as
generated is optimized, which may involve reducing the size of the code. Optimization helps to:
• Reduce the space consumed and increase the speed of execution.
• Avoid tedious manual work: just as tools such as Tableau are used instead of analyzing datasets
by hand, performing the optimization manually is tedious and is better done using a code
optimizer.
• Promote re-usability, since optimized code is often cleaner.
Types of Code Optimization: The optimization process can be broadly classified into two types :
1. Machine Independent Optimization: This code optimization phase attempts to improve
the intermediate code to get a better target code as the output. The part of the intermediate code
which is transformed here does not involve any CPU registers or absolute memory locations.
2. Machine Dependent Optimization: Machine-dependent optimization is done after the target
code has been generated and when the code is transformed according to the target machine
architecture. It involves CPU registers and may have absolute memory references rather than relative
references. Machine-dependent optimizers put efforts to take maximum advantage of the memory
hierarchy.
Code Optimization is done in the following different ways:

1. Compile Time Evaluation:

• C

(i) A = 2*(22.0/7.0)*r
Perform 2*(22.0/7.0)*r at compile time.
(ii) x = 12.4
y = x/2.3
Evaluate x/2.3 as 12.4/2.3 at compile time.

2. Variable Propagation:

• C

//Before Optimization
c = a * b
x = a
d = x * b + 4

//After Optimization
c = a * b
x = a
d = a * b + 4

3. Constant Propagation:
• If the value of a variable is known to be a constant, replace the variable with that constant. The
substitution is valid only while the variable actually holds the constant value.
Example:
• C

(i) A = 2*(22.0/7.0)*r
Performs 2*(22.0/7.0)*r at compile time.
(ii) x = 12.4
y = x/2.3
Evaluates x/2.3 as 12.4/2.3 at compile time.
(iii) int k=2;
if(k) go to L3;
It is evaluated as :
go to L3 ( Because k = 2 which implies condition is always true)
4. Constant Folding:

• Consider an expression : a = b op c and the values b and c are constants, then the value of a can be
computed at compile time.
Example:
• C

#define k 5
x=2*k
y=k+5

This can be computed at compile time and the values of x and y are :
x = 10
y = 10

Note: Difference between Constant Propagation and Constant Folding:


• In Constant Propagation, the variable is substituted with its assigned constant where as in Constant
Folding, the variables whose values can be computed at compile time are considered and
computed.

5. Copy Propagation:

• It is an extension of constant propagation.
• After a is assigned to x, use a to replace x until a is assigned again to another variable, value, or
expression.
• It helps in reducing run time, as it reduces copying.
Example :
• C

//Before Optimization
c = a * b
x = a
d = x * b + 4

//After Optimization
c = a * b
x = a
d = a * b + 4
6. Common Sub Expression Elimination:

• In the above example, a * b and x * b compute the same value (after copy propagation), so they
form a common sub-expression that can be computed once and reused.
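The effect of common sub-expression elimination can be sketched as follows (illustrative Python standing in for compiler-generated code; the function names are ours):

```python
def before_cse(a, b):
    # a * b is evaluated twice.
    c = a * b
    d = a * b + 4
    return c, d

def after_cse(a, b):
    # The common sub-expression is computed once into a
    # temporary and reused; the results are identical.
    t = a * b
    c = t
    d = t + 4
    return c, d
```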

7. Dead Code Elimination:

• Copy propagation often leads to making assignment statements into dead code.
• A variable is said to be dead if it is never used after its last definition.
• In order to find the dead variables, a data flow analysis should be done.
Example:
• C

c = a * b
x = a
d = a * b + 4

//After elimination :
c = a * b
d = a * b + 4

8. Unreachable Code Elimination:

• First, Control Flow Graph should be constructed.


• The block which does not have an incoming edge is an Unreachable code block.
• After constant propagation and constant folding, the unreachable branches can be eliminated.

• C++

#include <iostream>
using namespace std;

int main() {
    int num;
    num = 10;
    cout << "GFG!";
    return 0;
    cout << num; // unreachable code
}

// after elimination of unreachable code
int main() {
    int num;
    num = 10;
    cout << "GFG!";
    return 0;
}

9. Function Inlining:

• Here, a function call is replaced by the body of the function itself.


• This saves a lot of time in copying all the parameters, storing the return address, etc.
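A minimal sketch of function inlining in Python (the functions are invented for illustration; in practice the compiler performs this transformation on the generated code):

```python
def square(x):
    return x * x

def sum_of_squares_with_call(n):
    # Each iteration pays the cost of a function call.
    return sum(square(i) for i in range(n))

def sum_of_squares_inlined(n):
    # The body of square() replaces the call, removing the
    # per-iteration call overhead; the result is unchanged.
    return sum(i * i for i in range(n))
```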

10. Function Cloning:

• Here, specialized codes for a function are created for different calling parameters.
• Example: Function Overloading

11. Induction Variable and Strength Reduction:

• An induction variable is used in the loop for the following kind of assignment i = i + constant. It is
a kind of Loop Optimization Technique.
• Strength reduction means replacing a high-strength (expensive) operator with a lower-strength
(cheaper) one.
Examples:
• C

Example 1 :
Multiplication with powers of 2 can be replaced by shift left operator which is less
expensive than multiplication
a=a*16
// Can be modified as :
a = a<<4

Example 2 :
i = 1;
while (i < 10)
{
    y = i * 4;
    i = i + 1;
}

//After Reduction
t = 4;
while (t < 40)
{
    y = t;
    t = t + 4;
}

Loop Optimization Techniques:

1. Code Motion or Frequency Reduction:


• The evaluation frequency of expression is reduced.
• The loop invariant statements are brought out of the loop.
Example:
• C

a = 200;
while (a > 0)
{
    b = x + y;
    if (a % b == 0)
        printf("%d", a);
    a = a - 1;
}

//This code can be further optimized as:

a = 200;
b = x + y;
while (a > 0)
{
    if (a % b == 0)
        printf("%d", a);
    a = a - 1;
}

2. Loop Jamming:
• Two or more loops are combined into a single loop. This reduces loop overhead and thus the
execution time.
Example:
• C

// Before loop jamming


for(int k=0;k<10;k++)
{
x = k*2;
}

for(int k=0;k<10;k++)
{
y = k+3;
}

//After loop jamming


for(int k=0;k<10;k++)
{
x = k*2;
y = k+3;
}

3. Loop Unrolling:
• It helps in optimizing the execution time of the program by reducing the iterations.
• It increases the program’s speed by eliminating the loop control and test instructions.
Example:
• C

//Before Loop Unrolling

for(int i=0;i<2;i++)
{
printf("Hello");
}

//After Loop Unrolling

printf("Hello");
printf("Hello");

Where to apply Optimization?

Now that we have learned the need for optimization and its two types, let's see where to apply these
optimizations.
• Source program: Optimizing the source program involves making changes to the algorithm or
changing the loop structures. The user is the actor here.
• Intermediate Code: Optimizing the intermediate code involves changing the address calculations
and transforming the procedure calls involved. Here compiler is the actor.
• Target Code: Optimizing the target code is done by the compiler. Usage of registers, and select
and move instructions are part of the optimization involved in the target code.
• Local Optimization: Transformations are applied to small basic blocks of statements. Techniques
followed are Local Value Numbering and Tree Height Balancing.
• Regional Optimization: Transformations are applied to Extended Basic Blocks. Techniques
followed are Super Local Value Numbering and Loop Unrolling.
• Global Optimization: Transformations are applied to large program segments that include
functions, procedures, and loops. Techniques followed are Live Variable Analysis and Global Code
Replacement.
• Interprocedural Optimization: As the name indicates, the optimizations are applied across
procedure boundaries. Techniques followed are Inline Substitution and Procedure Placement.

Advantages of Code Optimization:

Improved performance: Code optimization can result in code that executes faster and uses fewer resources,
leading to improved performance.
Reduction in code size: Code optimization can help reduce the size of the generated code, making it easier to
distribute and deploy.
Increased portability: Code optimization can result in code that is more portable across different platforms,
making it easier to target a wider range of hardware and software.
Reduced power consumption: Code optimization can lead to code that consumes less power, making it more
energy-efficient.
Improved maintainability: Code optimization can result in code that is easier to understand and maintain,
reducing the cost of software maintenance.

Disadvantages of Code Optimization:

Increased compilation time: Code optimization can significantly increase the compilation time, which can
be a significant drawback when developing large software systems.
Increased complexity: Code optimization can result in more complex code, making it harder to understand
and debug.
Potential for introducing bugs: Code optimization can introduce bugs into the code if not done carefully,
leading to unexpected behavior and errors.
Difficulty in assessing the effectiveness: It can be difficult to determine the effectiveness of code
optimization, making it hard to justify the time and resources spent on the process.

Implementation of control

The designer must refine the strategy for implementing the state-event models present in the dynamic model.
As part of system design, you will have chosen a basic strategy for realizing the dynamic model. Now, during
object design, you must flesh out this strategy. There are three basic approaches to implementing the dynamic
model:

• Using the location within the program to hold state (procedure-driven system)
• Direct implementation of a state machine mechanism (event-driven system)
• Using concurrent tasks
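As a sketch of the second approach, a state machine can be implemented directly by keeping the transitions in a table and looking them up as events arrive. The turnstile states and events below are invented for illustration:

```python
class Turnstile:
    # Transition table: (current state, event) -> next state.
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }

    def __init__(self):
        self.state = "locked"

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Because the dynamic model lives in a data table rather than in control flow, adding a state or event means adding a table entry, not restructuring the code.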

Adjustment of inheritance

As object design progresses, the definitions of classes and operations can often be adjusted to increase the
amount of inheritance. The designer should:

• Rearrange and adjust classes and operations to increase the inheritance


• Abstract common behaviour out of groups of classes
• Use delegation to share behaviour when inheritance is semantically invalid
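As a small Python sketch of the last point (the Stack example is ours): when inheriting from a class would expose operations that are semantically invalid for the new class, delegate to a contained object instead.

```python
class Stack:
    # Delegation: a Stack HAS a list and forwards only the
    # operations it needs, instead of inheriting every list
    # operation (insert, sort, ...) a stack should not expose.
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)
```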
