SY SE Unit wise notes


Unit 1 - Introduction to Software Engineering

1.1 Introduction:
Software is a program or set of programs containing instructions that provide desired functionality.
Engineering is the process of designing and building something that serves a particular purpose
and finds a cost-effective solution to a problem.
Software engineering is the process of designing, developing, testing, and maintaining software.
It is a systematic and disciplined approach to software development that aims to create high-quality,
reliable, and maintainable software.
Software engineering includes a variety of techniques, tools, and methodologies, including
requirements analysis, design, testing, and maintenance.
Some key principles of software engineering include:
1. Modularity: Breaking the software into smaller, reusable components that can be developed and tested
independently.
2. Abstraction: Hiding the implementation details of a component and exposing only the necessary functionality
to other parts of the software.
3. Encapsulation: Wrapping up the data and functions of an object into a single unit, and protecting the internal
state of an object from external modifications.
4. Reusability: Creating components that can be used in multiple projects, which can save time and resources.
5. Maintenance: Regularly updating and improving the software to fix bugs, add new features, and address
security vulnerabilities.
6. Testing: Verifying that the software meets its requirements and is free of bugs.
7. Design Patterns: Solving recurring problems in software design by providing templates for solving them.
8. Agile methodologies: Using iterative and incremental development processes that focus on customer
satisfaction, rapid delivery, and flexibility.
9. Continuous Integration & Deployment: Continuously integrating the code changes and deploying them into
the production environment.
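Three of these principles — modularity, abstraction, and encapsulation — can be illustrated with a short, self-contained sketch. The `BankAccount` class and its method names below are hypothetical, chosen only for illustration:

```python
class BankAccount:
    """A small module: its internal state is encapsulated, and only a
    minimal abstract interface (deposit/withdraw/balance) is exposed."""

    def __init__(self, opening_balance=0):
        # Encapsulation: _balance is internal state, not meant for
        # direct modification from outside the class.
        self._balance = opening_balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        # Abstraction: callers see the balance, not how it is stored.
        return self._balance


account = BankAccount(100)
account.deposit(50)
account.withdraw(30)
print(account.balance())  # 120
```

Because the class hides its internal state, it can be tested independently and reused in other projects without callers depending on its implementation details.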

Software engineering is mainly used for large software systems rather than single programs
or applications. Its main goal is to develop software applications that improve quality, budget, and
time efficiency. Software engineering ensures that the software being built is consistent and correct,
and is delivered on budget, on time, and within the stated requirements.

There are four main attributes of Software Engineering:


● Efficiency
● Reliability
● Robustness
● Maintainability

Objectives of Software Engineering:


1. Maintainability –
It should be feasible for the software to evolve to meet changing requirements.
2. Efficiency –
The software should not make wasteful use of computing resources such as memory, processor cycles, etc.
3. Correctness –
A software product is correct if the different requirements as specified in the SRS document have been correctly
implemented.
4. Reusability –
A software product has good reusability if the different modules of the product can easily be reused to develop
new products.
5. Testability –
Here software facilitates both the establishment of test criteria and the evaluation of the software with respect to
those criteria.
6. Reliability –
It is an attribute of software quality. The extent to which a program can be expected to perform its desired
function, over an arbitrary time period.
7. Portability –
In this case, the software can be transferred from one computer system or environment to another.
8. Adaptability –
A software product has good adaptability if it can be changed to satisfy differing system constraints
and user needs.
9. Interoperability – The capability of two or more functional units to process data cooperatively.

1.2 The Problem Domain


The problem domain is the context in which the software solves the problems of some users. Larger systems or
businesses may depend on the software, and defects in it can lead to indirect loss; such software is termed
industrial-strength software.
A problem domain is the area of expertise or application that needs to be examined to solve a problem. Defining a
problem domain means looking only at the topics you are interested in, and excluding everything else.
The problem domain describes the area undergoing analysis, and includes everything that needs to be understood in
order to achieve the goal of the project. This may include all inputs and outputs of a process, any related systems,
and internal and external project stakeholders.
1.3 Software Engineering Challenges and Approach
Some common challenges include:
1. The rapid advancement of technology
For the IT sector, every technological innovation is a blessing. The rapid advancement of technology puts additional
pressure on software development professionals to take advantage of these trends when creating new software
products to stand out from the crowd and obtain a competitive advantage. It is one of the major software engineering
problems.

2. Increasing customer demands in the development stage


The majority of software projects are conceptual in nature and are focused on creating software solutions that satisfy
a range of consumer needs. Even the simplest application or product requires developers to fully grasp the
underlying business concept and incorporate the functionality needed to meet ever-increasing client needs. This is
among the coding challenges engineers face in software development.

3. Time limitation
The deadlines set for software engineers are incredibly short and are one of the major challenges of being a software
engineer. After all, it is a game of time. When engineers collaborate with several clients across various time zones,
the process becomes considerably more difficult. These time restraints frequently cause development teams to work
less productively, resulting in subpar-quality products.

4. Limited infrastructure/ resources


The lack of resources or IT infrastructure to carry out projects successfully is another issue that most software
development companies deal with and is among the major problems of software engineering. It could be caused by a
lack of high-performance programming tools, strong computing platforms, ineffective data storage structures, or bad
networks and connections. These obstacles lower software engineers' productivity and effectiveness, which affects
the end result.

5. Understanding the large and complex system requirements is difficult


Engineers rarely communicate with clients directly; clients are usually contacted through the project manager or
the bidding process. As a result, engineers face challenges when dealing with customers who have their own ideas
about the software they want, because there is little direct contact between them. This is one of the key challenges
in software engineering.
Practically every development project requires pre-production and testing, so the same problem arises when
someone works on an unproven project. When working on a complicated project, managing your time and giving
attention to each component can be challenging.

6. Undefined system boundaries


There might not be a clear set of prerequisites for implementation. The customer might add additional unrelated and
irrelevant functionalities on top of the crucial ones, which would result in a very high implementation cost that
might go beyond the predetermined budget.

7. Customers are not clear about their needs


The lengthy list of features that clients frequently want in software may not always be clear to them. It may occur
when they have a basic understanding of their requirements but haven't made many preparations for the execution
phase.
8. Conflicting requirements
Two different project stakeholders can state expectations that are incompatible with one another. A
single customer may occasionally articulate two requirements that are mutually incompatible.

9. Partitioning the system suitably to reduce complexity


Occasionally, the projects can be divided into smaller modules or functionalities, which are then handled by other
teams. Larger, more complicated projects frequently call for additional segmentation, and the partitions must be kept
separate from one another and without any overlap.

10. Validating and tracing requirements


Project requirements that are constantly changing make it harder for software engineers to work. Before beginning
the implementation phase, it is crucial to double-check the list of requirements. Additionally, both forward and
backward tracing should be possible.

11. Identifying critical requirements


It's crucial to identify the needs that must be fulfilled at all costs. Prioritizing the requirements will allow for the
most urgent ones to be implemented first.

12. Resolving the "to be determined" requirements


In the absence of specified requirements, developers make well-intentioned assumptions. They frequently avoid
asking the product owners or customers their questions and carry out the assignment using what they know. Later,
they must rework the implementation based on the defects that the missing details cause.

13. Proper documentation, proper meetings time, and budget constraints


Confirm your grasp of the requirement by creating a clear requirement document. The aims, scope, limitations, and
functional requirements of a product or software program are made clear to teams and individual stakeholders
through documentation.

Approaches to solve the challenges:


1. Maintaining accurate records
As you work on your project, don't forget to address any unforeseen maintenance needs.
Engineers in charge of maintaining a product have access to software documentation such as software design, source
code, and development methodology.

2. Attempting to understand from the viewpoint of stakeholders


When stakeholders are concerned about the problems and understand that their ideas, opinions, and contributions are
respected, software engineering projects are successful. There is evidence to suggest that the way in which the
project manager incorporates stakeholders in the decision-making process across the various stages of the project is
closely related to the effective execution of software engineering projects.

3. Establishing proper communication with stakeholders


Clear, regular communication keeps stakeholders' expectations aligned with what the team is building. Agreeing
early on the communication channels, meeting cadence, and points of contact, and sharing progress and problems
openly, reduces misunderstandings and costly rework later in the project.

4. Recognizing conflicting requirements from the stakeholder side


It is important to correctly examine and prioritize requirements. Keeping a healthy balance between the
requirements and only accepting those that are valid and for which good solutions can be offered within the
constraints of the project timeline and budget will help to avoid conflicts.

5. Creating informative and well-structured conversations with end consumers


Poor conversations are a major issue for novice engineers. Imagine becoming stuck when coding and being unable
to communicate the problem to your team, which could have an impact on project cost, time, and productivity.
Before a project starts, specify the methods and frequency of conversation.

6. Performing proper market research and competitors' analysis


One of the key outcomes of market analysis is identifying a target audience: the people who use, or will use, your
product or service, and their demands. A single app cannot serve the needs of all users; if you attempt that, you
will probably end up with no audience at all. Every product is designed for a certain audience, and yours should be
as well.

1.4 Software Process


Software processes in software engineering refer to the methods and techniques used to develop and maintain
software. Some examples of software processes include:
● Waterfall: a linear, sequential approach to software development, with distinct phases such as requirements
gathering, design, implementation, testing, and maintenance.
● Agile: a flexible, iterative approach to software development, with an emphasis on rapid prototyping and
continuous delivery.
● Scrum: a popular Agile methodology that emphasizes teamwork, iterative development, and a flexible, adaptive
approach to planning and management.
● DevOps: a set of practices that aims to improve collaboration and communication between development and
operations teams, with an emphasis on automating the software delivery process.
Each process has its own set of advantages and disadvantages, and the choice of which one to use depends on the
specific project and organization.

1.5 Characteristics of Software Process


1. It is intangible, meaning it cannot be seen or touched.
2. It is non-perishable, meaning it does not degrade over time.
3. It is easy to replicate, meaning it can be copied and distributed easily.
4. It can be complex, meaning it can have many interrelated parts and features.
5. It can be difficult to understand and modify, especially for large and complex systems.
6. It can be affected by changing requirements, meaning it may need to be updated or modified as the needs of
users change.
7. It can be affected by bugs and other issues, meaning it may need to be tested and debugged to ensure it works as
intended.

1.6 Software Development Process Models

A> Waterfall model:


The classical waterfall model is the basic software development life cycle model. It is very simple but idealistic.
This model was once very popular, but nowadays it is rarely used in its pure form. It remains important, however,
because all the other software development life cycle models are based on the classical waterfall model.
The classical waterfall model divides the life cycle into a set of phases. This model considers that one phase can be
started after the completion of the previous phase. That is the output of one phase will be the input to the next phase.
Thus the development process can be considered as a sequential flow in the waterfall. Here the phases do not
overlap with each other.

The sequential phases in Waterfall model are


● Requirement Gathering and analysis

● System Design

● Implementation

● Integration and Testing

● Deployment of system

● Maintenance
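The sequential flow above — the output of one phase becoming the input to the next, with no overlap between phases — can be sketched as a simple pipeline. The `run_phase` function below is a hypothetical placeholder, not a real development tool:

```python
# Each phase consumes the artifact produced by the previous phase;
# phases never overlap — the flow is strictly sequential.
PHASES = [
    "Requirement Gathering and analysis",
    "System Design",
    "Implementation",
    "Integration and Testing",
    "Deployment of system",
    "Maintenance",
]

def run_phase(phase, artifact):
    # Hypothetical placeholder: a real phase would transform its input
    # artifact (e.g., turn requirements into a design document).
    return f"{artifact} -> {phase}"

artifact = "project idea"
for phase in PHASES:
    artifact = run_phase(phase, artifact)

print(artifact)
```

Because each call depends on the previous call's result, no phase can start before the one before it finishes — which is exactly why the waterfall model struggles with changing requirements.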

Waterfall Model - Application


Every software product is different and requires a suitable SDLC approach based on internal and external factors. The Waterfall model is appropriate only when:
● Requirements are very well documented, clear and fixed.
● Product definition is stable.
● Technology is understood and is not dynamic.
● There are no ambiguous requirements.
● Ample resources with required expertise are available to support the product.
● The project is short.
Waterfall Model - Disadvantages
The disadvantage of waterfall development is that it does not allow much reflection or revision. Once an application
is in the testing stage, it is very difficult to go back and change something that was not well-documented or thought
upon in the concept stage.
The major disadvantages of the Waterfall Model are as follows
● No working software is produced until late during the life cycle.
● High amounts of risk and uncertainty.
● Not a good model for complex and object-oriented projects.
● Poor model for long and ongoing projects.
● Not suitable for the projects where requirements are at a moderate to high risk of changing. So, risk and
uncertainty is high with this process model.
● It is difficult to measure progress within stages.
● Cannot accommodate changing requirements.
● Adjusting scope during the life cycle can end a project.
● Integration is done as a "big bang" at the very end, which does not allow identifying technological or
business bottlenecks or challenges early.

B> Prototype model:


Prototyping is defined as the process of developing a working replication of a product or system that has to be
engineered.
The Prototyping Model is one of the most popularly used Software Development Life Cycle Models (SDLC
models). This model is used when the customers do not know the exact project requirements beforehand. In this
model, a prototype of the end product is first developed, tested and refined as per customer feedback repeatedly till a
final acceptable prototype is achieved which forms the basis for developing the final product.
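The "develop, test, refine per feedback" loop can be sketched as follows. The feedback strings are canned, hypothetical inputs; a real project would gather them from actual customer reviews of each prototype:

```python
# Canned customer feedback for illustration only; a real project would
# collect this from customer reviews of each prototype iteration.
feedback_rounds = ["change layout", "add export button", "ok"]

prototype = "initial prototype"
for feedback in feedback_rounds:
    if feedback == "ok":                     # customer accepts the prototype
        break
    prototype = f"{prototype} + {feedback}"  # refine as per the feedback

print(prototype)  # the accepted prototype forms the basis of the final product
```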

There are four types of models available:


A) Rapid Throwaway Prototyping –
This technique offers a useful method of exploring ideas and getting customer feedback for each of them. In this
method, a developed prototype need not necessarily be a part of the ultimately accepted prototype. Customer
feedback helps in preventing unnecessary design faults and hence, the final prototype developed is of better quality.

B) Evolutionary Prototyping –
In this method, the prototype developed initially is incrementally refined on the basis of customer feedback till it
finally gets accepted. In comparison to Rapid Throwaway Prototyping, it offers a better approach which saves time
as well as effort. This is because developing a prototype from scratch for every iteration of the process can
sometimes be very frustrating for the developers.

C) Incremental Prototyping – In this type of prototyping, the final expected product is broken into
small pieces, and prototypes of these pieces are developed individually. In the end, when all the individual pieces
are properly developed, the different prototypes are merged into a single final product in their predefined order.
It is a very efficient approach that reduces the complexity of the development process: the goal is divided into
sub-parts and each sub-part is developed individually. The time between the project’s beginning and final delivery
is substantially reduced because all parts of the system are prototyped and tested simultaneously. Of course, the
pieces may fail to fit together due to a lack of coordination in the development phase – this can only be avoided
by careful and complete planning of the entire system before prototyping starts.
D) Extreme Prototyping – This method is mainly used for web development. It consists of three sequential
independent phases:
D.1) In this phase, a basic prototype with all the existing static pages is presented in HTML format.
D.2) In the second phase, functional screens are made with simulated data processing, using a prototype services layer.
D.3) This is the final step, where all the services are implemented and connected to the final prototype.
Advantages –

● The customers get to see the partial product early in the life cycle. This ensures a greater level of customer
satisfaction and comfort.
● New requirements can be easily accommodated as there is scope for refinement.
● Missing functionalities can be easily figured out.
● Errors can be detected much earlier thereby saving a lot of effort and cost, besides enhancing the quality of the
software.
● The developed prototype can be reused by the developer for more complicated projects in the future.
● Flexibility in design.

Disadvantages –
● Costly with respect to time as well as money.
● There may be too much variation in requirements each time the prototype is evaluated by the customer.
● Poor Documentation due to continuously changing customer requirements.
● It is very difficult for developers to accommodate all the changes demanded by the customer.
● There is uncertainty in determining the number of iterations that would be required before the prototype is
finally accepted by the customer.
● After seeing an early prototype, the customers sometimes demand the actual product to be delivered soon.
● Developers in a hurry to build prototypes may end up with sub-optimal solutions.
● The customer might lose interest in the product if he/she is not satisfied with the initial prototype.

C> Incremental Model
The Incremental Model is a process of software development where requirements are divided into multiple standalone
modules of the software development cycle. In this model, each module goes through the requirements, design,
implementation and testing phases. Every subsequent release of a module adds function to the previous release.
The process continues until the complete system is achieved.

The phases of the incremental model are:


1. Requirement analysis: In the first phase of the incremental model, product analysis experts identify the
requirements, and the system's functional requirements are understood by the requirement analysis team. This phase
plays a crucial role in developing software under the incremental model.
2. Design & Development: In this phase, the design of the system functionality and the development method are
completed. Whenever the software needs new functionality, the incremental model repeats the design and
development phase.
3. Testing: In the incremental model, the testing phase checks the performance of each existing function as well as
the additional functionality. Various methods are used to test the behavior of each task.
4. Implementation: The implementation phase covers the final coding of the design produced in the design and
development phase, verified against the functionality checked in the testing phase. After completion of this phase,
the working functionality of the product is enhanced and upgraded up to the final system product.
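The idea that every release adds function to the previous one can be sketched as follows. The feature names and their split into increments are hypothetical, purely for illustration:

```python
# Hypothetical feature backlog, split into standalone increments.
increments = [
    ["login"],                 # increment 1
    ["search"],                # increment 2
    ["checkout", "payments"],  # increment 3
]

delivered = []
for n, features in enumerate(increments, start=1):
    # Each increment goes through requirements, design, implementation
    # and testing before release (those phases are not modeled here).
    delivered.extend(features)
    print(f"Release {n}: {delivered}")
```

The client gets the most important functionality ("login") in the first release, while later increments extend the already-delivered system rather than replacing it.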
Advantage of Incremental Model
o Errors are easier to recognize.
o Easier to test and debug.
o More flexible.
o Risk is simpler to manage because it is handled during each iteration.
o The client gets important functionality early.
Disadvantage of Incremental Model
o Need for good planning
o Total Cost is high.
o Well defined module interfaces are needed.
D> Spiral model:
The Spiral model is one of the most important Software Development Life Cycle models and provides support
for risk handling. In its diagrammatic representation, it looks like a spiral with many loops. The exact number of
loops is unknown and can vary from project to project. Each loop of the spiral is called a phase of the
software development process. The exact number of phases needed to develop the product can be varied by the
project manager depending on the project risks. Because the project manager dynamically determines the number of
phases, the project manager plays an important role in developing a product using the spiral model.

The Spiral Model is a risk-driven model, meaning that the focus is on managing risk through multiple iterations of
the software development process. It consists of the following phases:

1. Planning: The first phase of the Spiral Model is the planning phase, where the scope of the project is determined
and a plan is created for the next iteration of the spiral.
2. Risk Analysis: In the risk analysis phase, the risks associated with the project are identified and evaluated.
3. Engineering: In the engineering phase, the software is developed based on the requirements gathered in the
previous iteration.
4. Evaluation: In the evaluation phase, the software is evaluated to determine if it meets the customer’s
requirements and if it is of high quality.
5. Planning: The next iteration of the spiral begins with a new planning phase, based on the results of the
evaluation.
The Spiral Model is often used for complex and large software development projects, as it allows for a more
flexible and adaptable approach to software development. It is also well-suited to projects with significant
uncertainty or high levels of risk.
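A loose sketch of the risk-driven loop: here we assume, purely hypothetically, that each pass through the four phases reduces residual risk by a fixed fraction. The point is that the number of loops is not fixed in advance — it depends on how quickly risk falls below an acceptable level:

```python
def spiral(initial_risk, acceptable_risk, reduction_per_loop):
    """Loop through spiral phases until residual risk is acceptable.
    The fixed per-loop risk reduction is an illustrative assumption."""
    risk, loops = initial_risk, 0
    while risk > acceptable_risk:
        # One loop of the spiral:
        # 1. Planning  2. Risk Analysis  3. Engineering  4. Evaluation
        risk *= (1 - reduction_per_loop)
        loops += 1
    return loops

# Number of phases emerges from the risk profile, not from a fixed plan:
print(spiral(1.0, 0.2, 0.5))  # 3
```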
Advantages of Spiral Model:
Below are some advantages of the Spiral Model.
1. Risk Handling: For projects with many unknown risks that surface as development proceeds, the Spiral Model
is the best development model to follow, because risk analysis and risk handling are done in every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and complex projects.
3. Flexibility in Requirements: Change requests in the Requirements at later phase can be incorporated accurately
by using this model.
4. Customer Satisfaction: Customers can see the development of the product at an early phase of the software
development and thus become habituated to the system by using it before completion of the total product.
5. Iterative and Incremental Approach: The Spiral Model provides an iterative and incremental approach to
software development, allowing for flexibility and adaptability in response to changing requirements or
unexpected events.
6. Emphasis on Risk Management: The Spiral Model places a strong emphasis on risk management, which helps
to minimize the impact of uncertainty and risk on the software development process.
7. Improved Communication: The Spiral Model provides for regular evaluations and reviews, which can improve
communication between the customer and the development team.
8. Improved Quality: The Spiral Model allows for multiple iterations of the software development process, which
can result in improved software quality and reliability

Disadvantages of Spiral Model:


Below are some main disadvantages of the spiral model.
1. Complex: The Spiral Model is much more complex than other SDLC models.
2. Expensive: Spiral Model is not suitable for small projects as it is expensive.
3. Too much dependence on Risk Analysis: The successful completion of the project depends heavily on risk
analysis. Without highly experienced experts, a project developed using this model is likely to fail.
4. Difficulty in time management: As the number of phases is unknown at the start of the project, so time
estimation is very difficult.
5. Complexity: The Spiral Model can be complex, as it involves multiple iterations of the software development
process.
6. Time-Consuming: The Spiral Model can be time-consuming, as it requires multiple evaluations and reviews.
7. Resource Intensive: The Spiral Model can be resource-intensive, as it requires a significant investment in
planning, risk analysis, and evaluations.

E> Rational Unified Process model


The Rational Unified Process (RUP) is a software engineering approach for delegating activities and responsibilities
inside a software development organization. Its primary purpose is to enable the creation of high-quality software
that satisfies the end user’s requirements within a predictable budget and timeframe.
RUP is a systematic way to allocate tasks and responsibilities within a development team that offers best practices
and guidelines for effective software development. By doing so, it is able to produce high-quality software on time
and within budget while satisfying the demands of its customers.
The phases of the RUP life cycle are:
1. Inception –
● Communication and planning are the main ones.
● Identifies the scope of the project using a use-case model allowing managers to estimate costs and time
required.
● Customers’ requirements are identified and then it becomes easy to make a plan for the project.
● The project plan, Project goal, risks, use-case model, and Project description, are made.
● The project is checked against the milestone criteria and if it couldn’t pass these criteria then the project
can be either canceled or redesigned.
2. Elaboration –
● Planning and modeling are the main ones.
● A detailed evaluation and development plan is carried out and diminishes the risks.
● Revise or redefine the use-case model (approx. 80%), business case, and risks.
● Again, checked against milestone criteria and if it couldn’t pass these criteria then again project can be
canceled or redesigned.
● Executable architecture baseline.
3. Construction –
● The project is developed and completed.
● System or source code is created and then testing is done.
● Coding takes place.
4. Transition –
● The final project is released to the public.
● Transit the project from development into production.
● Update project documentation.
● Beta testing is conducted.
● Defects are removed from the project based on feedback from the public.
5. Production –
● The final phase of the model.
● The project is maintained and updated accordingly.
Advantages:
1. It provides good documentation and completes the process in itself.
2. It provides risk-management support.
3. It reuses the components, and hence total time duration is less.
4. Good online support is available in the form of tutorials and training.

Disadvantages:
1. A team of expert professionals is required, as the process is complex.
2. The process is complex and not properly organized.
3. More dependency on risk management.
4. Hard to integrate again and again.

F> Time Boxing model


In the time boxing model, development is done iteratively, as in the iterative enhancement model. However, in the
time boxing model each iteration is done in a timebox of fixed duration, and the functionality to be developed is
adjusted to fit the duration of the timebox. Moreover, each timebox is divided into a sequence of fixed stages,
where each stage performs a clearly defined task (e.g., analysis, implementation, and deployment) that can be done
independently. The model also requires the duration of each stage to be approximately equal, so that the pipelining
concept can be employed to reduce development time and speed up product releases.
There is a dedicated team for each stage so that the work can be pipelined. Stages should therefore be chosen so
that each stage performs some logical unit of work whose output becomes the input for the next stage.
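Because the stages have approximately equal duration, releases can be pipelined: the first release must pass through every stage end-to-end, but each subsequent release completes one stage-duration later. A small sketch of this arithmetic, with a hypothetical stage count and duration:

```python
def release_times(num_stages, stage_weeks, num_releases):
    """With pipelined timeboxes of equal stage duration, the k-th
    release completes after (num_stages + k - 1) * stage_weeks weeks."""
    return [(num_stages + k - 1) * stage_weeks
            for k in range(1, num_releases + 1)]

# Hypothetical example: 3 stages (analysis, implementation, deployment)
# of 3 weeks each, over 4 releases:
print(release_times(3, 3, 4))  # [9, 12, 15, 18]
```

The first release takes the full 9 weeks, but thereafter a release ships every 3 weeks — the reduction in delivery time that the pipelining concept provides.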
Advantages
1. Speeds up the development process and shortens the delivery time
2. Well suited to develop projects with a number of features in short time period.
Disadvantages
1. Project management becomes more complex.
2. Not suited to projects in which the entire development work cannot be divided into multiple iterations of
approximately equal duration.

G> Agile process model


The meaning of Agile is swift or versatile. The "Agile process model" refers to a software development approach
based on iterative development. Agile methods break tasks into smaller iterations, or parts, and do not directly
involve long-term planning. The project scope and requirements are laid down at the beginning of the development
process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly
defined in advance.
Each iteration is treated as a short time "frame" in the Agile process model, typically lasting from one to
four weeks. Dividing the entire project into smaller parts helps to minimize project risk and to reduce the
overall project delivery time. Each iteration involves a team working through a full software development life
cycle, including planning, requirements analysis, design, coding, and testing, before a working product is
demonstrated to the client.
Phases of Agile Model:
1. Requirements gathering: In this phase, you must define the requirements. You should explain business
opportunities and plan the time and effort needed to build the project. Based on this information, you can evaluate
technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to define requirements.
You can use the user flow diagram or the high-level UML diagram to show the work of new features and show how
it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins. Designers and developers
start working on their project, which aims to deploy a working product. The product will undergo various stages of
improvement, so it includes simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for the bug.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives feedback about the
product and works through the feedback.
Advantages (Pros) of Agile Method:
1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design that fulfils the business requirements.
4. Anytime changes are acceptable.
5. It reduces total development time.
Disadvantages (Cons) of Agile Model:
1. Due to the shortage of formal documentation, confusion is created, and crucial decisions taken throughout
various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is complete and the developers are allotted to
another project, maintenance of the finished project can become a difficulty.
Unit 2 - Software Requirement Analysis & Specification
2.1 Need of SRS:-
To form a good SRS, the following points should be considered while structuring the document.
These are as follows:
● (i) Purpose of this Document – First, the main reason why this document is necessary and the purpose of the
document are explained and described.
● (ii) Scope of this document – Here, the overall working and main objective of the document, and the value it will
provide to the customer, are described and explained. It also includes a description of the development cost and
the time required.
● (iii) Overview – Here, a description of the product is given. It is simply a summary or overall review of the product.

The Software Requirement Specification (SRS) format, as the name suggests, is a complete specification and description of
the requirements of the software that must be fulfilled for the successful development of the software system. These
requirements can be functional as well as non-functional, depending upon the type of requirement. Interaction
between the different customers and the contractor takes place because it is necessary to fully understand the needs of the
customers. Based on the information gathered through this interaction, the SRS is developed; it describes the requirements
of the software, which may include the changes and modifications needed to increase the quality of the product and
to satisfy the customer's demands.

Uses of SRS document:


1. The development team requires it for developing the product according to the need.
2. Test plans are generated by the testing group based on the described external behavior.
3. Maintenance and support staff need it to understand what the software product is supposed to do.
4. Project managers base their plans and estimates of schedule, effort, and resources on it.
5. Customers rely on it to know what product they can expect.
6. It serves as a contract between the developer and the customer.
7. It serves documentation purposes.

2.2 Characteristics of Good SRS


Following are the characteristics of a good SRS document:

1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is said to be correct if it
covers all the requirements that are actually expected from the system.
2. Completeness:
Completeness of the SRS indicates every sense of completion, including the numbering of all the pages, resolving
the to-be-determined parts to as much extent as possible, as well as covering all the functional and non-functional
requirements properly.

3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set of requirements.
Examples of conflict include differences in terminologies used at separate places, logical conflicts like time
period of report generation, etc.

4. Unambiguousness:
An SRS is said to be unambiguous if all the requirements stated have only one interpretation. Some of the ways to
prevent ambiguity include the use of modeling techniques like ER diagrams, proper reviews and buddy
checks, etc.

5. Ranking for importance and stability:


There should be a criterion to classify the requirements as less or more important or, more specifically, as desirable
or essential. An identifier mark can be used with every requirement to indicate its rank or stability.

6. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily accepting changes to the system
to some extent. Modifications should be properly indexed and cross-referenced.

7. Verifiability:
An SRS is verifiable if there exists a specific technique to quantifiably measure the extent to which every
requirement is met by the system. For example, a requirement stating that the system must be user-friendly is
not verifiable, and listing such requirements should be avoided.

8. Traceability:
One should be able to trace a requirement to a design component and then to a code segment in the program.
Similarly, one should be able to trace a requirement to the corresponding test cases.

9. Design Independence:
There should be an option to choose from multiple design alternatives for the final system. More specifically,
the SRS should not include any implementation details.

10. Testability:
An SRS should be written in such a way that it is easy to generate test cases and test plans from the document.

11. Understandable by the customer:


An end user may be an expert in his/her specific domain but might not be an expert in computer science. Hence,
the use of formal notations and symbols should be avoided as much as possible. The language should
be kept easy and clear.

12. Right level of abstraction:


If the SRS is written for the requirements phase, the details should be explained explicitly, whereas for a
feasibility study, fewer details can be used. Hence, the level of abstraction varies according to the purpose of the
SRS.

Advantages of having a good Software Requirements Specification (SRS) document include:


1. Improved communication and understanding between stakeholders and developers, as the SRS clearly defines
the requirements for the software system
2. Increased efficiency in the software development process, as a well-written SRS can help to reduce the need for
rework and change requests
3. Improved quality of the final software system, as a well-written SRS helps to ensure that all requirements are
met
4. Increased stakeholder satisfaction, as a well-written SRS helps to ensure that the software system meets the
needs of the business and its users
5. Improved traceability and verifiability, as a well-written SRS can be traced to other documents and artifacts and
its requirements can be tested and validated.
6. Clarity and Completeness: A good SRS document provides clear and complete specifications for the software
project, which ensures that all stakeholders have a common understanding of the requirements and objectives.

7. Traceability: A good SRS document includes detailed requirements and specifications, which enables
traceability throughout the software development process.

8. Testability: A good SRS document can serve as a basis for test cases and verification, which can ensure that the
software meets the requirements and specifications.

9. Improved Communication: A good SRS document can serve as a communication tool between different
stakeholders, such as project managers, developers, testers, and customers.

10. Reduced Rework: A good SRS document can help to identify and resolve issues early in the development
process, which can reduce the need for rework and improve the overall quality of the software.

Disadvantages of having a poorly written SRS include:


1. Confusion and misunderstandings between stakeholders and developers, as the requirements are not clearly
defined or are vague
2. Increased rework and change requests, as the SRS does not accurately capture the requirements of the software
system
3. Reduced quality of the final software system, as a poorly written SRS can result in requirements being missed
or not fully met
4. Reduced stakeholder satisfaction, as a poorly written SRS does not accurately capture the needs of the business
and its users
5. Reduced traceability and verifiability, as a poorly written SRS can't be traced to other documents and artifacts
and its requirements can't be tested and validated.
Note that, regardless of the quality of the SRS, gathering and validating requirements is an iterative process
that should be continuously reviewed and updated throughout the software development process.
6. Time-consuming: Creating a good SRS document can be a time-consuming process, especially for complex
software projects, which can delay the development process.

7. Changes and Updates: Changes or updates to the SRS document can cause delays in the software development
process and can be difficult to manage.

8. Lack of Flexibility: A detailed SRS document can restrict the flexibility of the development process, which can
be challenging for projects that require agile development methodologies.

9. Limited Stakeholder Involvement: The process of creating an SRS document can limit the involvement of
stakeholders in the software development process, which can lead to a lack of collaboration and input from
different perspectives.

10. Ambiguity: A poorly written SRS document can lead to ambiguity and misunderstandings, which can cause
issues throughout the software development process.

2.3 Requirement Process


The requirement process is the sequence of activities that need to be performed in the requirements phase and
that culminate in producing a high-quality document containing the SRS. The requirements process typically
consists of three basic tasks: problem or requirement analysis, requirements specification, and requirements
validation.

A software requirements specification (SRS) is a document that captures a complete description of how the system
is expected to perform. It is usually signed off at the end of the requirements engineering phase.
Qualities of SRS:
● Correct
● Unambiguous
● Complete
● Consistent
● Ranked for importance and/or stability
● Verifiable
● Modifiable
● Traceable

2.5 Functional Specification with Use Cases


Functional Specification:

Functional requirements often form the core of a requirements document. The traditional approach for specifying
functionality is to specify each function that the system should provide. Use cases specify the functionality of a
system by specifying the behavior of the system, captured as interactions of the users with the system. Use cases can
be used to describe the business processes of the larger business or organization that deploys the software, or it
could just describe the behavior of the software system. We will focus on describing the behavior of software
systems that are to be built.

Though use cases are primarily for specifying behavior, they can also be used effectively for analysis. Later when
we discuss how to develop use cases, we will discuss how they can help in eliciting requirements also.

Use Case:

In software and systems engineering, a use case is a list of actions or event steps, typically defining the interactions
between a role (known in the Unified Modeling Language as an actor) and a system, to achieve a goal. The actor
can be a human, an external system, or time. In systems engineering, use cases are used at a higher level than within
software engineering, often representing missions or stakeholder goals. Another way to look at it is a use case
describes a way in which a real-world actor interacts with the system. In a system use case you include high-level
implementation decisions. System use cases can be written in both an informal manner and a formal manner.

How to plan a use case?


The following example illustrates how to plan a use case:
Use Case: What is the main objective of this use case, e.g., adding a software component, adding certain
functionality, etc.
Primary Actor: Who will have access to this use case. In the above examples, administrators will have
access.
Scope: Scope of the use case.
Level: At what level the use case will be implemented.
Flow: The flow of functionality that needs to be there; more precisely, the workflow of the use case.
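The fields above can be collected into a lightweight, machine-checkable record. The sketch below is only an illustration: the "Withdraw Cash" use case, its actor, and its flow steps are hypothetical examples, not taken from these notes.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """Minimal use-case record mirroring the planning fields above."""
    name: str            # main objective of the use case
    primary_actor: str   # who has access to the use case
    scope: str           # scope of the use case
    level: str           # level at which the use case is implemented
    flow: list = field(default_factory=list)  # ordered workflow steps

# Hypothetical example: an ATM "Withdraw Cash" use case.
withdraw = UseCase(
    name="Withdraw Cash",
    primary_actor="Account Holder",
    scope="ATM system",
    level="User goal",
    flow=[
        "Customer inserts card and enters PIN",
        "System authenticates the customer",
        "Customer selects the amount",
        "System dispenses cash and prints a receipt",
    ],
)
print(withdraw.name, "-", len(withdraw.flow), "steps")
```

Keeping use cases as structured data like this makes them easy to review with stakeholders or to generate documentation from.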
2.6 Other Approaches for Analysis:

A> Data Flow Diagram


DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a process is represented
by a DFD. It also gives insight into the inputs and outputs of each entity and the process itself. A DFD has no
control flow; no loops or decision rules are present. Specific operations, depending on the type of
data, can be explained by a flowchart.
It is a graphical tool, useful for communicating with users, managers, and other personnel. It is useful for
analyzing an existing as well as a proposed system.
A Data Flow Diagram can be represented in several ways. The DFD belongs to the structured-analysis modeling
tools. Data Flow Diagrams are very popular because they help us to visualize the major steps and data
involved in software-system processes.
Symbols Used in DFD
● Square Box: A square box defines a source or destination of the system. It is also called an entity and is represented
by a rectangle.
● Arrow or Line: An arrow identifies the data flow, i.e., it gives information about the data that is in motion.
● Circle or Bubble: It represents a process that gives us information. It is also called a processing box.
● Open Rectangle: An open rectangle is a data store. In this, data is stored either temporarily or permanently.

Advantages of DFD
● It helps us to understand the functioning and the limits of a system.
● It is a graphical representation which is very easy to understand, as it helps visualize the contents.
● A Data Flow Diagram represents a detailed and well-explained diagram of the system components.
● It is used as part of the system documentation file.
● Data Flow Diagrams can be understood by both technical and nontechnical people because they are very easy to
understand.
Disadvantages of DFD
● At times a DFD can confuse the programmers regarding the system.
● A Data Flow Diagram takes a long time to be generated, and many times due to this reason analysts are denied
permission to work on it.

Levels in Data Flow Diagrams (DFD)


0-level DFD:
It is also known as a context diagram. It’s designed to be an abstraction view, showing the system as a single process
with its relationship to external entities. It represents the entire system as a single bubble with input and output data
indicated by incoming/outgoing arrows.

1-level DFD:
In 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. In this level, we highlight
the main functions of the system and breakdown the high-level process of 0-level DFD into subprocesses.

2-level DFD:
2-level DFD goes one step deeper into parts of 1-level DFD. It can be used to plan or record the specific/necessary
detail about the system’s functioning.
B> Entity Relationship Diagram:
ER Diagram:
● The Entity Relationship model is a model for identifying the entities to be represented in the database and
representing how those entities are related.
● The ER data model specifies an enterprise schema that represents the overall logical structure of a database
graphically.
● E-R diagrams are used to model real-world objects like a person, a car, a company and the relation between
these real-world objects.
Features of ER model
i) E-R diagrams are used to represent the E-R model in a database, which makes them easy to convert into
relations (tables).
ii) E-R diagrams serve the purpose of real-world modeling of objects, which makes them highly useful.
iii) E-R diagrams require no technical knowledge and no hardware support.
iv) These diagrams are very easy to understand and easy to create, even for a naive user.
v) It gives a standard solution for visualizing the data logically.

Entity, Entity Type, Entity Set –


An Entity may be an object with a physical existence – a particular person, car, house, or employee – or it may be an
object with a conceptual existence – a company, a job, or a university course.
An Entity is an object of an Entity Type, and a set of all entities is called an entity set. E.g., E1 is an entity having
Entity Type Student, and the set of all students is called an Entity Set. In an ER diagram, an Entity Type is represented by a rectangle.
Attribute(s):
Attributes are the properties that define the entity type. For example, Roll_No, Name, DOB, Age, Address,
Mobile_No are the attributes that define entity type Student. In ER diagram, the attribute is represented by an oval.

1. Key Attribute –
The attribute which uniquely identifies each entity in the entity set is called a key attribute. For example, Roll_No
will be unique for each student. In an ER diagram, a key attribute is represented by an oval with its text underlined.

2. Composite Attribute –
An attribute composed of many other attributes is called a composite attribute. For example, the Address attribute of
the Student entity type consists of Street, City, State, and Country. In an ER diagram, a composite attribute is represented by
an oval comprising other ovals.
3. Multivalued Attribute –
An attribute consisting of more than one value for a given entity, for example, Phone_No (there can be more than one for a
given student). In an ER diagram, a multivalued attribute is represented by a double oval.

4. Derived Attribute –
An attribute that can be derived from other attributes of the entity type is known as a derived attribute, e.g., Age
(can be derived from DOB). In an ER diagram, a derived attribute is represented by a dashed oval.

Cardinality:
The number of times an entity of an entity set participates in a relationship set is known as cardinality.
Cardinality can be of different types:
1. One-to-one – When each entity in each entity set can take part only once in the relationship, the cardinality is
one-to-one. Let us assume that a male can marry one female and a female can marry one male, so the relationship
will be one-to-one.
The total number of tables that can be used in this case is 2.
2. Many-to-one – When entities in one entity set can take part only once in the relationship set and entities in the
other entity set can take part more than once in the relationship set, the cardinality is many-to-one. Let us assume
that a student can take only one course but one course can be taken by many students, so the cardinality will be n to
1. It means that for one course there can be n students, but for one student there will be only one course.
The total number of tables that can be used in this case is 2.
3. Many-to-many – When entities in all entity sets can take part more than once in the relationship, the cardinality is
many-to-many. Let us assume that a student can take more than one course and one course can be taken by many
students, so the relationship will be many-to-many.
The total number of tables that can be used in this case is 3.
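The cardinalities above determine how many tables an implementation needs; many-to-many is the case that forces a third, linking table. A minimal Python sketch of the Student-takes-Course example (the names and sample data are invented for illustration):

```python
# Hypothetical in-memory "tables" for a many-to-many Student/Course relationship.
students = {1: "Asha", 2: "Ravi"}                            # table 1: Student
courses = {"SE101": "Software Engg", "DB201": "Databases"}   # table 2: Course
# Table 3: the linking table required because the cardinality is many-to-many.
enrollments = [(1, "SE101"), (1, "DB201"), (2, "SE101")]

def courses_of(student_id):
    """One student can take many courses..."""
    return [c for s, c in enrollments if s == student_id]

def students_in(course_id):
    """...and one course can be taken by many students."""
    return [s for s, c in enrollments if c == course_id]

print(courses_of(1))         # ['SE101', 'DB201']
print(students_in("SE101"))  # [1, 2]
```

The same data could not be stored in two tables alone without duplicating student or course rows, which is exactly why the many-to-many case needs the third table.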
Unit 3 - Software Architecture and Design
3.1 Introduction to Software Design
Software Design is the process to transform the user requirements into some suitable form, which helps the
programmer in software coding and implementation. During the software design phase, the design document is
produced, based on the customer requirements as documented in the SRS document. Hence the aim of this phase
is to transform the SRS document into the design document.

The following items are designed and documented during this phase:
 Different modules required.
 Control relationships among modules.
 Interfaces among different modules.
 Data structures of the different modules.
 Algorithms required to implement the individual modules.

Objectives of Software Design:

1. Correctness:
A good design should be correct i.e. it should correctly implement all the functionalities of the system.
2. Efficiency:
A good software design should address the resources, time, and cost optimization issues.
3. Understandability:
A good design should be easily understandable, for which it should be modular and all the modules are
arranged in layers.
4. Completeness:
The design should have all the components like data structures, modules, and external interfaces, etc.
5. Maintainability:
A good software design should be easily amenable to change whenever a change request is made from the
customer side
Different levels of Software Design:
There are three different levels of software design. They are:

1. Architectural Design:
The architecture of a system can be viewed as the overall structure of the system & the way in which
structure provides conceptual integrity of the system. The architectural design identifies the software as a
system with many components interacting with each other. At this level, the designers get the idea of the
proposed solution domain.

2. Preliminary or high-level design:


Here the problem is decomposed into a set of modules, the control relationships among the various modules
are identified, and the interfaces among the various modules are also identified. The outcome of this stage is called
the program architecture. Design representation techniques used in this stage are the structure chart and UML.

3. Detailed design:
Once the high-level design is complete, a detailed design is undertaken. In detailed design, each module is
examined carefully to design its data structures and algorithms. The outcome of this stage is documented in the form
of a module specification document.

3.2 Role of Software Architecture


A Software Architect provides a solution that the technical team can create and design for the entire application. A
software architect should have expertise in the following areas −
Design Expertise
 Expert in software design, including diverse methods and approaches such as object-oriented design, event-
driven design, etc.
 Lead the development team and coordinate the development efforts for the integrity of the design.
 Should be able to review design proposals and trade off among them.
Domain Expertise
 Expert on the system being developed and plan for software evolution.
 Assist in the requirement investigation process, assuring completeness and consistency.
 Coordinate the definition of domain model for the system being developed.
Technology Expertise
 Expert on the available technologies that help in the implementation of the system.
 Coordinate the selection of programming language, framework, platforms, databases, etc.

Methodological Expertise
 Expert on software development methodologies that may be adopted during SDLC (Software Development
Life Cycle).
 Choose the appropriate approaches for development that helps the entire team.
Hidden Role of Software Architect
 Facilitates the technical work among team members and reinforces the trust relationships in the team.
 Information specialist who shares knowledge and has vast experience.
 Protect the team members from external forces that would distract them and bring less value to the project.
Deliverables of the Architect
 A clear, complete, consistent, and achievable set of functional goals
 A functional description of the system, with at least two layers of decomposition
 A concept for the system
 A design in the form of the system, with at least two layers of decomposition
 A notion of the timing, operator attributes, and the implementation and operation plans
 A document or process which ensures functional decomposition is followed, and the form of interfaces is
controlled
Quality Attributes
Quality is a measure of excellence or the state of being free from deficiencies or defects. Quality attributes are the
system properties that are separate from the functionality of the system.
Implementing quality attributes makes it easier to differentiate a good system from a bad one. Attributes are overall
factors that affect runtime behavior, system design, and user experience.
They can be classified as −
Static Quality Attributes
Reflect the structure of a system and organization, directly related to architecture, design, and source code. They are
invisible to end-user, but affect the development and maintenance cost, e.g.: modularity, testability, maintainability,
etc.
Dynamic Quality Attributes
Reflect the behavior of the system during its execution. They are directly related to system’s architecture, design,
source code, configuration, deployment parameters, environment, and platform.
They are visible to the end-user and exist at runtime, e.g. throughput, robustness, scalability, etc.

3.4 Component & Connector View

Components:-Components are generally units of computation or data stores in the system. A component has a
name, which is generally chosen to represent the role of the component or the function it performs.

A component is of a component type, where the type represents a generic component, defining the general
computation and the interfaces a component of that type must have. Note that though a component has a type, the
C&C architecture view shows the actual components of the system, not the component types.
In a diagram representing a C&C architecture view of a system, it is highly desirable to have a different
representation for different component types, so the different types can be identified visually. In a box-and-line
diagram, often all components are represented as rectangular boxes. Such an approach requires that the types of the
components be described separately.

Connectors: - The different components of a system are likely to interact while the system is in operation to provide
the services expected of the system. After all, components exist to provide parts of the services and features of the
system, and these must be combined to deliver the overall system functionality. For composing a system from its
components, information about the interaction between components is necessary.
Interaction between components may be through a simple means supported by the underlying process execution
infrastructure of the operating system. For example, a component may interact with another using the procedure call
mechanism (a connector), which is provided by the runtime environment for the programming language.

3.5 Design Concepts: Design Principles:


1. Should not suffer from "Tunnel Vision" –
While designing, the process should not suffer from "tunnel vision," which means it should not focus only
on completing or achieving the aim but should also consider other effects.

2. Traceable to analysis model –


The design process should be traceable to the analysis model, which means it should satisfy all the
requirements that the software needs in order to become a high-quality product.

3. Should not “Reinvent The Wheel” –


The design process should not reinvent the wheel, which means it should not waste time or effort creating
things that already exist. Reusing existing components speeds up the overall development.

4. Minimize Intellectual distance –


The design process should reduce the gap between real-world problems and software solutions for that
problem meaning it should simply minimize intellectual distance.

5. Exhibit uniformity and integration –


The design should display uniformity which means it should be uniform throughout the process without any
change. Integration means it should mix or combine all parts of software i.e. subsystems into one system.

6. Accommodate change –
The software should be designed in such a way that it accommodates the change implying that the software
should adjust to the change that is required to be done as per the user’s need.

7. Degrade gently –
The software should be designed in such a way that it degrades gracefully, which means it should continue to
provide as much of its service as possible even if an error occurs during execution.

8. Assessed for quality –
The design should be assessed or evaluated for quality, meaning that during the evaluation the quality of
the design needs to be checked and focused on.
9. Review to discover errors –
The design should be reviewed which means that the overall evaluation should be done to check if there is
any error present or if it can be minimized.

10. Design is not coding and coding is not design –


Design means describing the logic of the program to solve a problem, while coding is the use of a programming
language to implement that design.

3.6 Difference between conceptual design and technical design :

1. Conceptual design is an initial/starting phase in the process of planning, during which the broad outlines of
the function and form of something are laid out. Technical design is a phase in which the development team
writes the code and describes the minute detail of either the whole design or some parts of it.

2. Conceptual design is written in the customer's language and designed according to the client's requirements.
Technical design describes anything that converts the requirements into a solution to the customer's problem.

3. Conceptual design describes what will happen to the data in the system. Technical design describes the
functions or methods of the system.

4. Conceptual design shows the conceptual model, i.e., what the system should look like. Technical design shows
the data flow and the structure of the data.

5. Conceptual design also includes the processes and sub-processes besides the strategies. Technical design
includes the functioning and working of the conceptual design.

6. Conceptual design starts when a system requirement comes, and the phase looks for a potential solution.
Technical design starts after setting the system requirements.

7. At the end of the conceptual design phase, the solutions to the problems are sent for review. At the end of the
technical design phase, and after analyzing the technical design, the specification is initiated.

3.7 Cohesion:
Cohesion is the indication of the relationships within a module; it is an intra-module concept. Cohesion has
many types, but usually high cohesion is good for software.
Coupling:
Coupling is the indication of the relationships between modules; it is an inter-module concept. Coupling
also has many types, but usually low coupling is good for software.
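A short sketch may make the two terms concrete (the temperature-log example is invented for illustration). Every method of the class below serves a single job, which is high cohesion, while the reporting function depends only on one narrow method rather than on the class internals, which is low coupling:

```python
# High cohesion: everything in this class concerns one job, temperature stats.
class TemperatureLog:
    def __init__(self):
        self._readings = []  # internal detail; callers never touch it

    def record(self, celsius):
        self._readings.append(celsius)

    def average(self):
        return sum(self._readings) / len(self._readings)

# Low coupling: the reporter depends only on the narrow average() method,
# not on how TemperatureLog stores its data internally.
def report(log):
    return f"avg={log.average():.1f}C"

log = TemperatureLog()
log.record(20.0)
log.record(22.0)
print(report(log))  # avg=21.0C
```

Because report only relies on average(), the list inside TemperatureLog could be swapped for a database or a running total without changing any calling code.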

3.8 Open/Closed Principle(OCP) –


The Open/Closed Principle (OCP) is a fundamental principle in software engineering that aims to improve the
design and maintainability of software systems. It states that software entities (such as classes, modules, and
functions) should be open for extension but closed for modification.

In other words, once a software entity (such as a class) is defined and implemented, it should be possible to extend
its behavior without modifying its source code. This can be achieved by using inheritance, interfaces, or other
techniques that allow new functionality to be added to a software entity without changing its existing
implementation.
The OCP promotes a modular and flexible design approach, which enables software systems to evolve over time
without breaking existing functionality. By designing software entities that are open for extension but closed for
modification, developers can reduce the risk of introducing bugs or unintended side effects when making changes to
existing code. Additionally, this principle can help to improve the maintainability and scalability of software
systems, as it encourages the use of loosely coupled components that can be easily modified or replaced without
affecting the rest of the system.
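A minimal sketch of the OCP in Python (the shape classes are a common textbook illustration, not drawn from these notes): total_area is closed for modification, yet the design stays open for extension because a brand-new shape can be plugged in without touching it.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

def total_area(shapes):
    # Closed for modification: this function never needs to change...
    return sum(s.area() for s in shapes)

# ...yet open for extension: a new shape plugs in without touching it.
class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

print(total_area([Rectangle(2, 3), Circle(1)]))  # 9.14159
```

The inheritance-based abstraction here is one of the techniques the text mentions; interfaces or plain duck typing would achieve the same effect.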

3.10 Function Oriented Design :


Function oriented design is the result of focusing attention on the functions of the program. It is based on
stepwise refinement, which in turn is based on iterative procedural decomposition. Stepwise
refinement is a top-down strategy where a program is refined as a hierarchy of increasing levels of detail.
We start with a high-level description of what the program does. Then, in each step, we take one part of our high-
level description and refine it. Refinement is actually a process of elaboration. The process should proceed from a
highly conceptual model to lower-level details. The refinement of each module is done until we reach the
statement level of our programming language.
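Stepwise refinement can be sketched as follows, assuming a hypothetical score-report program: the top-level function is the high-level description of what the program does, and each of its steps is then refined into a smaller function of its own.

```python
# Top level: a high-level description of what the program does.
def produce_report(scores):
    cleaned = remove_invalid(scores)  # step 1, refined below
    stats = summarize(cleaned)        # step 2, refined below
    return format_report(stats)       # step 3, refined below

# Each step is refined until we reach plain statements of the language.
def remove_invalid(scores):
    return [s for s in scores if 0 <= s <= 100]

def summarize(scores):
    return {"count": len(scores), "mean": sum(scores) / len(scores)}

def format_report(stats):
    return f"{stats['count']} scores, mean {stats['mean']:.1f}"

print(produce_report([90, 105, 70, -3, 80]))  # 3 scores, mean 80.0
```

Each function could itself be refined further, continuing the top-down hierarchy the text describes.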
3.11 Object Oriented Design :
Object oriented design is the result of focusing attention not on the function performed by the program, but
instead on the data that are to be manipulated by the program. Thus, it is orthogonal to function-oriented design.
Object-oriented design begins with an examination of real-world "things". These things are characterized
individually in terms of their attributes and behavior.

Objects are independent entities that may readily be changed because all state and representation information is
held within the object itself. Object may be distributed and may execute sequentially or in parallel.

COMPARISON FACTOR | FUNCTION ORIENTED DESIGN | OBJECT ORIENTED DESIGN
Abstraction | The basic abstractions, which are given to the user, are real world functions. | The basic abstractions are not the real world functions but the data abstractions where the real world entities are represented.
Function | Functions are grouped together by which a higher level function is obtained. | Functions are grouped together on the basis of the data they operate on, since classes are associated with their methods.
Execution | Carried out using structured analysis and structured design, i.e., data flow diagrams. | Carried out using UML.
State information | The state information is often represented in a centralized shared memory. | The state information is not represented in a centralized memory but is implemented or distributed among the objects of the system.
Approach | It is a top-down approach. | It is a bottom-up approach.
Begins basis | Begins by considering the use case diagrams and the scenarios. | Begins by identifying objects and classes.
Decomposition | We decompose at the function/procedure level. | We decompose at the class level.
Use | This approach is mainly used for computation-sensitive applications. | This approach is mainly used for evolving systems which mimic a business or business case.

3.12 High Level Design :


High Level Design, in short HLD, is the general system design, i.e., it refers to the overall system design. It
describes the overall description/architecture of the application, including the system architecture, database
design, a brief description of the systems, services, and platforms, and the relationships among modules. It is also
known as macro level/system design. It is created by the solution architect, and it converts the business/client
requirements into a High Level Solution. It is created first, i.e., before the Low Level Design.

3.13 Detailed Design (Low Level Design):


Low Level Design, in short LLD, is like detailing the HLD, i.e., it refers to the component-level design process. It
gives a detailed description of each and every module, including the actual logic for every system component, and
it goes deep into each module's specification. It is also known as micro level/detailed design. It is created by
designers and developers, and it converts the High Level Solution into a detailed solution. It is created second,
i.e., after the High Level Design.

S.No. | HIGH LEVEL DESIGN | LOW LEVEL DESIGN
01. | High Level Design is the general system design, i.e., it refers to the overall system design. | Low Level Design is like detailing the HLD, i.e., it refers to the component-level design process.
02. | High Level Design is called HLD for short. | Low Level Design is called LLD for short.
03. | It is also known as macro level/system design. | It is also known as micro level/detailed design.
04. | It describes the overall description/architecture of the application. | It gives a detailed description of each and every module.
05. | High Level Design expresses the brief functionality of each module. | Low Level Design expresses the detailed functional logic of the module.
06. | It is created by the solution architect. | It is created by designers and developers.
07. | The participants are the design team, review team, and client team. | The participants are the design team, operation teams, and implementers.
08. | It is created first, i.e., before the Low Level Design. | It is created second, i.e., after the High Level Design.
09. | The input criterion is the Software Requirement Specification (SRS). | The input criterion is the reviewed High Level Design (HLD).
10. | It converts the business/client requirements into a High Level Solution. | It converts the High Level Solution into a detailed solution.
11. | The output criteria are database design, functional design, and review record. | The output criteria are program specification and unit test plan.

3.14 Verification

As mentioned, verification is the process of determining if the software in question is designed and
developed according to specified requirements. Specifications act as inputs for the software development process.
The code for any software application is written based on the specifications document.

Verification is done to check if the software being developed has adhered to these specifications at every stage of the
development life cycle. The verification ensures that the code logic is in line with specifications.

Depending on the complexity and scope of the software application, the software testing team uses different
methods of verification, including inspection, code reviews, technical reviews, and walkthroughs. Software testing
teams may also use mathematical models and calculations to make predictive statements about the software and
verify its code logic.

The main advantages of verification are:

1. It acts as a quality gateway at every stage of the software development process.

2. It enables software teams to develop products that meet design specifications and customer needs.

3. It saves time by detecting the defects at the early stage of software development.

4. It reduces or eliminates defects that may arise at the later stage of the software development process.

3.15 Metrics:
A software metric is a measure of software characteristics which are measurable or countable. Software metrics are
valuable for many reasons, including measuring software performance, planning work items, measuring
productivity, and many other uses.
Within the software development process there are many metrics, and they are all interconnected. Software metrics
map to the four functions of management: planning, organization, control, and improvement.

Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software product. The two important
software characteristics are:
1. Size and complexity of software.
2. Quality and reliability of software.
These metrics can be computed for different stages of SDLC.
2. Process Metrics: These are the measures of various characteristics of the software development process. For
example, the efficiency of fault detection. They are used to measure the characteristics of methods, techniques, and
tools that are used for developing software.
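As a minimal sketch of a product (size) metric, the hypothetical function below (not from the source) counts code, comment, and blank lines in a Python source string:

```python
def size_metrics(source: str) -> dict:
    """Count physical LOC, comment lines, and blank lines."""
    loc = comments = blank = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("#"):
            comments += 1
        else:
            loc += 1
    return {"loc": loc, "comment": comments, "blank": blank}

sample = "# setup\nx = 1\n\ny = x + 1\n"
print(size_metrics(sample))  # {'loc': 2, 'comment': 1, 'blank': 1}
```

Real size metrics are usually more nuanced (logical vs. physical lines, multi-line statements), but the idea is the same: a countable characteristic of the product.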
Advantage of Software Metrics
 Comparative study of the various design methodologies of software systems.
 For analysis, comparison, and critical study of different programming languages with respect to their
characteristics.
 In comparing and evaluating the capabilities and productivity of people involved in software development.
 In the preparation of software quality specifications.
 In the verification of compliance of software systems requirements and specifications.
 In making inference about the effort to be put in the design and development of the software systems.
 In getting an idea about the complexity of the code.
 In deciding whether further division of a complex module should be done or not.
 In guiding resource managers toward proper resource utilization.
 In comparison and making design tradeoffs between software development and maintenance cost.
 In providing feedback to software managers about the progress and quality during various phases of the
software development life cycle.
 In the allocation of testing resources for testing the code.
Disadvantage of Software Metrics
 The application of software metrics is not always easy, and in some cases, it is difficult and costly.
 The verification and justification of software metrics are based on historical/empirical data whose validity
is difficult to verify.
 These are useful for managing software products but not for evaluating the performance of the technical
staff.
 The definition and derivation of software metrics are usually based on assumptions which are not
standardized and may depend upon the tools available and the working environment.
 Most of the predictive models rely on estimates of certain variables which are often not known precisely.
Unit 4 - Testing
4.1 Testing Fundamentals
Testing fundamentals refer to the core principles and concepts that guide the process of software testing. They
provide a framework for ensuring the quality and reliability of software applications. Here are the key testing
fundamentals:
1. Test Objectives: Clearly define the purpose and goals of testing. This includes identifying what needs to
be tested, the expected outcomes, and the criteria for determining success.
2. Test Planning: Develop a comprehensive test plan that outlines the testing approach, test scope, test
deliverables, and resource requirements. It involves defining test objectives, test strategies, and test
schedules.
3. Test Design: Create test cases that effectively cover different scenarios and aspects of the software. Test
design involves identifying test conditions, selecting appropriate test data, and defining the expected
results.
4. Test Execution: Run the test cases and observe the actual results. Test execution includes setting up test
environments, executing test scripts, and recording the outcomes. It is important to accurately capture any
defects or issues encountered during testing.
5. Defect Management: Establish a process for capturing, tracking, and resolving defects identified during
testing. This involves logging defects, prioritizing them based on severity, assigning them to appropriate
stakeholders, and verifying their resolution.
6. Test Reporting: Generate test reports to communicate the progress, findings, and quality of the software
under test. Test reports provide stakeholders with a clear overview of the testing activities, including test
coverage, defect metrics, and overall test results.
7. Test Automation: Utilize automation tools and frameworks to enhance the efficiency and effectiveness of
testing. Test automation involves writing and executing automated test scripts, generating test data, and
analyzing test results.
8. Test Environment: Create and maintain a stable and representative test environment that closely
resembles the production environment. This includes hardware, software, network configurations, and any
other dependencies required for testing.
9. Test Data: Identify and prepare relevant and realistic test data to ensure comprehensive testing coverage.
Test data should encompass both typical and boundary conditions, error scenarios, and different
combinations of inputs.
10. Test Maintenance: Continuously review and update test artifacts to keep them aligned with the evolving
software requirements and changes. Test maintenance involves modifying test cases, adding new test
scenarios, and incorporating lessons learned from previous testing cycles.
By adhering to these testing fundamentals, organizations can establish a structured and systematic approach to
testing, leading to improved software quality and customer satisfaction.
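The test design and execution fundamentals above can be sketched with Python's standard `unittest` module; `divide` is a hypothetical function under test, not from the source:

```python
import unittest

# Hypothetical function under test.
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestDivide(unittest.TestCase):
    # Test design: one test per condition, each with an expected result.
    def test_typical_input(self):
        self.assertEqual(divide(10, 2), 5)

    def test_error_scenario(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

# Test execution and reporting.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDivide)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```

The same test cases would be maintained over time (test maintenance) as the requirements for `divide` evolve.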

4.2 Testing Process


The testing process is a systematic approach to verifying and validating software applications to ensure their quality
and reliability. It involves a series of activities that are performed in a sequential manner. Here are the key steps
involved in the testing process:
1. Test Planning: In this initial phase, the testing objectives, scope, and strategies are defined. The test plan is
created, which outlines the overall testing approach, timelines, resources required, and the test deliverables.
2. Test Design: In this phase, test cases are created based on the defined requirements and design
specifications. Test design involves identifying test conditions, selecting test data, and determining the
expected results. Test design techniques such as equivalence partitioning, boundary value analysis, and
decision tables may be used to ensure comprehensive test coverage.
3. Test Environment Setup: A test environment is prepared to mimic the production environment where the
software will be deployed. This includes setting up the necessary hardware, software, network
configurations, and other dependencies required for testing. The test environment should closely resemble
the production environment to ensure accurate test results.
4. Test Execution: In this phase, the test cases are executed in the test environment. Testers run the tests,
input the test data, and observe the actual results. This step involves manual or automated execution of test
scripts, depending on the nature of the tests. Test execution may involve functional, integration, system,
performance, security, and other types of tests based on the project requirements.
5. Defect Management: When issues or defects are encountered during test execution, they are logged in a
defect tracking system. Defects are documented with relevant details such as steps to reproduce, severity,
and priority. The defects are then assigned to the development team for resolution. Once fixed, the defects
go through retesting to ensure they are resolved satisfactorily.
6. Test Reporting and Metrics: Throughout the testing process, regular test reports are generated to
communicate the progress, findings, and quality of the software. Test reports provide stakeholders with an
overview of the testing activities, including test coverage, defect metrics, and overall test results. Metrics
such as test execution status, defect density, and test coverage are used to assess the effectiveness and
efficiency of the testing process.
7. Test Closure: The final phase of the testing process involves evaluating the overall testing effort,
reviewing the test results, and documenting lessons learned. The test closure activities include analyzing
the testing process, identifying areas for improvement, and updating the test artifacts for future reference.
It's important to note that the testing process is iterative and may involve multiple testing cycles or iterations,
especially in agile development methodologies. The process is guided by the testing fundamentals, industry best
practices, and specific project requirements to ensure that the software meets the desired quality standards.
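One commonly cited metric from step 6, defect density (defects per thousand lines of code, KLOC), can be computed as follows; the figures are illustrative, not from the source:

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per KLOC (thousand lines of code)."""
    kloc = lines_of_code / 1000
    return defects_found / kloc

# e.g. 30 defects found in a 12,000-line module:
print(defect_density(30, 12_000))  # 2.5
```

A falling defect density across testing cycles is one signal (among several) that the product is stabilizing toward test closure.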

4.3 Black box Testing

Black box testing is a software testing technique that focuses on testing the functionality of a system without
examining its internal structure or implementation details. It is performed from an external perspective, treating the
system as a "black box" where the tester has no knowledge of the internal workings.
Here are some key points about black box testing:
1. Objective: The main objective of black box testing is to validate whether the system meets the specified
requirements and functions correctly, without considering how the system achieves that functionality.
2. No knowledge of internals: Testers performing black box testing have no access to the source code,
architecture, or design details of the system being tested. They rely solely on the system's inputs, outputs,
and documented specifications.
3. Focus on inputs and outputs: Black box testing concentrates on the input values provided to the system
and verifies if the expected outputs are produced. It aims to identify any discrepancies between the
expected behavior and the actual behavior of the system.
4. Test cases design: Test cases are designed based on functional requirements, specifications, use cases, and
other available documentation. The goal is to cover different scenarios and ensure that all desired system
functionalities are tested.
5. Techniques used: Various techniques can be applied in black box testing, including equivalence
partitioning, boundary value analysis, decision table testing, state transition testing, and error guessing.
These techniques help in selecting test cases that are likely to expose defects.
6. Test coverage: Black box testing aims to achieve high test coverage by ensuring that all relevant
functionalities and combinations of inputs are tested. The focus is on validating the behavior of the system
as perceived by the end user.
7. Advantages: Black box testing provides several advantages, including early detection of functional issues,
independence from implementation details, and the ability to assess the system from a user's perspective. It
also allows for parallel testing efforts since multiple testers can work simultaneously.
8. Limitations: While black box testing is effective at validating system functionality, it may not uncover all
defects or issues. It relies heavily on the quality of requirements and specifications documentation.
Additionally, it may not be as efficient in finding certain types of defects, such as those related to
performance or security vulnerabilities.
Overall, black box testing plays a crucial role in ensuring that software systems meet the specified requirements
and behave as expected. By focusing on the external behavior of the system, it provides valuable insights into the
system's functionality and helps in improving its quality.

• It is a method of software testing that examines the functionality of an application without looking into its
internal structure or workings.
• It is a method in which the internal structure of the application is not known to the tester.
• It checks the behavior of the application, hence it is called behavioral testing.
• This type of testing focuses on functional requirements.
• It checks for performance errors and behavioral errors.
• In black box testing we check possible inputs against expected outputs.

Black Box Testing Techniques:

1) Equivalence Partitioning
2) Boundary Value Analysis
3) Cause effect table
4) State Transition
5) Exploratory Testing
6) Error Guessing

1) Equivalence Partitioning

Black box Equivalence Partitioning testing is a technique used in software testing to improve test coverage while
minimizing the number of test cases. It is based on the principle that if a specific input value within a range of values
causes a certain behavior or outcome, then any other value within the same range will likely produce the same
behavior or outcome.
Here is a detailed explanation of the black box Equivalence Partitioning testing technique:
1. Identify Input Domain: Start by identifying the input variables or data fields that are relevant to the
functionality being tested. For example, if testing a login feature, the input variables could be username and
password.
2. Partition the Input Domain: Divide the input domain into equivalence partitions. An equivalence
partition is a range or set of input values that are expected to produce the same behavior or outcome. The
idea is to select representative values from each partition to test, assuming that if one value works correctly,
others in the same partition should also work correctly.
3. Determine Valid and Invalid Partitions: Identify both valid and invalid equivalence partitions. Valid
partitions represent input values that should be accepted and processed correctly by the software. Invalid
partitions represent input values that should be rejected or produce errors.
4. Select Test Cases: From each partition, select representative test cases that cover different scenarios and
boundary conditions. The goal is to choose test cases that maximize coverage while minimizing
redundancy. For example, if a partition represents a range of ages from 18 to 65, selecting test cases like
20, 40, and 60 would be sufficient to cover the partition.
5. Execute Test Cases: Execute the selected test cases using the black box approach, without considering the
internal structure or implementation of the software. Provide the input values to the software and observe
the outputs or behaviors. Compare the actual results with the expected results.
6. Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual
outcomes. Any failures or deviations from expected behavior should be considered as defects or issues to
be addressed by the development team.
7. Iterate and Refine: Based on the test results, refine the equivalence partitions and test cases if necessary.
If defects are found, additional test cases can be added to cover the specific scenarios that caused the
failures.
The benefits of black box Equivalence Partitioning testing include reducing the number of test cases required,
ensuring adequate coverage of input values, and identifying defects early in the testing process. By focusing on
representative values within each partition, this technique helps optimize test coverage while maximizing efficiency.
It's important to note that Equivalence Partitioning is just one technique within the broader black box testing
approach, and it should be combined with other techniques like boundary value analysis, decision tables, and error
guessing to achieve comprehensive test coverage.
• In this technique we divide the input domain into various classes of data.
• We test each partition once; this reduces a lot of rework and also gives good test coverage.

• Example: if the input condition is age between 18-60:

  < 18      18-60     > 60
  Invalid   Valid     Invalid
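A minimal sketch of this age example, assuming a hypothetical `is_valid_age` check: one representative value is tested per partition:

```python
# The input domain is split into three equivalence partitions.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# One representative value per partition (values are illustrative).
partitions = {
    "below range (invalid)": 10,   # represents all ages < 18
    "inside range (valid)": 40,    # represents ages 18..60
    "above range (invalid)": 70,   # represents all ages > 60
}

for name, representative in partitions.items():
    print(name, "->", is_valid_age(representative))
```

Three test cases cover the whole domain here, instead of one test per possible age value.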

2) Boundary Value Analysis

Black box Boundary Value Analysis (BVA) testing is a technique used in software testing to determine test cases
based on the boundaries of input values. It aims to identify potential errors or defects that may occur at the
boundaries or edges of valid and invalid input ranges. By focusing on these critical values, BVA helps improve test
coverage and increase the likelihood of finding defects. Here is a detailed explanation of the black box Boundary
Value Analysis testing technique:
1. Identify Input Variables: Start by identifying the input variables or data fields that are relevant to the
functionality being tested. These could be numeric values, strings, dates, or any other input types.
2. Determine Boundary Values: For each input variable, determine the boundary values. These include the
minimum and maximum valid values, as well as values that are just below and above these boundaries. For
example, if testing a form that accepts ages between 18 and 65, the boundary values would be 17, 18, 65,
and 66.
3. Identify Invalid Boundary Values: Identify values that are just outside the valid range and would be
considered invalid inputs. For example, for an input field that only accepts positive integers, the invalid
boundary values would be 0 and -1, while 1 is the first valid value just inside the boundary.
4. Select Test Cases: From the identified boundary values, select representative test cases that cover different
scenarios and edge cases. Typically, at least one test case is selected from each valid and invalid boundary
value set. For example, if there are three valid boundary values (17, 18, 65), at least one test case is selected
for each of these values.
5. Execute Test Cases: Execute the selected test cases using the black box approach, without considering the
internal structure or implementation of the software. Provide the input values to the software and observe
the outputs or behaviors. Compare the actual results with the expected results.
6. Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual
outcomes. Pay particular attention to the behavior at the boundaries and verify that the software handles the
input values correctly.
7. Iterate and Refine: Based on the test results, refine the boundary values and test cases if necessary. If
defects are found, additional test cases can be added to cover the specific scenarios that caused the failures.
The benefits of black box Boundary Value Analysis testing include focusing testing efforts on critical values that are
likely to uncover defects, providing comprehensive test coverage, and maximizing the effectiveness of testing
resources.
It's important to note that Boundary Value Analysis is just one technique within the broader black box testing
approach, and it should be combined with other techniques like Equivalence Partitioning, decision tables, and error
guessing to achieve thorough test coverage.

• In software there is a high chance of errors occurring at boundary values.

• In BVA we select test cases at the edges of each class.
• Example: if the input condition is age between 18-60:
Check the boundary value just below the boundary: the minimum age is 18, so check age 17.

Check the boundary value just above the boundary: the maximum age is 60, so check age 61.
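A minimal sketch of the same age example under BVA, again assuming a hypothetical `is_valid_age` check: values at and just beyond each boundary are tested:

```python
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Values at and around both boundaries of the valid range 18..60.
boundary_cases = [17, 18, 19, 59, 60, 61]
expected = [False, True, True, True, True, False]

for age, exp in zip(boundary_cases, expected):
    assert is_valid_age(age) == exp, f"boundary failure at age {age}"
print("all boundary cases pass")
```

An off-by-one bug such as writing `18 < age` instead of `18 <= age` would be caught immediately by the `age == 18` case.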

3) Cause effect table


Black box Cause-Effect Table testing, also known as Decision Table testing, is a technique used in software testing
to systematically identify and test various combinations of inputs and their corresponding outputs. It helps ensure
comprehensive test coverage by considering different conditions and business rules that influence the behavior of
the software. Here is a detailed explanation of the black box Cause-Effect Table testing technique:
1. Identify Conditions and Actions: Start by identifying the relevant conditions or inputs that affect the
behavior of the software. Conditions can be specific states, variables, or factors that determine the outcome
of the software. Also, identify the actions or outputs that are expected based on these conditions.
2. Create the Cause-Effect Table: Create a table with columns representing the conditions and actions. Each
row in the table represents a specific combination of conditions and the corresponding expected actions.
The table is populated with all possible combinations of conditions and actions.
3. Define Conditions and Actions Values: For each condition and action, define the possible values or states
they can take. These values should cover the complete range of scenarios and boundary conditions. For
example, if testing a login functionality, conditions could be "valid username," "valid password," and
"account locked," while actions could be "allow access," "display error message," or "lock account."
4. Determine Rules and Interactions: Analyze the relationships between conditions and actions to identify
the rules and interactions that determine the expected outcomes. Specify the rules in a concise and
unambiguous manner. For example, "If the username is valid and the password is valid, allow access.
Otherwise, display an error message."
5. Generate Test Cases: Generate test cases by selecting representative combinations from the Cause-Effect
Table. Aim to cover all possible combinations of conditions and actions, including valid and invalid
scenarios, as well as boundary conditions. Each test case should represent a unique combination of
conditions and actions.
6. Execute Test Cases: Execute the selected test cases using the black box approach. Provide the input values
or conditions to the software and observe the outputs or actions. Compare the actual results with the
expected results based on the defined rules.
7. Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual
outcomes. Pay attention to the actions triggered by different combinations of conditions and verify that the
software behaves as expected based on the defined rules.
8. Iterate and Refine: Based on the test results, refine the Cause-Effect Table and test cases if necessary. If
defects are found, additional test cases can be added to cover the specific combinations of conditions and
actions that caused the failures.
The benefits of black box Cause-Effect Table testing include systematic coverage of different combinations of
inputs, identification of missing or ambiguous rules, and the ability to test complex business logic or decision-
making processes.
It's important to note that Cause-Effect Table testing is just one technique within the broader black box testing
approach, and it should be combined with other techniques like Equivalence Partitioning, Boundary Value Analysis,
and error guessing to achieve comprehensive test coverage.

• In this technique the logical relationship between the inputs is checked, i.e., if-else logic.
• Using this technique we consider conditions and actions.
• We take conditions as inputs and actions as outputs.
• Example : if (age >= 18 && age <= 60) { openAccount(); }
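The login rule described above can be sketched as a small decision table; `login_action` and its action strings are hypothetical, not from the source:

```python
# Conditions: valid username, valid password, account locked.
# Actions: "allow access" or "display error message".
def login_action(valid_user: bool, valid_pwd: bool, locked: bool) -> str:
    if locked:
        return "display error message"
    if valid_user and valid_pwd:
        return "allow access"
    return "display error message"

# Each row of the decision table becomes one test case:
table = [
    ((True,  True,  False), "allow access"),
    ((True,  False, False), "display error message"),
    ((False, True,  False), "display error message"),
    ((True,  True,  True),  "display error message"),
]
for inputs, expected_action in table:
    assert login_action(*inputs) == expected_action
print("all decision-table rows pass")
```

Enumerating the rows this way exposes missing rules, e.g. whether a locked account with bad credentials should behave any differently.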

4) State Transition

Black box State Transition Table testing is a technique used in software testing to analyze and test the behavior of a
system or software application based on different states and transitions between those states. It focuses on capturing
the transitions that occur when certain events or conditions cause a change in the system's state. Here is a detailed
explanation of the black box State Transition Table testing technique:
1. Identify States: Start by identifying the distinct states that the system or software application can be in.
States represent the different modes, conditions, or statuses of the system. For example, if testing an e-
commerce website, states could include "logged out," "logged in," "adding items to the cart," and
"checkout."
2. Define Events: Identify the events or actions that can trigger a transition from one state to another. Events
represent user interactions, system inputs, or external factors that cause a change in the system's state. For
example, events could be "clicking the login button," "adding an item to the cart," or "submitting the
order."
3. Create the State Transition Table: Create a table with columns representing the current state, events, and
the resulting next state. Each row in the table represents a specific combination of the current state and the
triggering event, along with the resulting next state. The table is populated with all possible combinations
of states, events, and next states.
4. Determine Rules and Conditions: Analyze the transitions between states and identify any rules or
conditions that govern those transitions. Specify the rules in a concise and unambiguous manner. For
example, "If the current state is 'logged out' and the event is 'clicking the login button,' the next state should
be 'logged in.'"
5. Generate Test Cases: Generate test cases by selecting representative combinations from the State
Transition Table. Aim to cover all possible combinations of states and events, including valid and invalid
scenarios, as well as boundary conditions. Each test case should represent a unique combination of current
state, event, and the expected next state.
6. Execute Test Cases: Execute the selected test cases using the black box approach. Trigger the specified
events or actions in the corresponding current states and observe the resulting next states. Compare the
actual results with the expected results based on the defined rules.
7. Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual
next states. Pay attention to the transitions triggered by different combinations of current states and events
and verify that the system behaves as expected based on the defined rules.
8. Iterate and Refine: Based on the test results, refine the State Transition Table and test cases if necessary.
If defects are found, additional test cases can be added to cover the specific combinations of current states
and events that caused the failures.
The benefits of black box State Transition Table testing include a systematic approach to testing state-dependent
behaviors, uncovering defects related to state transitions, and identifying missing or incorrect rules.
It's important to note that State Transition Table testing is just one technique within the broader black box testing
approach, and it should be combined with other techniques like Equivalence Partitioning, Boundary Value Analysis,
and decision tables to achieve comprehensive test coverage.

• We apply the state transition technique when an application gives a different output for the same input,
depending on what has happened in an earlier stage.
• Examples: a vending machine, a traffic light.
A vending machine dispenses a product only when the inserted coin is valid.
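The steps above can be sketched in code. This is a minimal sketch, assuming a hypothetical login flow; the states, events, and transition rules below are illustrative assumptions, not taken from any specific application.

```python
# State Transition Table for an assumed login flow, stored as a mapping
# from (current state, event) to the expected next state.
TRANSITIONS = {
    ("logged_out", "login_valid"): "logged_in",
    ("logged_out", "login_invalid"): "logged_out",
    ("logged_in", "logout"): "logged_out",
}

def next_state(current, event):
    """Return the next state, or 'error' for an undefined combination."""
    return TRANSITIONS.get((current, event), "error")

# Test cases: (current state, event, expected next state), covering valid
# transitions as well as an invalid combination.
test_cases = [
    ("logged_out", "login_valid", "logged_in"),
    ("logged_out", "login_invalid", "logged_out"),
    ("logged_in", "logout", "logged_out"),
    ("logged_in", "login_valid", "error"),   # invalid: already logged in
]

for current, event, expected in test_cases:
    actual = next_state(current, event)
    assert actual == expected, f"{current} + {event}: got {actual}"
print("All state transition test cases passed")
```

Each row of the table becomes one black box test case: trigger the event in the given state and compare the observed next state with the expected one.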

5) Exploratory Testing

Black box State Exploratory Testing is a technique used in software testing to explore and test the behavior of a
system or software application based on different states and user interactions without relying on predetermined test
cases. It involves a tester's exploration and experimentation to uncover defects and understand the system's response
in various states. Here is a detailed explanation of the black box State Exploratory Testing technique:
1. Understand the System: Gain a thorough understanding of the system or software application being
tested, including its intended functionality, states, and possible user interactions. This knowledge will guide
the exploration process.
2. Identify States: Identify the different states that the system can be in during its lifecycle or operation.
States represent the various modes, conditions, or statuses of the system. For example, states could include
"logged out," "logged in," "idle," or "processing."
3. Define User Interactions: Determine the various user interactions, inputs, or events that can affect the
system's behavior or transition between states. These interactions could be user actions, system inputs, or
external events. For example, interactions could include clicking buttons, entering data, or receiving
external notifications.
4. Explore State Transitions: Start exploring the system by interacting with it and observing how it responds
in different states. Trigger events or perform actions that transition the system from one state to another.
Pay attention to the system's behavior, output, or any unexpected responses.
5. Observe System Reactions: Observe how the system reacts or responds to the different interactions and
state transitions. Note any anomalies, unexpected behaviors, or errors that occur. Pay attention to edge
cases, boundary conditions, and scenarios that are likely to reveal defects.
6. Document and Report Findings: Document the observations, findings, and any defects or issues
encountered during the exploration. Provide clear and concise descriptions of the steps taken, the observed
behavior, and any relevant details. Report the findings to the development team for further investigation
and resolution.
7. Repeat and Expand Exploration: Continuously explore and experiment with the system, focusing on
different states, combinations of interactions, and edge cases. Vary the order of interactions, explore
different paths, and simulate real-world scenarios to uncover potential defects and understand the system's
behavior comprehensively.
8. Iterate and Improve: Based on the findings and insights gained through exploration, refine the testing
approach and prioritize areas for further investigation. Use the knowledge gained to inform the creation of
additional test cases or to refine existing test cases for more targeted testing.
The benefits of black box State Exploratory Testing include the ability to uncover defects that might not be covered
by predefined test cases, a deeper understanding of the system's behavior in different states, and the identification of
real-world scenarios that can lead to unexpected issues.
It's important to note that State Exploratory Testing should be performed alongside other black box testing
techniques to achieve comprehensive test coverage and ensure that both expected and unexpected behaviors are
tested.

• Usually this technique is carried out by a domain expert.


• The tester explores the functionalities of the application without detailed knowledge of the
requirements.

6) Error Guessing

Black box Error Guessing is a technique used in software testing to uncover defects or errors by leveraging the
tester's intuition, experience, and knowledge of potential vulnerabilities or weaknesses in the system. It is an
informal and heuristic approach that focuses on imagining potential errors that may not be explicitly specified in
requirements or test cases. Here is a detailed explanation of the black box Error Guessing technique:
1. Understand the System: Gain a thorough understanding of the system or software application being
tested, including its intended functionality, inputs, and expected outputs. Familiarize yourself with common
errors or issues that can occur in similar systems.
2. Leverage Experience and Knowledge: Draw upon your experience as a tester, knowledge of the system
domain, and understanding of common software pitfalls or vulnerabilities. Consider the types of errors that
are likely to be present in the system based on similar systems you have encountered in the past.
3. Guess Potential Errors: Based on your experience and knowledge, imagine potential errors or defects that
could occur in the system. Think about various failure scenarios, boundary conditions, edge cases, or
unexpected user behaviors that could lead to errors. Formulate these potential errors in the form of specific
test cases or test scenarios.
4. Design Test Cases: Design test cases that specifically target the potential errors or defects you have
identified. Each test case should represent a specific scenario or condition that could lead to an error.
Ensure that the test cases cover a wide range of possible error scenarios.
5. Execute Test Cases: Execute the test cases designed to uncover potential errors in the system. Provide the
inputs or trigger the conditions specified in the test cases and observe the system's behavior. Compare the
actual results with the expected results, focusing on identifying any unexpected behaviors, failures, or
errors.
6. Document and Report Findings: Document the observed errors, failures, or unexpected behaviors
encountered during the testing process. Clearly describe the steps taken, the inputs provided, and the
observed outcomes. Report the findings to the development team, providing clear and concise descriptions
of the errors or issues discovered.
7. Repeat and Expand: Continuously apply the error guessing approach throughout the testing process,
iterating on potential error scenarios and exploring additional test cases based on new insights or
observations. Leverage the testing team's collective experience and knowledge to refine and expand the
error guessing efforts.
The benefits of black box Error Guessing include the ability to uncover defects that may not be explicitly
documented, leveraging the tester's intuition and experience to identify potential issues, and addressing areas that
may have been overlooked by other testing techniques.
It's important to note that Error Guessing should not replace other formal testing techniques but should complement
them. It is most effective when used alongside techniques such as Equivalence Partitioning, Boundary Value
Analysis, and Cause-Effect Table testing to achieve comprehensive test coverage.

• It is used to find bugs in a software application based on the tester’s prior experience.


• No specific rules are followed; it is an unplanned testing technique.
Examples:
- Submitting a form without entering any values
- Entering invalid input, such as alphabetic characters in a numeric field
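The two examples above can be sketched as error-guessing test cases. This is an illustrative sketch: the `validate_age` function and the list of "guessed" inputs are assumptions, standing in for whatever field the tester suspects will break.

```python
# Error-guessing sketch for a hypothetical age-field validator.
def validate_age(value):
    """Return True if value is a string containing an integer 1..120."""
    if not isinstance(value, str) or value.strip() == "":
        return False          # guessed error: form submitted with no value
    if not value.strip().isdigit():
        return False          # guessed error: alphabet in a numeric field
    return 1 <= int(value) <= 120

# Inputs a tester might "guess" will break the application:
guessed_inputs = ["", "   ", "abc", "-5", "0", "121", "12.5", "999999999999"]
for bad in guessed_inputs:
    assert validate_age(bad) is False, f"expected rejection of {bad!r}"
assert validate_age("30") is True
print("Error-guessing cases behaved as expected")
```

No formal technique produced this input list; it comes purely from the tester's experience of which inputs commonly cause failures.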

4.4 White Box Testing

• This is a method of testing in which the internal structure of the application is tested.


• The internal structure of the application is known to the tester.
• It evaluates all logical decisions to find whether they are true or false.
• It evaluates all loops to check their boundaries.
• It evaluates all internal data structures to confirm their validity.
• In white box testing, each and every line of code is checked.

White box testing, also known as structural or glass box testing, is a software testing technique that focuses on
examining the internal structure, design, and implementation of a software application. It involves testing based on
an understanding of the underlying code, algorithms, and logic used in the system. Here is a detailed explanation of
white box testing:
1. Understand the Internal Structure: Gain a deep understanding of the software application's internal
structure, including the code, modules, functions, and data flows. Review the design and architecture
documents to familiarize yourself with the system's components and their relationships.
2. Identify Testable Units: Identify the specific units or components within the application that will be
tested. This can include functions, methods, classes, modules, or individual lines of code. These units are
typically referred to as code modules or code entities.
3. Design Test Cases: Design test cases based on the internal structure and implementation details of the
software. This includes selecting specific paths through the code, exercising different conditions, and
covering various code branches and logical scenarios. Test cases should be designed to verify the
correctness, robustness, and efficiency of the code.
4. Code Coverage: Aim for high code coverage, which measures the percentage of code that is exercised by
the test cases. Different levels of code coverage include statement coverage, branch coverage, path
coverage, and condition coverage. The goal is to ensure that all possible code paths and conditions are
tested.
5. Execute Test Cases: Execute the designed test cases by providing specific inputs and verifying the outputs
or behavior of the code. Monitor and track the results of each test case, comparing the actual outcomes with
the expected outcomes.
6. Debugging and Fixing Defects: When defects are found, utilize the knowledge of the code structure to
identify the root causes of the issues. Debugging tools and techniques can be employed to pinpoint the
locations of defects in the code. Once identified, defects are fixed by modifying the code to correct the
errors.
7. Performance and Security Testing: White box testing can also include performance and security testing.
By understanding the internal workings of the system, performance bottlenecks and vulnerabilities can be
identified and addressed. Techniques such as load testing, stress testing, and penetration testing can be
applied.
8. Test Automation: White box testing lends itself well to test automation. Automation frameworks can be
used to execute the test cases, monitor code coverage, and track the results. This allows for efficient and
repeatable testing, especially for regression testing.
The benefits of white box testing include the ability to uncover defects at the code level, ensure thorough coverage
of different code paths and conditions, optimize performance and security, and improve the overall quality of the
software.
It's important to note that white box testing is most effective when combined with other testing techniques such as
black box testing. This allows for a comprehensive and balanced approach to software testing, addressing both the
internal structure and the external behavior of the application.
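Branch coverage, mentioned in step 4 above, can be sketched as follows. This is an assumed, illustrative function; the point is that the test set is chosen from the code's internal structure so that every branch is taken at least once.

```python
# Branch coverage sketch: one test per branch of this example function.
def classify_triangle(a, b, c):
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"                 # branch 1
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"          # branch 2
    if a == b == c:
        return "equilateral"             # branch 3
    if a == b or b == c or a == c:
        return "isosceles"               # branch 4
    return "scalene"                     # branch 5

# Each assertion forces a different branch, giving 100% branch coverage.
assert classify_triangle(0, 1, 1) == "invalid"
assert classify_triangle(1, 2, 10) == "not a triangle"
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 5) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
print("All branches exercised")
```

A pure black box tester, not seeing the five return statements, might miss one of these cases; knowing the code structure is what makes the test set complete.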

White Box Testing Techniques:

1) Basic Path Testing

White box testing is a software testing technique that involves examining the internal structure, design, and
implementation details of a system. It aims to ensure that all components of the system, such as individual functions,
methods, and code paths, are tested thoroughly. Basic Path Testing is one of the techniques used in white box testing
to achieve this goal.
Basic Path Testing focuses on testing all possible paths within a program's source code. A path refers to a sequence
of instructions or statements executed during the program's execution. Here's a detailed explanation of Basic Path
Testing:
1. Control Flow Graph (CFG): The first step in Basic Path Testing is to construct a Control Flow Graph
(CFG) for the program. A CFG is a graphical representation that depicts the flow of control between
various program statements. It consists of nodes (representing statements) and edges (representing possible
control flow between statements).
2. Cyclomatic Complexity: The next step is to calculate the cyclomatic complexity of the program using the
CFG. Cyclomatic complexity is a metric that quantifies the number of independent paths in a program. It
helps in determining the minimum number of test cases required to achieve full path coverage.
3. Identify Paths: Based on the CFG, all possible paths from the program's entry point to the exit point are
identified. This includes both linear paths (single path from start to end) and branching paths (multiple
paths due to conditional statements, loops, etc.).
4. Generate Test Cases: Test cases are created to cover each identified path. For linear paths, a single test
case is sufficient to cover the entire path. For branching paths, multiple test cases are required to cover
different scenarios and ensure that all conditional branches are exercised.
5. Path Execution: The test cases are executed, and the program's behavior is observed. The objective is to
determine if the program behaves as expected for each path and whether any errors or unexpected
behaviors occur.
6. Path Coverage Analysis: After executing the test cases, the coverage of paths is analyzed. The goal is to
ensure that all paths within the program have been exercised at least once. Coverage analysis helps in
identifying any missed or untested paths.
7. Iterative Process: Basic Path Testing is an iterative process. If any paths are missed during the initial
round of testing, additional test cases are designed to cover those paths. The process continues until all
paths are covered, or a predetermined coverage criterion is met.
Benefits of Basic Path Testing:
 Thorough coverage: Basic Path Testing aims to achieve comprehensive coverage of a program's paths,
ensuring that all logical branches and decision points are tested.
 Identifying defects: By examining all possible paths, this technique can help in uncovering errors, logic
flaws, or missing conditions within the code.
 Optimization: It allows developers to optimize the code by identifying redundant or unreachable paths,
eliminating dead code, and improving the overall efficiency of the program.
Limitations of Basic Path Testing:
 Complexity: In large and complex programs, the number of paths can be vast, making it impractical to test
every single path. Prioritization and selection of critical paths become crucial in such cases.
 Limited to code-level testing: Basic Path Testing focuses solely on the internal structure of the program
and may not address issues related to integration, user interface, or system-level behavior.
Overall, Basic Path Testing is a powerful technique within white box testing that helps ensure thorough coverage of
a program's internal paths. It enables testers to dive deep into the code and identify potential defects, leading to more
robust and reliable software.
- Flow graph notation
- Independent path
- Deriving test cases
- Graph matrix

2) Control Structure Testing

White box testing techniques focus on examining the internal structure and logic of a software application. Control
Structure Testing is one such technique that aims to test the control flow or the flow of program execution within the
application. It involves designing test cases to ensure that all possible control flow paths, decision points, loops, and
conditions are exercised. Here is a detailed explanation of Control Structure Testing:
1. Understand the Control Structure: Gain a thorough understanding of the control flow within the
software application. This includes identifying the decision points (such as if-else statements, switch-case
statements), loops (such as for loops, while loops), and other control structures that dictate the program's
execution.
2. Control Flow Graph: Construct a control flow graph, which is a visual representation of the program's
control flow. The control flow graph consists of nodes representing statements or blocks of code, and edges
representing the flow of control between the nodes. This graph provides a clear visualization of the control
flow paths within the application.
3. Identify Coverage Criteria: Select a coverage criteria based on the control structure. Coverage criteria
define the objectives of testing and help ensure that all control flow paths are tested. Common coverage
criteria for Control Structure Testing include statement coverage, branch coverage, condition coverage, and
path coverage.
4. Design Test Cases: Design test cases to achieve the desired coverage criteria. Each test case should be
designed to exercise a specific control flow path or combination of paths. This involves providing inputs or
test data that will cause the program to follow different branches, make decisions, and traverse loops.
5. Execute Test Cases: Execute the designed test cases by running the software application with the provided
inputs. Observe the program's behavior and compare the actual outputs or outcomes with the expected ones.
Ensure that each control flow path is exercised and that all decision points and conditions are tested.
6. Coverage Analysis: Analyze the coverage achieved based on the selected coverage criteria. Determine the
percentage of control flow paths covered by the executed test cases. This analysis helps identify any gaps in
testing and guides the creation of additional test cases to improve coverage if necessary.
7. Debugging and Fixing Defects: When defects or errors are identified during Control Structure Testing,
utilize the knowledge of the control flow to pinpoint the causes of the issues. Debugging techniques can be
applied to trace the program's execution and identify the specific statements or conditions that lead to the
defects. Once identified, defects can be fixed by modifying the code accordingly.
8. Iteration and Refinement: Iterate on the testing process by creating additional test cases to improve
coverage and address any missed control flow paths or scenarios. Refine the test cases based on the insights
gained from previous test runs and defect analysis.
Control Structure Testing helps ensure that the application's control flow is thoroughly tested, reducing the chances
of errors, logic flaws, and unexpected behaviors. It complements other white box testing techniques and should be
combined with other testing approaches, such as black box testing, to achieve comprehensive test coverage.
Note: Control Structure Testing is just one technique within the broader scope of white box testing. Other white box
testing techniques, such as data flow testing and path testing, can be employed to further analyze and test the internal
workings of the software application.

- Condition Testing
- Data flow testing
- Loop Testing

1) Basic Path Testing
a. Flow graph notation
• First, create a flowchart of the application according to the flow of its modules.
• Then convert the flowchart into a flow graph:

b. Independent path
• From the above flow graph, identify the possible independent paths.
• Each path is checked at least once during testing.
• Possible paths in the above example:
Path 1: 1->11
Path 2: 1->2 ->3->4->5->10->1
Path 3: 1->2 ->3->6->8->9
Path 4: 1->2 ->3->6->7->9

Cyclomatic complexity
• Measures how complex the testing will be. Formula: Cyclomatic complexity = number of edges - number of nodes + 2

Cyclomatic complexity = 11 (edges) - 9 (nodes) + 2


= 4
A program with high cyclomatic complexity is likely to be difficult to test.

c. Deriving test cases

• Prepare test cases that force execution of each path in the basis set.
• Execute every such path during testing.

d. Graph Matrix
• The flow graph is stored in computer memory using a graph matrix.
• Each edge of the flow graph is given a weight, and these weights are stored in a two-dimensional matrix.
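The graph matrix and the cyclomatic complexity formula can be sketched together. The edge counts (11 edges, 9 nodes) match the example above, but since the flow graph figure is not reproduced here, the specific edge list below is an assumption for illustration.

```python
# Sketch: a flow graph stored as a two-dimensional (adjacency) matrix,
# and cyclomatic complexity V(G) = edges - nodes + 2 computed from it.
N = 9  # number of nodes, numbered 0..8 (assumed)
matrix = [[0] * N for _ in range(N)]

# Assumed edges of a 9-node, 11-edge flow graph; weight 1 marks a connection.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (2, 5),
         (5, 6), (5, 7), (6, 8), (7, 8), (0, 8)]
for src, dst in edges:
    matrix[src][dst] = 1

num_edges = sum(sum(row) for row in matrix)
cyclomatic = num_edges - N + 2
print(cyclomatic)   # 11 - 9 + 2 = 4
```

The matrix form is convenient because tools can sum its entries to count edges and then derive the minimum number of independent paths directly.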

2) Control Structure Testing


a. Condition Testing
First, find the conditional statements in the program. Conditions may take several forms:
- Relational expressions: >, <, >=, <=, ==, !=
- Simple conditions: a single Boolean variable, possibly negated (e.g., ~E1)
- Compound conditions: combinations such as (E1 & E2) | (E2 & E3)
- Boolean expressions: evaluate to true or false
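The compound condition above can be tested as follows. This is a sketch of condition coverage: the inputs are chosen so that each simple condition (E1, E2, E3) evaluates to both true and false somewhere in the test set.

```python
# Condition testing sketch for the compound condition (E1 and E2) or (E2 and E3).
def compound(e1, e2, e3):
    return (e1 and e2) or (e2 and e3)

# Each tuple is (E1, E2, E3, expected result).
cases = [
    (True,  True,  False, True),   # (E1 and E2) drives the result
    (False, True,  True,  True),   # (E2 and E3) drives the result
    (True,  False, True,  False),  # E2 false makes both operands false
    (False, False, False, False),
]
for e1, e2, e3, expected in cases:
    assert compound(e1, e2, e3) == expected
print("Condition coverage cases passed")
```

Across the four cases, E1, E2, and E3 each take both truth values, which is the objective of condition testing.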

b. Data flow testing


- Checks the flow of data through the program: wherever a variable is declared and later used, verify
how its value changes between its definition and its use.
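Data flow testing can be sketched with a small example. The function below is an assumption for illustration; each test follows one definition-use (def-use) path of the variable `discount`.

```python
# Data flow testing sketch: cover every def-use path of 'discount'.
def final_price(amount, is_member):
    discount = 0.0                  # definition 1
    if is_member:
        discount = 0.1              # definition 2 (redefines the variable)
    return amount * (1 - discount)  # use

# Path covering definition 1 -> use (the initial value reaches the use):
assert final_price(100.0, False) == 100.0
# Path covering definition 2 -> use (the redefined value reaches the use):
assert abs(final_price(100.0, True) - 90.0) < 1e-9
print("Both def-use paths of 'discount' exercised")
```

A test set that only ever passed `is_member=True` would leave the first definition's path untested, which is exactly the gap data flow testing is designed to expose.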

c. Loop Testing

• An application consists of different looping statements; check how data moves through the loops. Loop testing
covers simple loops, nested loops, concatenated loops, and unstructured loops.
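For a simple loop, the classic boundary cases can be sketched as follows. The summing function is an assumed example; the test values (zero passes, one pass, two passes, a typical count, a large count) are the standard loop-testing boundaries.

```python
# Loop testing sketch: boundary-focused tests for a simple loop.
def sum_first_n(n):
    total = 0
    for i in range(1, n + 1):   # simple loop: executes n times
        total += i
    return total

assert sum_first_n(0) == 0          # skip the loop entirely
assert sum_first_n(1) == 1          # exactly one iteration
assert sum_first_n(2) == 3          # two iterations
assert sum_first_n(10) == 55        # a typical number of iterations
assert sum_first_n(1000) == 500500  # many iterations
print("Loop boundary cases passed")
```

For nested loops the same idea applies, typically starting from the innermost loop with the outer loops held at their minimum values.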

Black Box Testing | White Box Testing
------------------|------------------
It is a way of software testing in which the internal structure of the program or the code is hidden and nothing is known about it. | It is a way of testing the software in which the tester has knowledge about the internal structure or the code of the software.
Implementation of code is not needed for black box testing. | Code implementation is necessary for white box testing.
It is mostly done by software testers. | It is mostly done by software developers.
No knowledge of implementation is needed. | Knowledge of implementation is required.
It can be referred to as outer or external software testing. | It is the inner or internal software testing.
It is a functional test of the software. | It is a structural test of the software.
This testing can be initiated based on the requirement specifications document. | This type of testing is started after a detailed design document.
No knowledge of programming is required. | It is mandatory to have knowledge of programming.
It is the behavior testing of the software. | It is the logic testing of the software.
It is applicable to the higher levels of software testing. | It is generally applicable to the lower levels of software testing.
It is also called closed testing. | It is also called clear box testing.
It is the least time consuming. | It is the most time consuming.
It is not suitable or preferred for algorithm testing. | It is suitable for algorithm testing.
Can be done by trial-and-error ways and methods. | Data domains along with inner or internal boundaries can be better tested.
Example: searching something on Google by using keywords. | Example: checking and verifying loops by input.
Black-box test design techniques: decision table testing, all-pairs testing, equivalence partitioning, error guessing. | White-box test design techniques: control flow testing, data flow testing, branch testing.
Types of black box testing: functional testing, non-functional testing, regression testing. | Types of white box testing: path testing, loop testing, condition testing.
It is less exhaustive as compared to white box testing. | It is comparatively more exhaustive than black box testing.

4.5 Object-Oriented Software Testing Methods

Software typically undergoes many levels of testing, from unit testing to system or acceptance testing. Typically,
in unit testing, small "units", or modules of the software, are tested separately with a focus on testing the code of
that module. In higher-order testing (e.g., acceptance testing), the entire system (or a subsystem) is tested with the
focus on testing the functionality or external behavior of the system.
As information systems are becoming more complex, the object-oriented paradigm is gaining popularity because
of its benefits in analysis, design, and coding. Conventional testing methods cannot be applied for testing classes
because of problems involved in testing classes, abstract classes, inheritance, dynamic binding, message passing,
polymorphism, concurrency, etc.
Testing classes is a fundamentally different problem than testing functions. A function (or a procedure) has a
clearly defined input-output behavior, while a class does not have an input-output behavior specification. We can
test a method of a class using approaches for testing functions, but we cannot test the class using these
approaches.
According to Davis, the dependencies occurring in conventional systems are:
 Data dependencies between variables
 Calling dependencies between modules
 Functional dependencies between a module and the variable it computes
 Definitional dependencies between a variable and its type.
But in Object-Oriented systems there are following additional dependencies:
 Class to class dependencies
 Class to method dependencies
 Class to message dependencies
 Class to variable dependencies
 Method to variable dependencies
 Method to message dependencies
 Method to method dependencies

4.5 Functional Testing

Functional Testing is a type of Software Testing in which the system is tested against the functional requirements
and specifications. Functional testing ensures that the requirements or specifications are properly satisfied by the
application. This type of testing is particularly concerned with the result of processing. It focuses on simulating
actual system usage but makes no assumptions about the system's internal structure.
It is basically defined as a type of testing which verifies that each function of the software application works in
conformance with the requirement and specification. This testing is not concerned about the source code of the
application. Each functionality of the software application is tested by providing appropriate test input, expecting
the output and comparing the actual output with the expected output. This testing focuses on checking of user
interface, APIs, database, security, client or server application and functionality of the Application Under Test.
Functional testing can be manual or automated.
Functional testing involves the following steps:
1. Identify the function that is to be performed.
2. Create input data based on the specification of the function.
3. Determine the output based on the specification of the function.
4. Execute the test case.
5. Compare the actual and expected output.
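The five steps above can be sketched in code. The function under test and its specification ("orders of 1000 or more get a flat 100 off") are assumptions for illustration; the key point is that expected outputs come from the specification, not from the code.

```python
# Functional testing sketch following the five steps above.
def apply_discount(order_total):          # step 1: the function to be tested
    if order_total >= 1000:
        return order_total - 100
    return order_total

# Step 2: input data from the specification (below, at, and above the limit).
# Step 3: expected outputs derived from the specification itself.
cases = [(999, 999), (1000, 900), (1500, 1400)]

# Steps 4 and 5: execute each case and compare actual vs expected output.
for order_total, expected in cases:
    actual = apply_discount(order_total)
    assert actual == expected, f"{order_total}: got {actual}, expected {expected}"
print("Functional test cases passed")
```

Note that nothing in the test cases refers to the internals of `apply_discount`; this is what makes the test functional (black box) rather than structural.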
Advantages of Functional Testing:
 It helps to deliver a bug-free product.
 It helps to deliver a high-quality product.
 It makes no assumptions about the structure of the system.
 This testing is focused on the specifications as per the customer's usage.
Disadvantages of Functional Testing:
 There are high chances of performing redundant testing.
 Logical errors in the product can be missed.
 If the requirements are incomplete, performing this testing becomes difficult.

4.6 Unit testing


Unit testing is a type of software testing that focuses on individual units or components of a software system. The
purpose of unit testing is to validate that each unit of the software works as intended and meets the requirements.
Unit testing is typically performed by developers, and it is performed early in the development process before the
code is integrated and tested as a whole system.
Unit tests are automated and are run each time the code is changed to ensure that new code does not break
existing functionality. Unit tests are designed to validate the smallest possible unit of code, such as a function or a
method, and test it in isolation from the rest of the system. This allows developers to quickly identify and fix any
issues early in the development process, improving the overall quality of the software and reducing the time
required for later testing.
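A minimal unit test, as described above, can be sketched with Python's built-in `unittest` framework. The unit under test, `is_leap_year`, is an assumed example function; each test method checks the unit in isolation from the rest of the system.

```python
# Unit testing sketch using the standard-library unittest framework.
import unittest

def is_leap_year(year):
    """The small unit under test: leap-year rule of the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    """Automated tests for one unit, run every time the code changes."""

    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap_year(2000))

    def test_ordinary_year(self):
        self.assertFalse(is_leap_year(2023))

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```

Because such tests are automated, they double as a regression safety net: a later change that breaks the leap-year rule fails these tests immediately.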

Objective of Unit Testing:

The objective of Unit Testing is:


1. To isolate a section of code.
2. To verify the correctness of the code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
5. To help the developers to understand the code base and enable them to make changes quickly.
6. To help with code reuse.
There are 3 types of Unit Testing Techniques. They are
1. Black Box Testing: This testing technique is used in covering the unit tests for input, user interface, and
output parts.
2. White Box Testing: This technique is used in testing the functional behavior of the system by giving the
input and checking the functionality output including the internal design structure and code of the modules.
3. Gray Box Testing: This technique is used in executing the relevant test cases, test methods, test functions,
and analyzing the code performance for the modules.

Advantages of Unit Testing:

1. Unit Testing allows developers to learn what functionality is provided by a unit and how to use it to gain a
basic understanding of the unit API.
2. Unit testing allows the programmer to refine code and make sure the module works properly.
3. Unit testing enables testing parts of the project without waiting for others to be completed.
4. Early Detection of Issues: Unit testing allows developers to detect and fix issues early in the development
process, before they become larger and more difficult to fix.
5. Improved Code Quality: Unit testing helps to ensure that each unit of code works as intended and meets the
requirements, improving the overall quality of the software.

Disadvantages of Unit Testing:

1. The process is time-consuming for writing the unit test cases.


2. Unit testing will not catch every error in a module, because some errors surface only when the
modules interact during integration testing.
3. Unit Testing is not efficient for checking the errors in the UI(User Interface) part of the module.
4. It requires more time for maintenance when the source code is changed frequently.
5. It cannot cover the non-functional testing parameters such as scalability, the performance of the system, etc.

4.9 System testing

System Testing is a type of software testing that is performed on a complete integrated system to evaluate the
compliance of the system with the corresponding requirements. In system testing, components that have passed
integration testing are taken as input. The goal of integration testing is to detect any irregularity between the units that
are integrated together. System testing detects defects within both the integrated units and the whole system. The
result of system testing is the observed behavior of a component or a system when it is tested. System Testing is
carried out on the whole system in the context of either system requirement specifications or functional
requirement specifications or in the context of both. System testing tests the design and behavior of the system
and also the expectations of the customer. It is performed to test the system beyond the bounds mentioned in
the software requirements specification (SRS). System Testing is basically performed by a testing team that is
independent of the development team, which helps to test the quality of the system impartially. It covers both functional
and non-functional testing. System Testing is a type of black-box testing. It is performed after the
integration testing and before the acceptance testing.

System Testing Process: System Testing is performed in the following steps:


 Test Environment Setup: Create a testing environment for better quality testing.
 Create Test Case: Generate the test cases for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Case: After the generation of the test cases and the test data, the test cases are executed.
 Defect Reporting: Defects detected in the system are reported.
 Regression Testing: It is carried out to test the side effects of the testing process.
 Log Defects: The detected defects are logged, and they are fixed in this step.
 Retest: If a test is not successful, the test is performed again after the defects are fixed.

Types of System Testing:


 Performance Testing: Performance Testing is a type of software testing that is carried out to test the speed,
scalability, stability and reliability of the software product or application.
 Load Testing: Load testing is a type of software testing carried out to determine the behavior of
a system or software product under extreme load.
 Stress Testing: Stress testing is a type of software testing performed to check the robustness of the system
under varying loads.
 Scalability Testing: Scalability testing is a type of software testing carried out to check the
performance of a software application or system in terms of its capability to scale up or down with the
number of user requests.

Advantages of System Testing :


 Testers do not require deep programming knowledge to carry out this testing.
 It tests the entire product or software, so errors or defects that cannot be identified during unit testing
and integration testing are easily detected.
 The testing environment is similar to the real production or business environment.
 It checks the entire functionality of the system with different test scripts, and covers the technical and
business requirements of clients.
 After this testing, almost all possible bugs or errors will have been found, so the development
team can confidently go ahead with acceptance testing.
Disadvantages of System Testing :
 This testing is more time-consuming than other testing techniques, since it checks the entire product or
software.
 The cost of the testing is high, since it covers the entire software.
 It needs good debugging tools; otherwise, hidden errors will not be found.

4.10 User Satisfaction Testing

Several tests are performed on a product before deploying it. You need to collect qualitative and quantitative
data and satisfy customers’ needs with the product. A proper final report is made mentioning the changes required
in the product (software). Usability Testing in software testing is a type of testing, that is done from an end
user’s perspective to determine if the system is easily usable. Usability testing is generally the practice of testing
how easy a design is to use with a group of representative users. A very common mistake in usability testing is
conducting a study too late in the design process. If you wait until right before your product is released, you won’t
have the time or money to fix any issues, and you’ll have wasted a lot of effort developing your product the
wrong way.

Phases of Usability Testing


There are five phases in usability testing which are followed by the system when usability testing is performed.
These are given below:

1. Prepare your product or design to test: The first phase of usability testing is choosing a product and then
making it ready for usability testing. The product must offer enough working functions and operations for
users to exercise during the test, so this phase ensures those requirements are met. Hence this is one of the
most important phases in usability testing.
2. Find your participants: The second phase of usability testing is recruiting the participants who will help
you perform the usability testing. The number of participants that you need is generally based on a number
of case studies. Generally, five participants are able to find almost as many usability problems as you’d find
using many more test participants.
3. Write a test plan: This is the third phase of usability testing. Developing a plan for the test is one of the
first steps in each round of usability testing. The main purpose of the plan is to document what you are
going to do, how you are going to conduct the test, what metrics you are going to collect, the number of
participants you are going to test, and what scenarios you will use.
4. Take on the role of the moderator: This is the fourth phase of usability testing, and here the moderator
plays a vital role that involves building a partnership with the participant. Most of the research findings are
derived by observing the participant’s actions and gathering verbal feedback. To be an effective moderator,
you need to be able to make instant decisions while simultaneously overseeing various aspects of the
research session.
5. Present your findings/ final report: This phase generally involves combining your results into an overall
score and presenting it meaningfully to your audience. An easy method to do this is to compare each data
point to a target goal and represent this as one single metric based on a percentage of users who achieved this
goal.
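The scoring approach in the final phase can be sketched directly: compare each participant's result to a target goal and report the percentage who achieved it as one overall metric. The task, the 90-second goal, and the timing data below are all hypothetical.

```python
def usability_score(completion_times, goal_seconds):
    """Percentage of participants who completed the task within the goal time."""
    achieved = sum(1 for t in completion_times if t <= goal_seconds)
    return 100.0 * achieved / len(completion_times)

# Completion times (seconds) for five representative participants
times = [42, 95, 61, 38, 120]
score = usability_score(times, goal_seconds=90)
print(f"{score:.0f}% of users met the 90-second goal")  # prints "60% ..."
```

Each data point collapses to pass/fail against the goal, which is what makes the single percentage easy to present to a non-technical audience.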

Pros and Cons of Usability Testing


As every coin has two sides, usability testing has pros and cons. Some of the pros it has are:
 Gives excellent features and functionalities to the product
 Improves user satisfaction and fulfills requirements based on user’s feedback
 The product becomes more efficient and effective
The biggest cons of usability testing are cost and time: the more usability testing is performed, the more cost
and time are consumed.
Unit 5 - Project Planning and Management
5.1 Project management process
Project Management is a discipline concerned with planning, monitoring, and controlling software projects: identifying the
scope, estimating the work involved, and creating a project schedule. The project manager is also responsible for keeping the
team up to date on the project’s progress and for handling problems and discussing solutions.
The Project Management Process consists of the following 4 stages:
 Feasibility study
 Project Planning
 Project Execution
 Project Termination

Feasibility Study: A feasibility study explores system requirements to determine project feasibility. There are
several fields of feasibility study including economic feasibility, operational feasibility, and technical feasibility.
The goal is to determine whether the system can be implemented or not. The process of feasibility study takes as
input the required details as specified by the user and other domain-specific details. The output of this process
simply tells whether the project should be undertaken or not and if yes, what would the constraints be.
Additionally, all the risks and their potential effects on the projects are also evaluated before a decision to start
the project is taken.
Project Planning: A detailed plan stating a stepwise strategy to achieve the listed objectives is an integral part of
any project. Planning consists of the following activities:
 Set objectives or goals
 Develop strategies
 Develop project policies
 Determine courses of action
 Make planning decisions
 Set procedures and rules for the project
 Develop a software project plan
 Prepare budget
 Conduct risk management
 Document software project plans
This step also involves the construction of a work breakdown structure (WBS). It also includes size, effort,
schedule, and cost estimation using various techniques.
Project Execution: A project is executed by choosing an appropriate software development life cycle
model (SDLC). It includes a number of steps, including requirements analysis, design, coding, testing,
implementation, delivery, and maintenance. There are a number of factors that need to be considered
while choosing an SDLC, including the size of the system, the nature of the project, time and budget constraints, domain
requirements, etc. An inappropriate SDLC can lead to the failure of the project.
Project Termination: There can be several reasons for the termination of a project. Though expecting a project
to terminate after successful completion is conventional, at times, a project may also terminate without
completion. Projects have to be closed down when the requirements are not fulfilled according to given time and
cost constraints.
Some of the reasons for failure include:
 Fast-changing technology
 Project running out of time
 Organizational politics
 Too much change in customer requirements
 Project exceeding budget or funds
Once the project is terminated, a post-performance analysis is done. Also, a final report is published describing
the experiences, lessons learned, and recommendations for handling future projects.
Project management is a systematic approach to planning, organizing, and controlling the resources required to
achieve specific project goals and objectives. The project management process involves a set of activities that are
performed to plan, execute, and close a project. The project management process can be divided into several
phases, each of which has a specific purpose and set of tasks.

The main phases of the project management process are:

1. Initiation: This phase involves defining the project, identifying the stakeholders, and establishing the
project’s goals and objectives.
2. Planning: In this phase, the project manager defines the scope of the project, develops a detailed project plan,
and identifies the resources required to complete the project.
3. Execution: This phase involves the actual implementation of the project, including the allocation of
resources, the execution of tasks, and the monitoring and control of project progress.
4. Monitoring and Control: This phase involves tracking the project’s progress, comparing actual results to the
project plan, and making changes to the project as necessary.
5. Closing: This phase involves completing the project, documenting the results, and closing out any open
issues.
Effective project management requires a clear understanding of the project management process and the
skills necessary to apply it effectively. The project manager must have the ability to plan and execute
projects, manage resources, communicate effectively, and handle risks and issues.

Advantages of the project management process:

1. Provides a structured approach to managing projects.


2. Helps to define project objectives and requirements.
3. Facilitates effective communication and collaboration among team members.
4. Helps to manage project risks and issues.
5. Ensures that the project is delivered on time and within budget.
Disadvantages of the project management process:

1. Can be time-consuming and bureaucratic


2. May be inflexible and less adaptable to changes
3. Requires a skilled project manager to implement effectively
4. May not be suitable for small or simple projects.

5.2 The Inspection and Audit Process

Inspection and audit processes are two important activities in project planning and management that help ensure the
project meets its objectives and requirements. Both processes involve a systematic and objective review of project
documentation, processes, and outcomes, but they differ in their focus and purpose.

Inspection is a process that involves reviewing project documentation, deliverables, and processes to identify
defects, errors, and inconsistencies. The inspection process is typically conducted by a team of subject matter
experts who use checklists, guidelines, and other tools to identify and document issues. The purpose of inspection is
to identify and correct problems early in the project life cycle to minimize their impact on project outcomes.

Audit, on the other hand, is a process that involves a more comprehensive review of project documentation,
processes, and outcomes to evaluate the project's effectiveness, efficiency, and compliance with standards and
regulations. The audit process is typically conducted by an independent team of auditors who use established criteria
and standards to evaluate project documentation, processes, and outcomes. The purpose of audit is to provide an
objective assessment of project performance and identify areas for improvement.

Both inspection and audit processes are important in project planning and management as they help ensure that the
project meets its objectives, stays within budget and schedule, and complies with relevant standards and regulations.
By identifying and correcting problems early in the project life cycle, inspection and audit processes help prevent
costly rework and delays, and improve the overall quality and success of the project.

5.3 Software Configuration Management process

In Software Engineering, Software Configuration Management (SCM) is a process to systematically manage,
organize, and control the changes in the documents, code, and other entities during the Software Development Life
Cycle. The primary goal is to increase productivity with minimal mistakes. SCM is part of the cross-disciplinary
field of configuration management, and it can accurately determine who made which revision.

1. Planning and Identification

The first step in the process is planning and identification. In this step, the goal is to plan for the development of
the software project and identify the items within the scope. This is accomplished by having meetings and
brainstorming sessions with your team to figure out the basic criteria for the rest of the project.

Part of this process involves figuring out how the project will proceed and identifying the exit criteria. This way,
your team will know how to recognize when all of the goals of the project have been met.

Specific activities during this step include:

 Identifying items like test cases, specification requirements, and code modules
 Identifying each computer software configuration item in the process
 Grouping basic details of why, when, and what changes will be made and who will be in charge of making them
 Creating a list of necessary resources, like tools, files, documents, etc.

2. Version Control and Baseline


The version control and baseline step ensures the continuous integrity of the product by identifying an accepted
version of the software. This baseline is designated at a specific time in the SCM process and can only be altered
through a formal procedure.

The point of this step is to control the changes being made to the product. As the project develops, new baselines
are established, resulting in several versions of the software.

This step involves the following activities:

 Identifying and classifying the components that are covered by the project
 Developing a way to track the hierarchy of different versions of the software
 Identifying the essential relationships between various components
 Establishing various baselines for the product, including developmental, functional, and product baselines
 Developing a standardized label scheme for all products, revisions, and files so that everyone is on the same page.
Baselining a project attribute forces formal configuration change control processes to be enacted in the event that
these attributes are changed.
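The "standardized label scheme" mentioned above can be sketched as a small helper. The naming convention here (product name, a code for the baseline type, then a version number) is a hypothetical example, not a prescribed SCM standard.

```python
def baseline_label(product, baseline_type, major, minor):
    """Build a standardized baseline label, e.g. 'payroll-FB-1.2'.

    Baseline type codes (assumed convention for this sketch):
    developmental -> DB, functional -> FB, product -> PB.
    """
    codes = {"developmental": "DB", "functional": "FB", "product": "PB"}
    return f"{product}-{codes[baseline_type]}-{major}.{minor}"

print(baseline_label("payroll", "functional", 1, 2))  # payroll-FB-1.2
```

With every revision and file labeled the same way, the team can tell at a glance which baseline a given artifact belongs to, which is the point of the standardized scheme.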

3. Change Control

Change control is the method used to ensure that any changes that are made are consistent with the rest of the
project. Having these controls in place helps with quality assurance, and the approval and release of new
baseline(s). Change control is essential to the successful completion of the project.
In this step, requests to change configurations are submitted to the team and approved or denied by the software
configuration manager. The most common types of requests are to add or edit various configuration items or
change user permissions.

This procedure includes:

 Controlling ad-hoc changes requested by the client
 Checking the merit of each change request by examining the overall impact it will have on the project
 Making approved changes, or explaining why change requests were denied.
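The submit/approve/deny flow of change control can be sketched as follows. The impact scores, the approval threshold, and the request IDs are all hypothetical; a real configuration manager would apply richer criteria than a single number.

```python
def review_change_requests(requests, max_impact=3):
    """Approve low-impact change requests; deny the rest with a recorded reason."""
    decisions = []
    for req in requests:
        if req["impact"] <= max_impact:
            decisions.append((req["id"], "approved", ""))
        else:
            decisions.append((req["id"], "denied",
                              f"impact {req['impact']} exceeds limit {max_impact}"))
    return decisions

requests = [
    {"id": "CR-101", "impact": 2},   # minor edit to a configuration item
    {"id": "CR-102", "impact": 5},   # sweeping ad-hoc change requested by the client
]
for cr_id, decision, reason in review_change_requests(requests):
    print(cr_id, decision, reason)
```

Recording the reason alongside each denial implements the last bullet: denied requesters get an explanation rather than silence.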

4. Configuration Status Accounting

The next step is to ensure the project is developing according to the plan by testing and verifying according to the
predetermined baselines. It involves looking at release notes and related documents to ensure the software meets
all functional requirements.

Configuration status accounting tracks each version released during the process, assessing what is new in each
version and why the changes were necessary. Some of the activities in this step include:

 Recording and evaluating changes made from one baseline to the next
 Monitoring the status and resolution of all change requests
 Maintaining documentation of each change made as a result of change requests and to reach another baseline
 Checking previous versions for analysis and testing.

5. Audits and Reviews

The final step is a technical review of every stage in the software development life cycle. Audits and reviews look
at the process, configurations, workflow, change requests, and everything that has gone into developing each
baseline throughout the project’s development.
The team performs multiple reviews of the application to verify its integrity and also put together essential
accompanying documentation such as release notes, user manuals, and installation guides.

Activities in this step include:

 Making sure that the goals laid out in the planning and identification step are met
 Ensuring that the software complies with identified configuration control standards
 Making sure changes from baselines match the reports
 Validating that the project is consistent and complete according to the goals of the project.

5.4 Effort estimation

Effort estimation is the process of forecasting how much effort is required to develop or maintain a software
application. This effort is traditionally measured in the hours worked by a person, or the money needed to pay for
this work.
Effort estimation is used to help draft project plans and budgets in the early stages of the software development life
cycle. This practice enables a project manager or product owner to accurately predict costs and allocate resources
accordingly.

Agile effort estimation techniques


There are many different Agile effort estimation techniques to choose from. Here are three of the most common
ones:
Planning Poker: In this method, team members sit together in a circle to assign values to story points. Each
individual will have a set of cards with the numerical values that can be assigned: 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100.
The product owner will read out a user story to the team members. They will have a discussion and then decide
which value it should have. If everyone is in agreement, the final estimate is decided. If not, the team will discuss
further until a consensus is reached.
T-shirt Sizes: Here, story points take the form of sizes: extra-small (XS), small (S), medium (M), large (L), and
extra-large (XL). Estimators will determine the sizes to get a quick and rough estimate that can be converted to
numbers later.
Dot voting: This approach enables team members to sort items in the product backlog from low to high priority.
User stories are posted on a board and estimators get four or five dots to use as votes. The one with the most dots is
deemed the highest-priority item, and so on.
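T-shirt size estimates are useful precisely because they can be "converted to numbers later", as noted above. A minimal sketch of that conversion follows; the size-to-points mapping is a common convention but is ultimately team-specific, and the sprint data is invented.

```python
# Assumed (team-specific) mapping from T-shirt size to story points
SIZE_TO_POINTS = {"XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8}

def total_story_points(estimates):
    """Sum the point values behind each T-shirt-size estimate."""
    return sum(SIZE_TO_POINTS[size] for size in estimates)

sprint_estimates = ["S", "M", "M", "L", "XS"]   # one size per user story
print(total_story_points(sprint_estimates))     # 2 + 3 + 3 + 5 + 1 = 14
```

The quick-and-rough sizing happens with the team; the arithmetic happens afterwards, once a mapping has been agreed.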

5.5 Project Schedule and Staffing

Once the effort is estimated, various schedules (or project duration) are possible, depending on the number of
resources (people) put on the project.

For example, for a project whose effort estimate is 56 person-months, a total schedule of 8 months is possible with 7
people. A schedule of 7 months with 8 people is also possible, as is a schedule of approximately 9 months with 6
people.

Manpower and months are not interchangeable in a software project. A schedule cannot be simply obtained from
the overall effort estimate by deciding on average staff size and then determining the total time requirement by
dividing the total effort by the average staff size. Brooks pointed out that people and months (time) are not
interchangeable. According to Brooks, "... man and months are interchangeable only for activities that require no
communication among men, like sowing wheat or reaping cotton. This is not even approximately true of
software...."

For instance, in the example here, a schedule of 1 month with 56 people is not possible even though the effort
matches the requirement. Similarly, no one would execute the project in 28 months with 2 people. In other words,
once the effort is fixed, there is some flexibility in setting the schedule by appropriately staffing the project, but this
flexibility is not unlimited.
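The person-month arithmetic above can be sketched directly; a hypothetical helper simply divides the effort estimate by each candidate team size. Per Brooks's observation, this division only enumerates candidate schedules and says nothing about whether a given staffing level is actually workable.

```python
def candidate_schedules(effort_person_months, team_sizes):
    """Map each candidate team size to the schedule (in months) it implies."""
    return {n: effort_person_months / n for n in team_sizes}

schedules = candidate_schedules(56, [6, 7, 8])
for staff, months in schedules.items():
    print(f"{staff} people -> {months:.1f} months")
# 6 people -> 9.3 months, 7 people -> 8.0 months, 8 people -> 7.0 months
```

Plugging in 56 people would yield a 1-month "schedule", which, as the text notes, is not a feasible project at all: communication overhead makes the simple quotient meaningless at the extremes.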

Empirical data also suggests that no simple equation between effort and schedule fits well. In a project, the
scheduling activity can be broken into two sub activities: determining the overall schedule (the project duration)
with major milestones, and developing the detailed schedule of the various tasks.

5.6 Quality planning: Quality Concepts

The quality of a software product is defined in terms of its fitness of purpose. That is, a quality product does precisely what
the users want it to do. For software products, fitness of use is generally explained in terms of satisfaction of the
requirements laid down in the SRS document. Although "fitness of purpose" is a satisfactory interpretation of
quality for many devices such as a car, a table fan, or a grinding machine, for software products, "fitness of
purpose" is not a wholly satisfactory definition of quality.

Example: Consider a functionally correct software product; that is, it performs all tasks as specified in the SRS
document, but has an almost unusable user interface. Even though it may be functionally correct, we cannot consider
it to be a quality product.

5.7 Quantitative quality management planning

A quantitative approach involves using statistical methods and procedures of data analysis to analyze, understand
and improve the quality of software products and services.
The quantitative approach to Quality Management typically involves the following steps:
1. Collect data: Data related to the process, product, or service to be analyzed is collected for the
analysis process.
2. Analyze data: Statistical methods are applied to the collected data to identify the trends and patterns
relevant to the future growth of the organization.
3. Identify cause-and-effect relationships: The data analyzed in the previous phase is used to identify the
cause-and-effect relationships behind the issues that affect the quality of the product, and to identify the
root cause of each problem so that its recurrence can be reduced.
4. Implement improvements: Based on the results of the data analysis and the identified cause-and-effect
relationships, changes or new features are implemented to raise the quality level.
5. Monitor and measure: To ensure that the improvements lead to providing the desired quality of the product
and satisfying the requirements, regular monitoring along with certain measures is needed.
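The "analyze data" step above can be illustrated with a simple statistical check on weekly defect counts. The 3-sigma upper-control-limit rule is a standard idea from statistical process control; the baseline period and the defect numbers are hypothetical.

```python
import statistics

def out_of_control(baseline_counts, new_counts):
    """Flag new data points lying above the baseline mean + 3 standard deviations."""
    mean = statistics.mean(baseline_counts)
    sigma = statistics.pstdev(baseline_counts)
    upper_limit = mean + 3 * sigma
    return [c for c in new_counts if c > upper_limit]

baseline = [4, 5, 3, 6, 4, 5]        # weekly defect counts during a stable period
recent = [5, 30]                     # latest two weeks; 30 looks anomalous
print(out_of_control(baseline, recent))   # [30]
```

A flagged point is exactly the trigger for steps 3 and 4: investigate the cause-and-effect chain behind the spike, then implement an improvement.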

5.8 CMM (Capability maturity model ) project management process

CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in 1987.
 It is not a software process model. It is a framework that is used to analyze the approach and techniques
followed by any organization to develop software products.
 It also provides guidelines to further enhance the maturity of the process used to develop those software
products.
 It is based on feedback from, and development practices adopted by, the most successful organizations
worldwide.
 This model describes a strategy for software process improvement that should be followed by moving
through 5 different levels.
 Each level of maturity shows a process capability level. All the levels except level-1 are further described by
Key Process Areas (KPA’s).
The 5 levels of CMM are as follows:
Level-1: Initial –
 No KPA’s defined.
 Processes followed are ad hoc and immature and are not well defined.
 Unstable environment for software development.
 No basis for predicting product quality, time for completion, etc.
 Limited project management capabilities, such as no systematic tracking of schedules, budgets, or progress.
 Limited communication and coordination among team members and stakeholders.
 No formal training or orientation for new team members.
 Little or no use of software development tools or automation.
 Highly dependent on individual skills and knowledge rather than standardized processes.
 High risk of project failure or delays due to lack of process control and stability.

Level-2: Repeatable –
 Focuses on establishing basic project management policies.
 Experience with earlier projects is used for managing new similar natured projects.
 Project Planning- It includes defining resources required, goals, constraints, etc. for the project. It presents a
detailed plan to be followed systematically for the successful completion of good quality software.
 Configuration Management- The focus is on maintaining the performance of the software product,
including all its components, for the entire lifecycle.
 Requirements Management- It includes the management of customer reviews and feedback which result in
some changes in the requirement set. It also consists of accommodation of those modified requirements.
 Subcontract Management- It focuses on the effective management of qualified software contractors i.e. it
manages the parts of the software which are developed by third parties.
 Software Quality Assurance- It guarantees a good quality software product by following certain rules and
quality standard guidelines while developing.
Level-3: Defined –
 At this level, documentation of the standard guidelines and procedures takes place.
 It is a well-defined integrated set of project-specific software engineering and management processes.
 Peer Reviews- In this method, defects are removed by using a number of review methods like walkthroughs,
inspections, buddy checks, etc.
 Intergroup Coordination- It consists of planned interactions between different development teams to ensure
efficient and proper fulfillment of customer needs.
 Organization Process Definition- Its key focus is on the development and maintenance of the standard
development processes.
 Organization Process Focus- It includes activities and practices that should be followed to improve the
process capabilities of an organization.
 Training Programs- It focuses on the enhancement of knowledge and skills of the team members including
the developers and ensuring an increase in work efficiency.

Level-4: Managed –
 At this stage, quantitative quality goals are set for the organization for software products as well as software
processes.
 The measurements made help the organization to predict the product and process quality within some limits
defined quantitatively.
 Software Quality Management- It includes the establishment of plans and strategies to develop quantitative
analysis and understanding of the product’s quality.
 Quantitative Management- It focuses on controlling the project performance in a quantitative manner.

Level-5: Optimizing –
 This is the highest level of process maturity in CMM and focuses on continuous process improvement in the
organization using quantitative feedback.
 Use of new tools, techniques, and evaluation of software processes is done to prevent recurrence of known
defects.
 Process Change Management- Its focus is on the continuous improvement of the organization’s software
processes to improve productivity, quality, and cycle time for the software product.
 Technology Change Management- It consists of the identification and use of new technologies to improve
product quality and decrease product development time.
 Defect Prevention- It focuses on the identification of causes of defects and prevents them from recurring in
future projects by improving project-defined processes.

5.9 Risk Management Planning

A risk is a probable problem: it might happen or it might not. A risk has two main characteristics:
Uncertainty: the risk may or may not happen, which means there are no 100% certain risks.
Loss: if the risk does occur, undesirable results or losses will follow.
Risk management is a sequence of steps that help a software team to understand, analyze, and manage
uncertainty. Risk management consists of:
 Risk Identification
 Risk analysis
 Risk Planning
 Risk Monitoring
A software project can be affected by a large variety of risks. In order to be able to systematically
identify the important risks which might affect a software project, it is necessary to categorize
risks into different classes. The project manager can then examine which risks from each
class are relevant to the project.
There are three main categories of risks that can affect a software project:
1. Project Risks:
Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related
issues. An important project risk is schedule slippage. Since software is intangible, it is very difficult to
monitor and control a software project; it is very difficult to control something that cannot be seen.
In a manufacturing project, such as manufacturing cars, the project manager can see the product taking shape.

For example, one can see that the engine has been fitted, that the doors have been fitted, that the car is
being painted, and so on. The manager can therefore easily assess the progress of the work and control it. The invisibility
of the product being developed is an important reason why many software projects suffer from the
risk of schedule slippage.
2. Technical Risks:
Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems.
Technical risks also include ambiguous specifications, incomplete specifications, changing
specifications, technical uncertainty, and technical obsolescence. Most technical risks occur due to the
development team's insufficient knowledge about the project.

3. Business Risks:
This category includes the risks of building an excellent product that nobody wants, losing budgetary or
personnel commitments, etc.

Classification of Risk in a project:


Example: Let us consider a satellite based mobile communication project. The project manager can identify
several risks in this project. Let us classify them appropriately.
 What if the project cost escalates and overshoots what was estimated? – Project Risk
 What if the mobile phones that are developed become too bulky in size to conveniently carry? Business Risk
 What if call hand-off between satellites becomes too difficult to implement? Technical Risk
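Once classified, risks like those above can also be prioritized quantitatively. A common measure is risk exposure, the product of the risk's probability and its loss; the probabilities and loss figures below are hypothetical, chosen only to show the mechanics.

```python
def prioritize(risks):
    """Sort risks by exposure (probability * loss), highest exposure first."""
    return sorted(risks, key=lambda r: r["prob"] * r["loss"], reverse=True)

# Hypothetical figures for the satellite-phone project's three risks
risks = [
    {"name": "cost escalation (project risk)",  "prob": 0.4, "loss": 60},
    {"name": "bulky handset (business risk)",   "prob": 0.2, "loss": 90},
    {"name": "hand-off too hard (technical)",   "prob": 0.1, "loss": 200},
]
for r in prioritize(risks):
    print(r["name"], r["prob"] * r["loss"])
```

Even though the technical risk carries the largest potential loss, its low probability pushes it below the project risk once exposure is computed, which is why both factors matter.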

5.10 Project Monitoring Plan

A project monitoring plan is a document that outlines the strategies and methods for tracking and evaluating project
progress, performance, and outcomes. It is an essential tool in project management that helps project managers and
stakeholders monitor the project's status, identify potential problems, and take corrective actions to ensure the
project stays on track.

A project monitoring plan typically includes the following components:

Objectives: The plan should clearly state the project's objectives and goals, which will serve as the basis for
monitoring and evaluating project progress and outcomes.

Performance Measures: The plan should identify specific performance measures and indicators that will be used to
track and evaluate project progress and outcomes. These measures should be quantifiable, measurable, and directly
related to the project's objectives.

Data Collection and Analysis Methods: The plan should outline the methods for collecting and analyzing project
data, including the types of data to be collected, the sources of data, and the frequency of data collection.

Reporting: The plan should specify the frequency and format of project status reports, as well as the stakeholders
who will receive the reports. The reports should include relevant project data, analysis, and recommendations for
corrective actions, if necessary.

Roles and Responsibilities: The plan should clearly define the roles and responsibilities of project team members
and stakeholders in the monitoring and evaluation process.
Risk Management: The plan should identify potential risks to the project's success and outline strategies for
mitigating those risks.

Overall, a project monitoring plan is a crucial component of project planning and management, as it helps ensure
that the project stays on track and meets its objectives. By monitoring and evaluating project progress and outcomes,
project managers and stakeholders can identify potential problems early and take corrective actions to keep the
project on track.

5.11 Detailed Scheduling

A project schedule is simply a mechanism used to communicate and understand which tasks need to be
performed, which organizational resources will be allocated to those tasks, and
in what time duration or time frame the work needs to be performed. Effective project scheduling leads to success
of the project, reduced cost, and increased customer satisfaction. Scheduling in project management means listing
the activities, deliverables, and milestones within a project and when they are delivered. It contains more detail
than an average weekly planner. The most common and important form of project schedule is the Gantt chart.

Process:
While scheduling the project, the manager needs to estimate its time and resources. All activities in the project
must be arranged in a coherent sequence, that is, in a logical and well-organized manner that is easy to follow.
Initial estimates are often made optimistically, assuming that everything favorable will happen and no threats or
problems will arise.
During project scheduling, the total work is divided into various small activities or tasks. The project manager
then decides the time required to complete each activity or task. Some activities are even performed in parallel
for efficiency. The project manager should be aware that no stage of the project is problem-free.
Advantages of Project Scheduling:
A project schedule provides several advantages in project management:
 It ensures that everyone remains on the same page regarding completed tasks, dependencies, and deadlines.
 It helps identify issues and concerns early, such as a lack or unavailability of resources.
 It also helps identify relationships between tasks and monitor progress.
 It enables effective budget management and risk mitigation.
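The scheduling process described above (sequencing activities, estimating durations, and running some tasks in parallel) can be sketched as a small calculation of earliest start and finish times, which is the arithmetic behind a simple Gantt chart. The task names and durations below are invented for illustration.

```python
# Sketch: compute earliest start/finish times for tasks from their
# durations and prerequisites, the arithmetic behind a simple Gantt chart.

tasks = {              # name: (duration_in_days, [prerequisites])
    "design": (5, []),
    "code":   (10, ["design"]),
    "test":   (4, ["code"]),
    "docs":   (3, ["design"]),      # can run in parallel with "code"
}

def schedule(tasks):
    finish = {}
    def earliest_finish(name):
        if name not in finish:
            duration, deps = tasks[name]
            # A task starts once all its prerequisites have finished.
            start = max((earliest_finish(d) for d in deps), default=0)
            finish[name] = start + duration
        return finish[name]
    return {name: (earliest_finish(name) - tasks[name][0], earliest_finish(name))
            for name in tasks}

print(schedule(tasks))
# {'design': (0, 5), 'code': (5, 15), 'test': (15, 19), 'docs': (5, 8)}
```

Note how "docs" starts on day 5 alongside "code": arranging activities in a coherent sequence still leaves room for parallel work, exactly as the process above describes.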

Unit 6- Agile Project Management


6.1 Introduction to APM:
Agile project management (APM) is an iterative approach to planning and guiding project processes. It breaks
project processes down into smaller cycles called sprints, or iterations.
Agile project management enables project teams in software development to work quickly and collaboratively on a
project while being able to adapt to changing requirements in development. It also enables development teams to
react to feedback quickly, so they can make changes at each sprint and product cycle.
Just as in Agile software development, an Agile project is completed in small sections. In Agile software
development, for instance, an iteration refers to a single development cycle. Each section or iteration is reviewed
and critiqued by the project team, which should include representatives of the project's various stakeholders. Insights
gained from the critique of an iteration are used to determine what the next step should be in the project.
Agile project management focuses on working in small batches, visualizing processes and collaborating with end
users to gain feedback. Continuous releases are also a focus, as these normally incorporate given feedback within
each iteration.
What is Agile project methodology?
The Agile project methodology breaks projects into small pieces. These project pieces are completed in sprints that
generally run anywhere from a few days to a few weeks. These sessions run from the initial design phase to testing
and quality assurance (QA).
The Agile methodology enables teams to release segments as they're completed. This continuous release schedule
enables teams to demonstrate that these segments are successful and, if not, to fix flaws quickly. The belief is that
this helps reduce the chance of large-scale failures because there's continuous improvement throughout the project
lifecycle.

The 12 principles of Agile project management are as follows:


1. Early and continuous delivery of software is the highest priority to achieve customer satisfaction.
2. Teams must be able to change requirements at any point in the development process, even in late stages.
3. Prioritize continuous creation and deployment of working software in short succession.
4. Developers must work together with end users and project stakeholders throughout the project.
5. Build projects around motivated individuals, giving them the environment and support they need.
6. Convey information in development teams through face-to-face conversations, if possible.
7. Working software is the primary measure of progress.
8. Developers must maintain a constant pace to continue a sustainable development process.
9. Continuous attention should be given to the quality of software to ensure good design.
10. Maximize the work done by focusing on simplicity in design.
11. Teams must be self-organizing to produce the best software.
12. Teams need to reflect on how to become more effective at regular intervals.
The 5 phases of APM

There are five main phases involved in the APM process:


1. Envision. The project and overall product are first conceptualized in this phase, and the needs of the end
customers are identified. This phase also determines who is going to work on the project and its stakeholders.
2. Speculate. This phase involves creating the initial requirements for the product. Teams will work together to
brainstorm a features list of the final product, then identify milestones involving the project timeline.
3. Explore. The project is worked on with a focus on staying within project constraints, but teams will also
explore alternatives to fulfill project requirements. Teams work on single milestones and iterate before moving
on to the next.
4. Adapt. Delivered results are reviewed and teams adapt as needed. This phase focuses on changes or corrections
that occur based on customer and staff perspectives. Feedback should be constantly given so each part of the
project meets end-user requirements. The project should improve with each iteration.
5. Close. The project is concluded and the final deliverable is measured against the updated requirements.
Mistakes or issues encountered within the process should be reviewed to avoid similar issues in the
future.

6.2 Implementation
Step 1: Set your project vision and scope with a planning meeting
What is it?
At the beginning of a new Agile project, you need to define a clear business need that your project is addressing. In
simpler terms: what is the end goal of this Agile project, and how will you achieve it?
An Agile strategy meeting covers big picture ideas but it also needs to be realistic. You can start to think about
the scope of work, but remember that Agile projects need to be flexible and adapt to feedback.

To keep your planning meeting focused, try using the Elevator Pitch method:
 For: (Our Target Customer)
 Who: (Statement of the Need)
 The: (Product Name) is a (Product Category)
 That: (Key Product Benefit, Compelling Reason to Buy and/or Use)
 Unlike: (Primary Competitive Alternative)
 Our Product: (Final Statement of Primary Differentiation)
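The Elevator Pitch template above can be treated as a simple fill-in-the-blanks format string, as in the sketch below. Every field value here is invented purely to show the structure.

```python
# The Elevator Pitch template as a plain format string.
# All field values are made up for illustration.

PITCH = (
    "For {target_customer} who {need}, the {product_name} is a "
    "{product_category} that {key_benefit}. Unlike {alternative}, "
    "our product {differentiation}."
)

pitch = PITCH.format(
    target_customer="small remote teams",
    need="struggle to track sprint work",
    product_name="SprintBoard",
    product_category="lightweight planning tool",
    key_benefit="keeps every task visible in one place",
    alternative="general-purpose spreadsheets",
    differentiation="is built around Agile ceremonies",
)
print(pitch)
```

Forcing the pitch into one sentence per slot keeps the planning meeting focused on the big picture without letting scope discussions sprawl.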

Who should be there?
The Agile planning meeting is where you get buy-in on your project. Try to include relevant stakeholders as well as
the product owner and key members of the product team.

When does it happen?


A planning meeting should happen before the project starts. Or, if you’re working on a project continuously, plan a
major strategy meeting annually to ensure your mission is still valid.

How long should it take?


The length of time an Agile planning meeting should take is totally subjective. However, depending on the
complexity of the project, most can take anywhere from 4–16 hours (just not in a row!)

Step 2: Build out your product roadmap


What is it?
With your strategy in place, it’s time for the product owner to translate that vision into a product roadmap. This is a
high-level view of the requirements, updated user stories, and a loose timeframe of how it will all get completed.
The ‘loose’ part here is important. You’re not spending days or weeks planning out every step, but simply identifying,
prioritizing, and roughly estimating the time and effort each piece of your product will take on the way to making a
usable product.
So, what does this look like for an Agile project?
Product Management expert Roman Pichler suggests working with a goal-oriented product roadmap, which is
sometimes also referred to as theme-based:

“Goal-oriented roadmaps focus on goals, objectives, and outcomes like acquiring customers, increasing engagement,
and removing technical debt. Features still exist, but they are derived from the goals and should be used sparingly.
Use no more than three to five features per goal, as a rule of thumb.”
For each of these goals, you want to include 5 key pieces of information: Date, Name, Goal, Features, and Metrics.
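Those five pieces of information map naturally onto a small record type. The sketch below is a minimal illustration; all field contents are invented.

```python
# A goal-oriented roadmap entry holding the five key pieces of
# information: Date, Name, Goal, Features, and Metrics.
from dataclasses import dataclass, field

@dataclass
class RoadmapGoal:
    date: str                                      # target timeframe, kept loose on purpose
    name: str                                      # short goal name
    goal: str                                      # the outcome being pursued
    features: list = field(default_factory=list)   # keep to 3-5 per goal
    metrics: list = field(default_factory=list)    # how success is measured

q3 = RoadmapGoal(
    date="Q3",
    name="Activation",
    goal="Increase new-user engagement",
    features=["guided onboarding", "sample project", "email nudges"],
    metrics=["week-1 retention", "projects created per signup"],
)
assert len(q3.features) <= 5   # Pichler's three-to-five rule of thumb
```

Keeping features subordinate to a named goal makes the roadmap read as outcomes rather than a feature wish-list, which is the point of the goal-oriented style.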

Who should be there?


The product roadmap is the responsibility of the product owner but should include input from any other stakeholders
in the project—think marketing, sales, support, and development team reps.

When does it happen?


The product roadmap needs to be in place before you start planning out sprints, so it’s best to move into creating it
directly after your strategy meeting.

How long should it take?


Like all things in Agile project management, you want to move quickly rather than dwell on early-stage planning.
However, your roadmap is a literal map from your mission to your MVP, and it should take as long as needed for you
to feel confident that you have covered all the applicable goals.
Agile planning is a skill on its own.

Step 3: Create a release plan


What is it?
Agile project management is built around short-term sprints with the goal of regularly and consistently releasing
usable software.
After you have a high-level roadmap in place, the product owner then creates a high-level timetable for each release.
Because Agile projects will have multiple releases, you’ll want to prioritize the features needed to get you to launch
first. For example, if your project kicked off in November, you might set your MVP launch for early February, with a
high-priority feature release the following May.
This all depends on the complexity of your project and the lengths of your “sprints”—the periods of work dedicated
to each goal (which we’ll get into next!). A typical release includes 3–5 of these sprints.
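The rough timetable described above can be sketched with simple date arithmetic, assuming a fixed sprint length and a fixed number of sprints per release. All dates and counts below are invented to match the November-kickoff example.

```python
# Rough release timetable: given a kickoff date, a sprint length, and
# the number of sprints per release, compute each release date.
from datetime import date, timedelta

def release_dates(start: date, sprint_weeks: int, sprints_per_release: int,
                  releases: int) -> list:
    step = timedelta(weeks=sprint_weeks * sprints_per_release)
    return [start + step * (i + 1) for i in range(releases)]

dates = release_dates(date(2023, 11, 1), sprint_weeks=2,
                      sprints_per_release=5, releases=2)
# First release ~10 weeks after kickoff, the next ~10 weeks later.
```

This is only a high-level timetable: actual release dates shift as sprints deliver feedback, which is exactly why the plan is reviewed at least every quarter.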

Who should be there?


A release plan is like rallying the troops. The product owner, project managers, Scrum Master, and all team members
should be present.

When does it happen?


At a minimum, your release plans should be created on the first day of any new release and reviewed at least every
quarter.

How long should it take?


Be realistic about how long a release will take, but don’t let that slow you down. A typical release planning session
should take around 4–8 hours.

Step 4: Sprint planning


What is it?
It’s time to move from the macro to the micro view. Together with the product owner, the development team plans
“sprints”—short cycles of development in which specific tasks and goals will be carried out.
At the beginning of a sprint cycle, you and your team will create a list of backlog items you think you can complete in
that timeframe that will allow you to create functional software. Then, it’s as simple as using one of the Agile
methodologies to work through them (which we’ll cover more in-depth below).
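Deciding which backlog items fit in the timeframe is commonly done against the team's velocity. The sketch below takes the highest-priority stories whose point estimates fit the available capacity; story names and estimates are invented for illustration.

```python
# Sketch of sprint planning by velocity: walk the prioritized backlog
# and take each story whose point estimate still fits the capacity.

backlog = [                 # already ordered by priority
    ("user login", 5),
    ("password reset", 3),
    ("profile page", 8),
    ("avatar upload", 2),
]

def plan_sprint(backlog, velocity):
    selected, remaining = [], velocity
    for story, points in backlog:
        if points <= remaining:
            selected.append(story)
            remaining -= points
    return selected

print(plan_sprint(backlog, velocity=10))
# ['user login', 'password reset', 'avatar upload']
```

Notice that the 8-point "profile page" is skipped even though it outranks "avatar upload": capacity, not just priority, decides what enters the sprint, and the team has the final say on the result.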

Who should be there?


Sprint planning in Agile is done by the product team but should include input and guidance from the product owner,
project managers, and scrum master. However, ultimately it’s up to the team to decide what should (and can) get done
in a sprint.

When does it happen?


Sprint planning takes place at the start of each sprint cycle. For example, if you’re doing weekly sprints, you’ll do a
planning session every Monday (or whatever day of the week you choose to start on).

How long should it take?


Sprint planning sets the tone for the cycle. So while you don't want to spend too much time at this stage, it could
realistically take you 2–4 hours. But once you've planned your sprint, you're off to the races.

Step 5: Keep your team on track with daily standups


Agile projects move quickly. And so it’s necessary that you have regular moments to check in and make sure there
aren’t roadblocks getting in the way. These are called “stand-ups” in Agile-speak.

A standup is a daily, 10–15-minute meeting where your team comes together to discuss three things:
 What did you complete yesterday?
 What are you working on today?
 Are there any roadblocks getting in the way?

While this might seem like an annoyance to some of your team, these meetings are essential for fostering the kind of
communication that drives Agile project management. Agile depends on reacting quickly to issues, and voicing them
in a public space is a powerful way to foster cross-team collaboration.

Step 6: Sprint reviews


What is it?
Each sprint cycle ends with a functioning piece of software getting shipped. And while this is a huge milestone to
celebrate, it’s also an opportunity to review what was done and show this off to people on your team and any key
stakeholders. Think of it as Agile show-and-tell.
Sprint reviews should cover the more practical aspects of the sprint. Check your initial plan and make sure all
requirements were met according to your definition of done.
As the product owner, it's your choice to accept or refuse certain functionalities. If something went wrong, ask why.
How can you adjust the next sprint so your team can hit their targets? Agile is all about continuous learning and
iteration, and this applies to your processes as well as your product.

Who should be there?


The sprint review involves everyone who worked on and is impacted by the release. This means you should include
your entire team as well as any key stakeholders.

When does it happen?


The sprint review takes place at the end of each sprint.

How long should it take?


Just say no to PowerPoint decks and feature dissertations. The sprint review should take an hour or two at most.

Step 7: Decide what to focus on next in your sprint retrospective


What is it?
One of the core principles of Agile project management is that it’s sustainable. This means you should be ready to
start on the next sprint as soon as the previous one ends.
To make sure you’re actually learning from each release (and not just moving forward blindly), you need to dig in
with a sprint retrospective.
After you show off the release, a retrospective is a moment to think back on the process of the previous sprint.
Did everything go as planned? Was the workload manageable? Where could you improve your process or planning?
Did you learn something during the sprint that changes your initial timeline or vision for the project?
Don’t simply plan, but also take this time to discuss how the previous sprint went and how you could improve in your
next one.

Who should be there?


The retrospective is a natural extension of the review, and so while your stakeholders can leave, the rest of the team
should be involved and giving their insights.

When does it happen?


It makes the most sense for your sprint retrospective to happen right after your sprint review.

How long should it take?


Again, keep it short and sweet. An hour or two at most is probably all you'll need to debrief and plan for the next sprint.

6.3 Iterative Project Management Life Cycle


This approach evolved to counter the constraints of the traditional approach, especially where there was a need to
adapt to changes. In this approach, the processes in each phase are iterated until the phase exit criteria are met.
Strictly speaking, this approach was just a first step in the evolution of project management life cycles.
The Iterative Cycle is essentially a series of mini-waterfalls. It can be adopted for less complex and small
projects where the deliverables are fairly well understood. It provides relatively better insulation against
changes than the traditional approach.
The Agile iterative project management life cycle is a specific implementation of the iterative approach, commonly
used in Agile project management methodologies such as Scrum or Kanban. It consists of the following stages:
1. Conceptualization: In this initial stage, the project's vision and high-level objectives are defined. The
project team identifies the problem to be solved, potential solutions, and the desired outcomes. A high-level
product backlog is created, capturing the key features and requirements.
2. Planning and Backlog Refinement: The product backlog is refined and prioritized, breaking down the
features into manageable units called user stories. The project team estimates the effort required for each
user story, usually using story points. Sprint goals are set, and a sprint backlog is created, containing the
user stories selected for the upcoming iteration or sprint.
3. Sprint Execution: A sprint typically lasts for 1-4 weeks. During this phase, the project team works on
developing the selected user stories from the sprint backlog. Daily stand-up meetings are held to provide
updates on progress, discuss challenges, and plan the day's work. The team collaborates closely,
implementing, testing, and integrating the features incrementally.
4. Daily Collaboration and Monitoring: Throughout the sprint, the project team engages in continuous
collaboration and communication. They regularly review the progress, discuss any obstacles or issues, and
make necessary adjustments. The project manager or Scrum Master monitors the team's progress, removes
any impediments, and ensures adherence to Agile principles and practices.
5. Sprint Review: At the end of each sprint, a sprint review meeting is conducted to showcase the completed
user stories to stakeholders, including the product owner and any other relevant individuals or teams.
Feedback is gathered, and the product backlog is updated based on the insights gained. The sprint review
helps validate the direction and progress of the project.
6. Sprint Retrospective: Following the sprint review, a retrospective meeting is held to reflect on the sprint
execution. The project team discusses what went well, what could be improved, and any potential
adjustments to the team's processes or practices. Action items are identified to enhance productivity,
teamwork, and overall project performance.
7. Iteration and Continuous Improvement: The project then proceeds to the next sprint, repeating the cycle.
Each sprint focuses on delivering value incrementally, with a new set of user stories selected from the
refined product backlog. The process continues iteratively, with frequent feedback, collaboration, and
adaptation, allowing the project to evolve and respond to changing needs.
The Agile iterative project management life cycle emphasizes delivering working software or products in short
iterations, promoting customer collaboration, and embracing change. It enables flexibility, transparency, and
continuous improvement throughout the project, ensuring that the final outcome aligns with the stakeholders' needs
and expectations.
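The daily monitoring described in the sprint stages above is commonly visualized with a burndown chart, which compares remaining story points against an ideal straight line. A minimal sketch with invented numbers:

```python
# Sprint burndown sketch: remaining story points per day, compared with
# the ideal straight line from the sprint's total down to zero.

def ideal_burndown(total_points: float, days: int) -> list:
    """Straight line from total_points down to zero over the sprint."""
    return [total_points * (1 - d / days) for d in range(days + 1)]

actual = [30, 28, 27, 22, 20, 15, 12, 8, 5, 2, 0]   # measured each day
ideal = ideal_burndown(30, 10)

# Days on which the team was above the ideal line (i.e. behind plan).
behind = [day for day, (a, i) in enumerate(zip(actual, ideal)) if a > i]
```

Scanning the `behind` list during daily stand-ups gives the Scrum Master an early, quantitative signal of impediments, which fits the continuous monitoring described in stage 4.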
6.4 Adaptive Project Management Life Cycle

The Agile Adaptive Project Management Life Cycle is a project management approach that combines Agile
principles with adaptive and iterative practices. It acknowledges the inherent uncertainty and complexity of
projects and emphasizes the ability to adapt and respond to changing circumstances. The life cycle consists of the
following stages:
1. Discovery: The project starts with a discovery phase to gain a deep understanding of the project's
objectives, context, and stakeholders' needs. This involves conducting research, gathering requirements,
and identifying key project constraints and risks.
2. Planning and Roadmap: A high-level plan and roadmap are created, outlining the project's goals,
deliverables, and major milestones. However, the plan remains flexible and adaptable, allowing for
adjustments as new information emerges or priorities change.
3. Iterative Development: The project progresses through a series of iterations or sprints, each focused on
delivering a working increment of the project. During each iteration, the project team collaboratively
plans, executes, and reviews the work. Feedback from stakeholders is sought regularly to ensure
alignment with their evolving expectations.
4. Continuous Adaptation: The Agile Adaptive approach emphasizes ongoing adaptation and learning. As
the project progresses, the team incorporates new information, feedback, and changing requirements into
the project's direction and plans. This iterative adaptation helps optimize the project's outcomes and
enables the team to adjust course when needed.
5. Stakeholder Engagement: Active stakeholder engagement is crucial throughout the project.
Stakeholders are involved in providing feedback, participating in iterative reviews, and collaborating on
decision-making. Their input helps shape the project's direction and ensures that it remains aligned with
their needs and priorities.
6. Monitoring and Control: The project's progress, budget, and quality are monitored and controlled
throughout the life cycle. Key performance indicators and metrics are tracked to measure progress,
identify risks, and make informed decisions. Regular retrospectives are conducted to reflect on the team's
performance and identify areas for improvement.
7. Closure and Transition: The project is closed when the desired outcomes are achieved or when it no
longer aligns with the organization's goals. Lessons learned are captured, and knowledge transfer
activities take place to ensure a smooth transition to ongoing operations or subsequent projects.
The Agile Adaptive Project Management Life Cycle encourages flexibility, collaboration, and continuous
improvement. It allows project teams to embrace uncertainty, adapt to changes, and deliver value incrementally
while maintaining a focus on customer satisfaction.

6.5 Adaptive & Integrating the APM toolkit

Adaptive Project Management (APM) is an approach that focuses on embracing change and uncertainty
throughout the project lifecycle. It recognizes that traditional project management methods may not be suitable
for projects that are highly complex, dynamic, or subject to frequent changes in requirements. APM emphasizes
flexibility, collaboration, and continuous adaptation to deliver successful outcomes.
Integrating the APM toolkit involves incorporating various techniques, tools, and principles that support the
adaptive project management approach. Here are some key elements of the APM toolkit:
1. Iterative and Incremental Development: APM emphasizes breaking the project down into smaller
iterations or increments. Each iteration focuses on delivering a working product or solution, allowing for
feedback and adjustments throughout the project.
2. Agile Frameworks: APM often leverages Agile methodologies such as Scrum, Kanban, or Lean to
promote adaptive practices. These frameworks provide principles and practices for managing change,
facilitating collaboration, and delivering value iteratively.
3. Dynamic Planning: APM recognizes that project plans need to be adaptable and dynamic. Rather than
creating a detailed plan upfront, the APM toolkit encourages continuous planning and re-planning as the
project progresses. This involves regularly assessing the project's status, reassessing priorities, and
adjusting plans accordingly.
4. Stakeholder Engagement: APM places significant emphasis on active stakeholder engagement. Regular
communication, collaboration, and feedback from stakeholders are crucial to ensure alignment, manage
expectations, and incorporate changing requirements.
5. Continuous Learning and Improvement: APM encourages a culture of learning and improvement. It
involves conducting regular retrospectives, where the project team reflects on their performance,
identifies areas for improvement, and implements changes to enhance project outcomes.
6. Adaptive Decision-Making: APM promotes decentralized decision-making, empowering project teams
to make decisions at the appropriate level. This allows for quicker responses to changes, encourages
innovation, and increases ownership and accountability among team members.
7. Risk Management: APM recognizes that uncertainty and risk are inherent in complex projects. The
APM toolkit includes techniques for proactive risk identification, assessment, and mitigation. It
encourages a flexible and adaptive approach to risk management, enabling timely adjustments to
minimize potential negative impacts.
By integrating these elements, the APM toolkit enables project teams to navigate complexity, respond to changes,
and deliver value in dynamic environments. It emphasizes collaboration, adaptability, and continuous
improvement to increase the likelihood of project success.

6.7 The Science of Scrum

"The Science of Scrum" is a book by Jeff Sutherland that explores how Scrum, an Agile project management
framework, is rooted in scientific principles. It highlights the importance of feedback, transparency, and adaptation
in driving project success. The book offers practical insights and guidance on leveraging the scientific foundations
of Scrum to improve collaboration, deliver value, and navigate complexity effectively.

"The Science of Scrum" by Jeff Sutherland provides a scientific perspective on the Scrum framework within the
context of Agile project management. The book explores how empirical principles and practices of Scrum align
with the Agile values and principles. Here's an explanation of "The Science of Scrum" from an Agile project
management point of view:
1. Empirical Process Control: Agile project management embraces an empirical process control
approach, which involves making decisions based on observation, experimentation, and feedback. "The
Science of Scrum" reinforces this approach by highlighting Scrum's empirical nature. It emphasizes
transparency, inspection, and adaptation as the foundation of effective project management. Agile teams
use frequent inspections and feedback loops to gain visibility into their work and make informed
adjustments to deliver higher value.
2. Empirical Foundation: The book delves into the empirical foundations of Scrum, exploring its
alignment with other scientific disciplines. Agile project management values evidence-based decision-
making and continuous learning. "The Science of Scrum" draws connections to fields such as complex
adaptive systems, game theory, and cognitive neuroscience to highlight the empirical nature of Scrum
and reinforce the importance of data-driven insights in Agile project management.
3. Agile Values and Principles: Agile project management is guided by the Agile Manifesto and its
underlying values and principles. "The Science of Scrum" aligns with these Agile values, such as
customer collaboration, responding to change, and working iteratively. It emphasizes the Scrum values
of commitment, courage, focus, openness, and respect, which are integral to fostering an Agile mindset
and creating a collaborative and adaptive work environment.
4. Iterative and Incremental Delivery: Scrum, as a key Agile framework, promotes iterative and
incremental delivery of work. "The Science of Scrum" explains how the Scrum practices, including
sprint planning, daily stand-ups, sprint reviews, and retrospectives, enable Agile teams to break down
complex projects into smaller, manageable increments. This iterative approach allows for regular
feedback and adaptation, aligning with Agile project management's emphasis on delivering value
incrementally and embracing change.
5. Agile Project Scaling: "The Science of Scrum" addresses the scaling aspects of Scrum, acknowledging
that Agile project management is not limited to small teams or projects. It introduces the concept of
Scrum of Scrums, which is a scaling mechanism for coordinating and aligning multiple Scrum teams
working on larger projects. This perspective highlights the adaptability of Scrum and its applicability in
diverse project contexts within Agile project management.
"The Science of Scrum" provides Agile project managers and practitioners with a deeper understanding of the
scientific principles underlying the Scrum framework. By integrating empirical practices, iterative delivery, Agile
values, and scaling considerations, the book helps Agile project managers enhance their ability to manage
complex projects and navigate uncertainty effectively.
It's important to note that the Agile project management perspective presented here is a general interpretation of
the book. For a comprehensive understanding of the scientific and Agile principles discussed, it is recommended
to read "The Science of Scrum" by Jeff Sutherland.

6.8 New Management Responsibilities

In Agile project management, there is a shift in management responsibilities compared to traditional project
management approaches. Here's an explanation of the new management responsibilities in Agile project
management:
1. Facilitating Self-Organization: Agile project managers have the responsibility to foster self-
organization within the project team. Instead of micromanaging and assigning tasks, they create an
environment where team members are empowered to make decisions, collaborate, and take ownership of
their work. The focus is on enabling the team to be self-directed and self-managing.
2. Removing Obstacles: Agile project managers play a crucial role in removing obstacles or barriers that
hinder the progress of the team. They identify and address any issues or challenges that impede the
team's productivity and help them overcome those barriers. This could involve coordinating with
stakeholders, resolving conflicts, or providing necessary resources and support.
3. Ensuring Alignment: Agile project managers have the responsibility to ensure that the team's work
aligns with the project goals and customer needs. They facilitate clear communication and collaboration
among team members, stakeholders, and customers. By promoting regular feedback and transparency,
they help the team stay focused on delivering value and adapting to changing requirements.
4. Promoting Continuous Improvement: Agile project managers encourage a culture of continuous
improvement within the project team. They facilitate retrospectives, where the team reflects on their
processes, identifies areas for improvement, and implements changes to enhance their performance. They
support the team in adopting Agile practices, experimenting with new ideas, and finding ways to
optimize their work.
5. Agile Leadership and Coaching: Agile project managers take on a coaching and leadership role rather
than a command-and-control approach. They provide guidance, mentorship, and support to the team
members. They encourage skill development, foster a learning mindset, and help individuals and the
team grow and evolve.
6. Monitoring Progress and Metrics: Agile project managers monitor project progress and key metrics to
ensure visibility into the team's performance. They use Agile project management tools and techniques
to track work, assess progress, and make data-driven decisions. They also communicate project status
and metrics to stakeholders, facilitating transparency and enabling informed decision-making.
7. Adaptability and Flexibility: Agile project managers embrace adaptability and flexibility as they
navigate uncertainty and changing requirements. They understand that Agile projects require a dynamic
and iterative approach, and they adjust plans and priorities as needed. They are open to feedback and
encourage the team to be responsive to changing circumstances.
These new management responsibilities in Agile project management focus on empowering the team, promoting
collaboration, embracing change, and delivering value iteratively. The role of the Agile project manager is to
facilitate the team's success by creating an environment that supports self-organization, continuous improvement,
and adaptability.
