SY SE Unit wise notes
1.1 Introduction:
Software is a program or set of programs containing instructions that provide desired functionality, and
engineering is the process of designing and building something that serves a particular purpose
and finds a cost-effective solution to a problem.
Software engineering is the process of designing, developing, testing, and maintaining software.
It is a systematic and disciplined approach to software development that aims to create high-quality,
reliable, and maintainable software.
Software engineering includes a variety of techniques, tools, and methodologies, including
requirements analysis, design, testing, and maintenance.
Some key principles of software engineering include:
1. Modularity: Breaking the software into smaller, reusable components that can be developed and tested
independently.
2. Abstraction: Hiding the implementation details of a component and exposing only the necessary functionality
to other parts of the software.
3. Encapsulation: Wrapping up the data and functions of an object into a single unit, and protecting the internal
state of an object from external modification (illustrated in the sketch after this list).
4. Reusability: Creating components that can be used in multiple projects, which can save time and resources.
5. Maintenance: Regularly updating and improving the software to fix bugs, add new features, and address
security vulnerabilities.
6. Testing: Verifying that the software meets its requirements and is free of bugs.
7. Design Patterns: Solving recurring problems in software design by providing templates for solving them.
8. Agile methodologies: Using iterative and incremental development processes that focus on customer
satisfaction, rapid delivery, and flexibility.
9. Continuous Integration & Deployment: Continuously integrating the code changes and deploying them into
the production environment.
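To make the first three principles concrete, here is a minimal Python sketch; the BankAccount example is hypothetical, not from the notes. The class forms a small module, its public methods are the abstraction other code relies on, and the balance is encapsulated behind those methods.

```python
# Minimal sketch (hypothetical example): modularity, abstraction, encapsulation.
class BankAccount:
    """A small, self-contained module that can be developed and tested independently."""

    def __init__(self, owner: str):
        self.owner = owner
        self._balance = 0.0  # encapsulated state: modified only via the methods below

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self) -> float:
        # Abstraction: callers see only this interface, not how the
        # balance is stored or validated internally.
        return self._balance


account = BankAccount("Asha")
account.deposit(100.0)
print(account.balance())  # 100.0
```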
Software Engineering is mainly used for large software systems rather than single programs or applications. Its
main goal is to develop software that improves quality and makes efficient use of budget and time. Software
Engineering ensures that the software to be built is consistent and correct, and is delivered on budget, on time
and within the stated requirements.
3. Time limitation
The deadlines set for software engineers are often very short and are one of the major challenges of being a software
engineer. After all, it is a game of time. When engineers collaborate with several clients across various time zones,
the process becomes considerably more difficult. These time constraints frequently cause development teams to work
less productively, resulting in subpar-quality products.
● System Design
● Implementation
● Deployment of system
● Maintenance
B) Evolutionary Prototyping –
In this method, the prototype developed initially is incrementally refined on the basis of customer feedback until it
is finally accepted. Compared with Rapid Throwaway Prototyping, it offers a better approach that saves both time
and effort, because developing a prototype from scratch for every iteration of the process can be very frustrating
for the developers.
C) Incremental Prototyping – In this type of prototyping, the final expected product is broken into small pieces,
and each piece is prototyped and developed individually. In the end, when all the individual pieces have been
properly developed, the different prototypes are merged into a single final product in their predefined order. It is
a very efficient approach that reduces the complexity of the development process: the goal is divided into
sub-parts and each sub-part is developed individually. The time interval between the project's beginning and final
delivery is substantially reduced because all parts of the system are prototyped and tested simultaneously. Of
course, the pieces may simply fail to fit together due to a lack of integration during the development phase – this
can only be fixed by careful and complete planning of the entire system before prototyping starts.
D) Extreme Prototyping – This method is mainly used for web development. It consists of three sequential
independent phases:
D.1) In this phase, a basic prototype with all the existing static pages is presented in HTML format.
D.2) In the second phase, functional screens are made with a simulated data process using a prototype services layer.
D.3) This is the final step, where all the services are implemented and associated with the final prototype.
Advantages –
● The customers get to see the partial product early in the life cycle. This ensures a greater level of customer
satisfaction and comfort.
● New requirements can be easily accommodated as there is scope for refinement.
● Missing functionalities can be easily figured out.
● Errors can be detected much earlier thereby saving a lot of effort and cost, besides enhancing the quality of the
software.
● The developed prototype can be reused by the developer for more complicated projects in the future.
● Flexibility in design.
Disadvantages –
● Costly with respect to time as well as money.
● There may be too much variation in requirements each time the prototype is evaluated by the customer.
● Poor Documentation due to continuously changing customer requirements.
● It is very difficult for developers to accommodate all the changes demanded by the customer.
● There is uncertainty in determining the number of iterations that would be required before the prototype is
finally accepted by the customer.
● After seeing an early prototype, the customers sometimes demand the actual product to be delivered soon.
● Developers in a hurry to build prototypes may end up with sub-optimal solutions.
● The customer might lose interest in the product if he/she is not satisfied with the initial prototype.
A> Incremental Model
The Incremental Model is a process of software development where requirements are divided into multiple standalone
modules of the software development cycle. In this model, each module goes through the requirements, design,
implementation and testing phases. Every subsequent release of a module adds function to the previous release.
The process continues until the complete system is achieved.
The Spiral Model is a risk-driven model, meaning that the focus is on managing risk through multiple iterations of
the software development process. It consists of the following phases:
1. Planning: The first phase of the Spiral Model is the planning phase, where the scope of the project is determined
and a plan is created for the next iteration of the spiral.
2. Risk Analysis: In the risk analysis phase, the risks associated with the project are identified and evaluated.
3. Engineering: In the engineering phase, the software is developed based on the requirements gathered in the
previous iteration.
4. Evaluation: In the evaluation phase, the software is evaluated to determine if it meets the customer’s
requirements and if it is of high quality.
5. Planning: The next iteration of the spiral begins with a new planning phase, based on the results of the
evaluation.
The Spiral Model is often used for complex and large software development projects, as it allows for a more
flexible and adaptable approach to software development. It is also well-suited to projects with significant
uncertainty or high levels of risk.
Advantages of Spiral Model:
Below are some advantages of the Spiral Model.
1. Risk Handling: For projects with many unknown risks that occur as development proceeds, the Spiral Model is
the best development model to follow, due to the risk analysis and risk handling at every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and complex projects.
3. Flexibility in Requirements: Change requests in the requirements at a later phase can be incorporated accurately
by using this model.
4. Customer Satisfaction: Customers can see the development of the product at an early phase of the software
development and thus become habituated to the system by using it before the total product is complete.
5. Iterative and Incremental Approach: The Spiral Model provides an iterative and incremental approach to
software development, allowing for flexibility and adaptability in response to changing requirements or
unexpected events.
6. Emphasis on Risk Management: The Spiral Model places a strong emphasis on risk management, which helps
to minimize the impact of uncertainty and risk on the software development process.
7. Improved Communication: The Spiral Model provides for regular evaluations and reviews, which can improve
communication between the customer and the development team.
8. Improved Quality: The Spiral Model allows for multiple iterations of the software development process, which
can result in improved software quality and reliability
Disadvantages:
1. A team of expert professionals is required, as the process is complex.
2. The process is complex and not properly organized.
3. There is more dependency on risk management.
4. Repeated integration is hard.
The Software Requirement Specification (SRS) Format, as the name suggests, is a complete specification and
description of the requirements of the software that must be fulfilled for successful development of the software
system. These requirements can be functional as well as non-functional, depending upon the type of requirement.
Interaction between the different customers and the contractor takes place because it is necessary to fully
understand the needs of the customers. Depending upon the information gathered after this interaction, an SRS is
developed that describes the requirements of the software, which may include the changes and modifications needed
to increase the quality of the product and to satisfy the customer's demands.
1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is said to be correct if it
covers all the requirements that are actually expected from the system.
2. Completeness:
Completeness of SRS indicates every sense of completion, including the numbering of all the pages, resolving
the to-be-determined (TBD) parts to as great an extent as possible, as well as covering all the functional and
non-functional requirements properly.
3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set of requirements.
Examples of conflict include differences in terminologies used at separate places, logical conflicts like time
period of report generation, etc.
4. Unambiguousness:
An SRS is said to be unambiguous if all the requirements stated have only one interpretation. Some of the ways to
prevent ambiguity include the use of modeling techniques like ER diagrams, proper reviews and buddy
checks, etc.
6. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily accepting changes to the system
to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability:
An SRS is verifiable if there exists a specific technique to quantifiably measure the extent to which every
requirement is met by the system. For example, a requirement stating that the system must be user-friendly is
not verifiable, and listing such requirements should be avoided.
8. Traceability:
One should be able to trace a requirement to a design component and then to a code segment in the program.
Similarly, one should be able to trace a requirement to the corresponding test cases.
9. Design Independence:
There should be an option to choose from multiple design alternatives for the final system. More specifically,
the SRS should not include any implementation details.
10. Testability:
A SRS should be written in such a way that it is easy to generate test cases and test plans from the document.
7. Traceability: A good SRS document includes detailed requirements and specifications, which enables
traceability throughout the software development process.
8. Testability: A good SRS document can serve as a basis for test cases and verification, which can ensure that the
software meets the requirements and specifications.
9. Improved Communication: A good SRS document can serve as a communication tool between different
stakeholders, such as project managers, developers, testers, and customers.
10. Reduced Rework: A good SRS document can help to identify and resolve issues early in the development
process, which can reduce the need for rework and improve the overall quality of the software.
7. Changes and Updates: Changes or updates to the SRS document can cause delays in the software development
process and can be difficult to manage.
8. Lack of Flexibility: A detailed SRS document can restrict the flexibility of the development process, which can
be challenging for projects that require agile development methodologies.
9. Limited Stakeholder Involvement: The process of creating an SRS document can limit the involvement of
stakeholders in the software development process, which can lead to a lack of collaboration and input from
different perspectives.
10. Ambiguity: A poorly written SRS document can lead to ambiguity and misunderstandings, which can cause
issues throughout the software development process.
A software requirements specification (SRS) is a document that captures complete description about how the system
is expected to perform. It is usually signed off at the end of requirements engineering phase.
Qualities of SRS:
● Correct
● Unambiguous
● Complete
● Consistent
● Ranked for importance and/or stability
● Verifiable
● Modifiable
● Traceable
Functional requirements often form the core of a requirements document. The traditional approach for specifying
functionality is to specify each function that the system should provide. Use cases specify the functionality of a
system by specifying the behavior of the system, captured as interactions of the users with the system. Use cases can
be used to describe the business processes of the larger business or organization that deploys the software, or it
could just describe the behavior of the software system. We will focus on describing the behavior of software
systems that are to be built.
Though use cases are primarily for specifying behavior, they can also be used effectively for analysis. Later when
we discuss how to develop use cases, we will discuss how they can help in eliciting requirements also.
Use Case:
In software and systems engineering, a use case is a list of actions or event steps, typically defining the interactions
between a role (known in the Unified Modeling Language as an actor) and a system, to achieve a goal. The actor
can be a human, an external system, or time. In systems engineering, use cases are used at a higher level than within
software engineering, often representing missions or stakeholder goals. Another way to look at it is a use case
describes a way in which a real-world actor interacts with the system. In a system use case you include high-level
implementation decisions. System use cases can be written in both an informal manner and a formal manner.
Advantages of DFD
● It helps us to understand the functioning and the limits of a system.
● It is a graphical representation which is very easy to understand as it helps visualize the contents.
● A Data Flow Diagram represents a detailed and well-explained diagram of the system components.
● It is used as part of the system documentation file.
● Data Flow Diagrams can be understood by both technical and non-technical people because they are very easy to
understand.
Disadvantages of DFD
● At times a DFD can confuse the programmers regarding the system.
● A Data Flow Diagram takes a long time to generate, and for this reason analysts are often denied permission to
work on it.
1-level DFD:
In 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. In this level, we highlight
the main functions of the system and breakdown the high-level process of 0-level DFD into subprocesses.
2-level DFD:
2-level DFD goes one step deeper into parts of 1-level DFD. It can be used to plan or record the specific/necessary
detail about the system’s functioning.
B> Entity Relationship Diagram:
ER Diagram:
● The Entity Relationship model is a model for identifying the entities to be represented in the database and the
representation of how those entities are related.
● The ER data model specifies enterprise schema that represents the overall logical structure of a database
graphically.
● E-R diagrams are used to model real-world objects like a person, a car, a company and the relation between
these real-world objects.
Features of ER model
i) E-R diagrams are used to represent the E-R model of a database, which makes them easy to convert into
relations (tables).
ii) E-R diagrams serve the purpose of real-world modeling of objects, which makes them highly useful.
iii) E-R diagrams require no technical knowledge and no hardware support.
iv) These diagrams are very easy to understand and easy to create, even for a naive user.
v) They give a standard solution for visualizing the data logically.
1. Key Attribute –
The attribute which uniquely identifies each entity in the entity set is called a key attribute. For example, Roll_No
will be unique for each student. In an ER diagram, a key attribute is represented by an oval with underlying lines.
2. Composite Attribute –
An attribute composed of many other attributes is called a composite attribute. For example, the Address attribute
of the student entity type consists of Street, City, State, and Country. In an ER diagram, a composite attribute is
represented by an oval comprising other ovals.
3. Multivalued Attribute –
An attribute consisting of more than one value for a given entity. For example, Phone_No (can be more than one for a
given student). In an ER diagram, a multivalued attribute is represented by a double oval.
4. Derived Attribute –
An attribute that can be derived from other attributes of the entity type is known as a derived attribute, e.g., Age
(can be derived from DOB). In an ER diagram, a derived attribute is represented by a dashed oval.
Cardinality:
The number of times an entity of an entity set participates in a relationship set is known as cardinality.
Cardinality can be of different types:
1. One-to-one – When each entity in each entity set can take part only once in the relationship, the cardinality is
one-to-one. Let us assume that a male can marry one female and a female can marry one male. So the relationship
will be one-to-one.
The total number of tables that can be used in this case is 2.
2. Many to one – When entities in one entity set can take part only once in the relationship set and entities in
other entity sets can take part more than once in the relationship set, cardinality is many to one. Let us assume
that a student can take only one course but one course can be taken by many students. So the cardinality will be n to
1. It means that for one course there can be n students but for one student, there will be only one course.
The total number of tables that can be used in this is
3. Many to many – When entities in all entity sets can take part more than once in the relationship cardinality is
many to many. Let us assume that a student can take more than one course and one course can be taken by many
students. So the relationship will be many to many.
The total number of tables that can be used in this case is 3.
Unit 3 - Software Architecture and Design
3.1 Introduction to Software Design
Software Design is the process to transform the user requirements into some suitable form, which helps the
programmer in software coding and implementation. During the software design phase, the design document is
produced, based on the customer requirements as documented in the SRS document. Hence the aim of this phase
is to transform the SRS document into the design document.
1. Correctness:
A good design should be correct i.e. it should correctly implement all the functionalities of the system.
2. Efficiency:
A good software design should address the resources, time, and cost optimization issues.
3. Understandability:
A good design should be easily understandable, for which it should be modular and all the modules are
arranged in layers.
4. Completeness:
The design should have all the components like data structures, modules, and external interfaces, etc.
5. Maintainability:
A good software design should be easily amenable to change whenever a change request is made from the
customer side
Different levels of Software Design:
There are three different levels of software design. They are:
1. Architectural Design:
The architecture of a system can be viewed as the overall structure of the system and the way in which that
structure provides conceptual integrity for the system. The architectural design identifies the software as a
system with many components interacting with each other. At this level, the designers get the idea of the
proposed solution domain.
3. Detailed design:
Once the high-level design is complete, a detailed design is undertaken. In detailed design, each module is
examined carefully to design the data structures and algorithms. The outcome of this stage is documented in the
form of a module specification document.
Methodological Expertise
An expert on the software development methodologies that may be adopted during the SDLC (Software Development
Life Cycle).
Chooses the appropriate approaches for development that help the entire team.
Hidden Role of Software Architect
Facilitates the technical work among team members and reinforces the trust relationship in the team.
An information specialist who shares knowledge and has vast experience.
Protects the team members from external forces that would distract them and bring less value to the project.
Deliverables of the Architect
A clear, complete, consistent, and achievable set of functional goals
A functional description of the system, with at least two layers of decomposition
A concept for the system
A design in the form of the system, with at least two layers of decomposition
A notion of the timing, operator attributes, and the implementation and operation plans
A document or process which ensures functional decomposition is followed, and the form of interfaces is
controlled
Quality Attributes
Quality is a measure of excellence or the state of being free from deficiencies or defects. Quality attributes are the
system properties that are separate from the functionality of the system.
Implementing quality attributes makes it easier to differentiate a good system from a bad one. Attributes are overall
factors that affect runtime behavior, system design, and user experience.
They can be classified as −
Static Quality Attributes
Reflect the structure of a system and organization, directly related to architecture, design, and source code. They are
invisible to end-user, but affect the development and maintenance cost, e.g.: modularity, testability, maintainability,
etc.
Dynamic Quality Attributes
Reflect the behavior of the system during its execution. They are directly related to system’s architecture, design,
source code, configuration, deployment parameters, environment, and platform.
They are visible to the end-user and exist at runtime, e.g. throughput, robustness, scalability, etc.
Components:-Components are generally units of computation or data stores in the system. A component has a
name, which is generally chosen to represent the role of the component or the function it performs.
A component is of a component-type, where the type represents a generic component, defining the general
computation and the interfaces a component of that type must have. Note that though a component has a type, the
C&C architecture view describes the system in terms of component instances rather than component types.
In a diagram representing a C&C architecture view of a system, it is highly desirable to have a different
representation for different component types, so the different types can be identified visually. In a box-and-line
diagram, often all components are represented as rectangular boxes; such an approach requires that the types of
the components be described separately.
Connectors: - The different components of a system are likely to interact while the system is in operation to provide
the services expected of the system. After all, components exist to provide parts of the services and features of the
system, and these must be combined to deliver the overall system functionality. For composing a system from its
components, information about the interaction between components is necessary.
Interaction between components may be through a simple means supported by the underlying process execution
infrastructure of the operating system. For example, a component may interact with another using the procedure call
mechanism (a connector), which is provided by the runtime environment for the programming language.
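As a rough illustration of these ideas, the following Python sketch (all names hypothetical) shows two components composed through the simplest kind of connector mentioned above, a procedure call supplied by the language runtime.

```python
# Minimal sketch (hypothetical names): a data-store component and a
# computation component composed through a procedure-call connector.
class CatalogStore:
    """Component of type 'data store': holds product prices."""

    def __init__(self):
        self._prices = {"pen": 10.0, "book": 250.0}

    def price_of(self, item: str) -> float:  # interface exposed to other components
        return self._prices[item]


class BillingService:
    """Component of type 'computation': combines the store's services.
    The plain method call below acts as the connector between the two
    components; it is supplied by the language runtime, as the text notes."""

    def __init__(self, store: CatalogStore):
        self._store = store

    def total(self, items: list[str]) -> float:
        return sum(self._store.price_of(i) for i in items)


billing = BillingService(CatalogStore())
print(billing.total(["pen", "book"]))  # 260.0
```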
6. Accommodate change –
The software should be designed in such a way that it accommodates the change implying that the software
should adjust to the change that is required to be done as per the user’s need.
7. Degrade gently –
The software should be designed in such a way that it degrades gracefully, which means it should continue to
function, possibly with reduced capability, even if an error occurs during execution.
8. Assessed for quality –
The design should be assessed or evaluated for quality, meaning that during the evaluation the quality of
the design needs to be checked and focused on.
9. Review to discover errors –
The design should be reviewed which means that the overall evaluation should be done to check if there is
any error present or if it can be minimized.
● Conceptual design is an initial/starting phase in the process of planning, during which the broad outlines of the
function and form of something are coupled. Technical design is a phase in which the development team writes the
code and describes the minute detail of either the whole design or some parts of it.
● Conceptual design is written in the customer's language and designed according to the client's requirements.
Technical design describes anything that converts the requirements into a solution to the customer's problem.
● Conceptual design describes what will happen to the data in the system. Technical design describes the functions
or methods of the system.
● Conceptual design also includes the processes and sub-processes besides the strategies. Technical design includes
the functioning and working of the conceptual design.
● At the end of the conceptual design phase, the solutions to the problems are sent for review. At the end of the
technical design phase, and after analyzing the technical design, the specification is initiated.
3.7 Cohesion:
Cohesion is the indication of the relationship within the module. It is the concept of intra-module. Cohesion has
many types but usually, high cohesion is good for software.
Coupling:
Coupling is also the indication of the relationships between modules. It is the concept of the Inter-module. The
coupling has also many types but usually, the low coupling is good for software.
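As a rough illustration (hypothetical functions, not from the notes): each function below is highly cohesive because everything in it concerns one task, and the two modules are loosely coupled because they interact only through a single narrow call.

```python
# tax module -- cohesive: everything here concerns tax computation only.
def tax_amount(income: float, rate: float = 0.1) -> float:
    return income * rate


# payroll module -- low coupling: it depends on the tax module only
# through the single tax_amount() call, with no shared global state.
def net_salary(gross: float) -> float:
    return gross - tax_amount(gross)


print(net_salary(50000.0))  # 45000.0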
The Open-Closed Principle (OCP) states that software entities should be open for extension but closed for
modification. In other words, once a software entity (such as a class) is defined and implemented, it should be
possible to extend its behavior without modifying its source code. This can be achieved by using inheritance,
interfaces, or other techniques that allow new functionality to be added to a software entity without changing its
existing implementation.
The OCP promotes a modular and flexible design approach, which enables software systems to evolve over time
without breaking existing functionality. By designing software entities that are open for extension but closed for
modification, developers can reduce the risk of introducing bugs or unintended side effects when making changes to
existing code. Additionally, this principle can help to improve the maintainability and scalability of software
systems, as it encourages the use of loosely coupled components that can be easily modified or replaced without
affecting the rest of the system.
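A minimal sketch of the OCP using inheritance, as the paragraph above suggests; the Shape, Rectangle, and Circle classes are hypothetical illustrations.

```python
from abc import ABC, abstractmethod


class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        ...


class Rectangle(Shape):
    def __init__(self, w: float, h: float):
        self.w, self.h = w, h

    def area(self) -> float:
        return self.w * self.h


class Circle(Shape):
    """Extension: added WITHOUT modifying Shape, Rectangle, or total_area --
    existing entities stay closed for modification, open for extension."""

    def __init__(self, r: float):
        self.r = r

    def area(self) -> float:
        return 3.14159 * self.r ** 2


def total_area(shapes: list[Shape]) -> float:
    return sum(s.area() for s in shapes)


print(total_area([Rectangle(2, 3), Circle(1)]))  # 6 + 3.14159
```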
Objects are independent entities that may readily be changed because all state and representation information is
held within the object itself. Objects may be distributed and may execute sequentially or in parallel.
COMPARISON OF FUNCTION ORIENTED DESIGN AND OBJECT ORIENTED DESIGN
● Abstraction: In function oriented design, the basic abstractions, which are given to the user, are real-world
functions. In object oriented design, the basic abstractions are not the real-world functions but the data
abstractions where the real-world entities are represented.
● Function: In function oriented design, functions are grouped together by which a higher-level function is
obtained. In object oriented design, functions are grouped together on the basis of the data they operate on, since
the classes are associated with their methods.
● Begins basis: Function oriented design begins by considering the use cases. Object oriented design begins by
identifying objects and classes.
COMPARISON OF HIGH LEVEL DESIGN (HLD) AND LOW LEVEL DESIGN (LLD)
01. High Level Design is the general system design; it refers to the overall system design. Low Level Design is
like detailing the HLD; it refers to the component-level design process.
02. High Level Design is called HLD for short. Low Level Design is called LLD for short.
03. HLD is also known as macro level/system design. LLD is also known as micro level/detailed design.
04. HLD describes the overall description/architecture of the application. LLD describes a detailed description of
each and every module.
05. High Level Design expresses the brief functionality of each module. Low Level Design expresses the detailed
functional logic of the module.
09. In HLD the input criteria is the Software Requirement Specification (SRS). In LLD the input criteria is the
reviewed High Level Design (HLD).
10. High Level Design converts the business/client requirement into a high level solution. Low Level Design
converts the high level solution into a detailed solution.
11. In HLD the output criteria is database design, functional design and review record. In LLD the output criteria
is program specification and unit test plan.
3.14 Verification
As mentioned, verification is the process of determining if the software in question is designed and
developed according to specified requirements. Specifications act as inputs for the software development process.
The code for any software application is written based on the specifications document.
Verification is done to check if the software being developed has adhered to these specifications at every stage of the
development life cycle. The verification ensures that the code logic is in line with specifications.
Depending on the complexity and scope of the software application, the software testing team uses different
methods of verification, including inspection, code reviews, technical reviews, and walkthroughs. Software testing
teams may also use mathematical models and calculations to make predictive statements about the software and
verify its code logic.
2. It enables software teams to develop products that meet design specifications and customer needs.
3. It saves time by detecting the defects at the early stage of software development.
4. It reduces or eliminates defects that may arise at the later stage of the software development process.
3.15 Metrics:
A software metric is a measure of software characteristics which are measurable or countable. Software metrics are
valuable for many reasons, including measuring software performance, planning work items, measuring
productivity, and many other uses.
Within the software development process there are many metrics that are all connected. Software metrics are similar
to the four functions of management: planning, organization, control, and improvement.
1. Product Metrics: These are the measures of various characteristics of the software product. The two important
software characteristics are:
1. Size and complexity of software.
2. Quality and reliability of software.
These metrics can be computed for different stages of SDLC.
2. Process Metrics: These are the measures of various characteristics of the software development process. For
example, the efficiency of fault detection. They are used to measure the characteristics of methods, techniques, and
tools that are used for developing software.
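As a toy illustration of a size/complexity product metric, the sketch below (hypothetical, standard-library only) counts non-blank lines of code and estimates cyclomatic complexity as one plus the number of branch points in the parsed syntax tree.

```python
import ast


def rough_metrics(source: str) -> dict:
    """Rough product-metric sketch: physical LOC and a simple
    cyclomatic-complexity estimate (1 + number of branch points)."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    complexity = 1 + sum(isinstance(n, branch_nodes) for n in ast.walk(tree))
    loc = len([ln for ln in source.splitlines() if ln.strip()])
    return {"loc": loc, "cyclomatic_estimate": complexity}


sample = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""
print(rough_metrics(sample))  # {'loc': 6, 'cyclomatic_estimate': 3}
```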
Advantage of Software Metrics
Comparative study of various design methodology of software systems.
For analysis, comparison, and critical study of different programming language concerning their
characteristics.
In comparing and evaluating the capabilities and productivity of people involved in software development.
In the preparation of software quality specifications.
In the verification of compliance of software systems requirements and specifications.
In making inference about the effort to be put in the design and development of the software systems.
In getting an idea about the complexity of the code.
In taking decisions regarding whether further division of a complex module is to be done or not.
In guiding resource manager for their proper utilization.
In comparison and making design tradeoffs between software development and maintenance cost.
In providing feedback to software managers about the progress and quality during various phases of the
software development life cycle.
In the allocation of testing resources for testing the code.
Disadvantage of Software Metrics
The application of software metrics is not always easy, and in some cases, it is difficult and costly.
The verification and justification of software metrics are based on historical/empirical data whose validity
is difficult to verify.
These are useful for managing software products but not for evaluating the performance of the technical
staff.
The definition and derivation of software metrics are usually based on assumptions which are not
standardized and may depend upon the tools available and the working environment.
Most of the predictive models rely on estimates of certain variables which are often not known precisely.
Unit 4 - Testing
4.1 Testing Fundamentals
Testing fundamentals refer to the core principles and concepts that guide the process of software testing. They
provide a framework for ensuring the quality and reliability of software applications. Here are the key testing
fundamentals:
1. Test Objectives: Clearly define the purpose and goals of testing. This includes identifying what needs to
be tested, the expected outcomes, and the criteria for determining success.
2. Test Planning: Develop a comprehensive test plan that outlines the testing approach, test scope, test
deliverables, and resource requirements. It involves defining test objectives, test strategies, and test
schedules.
3. Test Design: Create test cases that effectively cover different scenarios and aspects of the software. Test
design involves identifying test conditions, selecting appropriate test data, and defining the expected
results.
4. Test Execution: Run the test cases and observe the actual results. Test execution includes setting up test
environments, executing test scripts, and recording the outcomes. It is important to accurately capture any
defects or issues encountered during testing.
5. Defect Management: Establish a process for capturing, tracking, and resolving defects identified during
testing. This involves logging defects, prioritizing them based on severity, assigning them to appropriate
stakeholders, and verifying their resolution.
6. Test Reporting: Generate test reports to communicate the progress, findings, and quality of the software
under test. Test reports provide stakeholders with a clear overview of the testing activities, including test
coverage, defect metrics, and overall test results.
7. Test Automation: Utilize automation tools and frameworks to enhance the efficiency and effectiveness of
testing. Test automation involves writing and executing automated test scripts, generating test data, and
analyzing test results.
8. Test Environment: Create and maintain a stable and representative test environment that closely
resembles the production environment. This includes hardware, software, network configurations, and any
other dependencies required for testing.
9. Test Data: Identify and prepare relevant and realistic test data to ensure comprehensive testing coverage.
Test data should encompass both typical and boundary conditions, error scenarios, and different
combinations of inputs.
10. Test Maintenance: Continuously review and update test artifacts to keep them aligned with the evolving
software requirements and changes. Test maintenance involves modifying test cases, adding new test
scenarios, and incorporating lessons learned from previous testing cycles.
By adhering to these testing fundamentals, organizations can establish a structured and systematic approach to
testing, leading to improved software quality and customer satisfaction.
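The sketch below shows several of these fundamentals in miniature with Python's unittest: designed test cases with expected results (including an error scenario) and automated execution. The divide function is a hypothetical unit under test.

```python
import unittest


def divide(a: float, b: float) -> float:
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b


class DivideTests(unittest.TestCase):
    # Test design: each case names a condition and its expected result.
    def test_typical_input(self):
        self.assertEqual(divide(10, 2), 5)

    def test_error_scenario(self):
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)


if __name__ == "__main__":
    unittest.main()  # test execution: runs the cases and reports outcomes
```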
Black box testing is a software testing technique that focuses on testing the functionality of a system without
examining its internal structure or implementation details. It is performed from an external perspective, treating the
system as a "black box" where the tester has no knowledge of the internal workings.
Here are some key points about black box testing:
1. Objective: The main objective of black box testing is to validate whether the system meets the specified
requirements and functions correctly, without considering how the system achieves that functionality.
2. No knowledge of internals: Testers performing black box testing have no access to the source code,
architecture, or design details of the system being tested. They rely solely on the system's inputs, outputs,
and documented specifications.
3. Focus on inputs and outputs: Black box testing concentrates on the input values provided to the system
and verifies if the expected outputs are produced. It aims to identify any discrepancies between the
expected behavior and the actual behavior of the system.
4. Test cases design: Test cases are designed based on functional requirements, specifications, use cases, and
other available documentation. The goal is to cover different scenarios and ensure that all desired system
functionalities are tested.
5. Techniques used: Various techniques can be applied in black box testing, including equivalence
partitioning, boundary value analysis, decision table testing, state transition testing, and error guessing.
These techniques help in selecting test cases that are likely to expose defects.
6. Test coverage: Black box testing aims to achieve high test coverage by ensuring that all relevant
functionalities and combinations of inputs are tested. The focus is on validating the behavior of the system
as perceived by the end user.
7. Advantages: Black box testing provides several advantages, including early detection of functional issues,
independence from implementation details, and the ability to assess the system from a user's perspective. It
also allows for parallel testing efforts since multiple testers can work simultaneously.
8. Limitations: While black box testing is effective at validating system functionality, it may not uncover all
defects or issues. It relies heavily on the quality of requirements and specifications documentation.
Additionally, it may not be as efficient in finding certain types of defects, such as those related to
performance or security vulnerabilities.
Overall, black box testing plays a crucial role in ensuring that software systems meet the specified requirements
and behave as expected. By focusing on the external behavior of the system, it provides valuable insights into the
system's functionality and helps in improving its quality.
• It is a method of software testing that examines the functionality of an application without looking into its
internal structure or workings.
• It is a method in which the internal structure of the application is not known to the tester.
• It checks the behavior of the application, hence it is called behavioral testing.
• This type of testing focuses on functional requirements.
• It checks for performance errors and behavioral errors.
• In black box testing, we check possible inputs against expected outputs.
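A minimal sketch of the idea: the tester exercises a hypothetical is_leap_year function purely through documented inputs and expected outputs, treating the implementation as unknown.

```python
# Black box sketch: the unit is tested only through its documented
# inputs and outputs; the implementation is treated as unknown.
def is_leap_year(year: int) -> bool:  # hypothetical unit under test
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


# Spec-driven cases: (input, expected output) pairs taken from requirements.
cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
for year, expected in cases:
    assert is_leap_year(year) == expected, f"failed for {year}"
print("all black box cases passed")
```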
1) Equivalence Partitioning
2) Boundary Value Analysis
3) Cause effect table
4) State Transition
5) Exploratory Testing
6) Error Guessing
1) Equivalence Partitioning
Black box Equivalence Partitioning testing is a technique used in software testing to improve test coverage while
minimizing the number of test cases. It is based on the principle that if a specific input value within a range of values
causes a certain behavior or outcome, then any other value within the same range will likely produce the same
behavior or outcome.
Here is a detailed explanation of the black box Equivalence Partitioning testing technique:
1. Identify Input Domain: Start by identifying the input variables or data fields that are relevant to the
functionality being tested. For example, if testing a login feature, the input variables could be username and
password.
2. Partition the Input Domain: Divide the input domain into equivalence partitions. An equivalence
partition is a range or set of input values that are expected to produce the same behavior or outcome. The
idea is to select representative values from each partition to test, assuming that if one value works correctly,
others in the same partition should also work correctly.
3. Determine Valid and Invalid Partitions: Identify both valid and invalid equivalence partitions. Valid
partitions represent input values that should be accepted and processed correctly by the software. Invalid
partitions represent input values that should be rejected or produce errors.
4. Select Test Cases: From each partition, select representative test cases that cover different scenarios and
boundary conditions. The goal is to choose test cases that maximize coverage while minimizing
redundancy. For example, if a partition represents a range of ages from 18 to 65, selecting test cases like
20, 40, and 60 would be sufficient to cover the partition.
5. Execute Test Cases: Execute the selected test cases using the black box approach, without considering the
internal structure or implementation of the software. Provide the input values to the software and observe
the outputs or behaviors. Compare the actual results with the expected results.
6. Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual
outcomes. Any failures or deviations from expected behavior should be considered as defects or issues to
be addressed by the development team.
7. Iterate and Refine: Based on the test results, refine the equivalence partitions and test cases if necessary.
If defects are found, additional test cases can be added to cover the specific scenarios that caused the
failures.
The benefits of black box Equivalence Partitioning testing include reducing the number of test cases required,
ensuring adequate coverage of input values, and identifying defects early in the testing process. By focusing on
representative values within each partition, this technique helps optimize test coverage while maximizing efficiency.
It's important to note that Equivalence Partitioning is just one technique within the broader black box testing
approach, and it should be combined with other techniques like boundary value analysis, decision tables, and error
guessing to achieve comprehensive test coverage.
• In this technique, we divide the input domain into various classes of data.
• Test each partition once; this reduces a lot of rework and also gives good test coverage (see the sketch below).
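Here is the sketch referenced above, following the 18-65 age example from the text; validate_age is a hypothetical check.

```python
# Equivalence partitioning sketch for an age field with valid range 18-65.
def validate_age(age: int) -> bool:  # hypothetical unit under test
    return 18 <= age <= 65


# One representative value per partition stands in for the whole class.
partitions = {
    "invalid_below": (10, False),  # any age < 18
    "valid":         (40, True),   # any age in 18..65
    "invalid_above": (70, False),  # any age > 65
}
for name, (value, expected) in partitions.items():
    assert validate_age(value) == expected, name
print("one test per partition covered the input domain")
```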
Black box Boundary Value Analysis (BVA) testing is a technique used in software testing to determine test cases
based on the boundaries of input values. It aims to identify potential errors or defects that may occur at the
boundaries or edges of valid and invalid input ranges. By focusing on these critical values, BVA helps improve test
coverage and increase the likelihood of finding defects. Here is a detailed explanation of the black box Boundary
Value Analysis testing technique:
1. Identify Input Variables: Start by identifying the input variables or data fields that are relevant to the
functionality being tested. These could be numeric values, strings, dates, or any other input types.
2. Determine Boundary Values: For each input variable, determine the boundary values. These include the
minimum and maximum valid values, as well as values that are just below and above these boundaries. For
example, if testing a form that accepts ages between 18 and 65, the boundary values would be 17, 18, 65,
and 66.
3. Identify Invalid Boundary Values: Identify values that are just outside the valid range and would be
considered invalid inputs. For example, for an input field that only accepts positive integers, the invalid
boundary values would be -1, 0, and 1.
4. Select Test Cases: From the identified boundary values, select representative test cases that cover different
scenarios and edge cases. Typically, at least one test case is selected from each valid and invalid boundary
value set. For example, if there are three valid boundary values (17, 18, 65), at least one test case is selected
for each of these values.
5. Execute Test Cases: Execute the selected test cases using the black box approach, without considering the
internal structure or implementation of the software. Provide the input values to the software and observe
the outputs or behaviors. Compare the actual results with the expected results.
6. Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual
outcomes. Pay particular attention to the behavior at the boundaries and verify that the software handles the
input values correctly.
7. Iterate and Refine: Based on the test results, refine the boundary values and test cases if necessary. If
defects are found, additional test cases can be added to cover the specific scenarios that caused the failures.
The benefits of black box Boundary Value Analysis testing include focusing testing efforts on critical values that are
likely to uncover defects, providing comprehensive test coverage, and maximizing the effectiveness of testing
resources.
It's important to note that Boundary Value Analysis is just one technique within the broader black box testing
approach, and it should be combined with other techniques like Equivalence Partitioning, decision tables, and error
guessing to achieve thorough test coverage.
In this technique we also check the value just beyond the boundary: if the maximum age is 60, we also check age 61
(see the sketch below).
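A minimal sketch of the technique, reusing the hypothetical validate_age check and the 17/18/65/66 boundary values given earlier in the text.

```python
# Boundary value analysis sketch for the 18-65 age example.
def validate_age(age: int) -> bool:  # hypothetical unit under test
    return 18 <= age <= 65


boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
]
for age, expected in boundary_cases:
    assert validate_age(age) == expected, f"boundary failed at {age}"
print("all boundary cases passed")
```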
3) Cause Effect Table
• This technique checks the logical relationship between the inputs (if-else logic).
• Using this technique, we consider conditions and actions.
• We take the condition as input and the action as output.
• Example: if (age >= 18 && age <= 60) { openaccount(); }
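The same rule can be written as a small decision-table test; process is a hypothetical stand-in for the code that calls openaccount(), and each row pairs a condition value with the expected action.

```python
# Decision-table sketch for the rule above:
# condition (18 <= age <= 60) -> action open_account, else reject.
def process(age: int) -> str:  # hypothetical system under test
    return "open_account" if 18 <= age <= 60 else "reject"


# Each row: (condition values, expected action).
decision_table = [
    ({"age": 17}, "reject"),        # condition false (below range)
    ({"age": 18}, "open_account"),  # condition true
    ({"age": 60}, "open_account"),  # condition true
    ({"age": 61}, "reject"),        # condition false (above range)
]
for inputs, expected_action in decision_table:
    assert process(inputs["age"]) == expected_action
print("decision table rows verified")
```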
4) State Transition
Black box State Transition Table testing is a technique used in software testing to analyze and test the behavior of a
system or software application based on different states and transitions between those states. It focuses on capturing
the transitions that occur when certain events or conditions cause a change in the system's state. Here is a detailed
explanation of the black box State Transition Table testing technique:
1. Identify States: Start by identifying the distinct states that the system or software application can be in.
States represent the different modes, conditions, or statuses of the system. For example, if testing an e-
commerce website, states could include "logged out," "logged in," "adding items to the cart," and
"checkout."
2. Define Events: Identify the events or actions that can trigger a transition from one state to another. Events
represent user interactions, system inputs, or external factors that cause a change in the system's state. For
example, events could be "clicking the login button," "adding an item to the cart," or "submitting the
order."
3. Create the State Transition Table: Create a table with columns representing the current state, events, and
the resulting next state. Each row in the table represents a specific combination of the current state and the
triggering event, along with the resulting next state. The table is populated with all possible combinations
of states, events, and next states.
4. Determine Rules and Conditions: Analyze the transitions between states and identify any rules or
conditions that govern those transitions. Specify the rules in a concise and unambiguous manner. For
example, "If the current state is 'logged out' and the event is 'clicking the login button,' the next state should
be 'logged in.'"
5. Generate Test Cases: Generate test cases by selecting representative combinations from the State
Transition Table. Aim to cover all possible combinations of states and events, including valid and invalid
scenarios, as well as boundary conditions. Each test case should represent a unique combination of current
state, event, and the expected next state.
6. Execute Test Cases: Execute the selected test cases using the black box approach. Trigger the specified
events or actions in the corresponding current states and observe the resulting next states. Compare the
actual results with the expected results based on the defined rules.
7. Analyze Results: Analyze the test results to identify any discrepancies between the expected and actual
next states. Pay attention to the transitions triggered by different combinations of current states and events
and verify that the system behaves as expected based on the defined rules.
8. Iterate and Refine: Based on the test results, refine the State Transition Table and test cases if necessary.
If defects are found, additional test cases can be added to cover the specific combinations of current states
and events that caused the failures.
The benefits of black box State Transition Table testing include a systematic approach to testing state-dependent
behaviors, uncovering defects related to state transitions, and identifying missing or incorrect rules.
It's important to note that State Transition Table testing is just one technique within the broader black box testing
approach, and it should be combined with other techniques like Equivalence Partitioning, Boundary Value Analysis,
and decision tables to achieve comprehensive test coverage.
• We apply the state transition technique when an application gives a different output for the same input,
depending on what has happened in an earlier stage.
• Example: Vending Machine, Traffic Light.
A vending machine dispenses a product when the inserted coin is valid (see the sketch below).
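Here is the sketch referenced above: the vending machine modeled as a state transition table (states and events are simplified assumptions), showing how the same event yields different results depending on the current state.

```python
# State transition sketch for the vending machine example.
TRANSITIONS = {
    ("idle",          "insert_valid_coin"):   "coin_accepted",
    ("idle",          "insert_invalid_coin"): "idle",  # coin rejected
    ("coin_accepted", "select_product"):      "dispensing",
    ("dispensing",    "take_product"):        "idle",
}


def next_state(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)  # unknown events: no change


# Same input ("select_product") gives different results depending on
# what happened earlier -- exactly when this technique applies.
assert next_state("idle", "select_product") == "idle"  # no coin yet
assert next_state("coin_accepted", "select_product") == "dispensing"
print("state transition cases passed")
```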
5) Exploratory Testing
Black box State Exploratory Testing is a technique used in software testing to explore and test the behavior of a
system or software application based on different states and user interactions without relying on predetermined test
cases. It involves a tester's exploration and experimentation to uncover defects and understand the system's response
in various states. Here is a detailed explanation of the black box State Exploratory Testing technique:
1. Understand the System: Gain a thorough understanding of the system or software application being
tested, including its intended functionality, states, and possible user interactions. This knowledge will guide
the exploration process.
2. Identify States: Identify the different states that the system can be in during its lifecycle or operation.
States represent the various modes, conditions, or statuses of the system. For example, states could include
"logged out," "logged in," "idle," or "processing."
3. Define User Interactions: Determine the various user interactions, inputs, or events that can affect the
system's behavior or transition between states. These interactions could be user actions, system inputs, or
external events. For example, interactions could include clicking buttons, entering data, or receiving
external notifications.
4. Explore State Transitions: Start exploring the system by interacting with it and observing how it responds
in different states. Trigger events or perform actions that transition the system from one state to another.
Pay attention to the system's behavior, output, or any unexpected responses.
5. Observe System Reactions: Observe how the system reacts or responds to the different interactions and
state transitions. Note any anomalies, unexpected behaviors, or errors that occur. Pay attention to edge
cases, boundary conditions, and scenarios that are likely to reveal defects.
6. Document and Report Findings: Document the observations, findings, and any defects or issues
encountered during the exploration. Provide clear and concise descriptions of the steps taken, the observed
behavior, and any relevant details. Report the findings to the development team for further investigation
and resolution.
7. Repeat and Expand Exploration: Continuously explore and experiment with the system, focusing on
different states, combinations of interactions, and edge cases. Vary the order of interactions, explore
different paths, and simulate real-world scenarios to uncover potential defects and understand the system's
behavior comprehensively.
8. Iterate and Improve: Based on the findings and insights gained through exploration, refine the testing
approach and prioritize areas for further investigation. Use the knowledge gained to inform the creation of
additional test cases or to refine existing test cases for more targeted testing.
The benefits of black box State Exploratory Testing include the ability to uncover defects that might not be covered
by predefined test cases, a deeper understanding of the system's behavior in different states, and the identification of
real-world scenarios that can lead to unexpected issues.
It's important to note that State Exploratory Testing should be performed alongside other black box testing
techniques to achieve comprehensive test coverage and ensure that both expected and unexpected behaviors are
tested.
6) Error Guessing
Black box Error Guessing is a technique used in software testing to uncover defects or errors by leveraging the
tester's intuition, experience, and knowledge of potential vulnerabilities or weaknesses in the system. It is an
informal and heuristic approach that focuses on imagining potential errors that may not be explicitly specified in
requirements or test cases. Here is a detailed explanation of the black box Error Guessing technique:
1. Understand the System: Gain a thorough understanding of the system or software application being
tested, including its intended functionality, inputs, and expected outputs. Familiarize yourself with common
errors or issues that can occur in similar systems.
2. Leverage Experience and Knowledge: Draw upon your experience as a tester, knowledge of the system
domain, and understanding of common software pitfalls or vulnerabilities. Consider the types of errors that
are likely to be present in the system based on similar systems you have encountered in the past.
3. Guess Potential Errors: Based on your experience and knowledge, imagine potential errors or defects that
could occur in the system. Think about various failure scenarios, boundary conditions, edge cases, or
unexpected user behaviors that could lead to errors. Formulate these potential errors in the form of specific
test cases or test scenarios.
4. Design Test Cases: Design test cases that specifically target the potential errors or defects you have
identified. Each test case should represent a specific scenario or condition that could lead to an error.
Ensure that the test cases cover a wide range of possible error scenarios.
5. Execute Test Cases: Execute the test cases designed to uncover potential errors in the system. Provide the
inputs or trigger the conditions specified in the test cases and observe the system's behavior. Compare the
actual results with the expected results, focusing on identifying any unexpected behaviors, failures, or
errors.
6. Document and Report Findings: Document the observed errors, failures, or unexpected behaviors
encountered during the testing process. Clearly describe the steps taken, the inputs provided, and the
observed outcomes. Report the findings to the development team, providing clear and concise descriptions
of the errors or issues discovered.
7. Repeat and Expand: Continuously apply the error guessing approach throughout the testing process,
iterating on potential error scenarios and exploring additional test cases based on new insights or
observations. Leverage the testing team's collective experience and knowledge to refine and expand the
error guessing efforts.
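A small sketch of the technique: normalize_username is a hypothetical unit under test, and the cases are guessed from experience with common pitfalls (empty input, whitespace-only input, very long strings) rather than taken from a specification.

```python
def normalize_username(name: str) -> str:  # hypothetical unit under test
    return name.strip().lower()


# Guessed failure scenarios, with the behavior expected if the unit is robust.
guessed_cases = [
    ("", ""),                      # empty input
    ("   ", ""),                   # all-whitespace name collapses to empty -- worth reporting
    ("ADMIN  ", "admin"),          # trailing whitespace plus mixed case
    ("a" * 10_000, "a" * 10_000),  # very long input should not crash
]
for raw, expected in guessed_cases:
    assert normalize_username(raw) == expected, repr(raw)
print("guessed edge cases executed")
```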
The benefits of black box Error Guessing include the ability to uncover defects that may not be explicitly
documented, leveraging the tester's intuition and experience to identify potential issues, and addressing areas that
may have been overlooked by other testing techniques.
It's important to note that Error Guessing should not replace other formal testing techniques but should complement
them. It is most effective when used alongside techniques such as Equivalence Partitioning, Boundary Value
Analysis, and Cause-Effect Table testing to achieve comprehensive test coverage.
White box testing, also known as structural or glass box testing, is a software testing technique that focuses on
examining the internal structure, design, and implementation of a software application. It involves testing based on
an understanding of the underlying code, algorithms, and logic used in the system. Here is a detailed explanation of
white box testing:
1. Understand the Internal Structure: Gain a deep understanding of the software application's internal
structure, including the code, modules, functions, and data flows. Review the design and architecture
documents to familiarize yourself with the system's components and their relationships.
2. Identify Testable Units: Identify the specific units or components within the application that will be
tested. This can include functions, methods, classes, modules, or individual lines of code. These units are
typically referred to as code modules or code entities.
3. Design Test Cases: Design test cases based on the internal structure and implementation details of the
software. This includes selecting specific paths through the code, exercising different conditions, and
covering various code branches and logical scenarios. Test cases should be designed to verify the
correctness, robustness, and efficiency of the code.
4. Code Coverage: Aim for high code coverage, which measures the percentage of code that is exercised by
the test cases. Different levels of code coverage include statement coverage, branch coverage, path
coverage, and condition coverage. The goal is to ensure that all possible code paths and conditions are
tested.
5. Execute Test Cases: Execute the designed test cases by providing specific inputs and verifying the outputs
or behavior of the code. Monitor and track the results of each test case, comparing the actual outcomes with
the expected outcomes.
6. Debugging and Fixing Defects: When defects are found, utilize the knowledge of the code structure to
identify the root causes of the issues. Debugging tools and techniques can be employed to pinpoint the
locations of defects in the code. Once identified, defects are fixed by modifying the code to correct the
errors.
7. Performance and Security Testing: White box testing can also include performance and security testing.
By understanding the internal workings of the system, performance bottlenecks and vulnerabilities can be
identified and addressed. Techniques such as load testing, stress testing, and penetration testing can be
applied.
8. Test Automation: White box testing lends itself well to test automation. Automation frameworks can be
used to execute the test cases, monitor code coverage, and track the results. This allows for efficient and
repeatable testing, especially for regression testing.
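As a small illustration of white box test design, the sketch below targets branch coverage of a tiny function. The grade() function and its thresholds are invented for the example, and the coverage.py commands in the trailing comment assume that tool is installed.

```python
# A sketch of white box test design for branch coverage: grade() has two
# decision points, so the tests drive each branch both ways. The function
# and its thresholds are invented for the example.
def grade(score):
    if score < 0 or score > 100:   # branch 1: input validation
        raise ValueError("score must be between 0 and 100")
    if score >= 40:                # branch 2: pass/fail decision
        return "pass"
    return "fail"

def test_branches():
    try:                           # branch 1 taken (invalid input)
        grade(-5)
        assert False, "expected ValueError"
    except ValueError:
        pass
    assert grade(40) == "pass"     # branch 2 taken at its boundary
    assert grade(39) == "fail"     # branch 2 not taken at its boundary

test_branches()
# Coverage can then be measured with a tool such as coverage.py, e.g.:
#   coverage run test_grade.py && coverage report
```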
The benefits of white box testing include the ability to uncover defects at the code level, ensure thorough coverage
of different code paths and conditions, optimize performance and security, and improve the overall quality of the
software.
It's important to note that white box testing is most effective when combined with other testing techniques such as
black box testing. This allows for a comprehensive and balanced approach to software testing, addressing both the
internal structure and the external behavior of the application.
White box testing is a software testing technique that involves examining the internal structure, design, and
implementation details of a system. It aims to ensure that all components of the system, such as individual functions,
methods, and code paths, are tested thoroughly. Basic Path Testing is one of the techniques used in white box testing
to achieve this goal.
Basic Path Testing focuses on testing all possible paths within a program's source code. A path refers to a sequence
of instructions or statements executed during the program's execution. Here's a detailed explanation of Basic Path
Testing:
1. Control Flow Graph (CFG): The first step in Basic Path Testing is to construct a Control Flow Graph
(CFG) for the program. A CFG is a graphical representation that depicts the flow of control between
various program statements. It consists of nodes (representing statements) and edges (representing possible
control flow between statements).
2. Cyclomatic Complexity: The next step is to calculate the cyclomatic complexity of the program using the
CFG. Cyclomatic complexity is a metric that quantifies the number of independent paths in a program. It
helps in determining the minimum number of test cases required to achieve full path coverage.
3. Identify Paths: Based on the CFG, all possible paths from the program's entry point to the exit point are
identified. This includes both linear paths (single path from start to end) and branching paths (multiple
paths due to conditional statements, loops, etc.).
4. Generate Test Cases: Test cases are created to cover each identified path. For linear paths, a single test
case is sufficient to cover the entire path. For branching paths, multiple test cases are required to cover
different scenarios and ensure that all conditional branches are exercised.
5. Path Execution: The test cases are executed, and the program's behavior is observed. The objective is to
determine if the program behaves as expected for each path and whether any errors or unexpected
behaviors occur.
6. Path Coverage Analysis: After executing the test cases, the coverage of paths is analyzed. The goal is to
ensure that all paths within the program have been exercised at least once. Coverage analysis helps in
identifying any missed or untested paths.
7. Iterative Process: Basic Path Testing is an iterative process. If any paths are missed during the initial
round of testing, additional test cases are designed to cover those paths. The process continues until all
paths are covered, or a predetermined coverage criterion is met.
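The cyclomatic complexity step can be illustrated with a short sketch that applies V(G) = E - N + 2 to a control flow graph stored as an adjacency list. The four-node graph below is a made-up example of a single if-else, not taken from any particular program.

```python
# A minimal sketch: cyclomatic complexity V(G) = E - N + 2 computed from a
# control flow graph stored as an adjacency list.
def cyclomatic_complexity(cfg):
    nodes = len(cfg)
    edges = sum(len(successors) for successors in cfg.values())
    return edges - nodes + 2

cfg = {
    1: [2, 3],  # decision node: if-else splits the flow
    2: [4],     # "then" branch rejoins at node 4
    3: [4],     # "else" branch rejoins at node 4
    4: [],      # exit node
}
# 4 edges - 4 nodes + 2 = 2 independent paths, so 2 test cases at minimum.
print(cyclomatic_complexity(cfg))  # prints 2
```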
Benefits of Basic Path Testing:
Thorough coverage: Basic Path Testing aims to achieve comprehensive coverage of a program's paths,
ensuring that all logical branches and decision points are tested.
Identifying defects: By examining all possible paths, this technique can help in uncovering errors, logic
flaws, or missing conditions within the code.
Optimization: It allows developers to optimize the code by identifying redundant or unreachable paths,
eliminating dead code, and improving the overall efficiency of the program.
Limitations of Basic Path Testing:
Complexity: In large and complex programs, the number of paths can be vast, making it impractical to test
every single path. Prioritization and selection of critical paths become crucial in such cases.
Limited to code-level testing: Basic Path Testing focuses solely on the internal structure of the program
and may not address issues related to integration, user interface, or system-level behavior.
Overall, Basic Path Testing is a powerful technique within white box testing that helps ensure thorough coverage of
a program's internal paths. It enables testers to dive deep into the code and identify potential defects, leading to more
robust and reliable software.
- Flow graph notation
- Independent path
- Deriving test cases
- Graph matrix
White box testing techniques focus on examining the internal structure and logic of a software application. Control
Structure Testing is one such technique that aims to test the control flow or the flow of program execution within the
application. It involves designing test cases to ensure that all possible control flow paths, decision points, loops, and
conditions are exercised. Here is a detailed explanation of Control Structure Testing:
1. Understand the Control Structure: Gain a thorough understanding of the control flow within the
software application. This includes identifying the decision points (such as if-else statements, switch-case
statements), loops (such as for loops, while loops), and other control structures that dictate the program's
execution.
2. Control Flow Graph: Construct a control flow graph, which is a visual representation of the program's
control flow. The control flow graph consists of nodes representing statements or blocks of code, and edges
representing the flow of control between the nodes. This graph provides a clear visualization of the control
flow paths within the application.
3. Identify Coverage Criteria: Select coverage criteria based on the control structure. Coverage criteria
define the objectives of testing and help ensure that all control flow paths are tested. Common coverage
criteria for Control Structure Testing include statement coverage, branch coverage, condition coverage, and
path coverage.
4. Design Test Cases: Design test cases to achieve the desired coverage criteria. Each test case should be
designed to exercise a specific control flow path or combination of paths. This involves providing inputs or
test data that will cause the program to follow different branches, make decisions, and traverse loops.
5. Execute Test Cases: Execute the designed test cases by running the software application with the provided
inputs. Observe the program's behavior and compare the actual outputs or outcomes with the expected ones.
Ensure that each control flow path is exercised and that all decision points and conditions are tested.
6. Coverage Analysis: Analyze the coverage achieved based on the selected coverage criteria. Determine the
percentage of control flow paths covered by the executed test cases. This analysis helps identify any gaps in
testing and guides the creation of additional test cases to improve coverage if necessary.
7. Debugging and Fixing Defects: When defects or errors are identified during Control Structure Testing,
utilize the knowledge of the control flow to pinpoint the causes of the issues. Debugging techniques can be
applied to trace the program's execution and identify the specific statements or conditions that lead to the
defects. Once identified, defects can be fixed by modifying the code accordingly.
8. Iteration and Refinement: Iterate on the testing process by creating additional test cases to improve
coverage and address any missed control flow paths or scenarios. Refine the test cases based on the insights
gained from previous test runs and defect analysis.
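As one concrete instance of the coverage criteria above, the sketch below shows condition coverage for a compound decision: each atomic condition is driven both true and false at least once. The can_checkout() function and its rules are invented for the illustration.

```python
# A sketch of condition testing for a compound decision: each atomic
# condition is driven both true and false at least once. can_checkout()
# and its rules are invented for the illustration.
def can_checkout(cart_items, logged_in):
    if cart_items > 0 and logged_in:   # compound condition under test
        return True
    return False

# Each case toggles one atomic condition so all truth values are covered.
cases = [
    (3, True,  True),   # both conditions true  -> decision true
    (0, True,  False),  # first condition false -> decision false
    (3, False, False),  # second condition false -> decision false
]
for cart_items, logged_in, expected in cases:
    assert can_checkout(cart_items, logged_in) == expected
print("condition coverage cases passed")
```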
Control Structure Testing helps ensure that the application's control flow is thoroughly tested, reducing the chances
of errors, logic flaws, and unexpected behaviors. It complements other white box testing techniques and should be
combined with other testing approaches, such as black box testing, to achieve comprehensive test coverage.
Note: Control Structure Testing is just one technique within the broader scope of white box testing. Other white box
testing techniques, such as data flow testing and path testing, can be employed to further analyze and test the internal
workings of the software application.
- Condition Testing
- Data flow testing
- Loop Testing
1) Basic Path Testing
a. Flow graph notation
• Following the flow of the modules in the application, first create a flowchart of the application.
• Then convert the flowchart into a flow graph.
b. Independent path
• From the above flow graph, find out the possible independent paths.
• Each path is checked at least once during testing.
• Possible paths in the above example:
Path 1: 1->11
Path 2: 1->2->3->4->5->10->1
Path 3: 1->2->3->6->8->9
Path 4: 1->2->3->6->7->9
c. Deriving test cases (cyclomatic complexity)
• Cyclomatic complexity measures how complex the testing will be: V(G) = E - N + 2, where E is the number of edges and N is the number of nodes in the flow graph.
• Prepare test cases that will force execution of each path in the basis set.
• Execute all the identified paths during testing.
d. Graph Matrix
• The flow graph is stored in computer memory using a graph matrix.
• Each edge in the flow graph is given a weight, and these weights are stored in a two-dimensional matrix.
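A minimal sketch of the graph matrix idea, assuming a made-up four-node flow graph with one two-way decision: each edge is given a weight (here simply 1 for "edge present") and stored in a two-dimensional matrix.

```python
# A sketch of the graph matrix idea for a made-up four-node flow graph
# with one two-way decision.
n = 4
matrix = [[0] * n for _ in range(n)]

edges = [(1, 2), (1, 3), (2, 4), (3, 4)]   # (from, to) pairs, 1-based
for src, dst in edges:
    matrix[src - 1][dst - 1] = 1           # weight 1 = edge present

# For two-way decisions, each row with more than one outgoing edge marks a
# predicate node, and V(G) = number of predicate nodes + 1.
predicates = sum(1 for row in matrix if sum(row) > 1)
print(predicates + 1)  # prints 2, matching V(G) = E - N + 2
```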
c. Loop Testing (Control Structure Testing)
• An application consists of different looping statements; check how the data moves through the loops. Loops may be simple loops, nested loops, concatenated loops, or unstructured loops.
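A small sketch of loop testing for a simple loop, exercising the classic boundary iteration counts (zero passes, one pass, a typical count, the maximum, and one beyond it). The sum_first() function and its data are invented for the example.

```python
# Loop testing sketch: a simple loop exercised at boundary iteration counts.
def sum_first(values, n):
    """Sum the first n items of values using a simple loop."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [10, 20, 30, 40, 50]
assert sum_first(data, 0) == 0      # skip the loop entirely
assert sum_first(data, 1) == 10     # exactly one pass
assert sum_first(data, 3) == 60     # a typical number of passes
assert sum_first(data, 5) == 150    # the maximum number of passes
assert sum_first(data, 6) == 150    # one more than the maximum
print("all loop boundary cases passed")
```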
Black Box Testing | White Box Testing
It is a way of software testing in which the internal structure of the program or the code is hidden and nothing is known about it. | It is a way of testing the software in which the tester has knowledge about the internal structure or the code of the software.
Implementation of code is not needed for black box testing. | Code implementation is necessary for white box testing.
This testing can be initiated based on the requirement specifications document. | This type of testing of software is started after the detailed design document.
It is the behavior testing of the software. | It is the logic testing of the software.
It is applicable to the higher levels of software testing. | It is generally applicable to the lower levels of software testing.
It is not suitable or preferred for algorithm testing. | It is suitable for algorithm testing.
It is less exhaustive as compared to white box testing. | It is comparatively more exhaustive than black box testing.
Software typically undergoes many levels of testing, from unit testing to system or acceptance testing. Typically, in unit testing, small "units" or modules of the software are tested separately, with the focus on testing the code of that module. In higher-order testing (e.g., acceptance testing), the entire system (or a subsystem) is tested with the focus on testing the functionality or external behavior of the system.
As information systems are becoming more complex, the object-oriented paradigm is gaining popularity because
of its benefits in analysis, design, and coding. Conventional testing methods cannot be applied for testing classes
because of problems involved in testing classes, abstract classes, inheritance, dynamic binding, message passing,
polymorphism, concurrency, etc.
Testing classes is a fundamentally different problem than testing functions. A function (or a procedure) has a
clearly defined input-output behavior, while a class does not have an input-output behavior specification. We can
test a method of a class using approaches for testing functions, but we cannot test the class using these
approaches.
According to Davis, the dependencies occurring in conventional systems are:
Data dependencies between variables
Calling dependencies between modules
Functional dependencies between a module and the variable it computes
Definitional dependencies between a variable and its type.
But in Object-Oriented systems there are following additional dependencies:
Class to class dependencies
Class to method dependencies
Class to message dependencies
Class to variable dependencies
Method to variable dependencies
Method to message dependencies
Method to method dependencies
Functional Testing is a type of Software Testing in which the system is tested against the functional requirements
and specifications. Functional testing ensures that the requirements or specifications are properly satisfied by the
application. This type of testing is particularly concerned with the result of processing. It focuses on simulating actual system usage but does not make any assumptions about the system's internal structure.
It is basically defined as a type of testing which verifies that each function of the software application works in
conformance with the requirement and specification. This testing is not concerned about the source code of the
application. Each functionality of the software application is tested by providing appropriate test input, expecting
the output and comparing the actual output with the expected output. This testing focuses on checking of user
interface, APIs, database, security, client or server application and functionality of the Application Under Test.
Functional testing can be manual or automated.
Functional testing involves the following steps:
1. Identify the function that is to be performed.
2. Create input data based on the specifications of the function.
3. Determine the output based on the specifications of the function.
4. Execute the test case.
5. Compare the actual and expected output.
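The five steps above can be sketched in a few lines of code. The apply_discount() function and its "10% off orders of 1000 or more" rule stand in for a hypothetical specification; they are not from any real system.

```python
# A sketch of the five functional testing steps: only inputs and expected
# outputs from the (assumed) specification are used, not the source code.
def apply_discount(order_total):
    return order_total * 0.9 if order_total >= 1000 else order_total

# Steps 1-3: function identified; inputs and expected outputs derived
# from the hypothetical specification.
test_cases = [
    (999, 999),      # just below the discount threshold
    (1000, 900.0),   # exactly at the threshold
    (2000, 1800.0),  # above the threshold
]

# Steps 4-5: execute each case and compare actual with expected output.
for order_total, expected in test_cases:
    actual = apply_discount(order_total)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"input={order_total} expected={expected} actual={actual} {verdict}")
```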
Advantages of Functional Testing:
It helps to deliver a bug-free product.
It helps to deliver a high-quality product.
No assumptions about the structure of the system are made.
This testing focuses on the specifications as per customer usage.
Disadvantages of Functional Testing:
There are high chances of performing redundant testing.
Logical errors in the product can be missed.
If the requirements are not complete, performing this testing becomes difficult.
1. Unit Testing allows developers to learn what functionality is provided by a unit and how to use it to gain a
basic understanding of the unit API.
2. Unit testing allows the programmer to refine code and make sure the module works properly.
3. Unit testing enables testing parts of the project without waiting for others to be completed.
4. Early Detection of Issues: Unit testing allows developers to detect and fix issues early in the development
process, before they become larger and more difficult to fix.
5. Improved Code Quality: Unit testing helps to ensure that each unit of code works as intended and meets the
requirements, improving the overall quality of the software.
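A minimal unit testing sketch using Python's built-in unittest module is shown below; the slugify() function is an invented "unit" whose expected behavior the test cases document.

```python
# A minimal unit testing sketch using Python's built-in unittest module.
import unittest

def slugify(title):
    """Turn a title into a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Software Engineering Notes"),
                         "software-engineering-notes")

    def test_extra_spaces_are_collapsed(self):
        self.assertEqual(slugify("  Unit   Testing "), "unit-testing")

if __name__ == "__main__":
    unittest.main()
```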
System Testing is a type of software testing that is performed on a complete integrated system to evaluate the compliance of the system with the corresponding requirements. In system testing, the components that have passed integration testing are taken as input. The goal of integration testing is to detect any irregularity between the units that are integrated together, while system testing detects defects within both the integrated units and the whole system. The result of system testing is the observed behavior of a component or a system when it is tested. System testing is carried out on the whole system in the context of the system requirement specifications, the functional requirement specifications, or both. It tests the design and behavior of the system against the expectations of the customer, and it is performed to test the system beyond the bounds mentioned in the software requirements specification (SRS). System testing is usually performed by a testing team that is independent of the development team, which helps to test the quality of the system impartially. It includes both functional and non-functional testing. System testing is a black-box testing technique, performed after integration testing and before acceptance testing.
Several tests are performed on a product before deploying it. You need to collect qualitative and quantitative data and satisfy customers' needs with the product. A proper final report is made mentioning the changes required in the product (software). Usability testing in software testing is a type of testing that is done from an end user's perspective to determine if the system is easily usable. Usability testing is generally the practice of testing how easy a design is to use with a group of representative users. A very common mistake in usability testing is conducting a study too late in the design process. If you wait until right before your product is released, you won't have the time or money to fix any issues, and you'll have wasted a lot of effort developing your product the wrong way.
1. Prepare your product or design to test: The first phase of usability testing is choosing a product and then making it ready for usability testing. Usability testing requires the product to have enough working functions and operations, and this phase ensures that those requirements are in place. Hence this is one of the most important phases in usability testing.
2. Find your participants: The second phase of usability testing is finding the participants who will help you perform the usability testing. Generally, the number of participants that you need is based on a number of case studies. Five participants are usually able to find almost as many usability problems as you'd find using many more test participants.
3. Write a test plan: This is the third phase of usability testing. One of the first steps in each round of usability testing is to develop a plan for the test. The main purpose of the plan is to document what you are going to do, how you are going to conduct the test, what metrics you are going to capture, the number of participants you are going to test, and what scenarios you will use.
4. Take on the role of the moderator: This is the fourth phase of usability testing, and here the moderator plays a vital role that involves building a partnership with the participant. Most of the research findings are derived by observing the participant's actions and gathering verbal feedback. To be an effective moderator, you need to be able to make instant decisions while simultaneously overseeing various aspects of the research session.
5. Present your findings/ final report: This phase generally involves combining your results into an overall
score and presenting it meaningfully to your audience. An easy method to do this is to compare each data
point to a target goal and represent this as one single metric based on a percentage of users who achieved this
goal.
Feasibility Study: A feasibility study explores system requirements to determine project feasibility. There are
several fields of feasibility study including economic feasibility, operational feasibility, and technical feasibility.
The goal is to determine whether the system can be implemented or not. The process of feasibility study takes as
input the required details as specified by the user and other domain-specific details. The output of this process
simply tells whether the project should be undertaken or not and if yes, what would the constraints be.
Additionally, all the risks and their potential effects on the projects are also evaluated before a decision to start
the project is taken.
Project Planning: A detailed plan stating a stepwise strategy to achieve the listed objectives is an integral part of
any project. Planning consists of the following activities:
Set objectives or goals
Develop strategies
Develop project policies
Determine courses of action
Make planning decisions
Set procedures and rules for the project
Develop a software project plan
Prepare budget
Conduct risk management
Document software project plans
This step also involves the construction of a work breakdown structure (WBS). It also includes size, effort, schedule, and cost estimation using various techniques.
Project Execution: A project is executed by choosing an appropriate software development life cycle model (SDLC). It includes a number of steps including requirements analysis, design, coding, testing, implementation, delivery, and maintenance. There are a number of factors that need to be considered
while doing so including the size of the system, the nature of the project, time and budget constraints, domain
requirements, etc. An inappropriate SDLC can lead to the failure of the project.
Project Termination: There can be several reasons for the termination of a project. Though expecting a project
to terminate after successful completion is conventional, at times, a project may also terminate without
completion. Projects have to be closed down when the requirements are not fulfilled according to given time and
cost constraints.
Some of the reasons for failure include:
Fast-changing technology
Project running out of time
Organizational politics
Too much change in customer requirements
Project exceeding budget or funds
Once the project is terminated, a post-performance analysis is done. Also, a final report is published describing
the experiences, lessons learned, and recommendations for handling future projects.
Project management is a systematic approach to planning, organizing, and controlling the resources required to
achieve specific project goals and objectives. The project management process involves a set of activities that are
performed to plan, execute, and close a project. The project management process can be divided into several
phases, each of which has a specific purpose and set of tasks.
1. Initiation: This phase involves defining the project, identifying the stakeholders, and establishing the
project’s goals and objectives.
2. Planning: In this phase, the project manager defines the scope of the project, develops a detailed project plan,
and identifies the resources required to complete the project.
3. Execution: This phase involves the actual implementation of the project, including the allocation of
resources, the execution of tasks, and the monitoring and control of project progress.
4. Monitoring and Control: This phase involves tracking the project’s progress, comparing actual results to the
project plan, and making changes to the project as necessary.
5. Closing: This phase involves completing the project, documenting the results, and closing out any open
issues.
Effective project management requires a clear understanding of the project management process and the skills necessary to apply it effectively. The project manager must have the ability to plan and execute projects, manage resources, communicate effectively, and handle risks and issues.
Inspection and audit processes are two important activities in project planning and management that help ensure the
project meets its objectives and requirements. Both processes involve a systematic and objective review of project
documentation, processes, and outcomes, but they differ in their focus and purpose.
Inspection is a process that involves reviewing project documentation, deliverables, and processes to identify
defects, errors, and inconsistencies. The inspection process is typically conducted by a team of subject matter
experts who use checklists, guidelines, and other tools to identify and document issues. The purpose of inspection is
to identify and correct problems early in the project life cycle to minimize their impact on project outcomes.
Audit, on the other hand, is a process that involves a more comprehensive review of project documentation,
processes, and outcomes to evaluate the project's effectiveness, efficiency, and compliance with standards and
regulations. The audit process is typically conducted by an independent team of auditors who use established criteria
and standards to evaluate project documentation, processes, and outcomes. The purpose of audit is to provide an
objective assessment of project performance and identify areas for improvement.
Both inspection and audit processes are important in project planning and management as they help ensure that the
project meets its objectives, stays within budget and schedule, and complies with relevant standards and regulations.
By identifying and correcting problems early in the project life cycle, inspection and audit processes help prevent
costly rework and delays, and improve the overall quality and success of the project.
1. Planning and Identification
The first step in the process is planning and identification. In this step, the goal is to plan for the development of
the software project and identify the items within the scope. This is accomplished by having meetings and
brainstorming sessions with your team to figure out the basic criteria for the rest of the project.
Part of this process involves figuring out how the project will proceed and identifying the exit criteria. This way,
your team will know how to recognize when all of the goals of the project have been met.
Identifying items like test cases, specification requirements, and code modules
Identifying each computer software configuration item in the process
Group basic details of why, when, and what changes will be made and who will be in charge of making them
Create a list of necessary resources, like tools, files, documents, etc.
2. Baselines
The point of this step is to control the changes being made to the product. As the project develops, new baselines
are established, resulting in several versions of the software.
Identifying and classifying the components that are covered by the project
Developing a way to track the hierarchy of different versions of the software
Identifying the essential relationships between various components
Establishing various baselines for the product, including developmental, functional, and product baselines
Developing a standardized label scheme for all products, revisions, and files so that everyone is on the same page.
Baselining a project attribute forces formal configuration change control processes to be enacted in the event that
these attributes are changed.
3. Change Control
Change control is the method used to ensure that any changes that are made are consistent with the rest of the
project. Having these controls in place helps with quality assurance, and the approval and release of new
baseline(s). Change control is essential to the successful completion of the project.
In this step, requests to change configurations are submitted to the team and approved or denied by the software
configuration manager. The most common types of requests are to add or edit various configuration items or
change user permissions.
4. Configuration Status Accounting
The next step is to ensure the project is developing according to the plan by testing and verifying according to the
predetermined baselines. It involves looking at release notes and related documents to ensure the software meets
all functional requirements.
Configuration status accounting tracks each version released during the process, assessing what is new in each
version and why the changes were necessary. Some of the activities in this step include:
Recording and evaluating changes made from one baseline to the next
Monitoring the status and resolution of all change requests
Maintaining documentation of each change made as a result of change requests and to reach another baseline
Checking previous versions for analysis and testing.
5. Audits and Reviews
The final step is a technical review of every stage in the software development life cycle. Audits and reviews look
at the process, configurations, workflow, change requests, and everything that has gone into developing each
baseline throughout the project’s development.
The team performs multiple reviews of the application to verify its integrity and also put together essential
accompanying documentation such as release notes, user manuals, and installation guides.
Making sure that the goals laid out in the planning and identification step are met
Ensuring that the software complies with identified configuration control standards
Making sure changes from baselines match the reports
Validating that the project is consistent and complete according to the goals of the project.
Effort estimation is the process of forecasting how much effort is required to develop or maintain a software
application. This effort is traditionally measured in the hours worked by a person, or the money needed to pay for
this work.
Effort estimation is used to help draft project plans and budgets in the early stages of the software development life
cycle. This practice enables a project manager or product owner to accurately predict costs and allocate resources
accordingly.
Once the effort is estimated, various schedules (or project duration) are possible, depending on the number of
resources (people) put on the project.
For example, for a project whose effort estimate is 56 person-months, a total schedule of 8 months is possible with 7
people. A schedule of 7 months with 8 people is also possible, as is a schedule of approximately 9 months with 6
people.
Manpower and months are not interchangeable in a software project. A schedule cannot be simply obtained from
the overall effort estimate by deciding on average staff size and then determining the total time requirement by
dividing the total effort by the average staff size. Brooks has pointed out that person and months (time) are not
interchangeable. According to Brooks, "... man and months are interchangeable only for activities that require no
communication among men, like sowing wheat or reaping cotton. This is not even approximately true of
software...."
For instance, in the example here, a schedule of 1 month with 56 people is not possible even though the effort
matches the requirement. Similarly, no one would execute the project in 28 months with 2 people. In other words,
once the effort is fixed, there is some flexibility in setting the schedule by appropriately staffing the project, but this
flexibility is not unlimited.
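The naive arithmetic behind the 56 person-month example can be sketched as below. As the text stresses, this simple division is only a starting point: the options at the extremes (56 people for 1 month, or 2 people for 28 months) are not actually feasible.

```python
# Naive schedule arithmetic for a 56 person-month effort estimate:
# schedule = effort / staff. This is exactly the simplification that
# Brooks warns against at the extremes of the table.
effort_person_months = 56

for staff in (2, 6, 7, 8, 56):
    schedule_months = effort_person_months / staff
    print(f"{staff:2d} people -> {schedule_months:5.1f} months")
```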
Empirical data also suggests that no simple equation between effort and schedule fits well. In a project, the
scheduling activity can be broken into two sub activities: determining the overall schedule (the project duration)
with major milestones, and developing the detailed schedule of the various tasks.
A quality software product is defined in terms of its fitness of purpose. That is, a quality product does precisely what the users want it to do. For software products, fitness of purpose is generally explained in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness of purpose" is a satisfactory interpretation of quality for many devices such as a car, a table fan, or a grinding machine, for software products "fitness of purpose" is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product, one that performs all tasks as specified in the SRS document, but that has an almost unusable user interface. Even though it is functionally correct, we cannot consider it to be a quality product.
A quantitative approach involves using statistical methods and procedures of data analysis to analyze, understand
and improve the quality of software products and services.
The quantitative approach to Quality Management typically involves the following steps:
1. Collect data: Collect data on the process, product, or service to be analyzed. The data that is related to the
process, product, or service is collected for the analysis process.
2. Analyze data: Statistical methods are used to analyze the collected data and identify trends and patterns relevant to the future growth of the organization.
3. Identify cause-and-effect relationships: The data analyzed in the data analysis phase is used to identify cause-and-effect relationships for the issues that affect the quality of the product, and to identify the root cause of each problem so as to reduce the repetition of the issue.
4. Implement improvements: Based on the results from data analysis and Identifying the cause-and-effect
relationship, new changes or features are added to increase the quality level by implementing the
improvements.
5. Monitor and measure: To ensure that the improvements lead to providing the desired quality of the product
and satisfying the requirements, regular monitoring along with certain measures is needed.
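As a hedged sketch of the "analyze data" step, the snippet below summarizes invented weekly defect counts with a mean and simple two-sigma control limits, a common statistical quality-control technique (the exact method an organization uses may differ).

```python
# "Analyze data" sketch: weekly defect counts (invented numbers) are
# summarized with a mean and simple two-sigma control limits.
import statistics

weekly_defects = [12, 9, 14, 11, 10, 13, 25, 12]

mean = statistics.mean(weekly_defects)    # 13.25
sigma = statistics.stdev(weekly_defects)  # ~5.0
upper = mean + 2 * sigma
lower = max(mean - 2 * sigma, 0)

print(f"mean={mean:.1f}, control limits=[{lower:.1f}, {upper:.1f}]")
for week, count in enumerate(weekly_defects, start=1):
    if not lower <= count <= upper:
        print(f"week {week}: {count} defects is outside the limits")
# Week 7 (25 defects) falls outside the limits and would be investigated
# for a root cause in the next step.
```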
CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in 1987.
It is not a software process model. It is a framework that is used to analyze the approach and techniques
followed by any organization to develop software products.
It also provides guidelines to further enhance the maturity of the process used to develop those software
products.
It is based on profound feedback and development practices adopted by the most successful organizations
worldwide.
This model describes a strategy for software process improvement that should be followed by moving
through 5 different levels.
Each level of maturity shows a process capability level. All the levels except level-1 are further described by
Key Process Areas (KPA’s).
The 5 levels of CMM are as follows:
Level-1: Initial –
No KPA’s defined.
Processes followed are ad hoc and immature and are not well defined.
Unstable environment for software development.
No basis for predicting product quality, time for completion, etc.
Limited project management capabilities, such as no systematic tracking of schedules, budgets, or progress.
Limited communication and coordination among team members and stakeholders.
No formal training or orientation for new team members.
Little or no use of software development tools or automation.
Highly dependent on individual skills and knowledge rather than standardized processes.
High risk of project failure or delays due to lack of process control and stability.
Level-2: Repeatable –
Focuses on establishing basic project management policies.
Experience with earlier projects is used for managing new similar natured projects.
Project Planning- It includes defining resources required, goals, constraints, etc. for the project. It presents a
detailed plan to be followed systematically for the successful completion of good quality software.
Configuration Management- The focus is on maintaining the performance of the software product,
including all its components, for the entire lifecycle.
Requirements Management- It includes the management of customer reviews and feedback which result in
some changes in the requirement set. It also consists of accommodation of those modified requirements.
Subcontract Management- It focuses on the effective management of qualified software contractors i.e. it
manages the parts of the software which are developed by third parties.
Software Quality Assurance- It guarantees a good quality software product by following certain rules and
quality standard guidelines while developing.
Level-3: Defined –
At this level, documentation of the standard guidelines and procedures takes place.
It is a well-defined integrated set of project-specific software engineering and management processes.
Peer Reviews- In this method, defects are removed by using a number of review methods like walkthroughs,
inspections, buddy checks, etc.
Intergroup Coordination- It consists of planned interactions between different development teams to ensure
efficient and proper fulfillment of customer needs.
Organization Process Definition- Its key focus is on the development and maintenance of the standard
development processes.
Organization Process Focus- It includes activities and practices that should be followed to improve the
process capabilities of an organization.
Training Programs- It focuses on the enhancement of knowledge and skills of the team members including
the developers and ensuring an increase in work efficiency.
Level-4: Managed –
At this stage, quantitative quality goals are set for the organization for software products as well as software
processes.
The measurements made help the organization to predict the product and process quality within some limits
defined quantitatively.
Software Quality Management- It includes the establishment of plans and strategies to develop quantitative
analysis and understanding of the product’s quality.
Quantitative Management- It focuses on controlling the project performance in a quantitative manner.
Level-5: Optimizing –
This is the highest level of process maturity in CMM and focuses on continuous process improvement in the
organization using quantitative feedback.
Use of new tools, techniques, and evaluation of software processes is done to prevent recurrence of known
defects.
Process Change Management- Its focus is on the continuous improvement of the organization’s software
processes to improve productivity, quality, and cycle time for the software product.
Technology Change Management- It consists of the identification and use of new technologies to improve
product quality and decrease product development time.
Defect Prevention- It focuses on the identification of causes of defects and prevents them from recurring in
future projects by improving project-defined processes.
A risk is a probable problem: it might happen or it might not. There are two main characteristics of risk:
Uncertainty – the risk may or may not happen, which means there are no 100% certain risks.
Loss – if the risk occurs in reality, undesirable results or losses will occur.
Risk management is a sequence of steps that help a software team to understand, analyze, and manage uncertainty. Risk management consists of:
Risk Identification
Risk analysis
Risk Planning
Risk Monitoring
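One common way to make the risk analysis step concrete, shown in the sketch below, is to score each identified risk as risk exposure = probability × loss; the formula and the example risks are illustrative additions, not taken from these notes.

```python
# Illustrative risk analysis scoring using the common
# "risk exposure = probability x loss" technique.
risks = [
    # (description,              probability, loss in person-days)
    ("schedule slippage",        0.5,         40),
    ("key developer leaves",     0.2,         60),
    ("requirements change late", 0.4,         25),
]

# Rank risks by exposure so planning and monitoring focus on the top ones.
for name, probability, loss in sorted(risks,
                                      key=lambda r: r[1] * r[2],
                                      reverse=True):
    print(f"{name:26s} exposure = {probability * loss:5.1f} person-days")
```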
A software project can be affected by a large variety of risks. In order to be able to systematically identify the important risks which might affect a software project, it is necessary to categorize risks into different classes. The project manager can then examine which risks from each class are relevant to the project.
There are three main categories of risks that can affect a software project:
1. Project Risks:
Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related issues. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project; it is very difficult to control something that cannot be seen. For any manufacturing project, such as manufacturing cars, the project manager can see the product taking shape. For example, he can see that the engine is fitted, then the doors are fitted, and then the car is being painted, so he can easily assess the progress of the work and control it. The invisibility of the product being developed is an important reason why many software projects suffer from the risk of schedule slippage.
2. Technical Risks:
Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. Technical risks also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team's insufficient knowledge of the project.
3. Business Risks:
This type of risk includes the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.
A project monitoring plan is a document that outlines the strategies and methods for tracking and evaluating project
progress, performance, and outcomes. It is an essential tool in project management that helps project managers and
stakeholders monitor the project's status, identify potential problems, and take corrective actions to ensure the
project stays on track.
Objectives: The plan should clearly state the project's objectives and goals, which will serve as the basis for
monitoring and evaluating project progress and outcomes.
Performance Measures: The plan should identify specific performance measures and indicators that will be used to
track and evaluate project progress and outcomes. These measures should be quantifiable, measurable, and directly
related to the project's objectives.
Data Collection and Analysis Methods: The plan should outline the methods for collecting and analyzing project
data, including the types of data to be collected, the sources of data, and the frequency of data collection.
Reporting: The plan should specify the frequency and format of project status reports, as well as the stakeholders
who will receive the reports. The reports should include relevant project data, analysis, and recommendations for
corrective actions, if necessary.
Roles and Responsibilities: The plan should clearly define the roles and responsibilities of project team members
and stakeholders in the monitoring and evaluation process.
Risk Management: The plan should identify potential risks to the project's success and outline strategies for
mitigating those risks.
Overall, a project monitoring plan is a crucial component of project planning and management, as it helps ensure
that the project stays on track and meets its objectives. By monitoring and evaluating project progress and outcomes,
project managers and stakeholders can identify potential problems early and take corrective actions to keep the
project on track.
A project schedule is simply a mechanism that is used to communicate which tasks need to be done, which organizational resources will be allocated to those tasks, and in what time frame the work needs to be performed. Effective project scheduling leads to success of the project, reduced cost, and increased customer satisfaction. Scheduling in project management means listing out the activities, deliverables, and milestones within a project that are to be delivered. It contains more detail than your average weekly planner notes. The most common and important form of project schedule is the Gantt chart.
Process:
The manager needs to estimate the time and resources of the project while scheduling it. All activities in the project must be arranged in a coherent sequence, meaning that activities should be arranged in a logical and well-organized manner that is easy to understand. Initial estimates of the project can be made optimistically, that is, assuming that all favorable things will happen and no threats or problems will take place.
The total work is separated into various small activities or tasks during project scheduling. The project manager then decides the time required for each activity or task to be completed. Some activities are conducted in parallel for efficient performance. The project manager should be aware of the fact that no stage of the project is problem-free.
Advantages of Project Scheduling:
There are several advantages provided by project schedule in our project management:
It ensures that everyone remains on the same page regarding completed tasks, dependencies, and deadlines.
It helps in identifying issues and concerns early, such as a lack or unavailability of resources.
It also helps in identifying relationships between activities and in monitoring progress.
It provides effective budget management and risk mitigation.
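As a toy illustration of the Gantt chart mentioned above, the sketch below draws each task as a text bar over its start week and duration; the tasks and dates are invented.

```python
# A toy text-based Gantt chart: each task is drawn as a bar over its
# (invented) start week and duration.
tasks = [
    ("Requirements", 1, 2),   # (name, start week, duration in weeks)
    ("Design",       3, 2),
    ("Coding",       4, 4),   # overlaps design: activities run in parallel
    ("Testing",      7, 2),
]

last_week = max(start + duration for _, start, duration in tasks) - 1
header = "".join(f"W{week:<3}" for week in range(1, last_week + 1))
print(f"{'Task':14s}{header}")
for name, start, duration in tasks:
    bar = "    " * (start - 1) + "### " * duration
    print(f"{name:14s}{bar}")
```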
6.2 Implementation
Step 1: Set your project vision and scope with a planning meeting
What is it?
At the beginning of a new Agile project, you need to define a clear business need that your project is addressing. In simpler terms: what is the end goal of this Agile project and how will you achieve it?
An Agile strategy meeting covers big picture ideas but it also needs to be realistic. You can start to think about
the scope of work, but remember that Agile projects need to be flexible and adapt to feedback.
To keep your planning meeting focused, try using the Elevator Pitch method:
For: (Our Target Customer)
Who: (Statement of the Need)
The: (Product Name) is a (Product Category)
That: (Key Product Benefit, Compelling Reason to Buy and/or Use)
Unlike: (Primary Competitive Alternative)
Our Product: (Final Statement of Primary Differentiation)
Who should be there?
The Agile planning meeting is where you get buy-in on your project. Try to include relevant stakeholders as well as
the product owner and key members of the product team.
“Goal-oriented roadmaps focus on goals, objectives, and outcomes like acquiring customers, increasing engagement,
and removing technical debt. Features still exist, but they are derived from the goals and should be used sparingly.
Use no more than three to five features per goal, as a rule of thumb.”
For each of these goals, you want to include 5 key pieces of information: Date, Name, Goal, Features, and Metrics.
A standup is a daily, 10–15-minute meeting where your team comes together to discuss three things:
What did you complete yesterday?
What are you working on today?
Are there any roadblocks getting in the way?
While these meetings might seem like an annoyance to some of your team, they are essential for fostering the kind of communication that drives Agile project management. Agile depends on reacting quickly to issues, and voicing them in a public space is a powerful way to foster cross-team collaboration.
The Agile Adaptive Project Management Life Cycle is a project management approach that combines Agile
principles with adaptive and iterative practices. It acknowledges the inherent uncertainty and complexity of
projects and emphasizes the ability to adapt and respond to changing circumstances. The life cycle consists of the
following stages:
1. Discovery: The project starts with a discovery phase to gain a deep understanding of the project's
objectives, context, and stakeholders' needs. This involves conducting research, gathering requirements,
and identifying key project constraints and risks.
2. Planning and Roadmap: A high-level plan and roadmap are created, outlining the project's goals,
deliverables, and major milestones. However, the plan remains flexible and adaptable, allowing for
adjustments as new information emerges or priorities change.
3. Iterative Development: The project progresses through a series of iterations or sprints, each focused on
delivering a working increment of the project. During each iteration, the project team collaboratively
plans, executes, and reviews the work. Feedback from stakeholders is sought regularly to ensure
alignment with their evolving expectations.
4. Continuous Adaptation: The Agile Adaptive approach emphasizes ongoing adaptation and learning. As
the project progresses, the team incorporates new information, feedback, and changing requirements into
the project's direction and plans. This iterative adaptation helps optimize the project's outcomes and
enables the team to adjust course when needed.
5. Stakeholder Engagement: Active stakeholder engagement is crucial throughout the project.
Stakeholders are involved in providing feedback, participating in iterative reviews, and collaborating on
decision-making. Their input helps shape the project's direction and ensures that it remains aligned with
their needs and priorities.
6. Monitoring and Control: The project's progress, budget, and quality are monitored and controlled
throughout the life cycle. Key performance indicators and metrics are tracked to measure progress,
identify risks, and make informed decisions. Regular retrospectives are conducted to reflect on the team's
performance and identify areas for improvement.
7. Closure and Transition: The project is closed when the desired outcomes are achieved or when it no
longer aligns with the organization's goals. Lessons learned are captured, and knowledge transfer
activities take place to ensure a smooth transition to ongoing operations or subsequent projects.
The Agile Adaptive Project Management Life Cycle encourages flexibility, collaboration, and continuous
improvement. It allows project teams to embrace uncertainty, adapt to changes, and deliver value incrementally
while maintaining a focus on customer satisfaction.
Adaptive Project Management (APM) is an approach that focuses on embracing change and uncertainty
throughout the project lifecycle. It recognizes that traditional project management methods may not be suitable
for projects that are highly complex, dynamic, or subject to frequent changes in requirements. APM emphasizes
flexibility, collaboration, and continuous adaptation to deliver successful outcomes.
Integrating the APM toolkit involves incorporating various techniques, tools, and principles that support the
adaptive project management approach. Here are some key elements of the APM toolkit:
1. Iterative and Incremental Development: APM emphasizes breaking the project down into smaller
iterations or increments. Each iteration focuses on delivering a working product or solution, allowing for
feedback and adjustments throughout the project.
2. Agile Frameworks: APM often leverages Agile methodologies such as Scrum, Kanban, or Lean to
promote adaptive practices. These frameworks provide principles and practices for managing change,
facilitating collaboration, and delivering value iteratively.
3. Dynamic Planning: APM recognizes that project plans need to be adaptable and dynamic. Rather than
creating a detailed plan upfront, the APM toolkit encourages continuous planning and re-planning as the
project progresses. This involves regularly assessing the project's status, reassessing priorities, and
adjusting plans accordingly.
4. Stakeholder Engagement: APM places significant emphasis on active stakeholder engagement. Regular
communication, collaboration, and feedback from stakeholders are crucial to ensure alignment, manage
expectations, and incorporate changing requirements.
5. Continuous Learning and Improvement: APM encourages a culture of learning and improvement. It
involves conducting regular retrospectives, where the project team reflects on their performance,
identifies areas for improvement, and implements changes to enhance project outcomes.
6. Adaptive Decision-Making: APM promotes decentralized decision-making, empowering project teams
to make decisions at the appropriate level. This allows for quicker responses to changes, encourages
innovation, and increases ownership and accountability among team members.
7. Risk Management: APM recognizes that uncertainty and risk are inherent in complex projects. The
APM toolkit includes techniques for proactive risk identification, assessment, and mitigation. It
encourages a flexible and adaptive approach to risk management, enabling timely adjustments to
minimize potential negative impacts.
By integrating these elements, the APM toolkit enables project teams to navigate complexity, respond to changes,
and deliver value in dynamic environments. It emphasizes collaboration, adaptability, and continuous
improvement to increase the likelihood of project success.
"The Science of Scrum" is a book by Jeff Sutherland that explores how Scrum, an Agile project management
framework, is rooted in scientific principles. It highlights the importance of feedback, transparency, and adaptation
in driving project success. The book offers practical insights and guidance on leveraging the scientific foundations
of Scrum to improve collaboration, deliver value, and navigate complexity effectively.
"The Science of Scrum" by Jeff Sutherland provides a scientific perspective on the Scrum framework within the
context of Agile project management. The book explores how empirical principles and practices of Scrum align
with the Agile values and principles. Here's an explanation of "The Science of Scrum" from an Agile project
management point of view:
1. Empirical Process Control: Agile project management embraces an empirical process control
approach, which involves making decisions based on observation, experimentation, and feedback. "The
Science of Scrum" reinforces this approach by highlighting Scrum's empirical nature. It emphasizes
transparency, inspection, and adaptation as the foundation of effective project management. Agile teams
use frequent inspections and feedback loops to gain visibility into their work and make informed
adjustments to deliver higher value.
2. Empirical Foundation: The book delves into the empirical foundations of Scrum, exploring its
alignment with other scientific disciplines. Agile project management values evidence-based decision-
making and continuous learning. "The Science of Scrum" draws connections to fields such as complex
adaptive systems, game theory, and cognitive neuroscience to highlight the empirical nature of Scrum
and reinforce the importance of data-driven insights in Agile project management.
3. Agile Values and Principles: Agile project management is guided by the Agile Manifesto and its
underlying values and principles. "The Science of Scrum" aligns with these Agile values, such as
customer collaboration, responding to change, and working iteratively. It emphasizes the Scrum values
of commitment, courage, focus, openness, and respect, which are integral to fostering an Agile mindset
and creating a collaborative and adaptive work environment.
4. Iterative and Incremental Delivery: Scrum, as a key Agile framework, promotes iterative and
incremental delivery of work. "The Science of Scrum" explains how the Scrum practices, including
sprint planning, daily stand-ups, sprint reviews, and retrospectives, enable Agile teams to break down
complex projects into smaller, manageable increments. This iterative approach allows for regular
feedback and adaptation, aligning with Agile project management's emphasis on delivering value
incrementally and embracing change.
5. Agile Project Scaling: "The Science of Scrum" addresses the scaling aspects of Scrum, acknowledging
that Agile project management is not limited to small teams or projects. It introduces the concept of
Scrum of Scrums, which is a scaling mechanism for coordinating and aligning multiple Scrum teams
working on larger projects. This perspective highlights the adaptability of Scrum and its applicability in
diverse project contexts within Agile project management.
"The Science of Scrum" provides Agile project managers and practitioners with a deeper understanding of the
scientific principles underlying the Scrum framework. By integrating empirical practices, iterative delivery, Agile
values, and scaling considerations, the book helps Agile project managers enhance their ability to manage
complex projects and navigate uncertainty effectively.
It's important to note that the Agile project management perspective presented here is a general interpretation of
the book. For a comprehensive understanding of the scientific and Agile principles discussed, it is recommended
to read "The Science of Scrum" by Jeff Sutherland.
In Agile project management, there is a shift in management responsibilities compared to traditional project
management approaches. Here's an explanation of the new management responsibilities in Agile project
management:
1. Facilitating Self-Organization: Agile project managers have the responsibility to foster self-
organization within the project team. Instead of micromanaging and assigning tasks, they create an
environment where team members are empowered to make decisions, collaborate, and take ownership of
their work. The focus is on enabling the team to be self-directed and self-managing.
2. Removing Obstacles: Agile project managers play a crucial role in removing obstacles or barriers that
hinder the progress of the team. They identify and address any issues or challenges that impede the
team's productivity and help them overcome those barriers. This could involve coordinating with
stakeholders, resolving conflicts, or providing necessary resources and support.
3. Ensuring Alignment: Agile project managers have the responsibility to ensure that the team's work
aligns with the project goals and customer needs. They facilitate clear communication and collaboration
among team members, stakeholders, and customers. By promoting regular feedback and transparency,
they help the team stay focused on delivering value and adapting to changing requirements.
4. Promoting Continuous Improvement: Agile project managers encourage a culture of continuous
improvement within the project team. They facilitate retrospectives, where the team reflects on their
processes, identifies areas for improvement, and implements changes to enhance their performance. They
support the team in adopting Agile practices, experimenting with new ideas, and finding ways to
optimize their work.
5. Agile Leadership and Coaching: Agile project managers take on a coaching and leadership role rather
than a command-and-control approach. They provide guidance, mentorship, and support to the team
members. They encourage skill development, foster a learning mindset, and help individuals and the
team grow and evolve.
6. Monitoring Progress and Metrics: Agile project managers monitor project progress and key metrics to
ensure visibility into the team's performance. They use Agile project management tools and techniques
to track work, assess progress, and make data-driven decisions. They also communicate project status
and metrics to stakeholders, facilitating transparency and enabling informed decision-making (a small
burndown sketch follows this list).
7. Adaptability and Flexibility: Agile project managers embrace adaptability and flexibility as they
navigate uncertainty and changing requirements. They understand that Agile projects require a dynamic
and iterative approach, and they adjust plans and priorities as needed. They are open to feedback and
encourage the team to be responsive to changing circumstances.
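As an illustration of the progress metrics mentioned in point 6, the following minimal Python sketch
(hypothetical data and function names, not tied to any specific Agile tool) computes a sprint burndown: the story
points remaining each day compared against an ideal straight-line trend.

# A minimal sketch of a sprint burndown report (illustrative only).
# remaining[i] is the number of story points left at the end of day i.
def ideal_burndown(total_points, sprint_days):
    """Ideal straight-line trend from total_points down to zero."""
    return [total_points - total_points * day / sprint_days
            for day in range(sprint_days + 1)]

def burndown_report(remaining, total_points):
    days = len(remaining) - 1
    ideal = ideal_burndown(total_points, days)
    for day, (actual, target) in enumerate(zip(remaining, ideal)):
        status = "on track" if actual <= target else "behind"
        print(f"Day {day}: {actual:5.1f} points left "
              f"(ideal {target:5.1f}) - {status}")

# Example: a 5-day sprint that starts with 40 story points.
burndown_report([40, 30, 25, 14, 8, 0], total_points=40)

A chart of these two series is the familiar burndown graph; the gap between the actual and ideal lines is the kind
of data-driven signal an Agile project manager shares with stakeholders.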
These new management responsibilities in Agile project management focus on empowering the team, promoting
collaboration, embracing change, and delivering value iteratively. The role of the Agile project manager is to
facilitate the team's success by creating an environment that supports self-organization, continuous improvement,
and adaptability.