Manual Testing Notes
INTRODUCTION
What is Software Testing?
• Software Testing is the process of executing a program or application with the intent of finding software bugs.
• It can also be stated as the process of validating and verifying that a software program, application, or product meets its specified requirements.
• It can also be defined as the process of checking the completeness, correctness, quality, and security of a software product.
• Testing is a process, not an activity.
• Testing takes place throughout the SDLC.
Defect or Bug
• A bug or defect is an error or flaw in the application.
• When the actual result deviates from the expected result while testing a software application or product, it results in a defect.
• When the result of the software application or product does not meet the end-user expectations or the software requirements, it results in a bug or defect.
What is Failure
• If, under a certain environment and situation, defects in the application or product get executed, the system will produce wrong results, causing a failure.
PRINCIPLES OF TESTING
There are 7 principles of Testing
1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early Testing
4. Defect clustering
5. Pesticide Paradox
6. Testing is context dependent
7. Absence of errors fallacy
What is Quality?
• Degree of excellence.
• The ISO 8402-1986 standard defines quality as “the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.”
Phases of SDLC
1. Requirement gathering and analysis
2. Design
3. Implementation
4. Testing
5. Deployment
6. Maintenance
Verification
• It is the process that starts in the early phases of the SDLC, where we check requirements, design, and code.
• Done by review meetings, walkthroughs, or inspections.
• Asks: “Are we building the product right?”
• The process of evaluating the work products of a development phase to determine whether they meet the specified requirements for that phase.
Validation
• The process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements.
• Done by testing.
• Asks: “Are we building the right product?”
SDLC MODELS
1. SDLC models are methodologies used to develop software.
2. Models we are going to cover:
a. Waterfall Model
b. Agile Development
c. V Model
Waterfall Model
Advantages
• Simple and Easy to Understand & Implement
• Easy to Manage due to Rigidity of Model
• Phases are processed & completed one at a time
• Good for shorter time projects where requirements are well understood.
Disadvantages
• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty
• Not good for complex projects
• Cannot accommodate changing requirements
How to Test in SDLC
1. Requirements
2. Design
3. Implementation
4. Testing
5. Deployment
6. Maintenance
UNIT TESTING
• Also known as Component Testing, Module Testing and Program Testing.
• A unit is the smallest testable part of an application like functions, classes, procedures, interfaces.
• Unit testing is a method by which individual units of source code are tested to determine if they are fit for use.
• Unit tests are basically written and executed by software developers to make sure that code meets its design and
requirements and behaves as expected.
• Unit Testing can include Functional and Non-Functional Testing (Memory leak, Performance)
• When should unit testing be done? During development, as early as possible.
• Who should do unit testing? Usually the developers who wrote the code.
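As a sketch of the idea, a unit test for a small, hypothetical `add` function (the function and test names are illustrative, not from any real project) might look like this using Python's built-in unittest framework:

```python
import unittest

# Unit under test: a hypothetical helper function (illustrative only).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method exercises the unit in isolation.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Load and run the suite programmatically, then report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("All unit tests passed:", result.wasSuccessful())
```

In practice, developers run such suites with `python -m unittest` as part of every build, so a failing unit is caught before integration.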
INTEGRATION TESTING
• Integration testing is the process in which we test the interfaces between components.
• It tests how an interface interacts with the OS, file system, hardware, or another interface.
• Integration testing is done by a specific integration tester or test team.
Levels of Integration testing
There are 2 levels of Integration Testing
1. Component Integration Testing
2. System Integration Testing
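A minimal sketch of component integration testing, using two hypothetical units (`format_price` and `build_line_item` are made-up names): each unit may pass its own unit tests, while the integration test exercises the interface between them:

```python
# Component A: formats an amount in cents as a dollar string.
def format_price(cents):
    return f"${cents / 100:.2f}"

# Component B: builds a receipt line, calling component A at its interface.
def build_line_item(name, cents):
    return f"{name}: {format_price(cents)}"

# Component integration test: run A and B together and check the combined result.
assert build_line_item("Coffee", 350) == "Coffee: $3.50"
print("integration check passed")
```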
SYSTEM TESTING
• In system testing the behavior of whole system/product is tested as defined by the scope of the development project or
product.
• It may include tests based on risks and/or requirement specifications, business process, use cases, or other high-level
descriptions of system behavior, interactions with the operating systems, and system resources.
• System testing is most often the final test to verify that the system to be delivered meets the specification and its purpose.
• System testing is carried out by specialist testers or independent testers.
• System testing should investigate both functional and non-functional requirements of the system.
• System testing requires a controlled test environment.
ALPHA TESTING
• This takes place at the developer’s site. Developers observe the users and note problems.
• Alpha testing is testing of an application when development is about to complete. Minor design changes can still be made
as a result of alpha testing.
• Alpha testing is typically performed by a group that is independent of the design team, but still within the company,
e.g. in-house software test engineers, or software QA engineers.
• Alpha testing is the final testing before the software is released to the general public.
BETA TESTING
1. It takes place at the customer’s site. The system is sent to users who install it and use it under real-world working conditions.
2. A beta test is the second phase of software testing, in which a sampling of the intended audience tries the product out.
3. The goal of beta testing is to place your application in the hands of real users outside of your own engineering team to discover any flaws or issues from the user’s perspective that you would not want in your final, released version of the application.
Types of Testing
• Testing of function (Functional Testing)
• Testing of Software product characteristics (Non- functional Testing)
• Testing of Software structure/architecture (Structural Testing)
• Testing related to changes.
FUNCTIONAL TESTING
• Functional testing is the testing in which we test the functional aspects of the system.
• We test what a system does.
• The system should do what it is supposed to do, whether or not it is explicitly written in the requirements.
• This type of testing can be carried out based on the functional requirement specification document or on use cases.
• This type of testing can be done at any test level.
• According to ISO 9126, functional testing focuses on suitability, interoperability, security, accuracy, and compliance.
NON-FUNCTIONAL TESTING
• Non-functional testing is the testing in which we test the non-functional aspect of the system.
• We test how well things are done.
• Non-functional testing covers following testing:
a. Performance Testing
b. Usability Testing
USER INTERFACE
• Check the user interface of the system:
• Color coding is proper
• Position of all elements is proper
• Font sizes are correct
• Changing the mode works as expected
USABILITY TESTING
• To test the ease with which users can use the application; it tests whether the application or product is user-friendly or not.
• Test the following things:
• How easy is it to use the software?
• How easy is it to learn the software?
• How convenient is the software for the end user?
• Components we test in usability testing:
• Learnability: how easy the system is to learn.
• Efficiency: how fast an experienced user can work with the system.
• Memorability: how easily the user remembers how to use the system.
• Errors: how many errors the user makes.
• Satisfaction: how satisfied the user is.
PERFORMANCE TESTING
• Performance testing is testing that is performed, to determine how fast some aspect of a system performs under a
particular workload.
• This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which the system functions.
LOAD TESTING
• A load test is a type of test conducted to understand the behavior of the application under a specific expected load.
• Load testing is performed to determine a system’s behavior under both normal and peak conditions.
STRESS TESTING
• It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
• It puts greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
SECURITY TESTING
• Checks whether the application or product is secure or not.
• Checks whether the application is vulnerable to attacks, and whether anyone can hack the system or log in to the application without authorization.
• Checks confidentiality, integrity, authentication, availability, authorization.
COMPATIBILITY TESTING
• To ensure compatibility of the system/application/website built with various Software platforms.
CONFIGURATION TESTING
• To ensure compatibility of the system/application/website built with various Hardware platforms
REGRESSION TESTING
• In this testing we check whether any unchanged module is affected by a changed module.
• To verify that modifications in the software have not caused unintended adverse side effects.
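As a sketch, suppose a hypothetical `format_name` function is modified to strip surrounding whitespace; regression testing means rerunning the existing assertions to confirm the old behavior is unchanged:

```python
# Hypothetical function after a change: it now strips surrounding whitespace.
def format_name(first, last):
    return f"{last.strip()}, {first.strip()}"

# Regression suite: the pre-change expectations must still hold...
assert format_name("Ada", "Lovelace") == "Lovelace, Ada"
# ...and a new test case covers the changed behavior itself.
assert format_name(" Ada ", "Lovelace") == "Lovelace, Ada"
print("regression suite passed")
```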
INTEROPERABILITY TESTING
• Interoperability testing is a type of testing that checks whether software can interoperate with other software components or systems.
• This type of testing is carried out to ensure that different components or devices from different vendors work correctly together.
MAINTAINABILITY TESTING
• This type of testing covers how easy it is to maintain the system.
• How easy it is to modify the system.
RELIABILITY TESTING
• Reliability testing is a type of testing to verify that software is capable of performing a failure-free operation for a specified
period of time in a specified environment.
• The purpose of reliability testing is to determine product reliability.
• Following factors which decides Reliability:
• Whether operations are failure-free.
• For how long operations remain failure-free.
PORTABILITY TESTING
• In this testing type we check how easy it is to move the software from one environment/platform to another.
• We usually check the amount of effort to move from different Hardware platforms, & Software Platforms.
AD-HOC TESTING
• A testing process that is NOT organized.
• It is a black-box testing approach that does not follow any formal process.
• The intent of ad-hoc testing is to find issues after a formal round of testing is done.
• Only people who have complete knowledge of the system can do ad-hoc testing effectively.
STRUCTURAL TESTING
• This is the testing in which we test the internal architecture of the system.
• Also known as white-box or glass-box testing.
• We usually do this testing at Unit or Integration testing level.
Informal Review
• Informal Process of Review
• Usually involves 2 members
• Things are not documented.
Formal Review
• Well Structured format of Review
• Consist of 6 Stages
a. Planning
b. Kick Off
c. Preparation
d. Review Meeting
e. Rework
f. Follow Up
1. The Moderator
• Review leader
• Performs the entry check
• Follows up on the rework
• Schedules the meeting
• Coaches other team members
• Leads the possible discussions and stores the data that is collected.
2. The Author
• Illuminate the unclear areas and understand the defects found
• The basic goal should be to learn as much as possible with regard to improving the quality of the document.
3. The Scribe
• The scribe is a separate person who logs the defects found during the review.
4. The reviewers
• Also known as checkers or inspectors.
• Check any material for defects, mostly prior to the meeting.
• The manager can also be involved in the review depending on his or her background.
5. The managers
• The manager decides on the execution of the review
• Allocates time in project schedules and determines whether review process objectives have been met.
Types of Review
1. Walkthrough
2. Technical Review
3. Inspection
Walkthrough
• It is not a formal process/review
• It is led by the authors
• Authors guide the participants through the document according to his or her thought process to achieve a common
understanding and to gather feedback.
• Useful for people who are not from the software discipline and who are not used to, or cannot easily understand, the software development process.
• Is especially useful for higher level documents like requirement specification, etc.
• The goals of a walkthrough:
• To present the documents both within and outside the software discipline in order to gather the information regarding
the topic under documentation.
• To explain or do the knowledge transfer and evaluate the contents of the document
• To achieve a common understanding and to gather feedback.
• To examine and discuss the validity of the proposed solutions.
Technical Review
• It is a less formal review
• It is led by the trained moderator but can also be led by a technical expert
• It is often performed as a peer review without management participation
• Defects are found by the experts (such as architects, designers, key users) who focus on the content of the document.
• In practice, technical reviews vary from quite informal to very formal
• The goals of the technical review are:
• To ensure, at an early stage, that the technical concepts are used correctly
• To assess the value of technical concepts and alternatives in the product
• To have consistency in the use and representation of technical concepts
• To inform participants about the technical content of the document.
Inspection
• It is the most formal review type
• It is led by the trained moderators
• During inspection the documents are prepared and checked thoroughly by the reviewers before the meeting
• It involves peers to examine the product
• A separate preparation is carried out, during which the product is examined and defects are found
• The defects found are documented in a logging list or issue log
• A formal follow-up is carried out by the moderator applying exit criteria
• The goals of inspection are:
• It helps the author to improve the quality of the document under inspection
• It removes defects efficiently and as early as possible
• It improves product quality
• It creates a common understanding by exchanging information
• Participants learn from the defects found and prevent the occurrence of similar defects.
Decision table
• Deals with combinations of inputs.
• More focused on business logic or business rules.
• Decision tables provide a systematic way of stating complex business rules, which is useful for developers as well as for
testers.
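As a sketch, a hypothetical discount rule (the rule and numbers are invented for illustration) can be written directly as a decision table, and each row of the table then becomes one test case:

```python
# Decision table for a hypothetical discount rule.
# Inputs: (is_member, order_total > 100); output: discount rate.
DECISION_TABLE = {
    (True,  True):  0.20,  # member, large order
    (True,  False): 0.10,  # member, small order
    (False, True):  0.05,  # non-member, large order
    (False, False): 0.00,  # non-member, small order
}

def discount(is_member, order_total):
    # Look up the outcome for this combination of conditions.
    return DECISION_TABLE[(is_member, order_total > 100)]

# One test case per table row covers every combination of inputs.
assert discount(True, 150) == 0.20
assert discount(True, 50) == 0.10
assert discount(False, 150) == 0.05
assert discount(False, 50) == 0.00
```

Writing the rules as data like this keeps the business logic in one place, so testers and developers can check the table row by row against the specification.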
Statement Coverage
• Statement coverage requires each and every line of code to be checked and executed at least once.
• Unlike branch coverage, it does not require every condition outcome to be exercised, only every statement.
Example
    a = 5
    b = 3
    if a > b:
        print(a)
To achieve 100% statement coverage, we need 1 test case (any input where a > b).
Example
    a = 3
    b = 5
    if a > b:
        print(a)
    else:
        print(b)
To achieve 100% statement coverage, we need 2 test cases (one where a > b and one where a <= b).
Path Coverage
• To cover every execution path through the code at least once.
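A sketch of why path coverage grows quickly: with two independent decisions there are 2 × 2 = 4 paths, so 100% path coverage needs four test cases (the `classify` function is hypothetical):

```python
# Hypothetical function with two independent decisions -> four execution paths.
def classify(a, b):
    labels = []
    if a > 0:              # decision 1
        labels.append("a+")
    else:
        labels.append("a-")
    if b > 0:              # decision 2
        labels.append("b+")
    else:
        labels.append("b-")
    return labels

# One test case per path: all four input combinations are needed.
assert classify(1, 1) == ["a+", "b+"]
assert classify(1, -1) == ["a+", "b-"]
assert classify(-1, 1) == ["a-", "b+"]
assert classify(-1, -1) == ["a-", "b-"]
```

Note that statement coverage of the same function would already be satisfied by just two of these cases.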
Test Planning
• Test planning involves scheduling and estimating the system testing process, establishing process standards, and describing
the tests that should be carried out.
• Managers allocate resources and estimate testing schedules.
• The final outcome of planning is the test plan.
Test Plan
• Test plan is the project plan for the testing work to be done
• A plan to track the progress of the testing project.
Test Designing
• This phase defines "HOW" to test.
• Identify and get the test data.
• Identify and set up the test environment.
• Create the requirement traceability matrix.
Test Execution
• This is the Software Testing Life Cycle phase where the actual execution takes place.
• Before you start your execution, make sure that the entry criteria are met.
• Entry Criteria for Execution:
a. Verify if the Test environment is available and ready for use.
b. Verify if test tools installed in the environment are ready for use.
c. Verify if Testable code is available.
d. Verify if Test Data is available and validated for correctness of Data.
Defect Template
1. Defect ID
2. Project
3. Module Name
4. Summary
5. Description
6. Steps to reproduce
7. Actual Result
8. Expected Result
9. Attachments
10. Severity
11. Priority
12. Resolution
13. Component
14. Fix Version
15. Reported By
16. Assigned To
17. Status
18. Environment
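The template fields above can be sketched as a simple record; this is an illustrative data structure, not the schema of any real defect-tracking tool, and only a subset of the fields is shown:

```python
from dataclasses import dataclass

# Illustrative defect record mirroring the template fields above.
@dataclass
class Defect:
    defect_id: str
    project: str
    module_name: str
    summary: str
    steps_to_reproduce: list
    actual_result: str
    expected_result: str
    severity: str = "Major"
    priority: str = "High"
    status: str = "New"

# A hypothetical report: actual vs. expected result makes the defect concrete.
bug = Defect(
    defect_id="BUG-101",
    project="Shop",
    module_name="Checkout",
    summary="Discount is not subtracted from the order total",
    steps_to_reproduce=["Add an item", "Apply a coupon", "Open the cart"],
    actual_result="Total is $100",
    expected_result="Total is $90",
)
print(bug.defect_id, bug.status)  # a newly reported defect starts as "New"
```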
Postmortem Review
• It's important for project managers and team members to take stock at the end of a project and develop a list of lessons learned, so that they don't repeat their mistakes in the next project.
• Reexamine the complete process for future improvements.
Test monitoring
• Give the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and
improve the testing and the project.
• Provide the project team with visibility about the test results.
• Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work
is done.
Test control
• Guiding and corrective actions to try to achieve the best possible outcome for the project
• Example: a portion of the software under test will be delivered late, but market conditions dictate that we cannot change the release date. At this point, test control might involve re-prioritizing the tests so that we start testing against what is available now.
Types of Risk
1. Product Risk
2. Project Risk
Product Risk
• Possibility that the system or software might fail to satisfy or fulfill some reasonable expectation of the customer, user, or
stakeholder.
• They are risks to the quality of the product.
• Also Known as Quality Risk
• The product risks that can put the product or software in danger are:
• If the software skips some key function that the customers specified, the users required, or the stakeholders were
promised.
• If the software is unreliable and frequently fails to work.
• If software fails in ways that cause financial or other damage to a user or the company that user works for.
• If the software has problems related to a particular quality characteristic, which might not be functionality, but rather
security, reliability, usability, maintainability, or performance.
Project Risk
• Project risks are risks that endanger the project itself.
• Risk such as the late delivery of the test items to the test team or availability issues with test environment.
• There are also indirect risks, such as expensive delays in repairing defects found in testing, or problems getting professional system administration support for the test environment.