Manual Testing
Fault: A condition that causes the software to fail to perform its required function.
Error: The difference between the actual output & the expected output.
Why do we test?
To discover defects.
To prevent the user from encountering problems.
To prove that the s/w has no defects.
To learn about the reliability of the software.
To ensure that the product works as per user expectations.
To stay in business.
To avoid being sued by customers.
To detect defects early, which helps in reducing the cost of fixing those defects.
Testing is the process of creating, implementing & evaluating tests. Testing measures software quality.
Testing can find faults; when they are removed, software quality is improved.
Walkthrough: An informal meeting. The purpose of the meeting is defined, but the members come without any preparation. The author describes the work product in an informal meeting to his peers or superiors, to get feedback on, inform about, or explain the work product.
Reviews: Review means re-verification. Reviews have been found to be extremely effective for detecting defects, improving productivity & lowering costs. They provide good checkpoints for the management to study the progress of a particular project. Reviews are also a good tool for ensuring quality control. In short, they have been found to be extremely useful by a diverse set of people and have found their way into the standard management & quality control practice of many institutions. Their use continues to grow.
Quality Assurance:
Quality assurance measures the quality of the processes used to create a quality product. Software QA involves the entire s/w development process: monitoring & improving the process, making sure that any agreed-upon standards & procedures are followed, and ensuring that problems are found and dealt with.
AREAS OF TESTING:
Equivalence Class
For each piece of the specification, generate one or more equivalence classes.
Label the classes as “valid” or “invalid”.
Generate one test case for each invalid equivalence class.
Generate a test case that covers as many valid equivalence classes as possible.
Eg: In LIC, different types of policies are there:

Policy type    Age
1              0-5 years
2              6-12 years
3              13-21 years
4              21-40 years
5              40-60 years

Here we divide the ages into classes, decide who comes under which policy, & write TCs for the valid & invalid classes.
Eg: In LIC, when the user applies for type-5 insurance, the system asks to enter the age of the customer. Here the age limit is greater than 40 yrs. & less than 60 yrs. (the 40-60 band).
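As a rough sketch of how these classes translate into tests (the function below and its exact boundary handling are assumptions, not part of any real LIC system; note the source table overlaps at ages 21 and 40, so here each shared boundary is assigned to the lower class):

def policy_type_for_age(age):
    # Return the policy type for a given age, or None for an invalid age.
    if 0 <= age <= 5:
        return 1
    if 6 <= age <= 12:
        return 2
    if 13 <= age <= 21:
        return 3
    if 22 <= age <= 40:
        return 4
    if 41 <= age <= 60:
        return 5
    return None  # invalid equivalence classes: age < 0 or age > 60

# One representative value per valid class, plus one per invalid class:
assert policy_type_for_age(3) == 1       # valid class 0-5
assert policy_type_for_age(50) == 5      # valid class for type-5
assert policy_type_for_age(-1) is None   # invalid class: negative age
assert policy_type_for_age(75) is None   # invalid class: above 60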
Error Guessing:
Test cases are designed from the tester's experience: guessing where mistakes are likely (empty inputs, boundary dates, divide-by-zero, etc.) and testing for them.
White Box Testing:
White box testing is also called structural testing. White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code: statements, branches, paths, conditions & loops.
Structure = 1 entry + 1 exit with certain constraints, conditions and loops.
Approach
Basis path testing
Cyclomatic complexity (McCabe complexity)
Structure testing
Condition testing
Dataflow testing
Loop testing
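A minimal sketch of basis path testing (the function is hypothetical): the function below has two decisions, so its cyclomatic complexity is 2 + 1 = 3, and three test cases cover the three independent paths:

def classify(n):
    # Two decisions -> cyclomatic complexity = 2 + 1 = 3.
    if n < 0:        # decision 1
        return "negative"
    if n == 0:       # decision 2
        return "zero"
    return "positive"

# One test case per independent path:
assert classify(-5) == "negative"   # decision 1 true
assert classify(0) == "zero"        # decision 1 false, decision 2 true
assert classify(7) == "positive"    # both decisions false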
Grey Box Testing:
This is just a combination of both black box and white box testing. The tester should have knowledge of both the internals and externals of the function. The tester should have good knowledge of white box testing & complete knowledge of black box testing.
Grey box testing is especially important with web & Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces.
[V-model diagram: development phases (BRS, Design, ...) on the verification side are paired with test phases (Acceptance Test, Integration Test, ...) on the validation side.]
V – MODEL
‘V’ stands for verification & validation. It is a suitable model for large-scale companies to maintain the testing process. This model defines a co-existence relationship between the development process and the testing process.
PHASES ARE:
1) Unit Testing
The main goal is to test the internal logic of the module. In unit testing the tester is supposed to check each and every micro function. All field-level validations are expected to be tested at this stage of testing. In most cases the developer will do this.
In unit testing both black box & white box testing are conducted by developers.
Depends on the LLD.
Follows white box testing techniques:
Basis path testing
Loop coverage
Program technique testing
Approach:
i. Equivalence Class
ii. Boundary value analysis
iii. Error guessing
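A minimal unit-test sketch using Python's unittest, combining the three approaches above (validate_age and its 41-60 range are assumed for illustration, echoing the LIC example):

import unittest

def validate_age(value):
    # Hypothetical field-level validation: age must be an integer in 41..60.
    if not isinstance(value, int):
        raise ValueError("age must be an integer")
    return 41 <= value <= 60

class TestValidateAge(unittest.TestCase):
    def test_valid_equivalence_class(self):
        self.assertTrue(validate_age(50))      # representative valid value

    def test_boundary_values(self):            # boundary value analysis
        self.assertTrue(validate_age(41))
        self.assertTrue(validate_age(60))
        self.assertFalse(validate_age(40))
        self.assertFalse(validate_age(61))

    def test_error_guessing(self):             # likely-bad input
        with self.assertRaises(ValueError):
            validate_age("fifty")

if __name__ == "__main__":
    unittest.main()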
2) Integration Testing:
The primary objective of integration testing is to discover errors in the interfaces between modules / sub-systems.
Here, many unit-tested modules are combined into sub-systems. The goal is to see if the modules can be integrated properly. Follows white box testing techniques to verify the coupling of the corresponding modules.
Approach
i. Top-down approach --- this is used for new systems.
ii. Bottom-up approach --- this is used for existing systems.
Top-down Approach:
Testing the main module without the actual sub-modules is called the top-down approach. A temporary program used in place of a sub-module is called a stub.
Bottom-up Approach:
Testing the sub-modules without the actual main module is called the bottom-up approach. A temporary program used in place of the main module is called a driver.
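A rough sketch of a stub and a driver (the module and function names are made up for illustration):

# Top-down: the main module is real; the sub-module is replaced by a stub.
def tax_stub(amount):
    # Stub: stands in for the unfinished tax sub-module, returns a canned value.
    return 10.0

def invoice_total(amount, tax_service):
    # Main module under test; it calls the sub-module through tax_service.
    return amount + tax_service(amount)

assert invoice_total(100.0, tax_stub) == 110.0

# Bottom-up: the sub-module is real; a driver stands in for the main module.
def real_tax(amount):
    return round(amount * 0.1, 2)

def driver():
    # Driver: temporary caller used in place of the unfinished main module.
    assert real_tax(100.0) == 10.0

driver()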
3) System Testing:
The primary objective of system testing is to discover errors when the system is tested as a whole. System testing is also called end-to-end testing. The tester is expected to test from login to logout, covering various business functionalities. Conducted by test engineers. Depends on the SRS.
Approach:
Identify the end-to-end business life cycle.
Design the test data.
Optimize the end-to-end business life cycle.
4) Acceptance testing:
Acceptance testing is done to get acceptance from the client. The client will be using the system against the business requirements, testing on the client side with the real-life data of the client.
Approach:
Building a team with real-time users, functional users and developers.
Execution of business test cases.
F Functionality Testing
U Usability Testing
R Reliability Testing
R Regression Testing
P Performance Testing
S Scalability Testing
C Compatibility Testing
1) Functionality Testing
Testing Approach:
Equivalence class
Boundary value analysis
Error guessing
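For instance, boundary value analysis on the age field from the LIC example (accepts_age and the 41-60 range are assumptions) tests at and just around each edge:

def accepts_age(age):
    # Hypothetical check for the type-5 policy age range.
    return 41 <= age <= 60

# Test at, just below, and just above each boundary:
for age, expected in [(40, False), (41, True), (42, True),
                      (59, True), (60, True), (61, False)]:
    assert accepts_age(age) == expected, age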
2) Usability Testing:
To test the ease (comfort, facility) of use and user-friendliness of the system.
Approach:
i. Each and every function should be available from all the pages of the site.
ii. The user should be able to submit a request within 4-5 actions.
iii. A confirmation message should be displayed for each submission.
Quantitative approach:
3) Reliability Testing
The objective is to find the mean time between failures (the time the system is available under a specific load pattern) and the mean time to recovery.
Eg:
23 hours/day availability & 1 hour for recovery (of the system).
Citibank has 4 servers in each region; every 6 hrs. it switches servers.
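A worked sketch of the 23-hours-a-day example above, using the standard availability formula (the numbers are only illustrative):

# Availability = MTBF / (MTBF + MTTR)
mtbf = 23.0   # hours between failures (from the example)
mttr = 1.0    # hours to recover
availability = mtbf / (mtbf + mttr)
print(f"availability = {availability:.3f}")   # 0.958 -> about 95.8% uptime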
Approach
RRT (Rational Real Time tool)
4) Regression Testing
To check that the new functionalities have been incorporated correctly without breaking the existing functionalities.
Bugs need to be communicated and assigned to the developers who can fix them. After a problem is resolved, the fix should be re-tested, and a determination made regarding requirements for regression testing, to check that the fix did not create problems elsewhere.
5) Performance Testing
Performance parameters:
Finding the break point of the application: the max. no. of users that an application can handle (at the same time).
Approach:
RCQE
Repeatedly working on the same functionality.
Critical Query Execution.
To emulate peak load.
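A minimal sketch of emulating peak load in Python (request() is a placeholder, not a real call to any system under test):

import time
from concurrent.futures import ThreadPoolExecutor

def request(i):
    # Placeholder for one user action against the system under test.
    start = time.time()
    time.sleep(0.01)   # stands in for real work, e.g. an HTTP call
    return time.time() - start

# Emulate many simultaneous users repeating the same functionality:
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(request, range(500)))

print(f"max response time: {max(timings):.3f}s over {len(timings)} requests")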
Volume testing:
Load testing:
Testing with the load that the customer wants (not all at the same time). The load is increased continuously until the customer's required load is reached.
6) Scalability Testing:
To find the maximum number of users the system can handle. (The customer will give the max. no.)
Classification:
Network scalability
Server scalability
Application scalability
7) Compatibility testing:
How a product performs over a wide range of hardware, software & network configurations, and isolating the specific problems.
Approach: ET Approach.
Environment Selection:
Understanding the end user's application environment.
Importance of selecting both old browsers & new browsers.
Selection of the operating system.
The life cycle begins when an application is first conceived (imagined) and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration testing, maintenance, updates, re-testing, phase-out, and other aspects.
The V model is the most suitable way to follow for deciding when to start writing test cases and when to conduct testing.
Testing limitations:
We can only test against system requirements.
o May not detect errors in the requirements.
o Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
Exhaustive (total) testing is impossible in the present scenario.
Time and budget constraints normally require very careful planning of the testing effort.
Compromise between thoroughness and budget.
Test results are used to make business decisions for release dates.
Tester responsibilities:
Follow the test plans, scripts etc., as documented.
Report faults objectively and factually.
Check tests are correct before reporting s/w faults.
Assess risk objectively.
Prioritize what you report.
Communicate the truth.
We can't test everything. There is never enough time to do all the testing you would like, so what testing should you do?
Prioritize tests so that, whenever you stop testing, you have done the best testing possible in the time available.
Tips:
Possible ranking criteria (all risk based):
Test where a failure would be most severe.
Test where failures would be most visible.
Take the help of the customer in understanding what is most important to him.
What is most critical to the customer's business?
Areas changed most often.
Areas with the most problems in the past.
Most complex areas, or technically critical ones.
Software:
Before starting the analysis we first check the feasibility of the project/work/system:
Finance feasibility
Cost feasibility
Resource feasibility
Ability to accept
If we feel it is feasible then we go through the SDLC phases:
Analysis
Design
Coding
Testing
Analysis:
i. Requirements analysis is done to understand the problem the software system is to solve.
ii. Understanding the requirements of the system is a major task.
iii. Analysis focuses on identifying what is needed from the system.
iv. The main goal of the requirements specification is to produce the SRS document.
v. Once understood, the requirements must be specified in the document.
Design:
i. The purpose of the design is to plan a solution to the problem specified by the requirements documents.
ii. The first step in this phase is moving from the problem domain to the solution domain.
iii. The o/p of this phase is the design document.
iv. This document is similar to a blueprint.
Coding:
i. Once the design is complete, most of the major decisions about the system have been made.
ii. The goal of the coding phase is to translate the design into code.
iii. Coding affects both testing & maintenance. Well-written code can reduce the testing & maintenance effort, because the testing and maintenance costs of s/w are much higher than the coding cost.
So the goal of the coding should be to reduce the testing & maintenance effort.
Testing:
i. Testing is the major quality control measure used during s/w development. Its basic function is to detect errors in the s/w.
ii. After the coding, computer programs are available that can be executed for testing purposes; different levels of testing are used.
iii. The starting point of testing is unit testing: a module is tested separately. This is done by the coder himself, simultaneously with the coding of the module.
iv. After this, modules are gradually integrated into subsystems, which are then integrated to form the entire system. Here we do integration tests.
v. System testing: the system is tested against the requirements to see if all the requirements are met as specified by the documents.
vi. Acceptance testing: done on the client side with the real-life data of the client.
Prototype Model:
In this model the requirements are not frozen before design or coding can proceed. The prototype is developed based on the currently known requirements. It is a sample of how the actual system will look.
[Diagram: Requirement Analysis -> Design -> Code -> Test]
3. Iterative Model:
In this model we can make changes at any level, but all four phases of the SDLC will take place again.
[Diagram: Analysis -> Design -> Code -> Test repeated in each iteration]
4. Spiral Model:
In this model the system is divided into modules and each module follows the phases of the SDLC. It is a good & successful model.
[Diagram: Module 1, Module 2 and Module 3 each pass through Analysis, Design, Code, Test]
TEST LIFE CYCLE(TLC)
TLC PHASES:
System study
Scope/Approach/Estimation
Defect Handling
GAP Analysis
1. System study:
Domain:
There may be different types of domains, like banking, finance, insurance, marketing, real-time, ERP, Siebel, manufacturing etc.
Software:
2. Scope/Approach/Estimation:
Scope:
What is to be tested.
What is not to be tested.
Eg:
[Diagram: example modules U, I, S, A]
Approach: Test Life Cycle (All the phases of TLC)
Estimation:
LOC (lines of code) / F.P. (function point) / Resource.
1 F.P. = 10 lines of code.
For 1000 LOC we can estimate the time needed to complete the whole TLC.
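A sketch of the arithmetic, using the 1 F.P. = 10 LOC rule of thumb above (the hours-per-FP figure is an assumed productivity number, purely illustrative):

loc = 1000
fp = loc / 10          # 1 function point = 10 lines of code (rule above)
hours_per_fp = 2       # assumed productivity figure
effort_hours = fp * hours_per_fp
print(f"{fp:.0f} FP -> about {effort_hours:.0f} hours of test effort")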
Test Plan:
1. Who are the client & their details? And also the company & its details where the testing takes place.
2. Reference Documents, like BRS, SRS & DFD etc.
3. Scope of the Project
4. Project Architecture & Data Flow Diagrams
5. Test Strategy
6. Deliverables
7. Schedules
8. Milestones
9. Risk/Mitigation/Contingency
10. Testing Requirements
11. Assumptions
12. Test Environment/Project
13. Defects
14. Escalation Process
Finally, the main aim of this application is to provide quality test resources to companies, and at the same time, by using this, Thatavarti wants to improve its business.
Release 1: In Release 1 we test Module1 & Module3 first, but we do not test Module2. We do unit testing, integration testing & system testing only. After that we find the bugs, if any, and fix them. Then we retest after fixing.
Release 2: We test only Module2. We also do regression testing, to check whether attaching Module2 has any effect on Module1 & Module3.
4. Project Architecture:
Here we represent the application in a pictorial format by using dataflow diagrams, activity diagrams & E-R diagrams.
5. Test Strategy:
The test strategy explains the application test factors & test types. These are the requirements that must be fulfilled before doing any testing.
Pause: In case there is any problem in conducting test case execution, we may stop for some time.
Suspension: We suspend test case execution if any requirements are not fulfilled.
We also give a clear picture of the team that does the testing.
Example:
[Diagram: the Test Strategy shown as part of the Test Plan]
Schedule slippage: The time difference between the actual and the required time to deliver a particular document is considered schedule slippage.
Actually, in real time, by working on Saturdays & Sundays we can cover this milestone gap.
10. Risk/Contingency/Mitigation:
11. Training: Training will be given to the resources if they do not have the required skills.
Example: Consider a mainframe project that comes in the healthcare domain, but the company has only solid test resources whose knowledge is limited to web applications.
If these documents are not sent by the client, then we report it to them.
13. Test Environment: It specifies the software, hardware and other system details used to test the application.
14. Defects: Defect report documents.
15. Escalation process: While conducting the test, if any resource gets a doubt or problem, whom to report to is specified here. It is nothing but the communication flow from the bottom level to the top level in the testing process.
Note: Test bed: A test bed configuration is identified and planned from the hardware and operating system version and compatibility specifications.
Test data: After identifying the requirements for a test, the test data has to be created. The testing team can make the test data, or it can also be provided by the client.
A test case is a description of what is to be tested, what data is to be used and what actions are to be done to check the actual result against the expected result.
A test case is simply a test with formal steps and instructions.
Test cases are valuable because they are repeatable, reproducible under the same/different environments and easy to improve upon with feedback.
Use case:
Format:
1. Description: It specifies the description of the use case.
2. Actors: Here we specify the actors actually involved in using this use case.
3. Pre-condition:
4. User Action & System Response
- Typical flow
- Normal flow
- Exceptional flow
5. Post-condition
6. Specific Requirements
7. Business Validations
8. Parking Lot
Test case template columns:
TC no. | Pre-condition | Description | Expected output | Actual output | Status | Remarks
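For example, a hypothetical filled-in row for the LIC age check:
TC no.: TC_01 | Pre-condition: user has selected type-5 insurance | Description: enter age 50 and submit | Expected output: age accepted | Actual output: age accepted | Status: Pass | Remarks: none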
TYPES OF REVIEWS:
Peer-to-peer review (same level)
Team Lead review
Team Manager review
REVIEW PROCESS:
Functional coverage:
iii. Output:
Raise the defect
Take a screenshot & save it.
8. Defect Handling:
Submit to developer
9. GAP Analysis:
Finding the difference between the client requirements & the application developed.
Deliverables:
Test plan
Test scenarios
Defect reports
BRS vs. SRS.
SRS vs. Test Case.
TC vs. Defect.
Defect is open / closed.
What is a Test Plan: A software project test plan is a document that describes the objectives, scope, approach & focus of a software testing effort. The completed document will help people outside the test group understand the “why & how” of product validation.
WHAT IS A DEFECT:
In computer technology, a defect is a coding error in a computer program. It is defined by saying that “a software error is present when the program does not do what its end user reasonably expects it to do.”
Defects may be reported by:
Developers.
Technical support.
End users.
Sales and marketing engineers.
TYPES OF DEFECTS:
Cosmetic flaw
Data corruption
Data loss
Documentation issue
Incorrect operation
Installation problem
Missing feature
Slow performance
Unexpected behavior
Unfriendly behavior
Priority:
The relative importance of the defect: how fast does the developer have to take up the defect?
The general rule for fixing defects depends on the severity: all high-severity defects should be fixed first. This may not be the same in all cases; sometimes, even though the severity of a bug is high, it may not be taken as high priority. At the same time, a low-severity bug may be considered high priority.
3. INTEGRATION TESTING:
4. FUNCTIONAL TESTING:
A black box type of testing. This type of testing should be done by testers. This does not mean that the programmers should not check that their code works before releasing it.
5. REGRESSION TESTING:
6. SYSTEM TESTING:
Black box type testing that is based on the overall requirements specifications; it covers all the combined parts of a system.
7. ACCEPTANCE TESTING:
8. RECOVERY TESTING:
Testing how well a system recovers from crashes, hardware failures or other
catastrophic(sudden calamity) problems.
9. SECURITY TESTING:
How well the system protects against unauthorized internal or external access.
10.COMPATIBILITY TESTING:
Testing how well the software performs in a particular hardware / software / network etc. environment.
11.ALPHA TESTING:
12.BETA TESTING:
Testing when development and testing are essentially completed and final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers or testers.
13.SANITY TESTING:
This is done before detailed testing: checking whether the application is stable or not, i.e. whether the build released by the development team is good enough for us to write test cases and conduct complete testing.
14.SMOKE TESTING:
After testing, checking whether the major & medium or critical functions are closed or not.
15.MONKEY TESTING:
Testing like a monkey: no proper approach; take any function and test it. Coverage of the main activities during testing is called monkey testing (e.g. if given one day for testing).
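A rough sketch of monkey (random-input) testing, reusing the hypothetical accepts_age() idea from earlier:

import random

def accepts_age(age):
    # Hypothetical validation under test (41..60 assumed valid).
    return 41 <= age <= 60

# No plan: throw random inputs at the function and only check that it
# never crashes and always returns a boolean.
random.seed(0)
for _ in range(1000):
    result = accepts_age(random.randint(-1000, 1000))
    assert isinstance(result, bool)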
16.MUTATION TESTING:
Small changes (mutants) are deliberately seeded into the program, to check whether the existing test cases detect them.
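A hand-made sketch of the mutation idea (all names are made up): seed a small change and check that an existing test kills the mutant:

def accepts_age(age):           # original code
    return 41 <= age <= 60

def accepts_age_mutant(age):    # mutant: upper '<=' changed to '<'
    return 41 <= age < 60

def boundary_test(fn):
    # A good boundary test kills this mutant; a weak suite would miss it.
    return fn(60) is True

assert boundary_test(accepts_age)             # passes on the original
assert not boundary_test(accepts_age_mutant)  # fails on the mutant: killed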
An approach for integration, used when checking for errors between modules or sub-modules.
19.AD-HOC TESTING:
Testing done in a short-cut way, not following the sequential order mentioned in the test cases or the test plan.
20.PATH TESTING:
SOFTWARE QUALITY:
BRS:
It specifies the needs of the customer.
The total business logic document.
SRS:
It specifies the functional specifications to be developed.
HLD:
High-level design document.
It specifies the interconnection of modules.
LLD:
Low-level design document.
It specifies the internal logic of the sub-modules.
TESTING TEAM:
Quality Control
Quality Analyst
Test Manager
Test Lead
Test Engineers
Quality: Quality means meeting requirements first time, on time & every time.
Verification techniques are:
Inspection (sudden check)
Walkthroughs & Reviews (formal approaches)
- Here both parties are aware of the object. These are also called verification & static testing.
VERIFICATION:
Typically involves reviews and meetings to evaluate (estimate, calculate) documents, plans, code, requirements and specifications. This can be done with checklists, issue lists, walkthrough & inspection meetings.
VALIDATION:
Typically involves actual testing and takes place after verifications are completed.
SEVERITY:
The relative impact of the defect on the system,
i.e. how far the application is affected by this defect (low, medium, high, critical).
PRIORITY:
The relative importance of the defect
(i.e. the preference given to fixing the defect: low, medium, high).
SEI:
Software Engineering Institute.
Initiated by the U.S. defense department to help improve software development
processes.
CMM:
Capability maturity model developed by the SEI. It’s a model of 5 levels of
organizational maturity that determine effectiveness in delivering quality software.
For small projects, the time needed to learn and implement them may not be worth it. For larger projects or ongoing long-term projects, they can be valuable.
A good test engineer has a “test to break” attitude (approach, manner), an ability to take the point of view of the customer, a strong desire for quality, and attention to detail.
A test case is a document that describes an input action or event and an expected
response, to determine if a feature of an application is working correctly.
This can be difficult to determine. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines etc.)
TCs completed with a certain percentage passed
Test budget depleted (used up)
Bug rate falls below a certain level
Beta or alpha testing period ends.
TESTING TECHNIQUE:
A way of executing and preparing the test cases.
TESTING METHODOLOGY:
A way of developing the tests.