
MANUAL TESTING

What is software testing?

Testing is executing a program with the intention of finding errors.

Fault: A condition that causes the software to fail to perform its required function.

Error: The difference between the actual output & the expected output.

Failure: The inability of a system or component to perform the required function
according to its specification.

WHY S/W TESTING?

 To discover defects.
 To prevent the user from detecting problems.
 To prove that the s/w has no defects.
 To learn about the reliability of the software.
 To ensure that the product works as per user expectations.
 To stay in business.
 To avoid being sued by customers.
 To detect defects early, which helps in reducing the cost of fixing those
defects.

WHY EXACTLY IS TESTING DIFFERENT FROM QA/QC?

Testing is the process of creating, implementing & evaluating tests. Testing measures
software quality.

Testing can find faults. When they are removed software quality is improved.

Simply: Testing means “Quality control”.


Quality control measures the quality of a product.
Quality assurance measures the quality of processes used to create a quality product.

Quality Control is the process of inspections, walkthroughs & reviews.

Inspection: An inspection is more formal than a 'walkthrough' – typically with a group of
people including a moderator, a reader & a recorder to take notes. The subject of
the inspection is typically a document such as a requirements specification or a test plan, &
the purpose is to find problems and see what is missing, not to fix anything. The primary
purpose of an inspection is to detect defects at different stages during a project.

Walkthrough: An informal meeting. The motto of the meeting is defined, but the members
come without any preparation. The author describes the work product in an informal
meeting to his peers or superiors to get feedback, or to inform or explain his work
product.

Reviews: Means re-verification. Reviews have been found to be extremely effective for
detecting defects, improving productivity & lowering costs. They provide good check
points for management to study the progress of a particular project. Reviews are also a
good tool for ensuring quality control. In short, they have been found to be extremely
useful by a diverse set of people and have found their way into the standard management &
quality control practice of many institutions. Their use continues to grow.

Quality Assurance:
Quality assurance measures the quality of processes used to create a quality product.
Software QA involves the entire s/w development process – monitoring & improving the
process, making sure that any agreed-upon standards & procedures are followed, and
ensuring that problems are found and dealt with.

Why do we need an approach for testing?

Yes, we definitely need an approach for testing.

To overcome the following problems, we need a formal approach to testing:
 Incomplete functional coverage
 No risk management
 Too little emphasis on user tasks
 Inefficient over the long term

AREAS OF TESTING:

1. Black box testing
2. White box testing
3. Grey box testing

1. Black Box Testing


Black box testing is also called functionality testing. In this testing, testers
are asked to test the correctness of the functionality with the help of inputs
& valid outputs.
Black box testing is not based on any knowledge of internal design or code.
Tests are based on requirements & functionality.
Approach
Equivalence Class
Boundary Value Analysis
Error Guessing

Equivalence Class
 For each piece of the specification, generate one or more equivalence class.
 Label the classes as “valid” or “invalid”.
 Generate one test case for each Invalid Equivalence Class.
 Generate a test case that covers as many as possible equivalence classes.
Eg: In LIC there are different types of policies:

Policy type   Age
1             0-5 years
2             6-12 years
3             13-21 years
4             21-40 years
5             40-60 years

Here we test each & every class.

Suppose 0-5: we write test cases for 0, 1, 2, 3, 4 & 5.

Here we decide who comes under which policy & write TCs for valid & invalid classes, as in the sketch below.
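
Below is a minimal Python sketch of equivalence-class partitioning for this example. The helper name policy_for_age is an illustrative assumption, and the class boundaries are made non-overlapping (the table above overlaps at ages 21 and 40); this is a sketch, not any real LIC system.

    # Equivalence-class sketch: one representative value per valid class,
    # plus values from the invalid classes (below 0, above 60).
    def policy_for_age(age):
        """Return the policy type for an age, or None for the invalid classes."""
        classes = {1: range(0, 6), 2: range(6, 13), 3: range(13, 21),
                   4: range(21, 40), 5: range(40, 61)}
        for policy, ages in classes.items():
            if age in ages:
                return policy
        return None  # invalid equivalence class

    valid_representatives = {1: 3, 2: 10, 3: 18, 4: 30, 5: 50}
    invalid_representatives = [-1, 61]

    for policy, age in valid_representatives.items():
        assert policy_for_age(age) == policy
    for age in invalid_representatives:
        assert policy_for_age(age) is None
    print("All equivalence-class checks passed")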

Boundary Value Analysis

 Generate test cases for the boundary values:
 Minimum value, minimum value + 1, minimum value - 1
 Maximum value, maximum value + 1, maximum value - 1

Eg: In LIC,

When a user applies for type-5 insurance, the system asks for the age of the
customer. Here the age limit is greater than 40 yrs. & less than 60 yrs. (i.e., 40-60).

Here we test just the boundary values:

Minimum = 40          Maximum = 60
Minimum + 1 = 41      Maximum + 1 = 61
Minimum - 1 = 39      Maximum - 1 = 59

Here we write test cases for these values only, as in the sketch below.
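
A minimal Python sketch of boundary value analysis for the 40-60 rule above; boundary_values and is_valid_type5_age are illustrative names, not part of any real system.

    # Boundary-value sketch: min-1, min, min+1, max-1, max, max+1.
    def boundary_values(minimum, maximum):
        """Return the six standard boundary-value test inputs."""
        return [minimum - 1, minimum, minimum + 1,
                maximum - 1, maximum, maximum + 1]

    def is_valid_type5_age(age):
        # Assumed rule from the example: type-5 accepts ages 40 to 60 inclusive.
        return 40 <= age <= 60

    for age in boundary_values(40, 60):
        print(f"age={age:>2}  valid: {is_valid_type5_age(age)}")

Here 39 and 61 should come out invalid and the other four values valid.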

Error Guessing:

Generate test cases that go against the specification.

Eg:
Type-5 policy takes ages only in the 40-60 range, but here we write test cases against
that, like 30, 20, 70 & 65 – as in the sketch below.
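
A minimal sketch of error guessing, reusing the assumed validator from the boundary-value sketch and feeding it deliberately out-of-spec and "suspicious" values:

    # Error-guessing sketch: values chosen against the 40-60 specification.
    def is_valid_type5_age(age):
        return isinstance(age, int) and 40 <= age <= 60

    guessed_bad_inputs = [30, 20, 70, 65, 0, -40, "forty"]
    for value in guessed_bad_inputs:
        assert not is_valid_type5_age(value), f"{value!r} wrongly accepted"
    print("All guessed invalid inputs were rejected")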

WHITE BOX TESTING:

White box testing is also called Structural Testing. White box testing is based on
knowledge of the internal logic of an application's code. Tests are based on coverage of
code: statements, branches, paths, conditions & loops.
Structure = 1 Entry + 1 Exit with certain constraints, conditions and loops.

Why do we go for White Box Testing?

Approach
Basic Path Testing
 Cyclomatic Complexity
 McCabe complexity
Structure Testing
 Condition Testing
 Dataflow Testing
 Loop Testing
(A basis-path sketch follows below.)
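
As a minimal white-box sketch: McCabe's cyclomatic complexity is the number of decisions plus one, and basis path testing needs at least that many test cases, one per independent path. The function below is an illustrative assumption:

    # Two decisions -> cyclomatic complexity V(G) = 2 + 1 = 3,
    # so basis path testing needs three test cases.
    def classify(age):
        if age < 0:        # decision 1
            return "invalid"
        if age <= 12:      # decision 2
            return "child"
        return "adult"

    assert classify(-5) == "invalid"  # path: decision 1 taken
    assert classify(8) == "child"     # path: decision 1 false, 2 taken
    assert classify(30) == "adult"    # path: both decisions false
    print("All three basis paths covered")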

GREY BOX TESTING

This is just a combination of both black box and white box testing. The tester should
have knowledge of both the internals and externals of the function.

The tester should have good knowledge of white box testing & complete knowledge of
black box testing.

Grey box testing is especially important for web & Internet applications, because
the Internet is built around loosely integrated components that connect via relatively well-
defined interfaces.

PHASES OF TESTING – V MODEL

(V-model diagram: each development document on the left arm is verified, and is
validated by the corresponding test level on the right arm.)

BRS    <->  Acceptance Test
SRS    <->  System Test
Design <->  Integration Test
Build  <->  Unit Test

V – MODEL

'V' stands for verification & validation. It is a suitable model for large-scale
companies to maintain a testing process. This model defines a co-existence relation between
the development process and the testing process.

Drawback: Cost & time.

PHASES ARE

1) Unit Testing
2) Integration Testing
3) System Testing
4) User Acceptance Testing

1) Unit Testing

The main goal is to test the internal logic of the module. In unit testing the tester
is supposed to check each and every micro function. All field-level validations
are expected to be tested at this stage of testing. In most cases the developer
will do this.

 In unit testing, both black box & white box testing are conducted by
developers.
 Depends on LLD.
 Follows white box testing techniques:
 Basic path testing
 Loop coverage
 Program technique testing
Approach (see the sketch below):
i. Equivalence Class
ii. Boundary value analysis
iii. Error guessing
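
A minimal unit-test sketch using Python's built-in unittest module. The micro function validate_age_field and its 0-120 rule are illustrative assumptions of a field-level validation:

    import unittest

    def validate_age_field(text):
        """Field-level validation: age must be a whole number from 0 to 120."""
        return text.isdigit() and 0 <= int(text) <= 120

    class TestAgeField(unittest.TestCase):
        def test_valid_age(self):
            self.assertTrue(validate_age_field("45"))

        def test_non_numeric_rejected(self):
            self.assertFalse(validate_age_field("abc"))

        def test_out_of_range_rejected(self):
            self.assertFalse(validate_age_field("121"))

    if __name__ == "__main__":
        unittest.main()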

2) Integration Testing:
The primary objective of Integration Testing is to discover errors
in the interfaces between modules / sub-systems.

Here many unit-tested modules are combined into sub-systems. The
goal is to see if the modules can be integrated properly. Follows white
box testing techniques to verify the coupling of the corresponding modules.
Approach
i. Top-down approach --- used for new systems.
ii. Bottom-up approach --- used for existing systems.

Top-down Approach

Testing the main module before its sub-modules are ready is called the top-down
approach. The temporary programs we use in place of sub-modules are called stubs.

Bottom-up approach:

Testing sub-modules before the main module is ready is called the bottom-up approach.
The temporary program we use in place of the main module is called a driver. (See the sketch below.)
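
A minimal sketch of a stub and a driver in Python; the module names and premium figures are illustrative assumptions:

    # Top-down: the real main module calls a stub standing in for an
    # unfinished sub-module.
    def premium_stub(policy_type):
        """Stub: canned answer in place of the unfinished premium module."""
        return 100.0

    def main_module(policy_type, premium_calculator=premium_stub):
        return {"policy": policy_type, "premium": premium_calculator(policy_type)}

    # Bottom-up: a throwaway driver calls the real sub-module because the
    # main module does not exist yet.
    def real_premium(policy_type):
        return 50.0 * policy_type

    def driver():
        for policy_type in (1, 2, 5):
            print("policy", policy_type, "premium", real_premium(policy_type))

    print(main_module(3))  # top-down test via the stub
    driver()               # bottom-up test via the driver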

3) System Testing:
The primary objective of system testing is to discover errors when the system is
tested as a whole. System testing is also called End-to-End testing. The tester is expected
to test from login to logout, covering various business functionalities. Conducted by test
engineers; depends on the SRS.

Follows black box testing techniques.


The main goal is to see if the s/w meets its requirements.

Approach:
 Identify the end-to-end business life cycle.
 Design the test data.
 Optimize the end-to-end business life cycle.
4) Acceptance testing:

Acceptance testing is done to get acceptance from the client. The client uses
the system against the business requirements, testing on the client side with the
client's real-life data.
Approach:
 Building a team with real-time users, functional users and developers.
 Execution of business test cases.

WHAT IS A TEST CASE?

 A test case is a description of what is to be tested, what data to be used and
what actions to be done to check the actual result against the expected result.
 A test case is simply a test with formal steps and instructions.
 Test cases are valuable because they are repeatable and reproducible under the
same/different environments.
 A test case is a document that describes an input, action or event and an
expected response, to determine if a feature of an application is working
correctly.

WHAT ARE THE ITEMS OF A TEST CASE?

Test case items are:

 Test case number (unique number)
 Pre-condition (the assertion (declaration) about the input condition is called the pre-
condition)
 Description (what data to be used, what actions to be performed & what data to be
inserted)
 Expected output (the assertion about the expected final state of the program is called
the post-condition)
 Actual output (whatever the system displays)
 Status (pass/fail)
 Remarks

CAN THESE TEST CASES BE REUSED?

Yes, test cases can be reused.

Test cases developed for functionality testing can be used for
integration/system/regression testing and performance testing with few modifications.

WHAT ARE THE CHARACTERISTICS OF A GOOD TEST CASE?

A good test case should have the following:

 TC should start with "what you are testing".
 TC should be independent.
 TC should not contain "if" statements.
 TC should be uniform.
Eg: <Action Buttons>, "Links".
ARE THERE ANY ISSUES TO BE CONSIDERED?

Yes, there are a few issues:

 All the TCs should be traceable.
 There should not be too many duplicate test cases.
 Outdated test cases should be cleared off.
 All the test cases should be executable.

FURRPSC MODEL: (Types of Testing)

F  Functionality Testing
U  Usability Testing
R  Reliability Testing
R  Regression Testing
P  Performance Testing
S  Scalability Testing
C Compatibility Testing

1) Functionality Testing

To confirm that all the requirements are covered. Functional requirements
specify which output should be produced from the given input; they describe the
relationship between the input and output of the system.
A major part of black box testing is functional testing.
Eg:
Here we test…
 Input domain -- (whether it takes the right input values or not)
 Error handling -- (whether the application reports wrong data or not)
 URL checking -- (for web applications only: whether all links are working correctly
or not)

Testing Approach:
 Equivalence class
 Boundary value analysis
 Error guessing

2) Usability Testing:
To test the ease (comfort, facility) and user-friendliness of the system.
Approach:

Qualitative and quantitative heuristic checklists.


Classifications of checking:
 Accessibility
 Clarity of communication
 Consistency
 Navigation
 Design & maintenance
 Visual representation
Qualitative approach:

i. Each and every function should be available from all the pages of the
site.
ii. The user should be able to submit a request within 4-5 actions.
iii. A confirmation message should be displayed for each submit.

Quantitative approach:

The average of 10 different people should be considered as the final result.

Eg:
Some people may feel the system is more user-friendly if the submit button is on
the left side of the screen. At the same time some others may feel it is better if the submit
button is placed on the right side.

3) Reliability Testing

Defines how well the software meets its requirements.

The objective is to find the mean time between failures (the time available under a
specific load pattern) and the mean time for recovery.

Eg:
23 hours/day availability & 1 hour for recovery (system).
Citibank has 4 servers in each region; every 6 hrs. it switches servers.
Approach
RRT (Rational Real Time tool)

4) Regression Testing

To check that new functionalities have been incorporated correctly without breaking
the existing functionalities.

Approach: Automation tool.

The bugs need to be communicated and assigned to developers who can fix them. After
the problem is resolved, fixes should be re-tested, and a determination made regarding
requirements for regression testing, to check that fixes did not create problems elsewhere.

5) Performance Testing

The primary objective of performance testing is "to demonstrate that the system
functions to specifications with acceptable response times while processing the
required transaction volume on a production-sized database".
Objectives:
 Assessing the system capacity for growth.
 Identifying weak points in the architecture.
 Detecting obscure bugs in the software.

Performance parameters:

 Request-response time
 Transactions per second
 Turnaround time
 Page download time
 Throughput
Approach:
Classification of performance testing.
 Load test
 Volume test
 Stress test
Stress testing:

Finding the break point of the application – the max. no. of users that the application
can handle at the same time.

Approach:

RCQE
 Repeatedly working on the same functionality.
 Critical Query Execution.
 To emulate peak load.

Volume testing:

Execution of our application under huge amounts of data is called volume testing.
To find the threshold point we may use this test.
Approach: Data profile.

Load testing:

Testing with the load that the customer expects (not necessarily all at the same time).
The load is increased continuously until the customer's required load is reached –
gradually increasing the load on the application and checking the performance.
(A minimal sketch follows below.)

Approach: Load profile.
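
A minimal load-test sketch in Python: gradually increase the number of concurrent simulated users and watch the average response time. The URL is a placeholder assumption; real load tests normally use a dedicated tool:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"  # assumed application under test

    def one_request(_):
        start = time.time()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
        except OSError:
            return None  # count as a failed request
        return time.time() - start

    for users in (1, 5, 10, 25):  # gradually increasing load
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = [t for t in pool.map(one_request, range(users)) if t]
        avg = sum(times) / len(times) if times else float("nan")
        print(f"{users:>3} users: {len(times)} ok, avg response {avg:.3f}s")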


6) Scalability testing:

To find the maximum number of users the system can handle. (The customer will give
the max. no.)

Approach: Performance tools.

Classification:
 Network scalability
 Server scalability
 Application scalability
7) Compatibility testing:

How a product performs over a wide range of hardware, software & network
configurations, and isolating the specific problems.

Approach: ET approach.

Environment Selection:
 Understanding the end users' application environment.
 Importance of selecting both old browsers & new browsers.
 Selection of the operating system.

Test Bed Creation:

 Partition of the hard disk.
 Whether our application runs on all customer-expected platforms or not.
 Platforms means the system software required to run our application, such
as operating systems, compilers, interpreters, browsers, etc.

What is the software life cycle?

The life cycle begins when an application is first conceived (imagined) and ends
when it is no longer in use. It includes aspects such as initial concept, requirements
analysis, functional design, internal design, documentation planning, test planning, coding,
document preparation, integration testing, maintenance, updates, re-testing, phase-out,
and other aspects.

When should we start designing test cases / testing?

The V model is the most suitable way to decide when to start writing test
cases and conducting testing.

Testing limitations:
 We can only test against system requirements.
o May not detect errors in the requirements.
o Incomplete or ambiguous requirements may lead to inadequate or incorrect
testing.
 Exhaustive (total) testing is impossible in the present scenario.
 Time and budget constraints normally require very careful planning of the testing
effort.
 Compromise between thoroughness and budget.
 Test results are used to make business decisions for release dates.

Test stop criteria:

 Maximum number of test cases successfully executed.
 Uncovering a minimum number of defects (e.g., 16 per 1000 statements).
 Statement coverage.
 Testing becomes uneconomical.
 Reliability model.

Tester responsibilities:
 Follow the test plans, scripts etc., as documented.
 Report faults objectively and factually.
 Check tests are correct before reporting s/w faults.
 Assess risk objectively.
 Prioritize what you report.
 Communicate the truth.

When should we prioritize tests?

We can't test everything. There is never enough time to do all the testing you would
like, so what testing should you do?
Prioritize tests so that, whenever you stop testing, you have done the best testing
possible in the time available.
Tips:
 Possible ranking criteria (all risk based):
 Test where a failure would be most severe.
 Test where failures would be most visible.
 Take the help of the customer in understanding what is most important to him.
 What is most critical to the customer's business?
 Areas changed most often.
 Areas with the most problems in the past.
 Most complex areas, or technically critical ones.

Software:

Software is a collection / set of instructions, programs & documents.


Software development life cycle (SDLC):

Before starting the analysis we first check the feasibility of the project/work/system.
If we feel it is feasible then we go through the SDLC phases.

In the feasibility check we look at the following:

 Finance feasibility
 Cost feasibility
 Resource feasibility
 Ability to accept

SDLC includes 4 phases:

1. Analysis
2. Design
3. Coding
4. Testing

Analysis:
i. Requirements analysis is done to understand the problem the software system
is to solve.
ii. Understanding the requirements of the system is a major task.
iii. Analysis focuses on identifying what is needed from the system.
iv. The main goal of the requirements specification is to produce the SRS document.
v. Once understood, the requirements must be specified in the document.

Design:
i. The purpose of the design is to plan a solution to the problem specified by the
requirements documents.
ii. The first step of this phase is moving from the problem domain to the solution domain.
iii. The output of this phase is the design document.
iv. This document is similar to a blueprint.

Coding:
i. Once the design is complete, most of the major decisions about the system
have been made.
ii. The goal of the coding phase is to translate the design into code.
iii. Coding affects both testing & maintenance. Well-written code can reduce
the testing & maintenance effort, because the testing and maintenance costs of
s/w are much higher than the coding cost.

So the goal of coding should be to reduce the testing & maintenance effort.

Testing:
i. Testing is the major quality control measure used during s/w development. Its
basic function is to detect errors in the s/w.
ii. After coding, computer programs are available that can be executed for
testing purposes; different levels of testing are used.
iii. The starting point of testing is unit testing: a module is tested separately. This
is done by the coder himself, simultaneously along with the coding of the
module.
iv. After this, modules are gradually integrated into sub-systems, which are then
integrated to form the entire system. We do integration tests.
v. System testing: the system is tested against the requirements to see if all the
requirements specified by the documents are met.
vi. Acceptance testing: done on the client side, with the client's real-life data.

TYPES OF SOFTWARE MODELS

1. Water Fall Model:

It includes all phases of the SDLC. This is the simplest process model.
Output in the waterfall model:
Requirements document, project plan, system design document, detailed
design document, test plan and test reports, final code, software manuals, review reports.
Drawback:
Once the requirements are frozen, they cannot be changed, i.e. changes cannot be made
after the requirements are frozen.
Uses:
It is well suited for routine types of projects where the requirements are well
understood, & for small projects.
2. Prototype Model:

In this model the requirements are not frozen before design or coding can proceed.
The prototype is developed based on the currently known requirements.
It is a sample of how the actual system looks.

(Flow: Requirement Analysis → Design → Code → Test)
3. Iterative Model:

In this model we can make changes at any level, but all four phases of the
SDLC will take place again.

It is like a continuous model: the Analysis → Design → Code → Test cycle repeats in
successive iterations.

4. Spiral Model:

In this model the system is divided into modules, and each module follows the phases
of the SDLC (Analysis → Design → Code → Test). It is a good & successful model.
TEST LIFE CYCLE (TLC)

TLC PHASES:

System study

Scope/Approach/Estimation

Test Plan Design

Test Case Design

Test Case Review

Test Case Execution

Defect Handling

GAP Analysis

1. System study:

We study the particular s/w or project/system.

 Domain:
There may be different types of domains, like banking,
finance, insurance, marketing, real-time, ERP, Siebel,
manufacturing etc.
 Software:

Front End / Back End / Process.

Front End: GUI, VB, D2K.
Back end: Oracle, Sybase, SQL Server, MS Access, DB2.
Process: Languages, e.g. C, C++, Java, etc.

 Hardware: servers, internet, intranet applications.

 Functional Point/LOC:

Functional Point: the number of lines required to write a micro function.

Micro function: a function which cannot be broken down further.

1 F.P. = 10 lines of code.

 No. of pages of the software/system
 No. of resources of the software/system
 No. of days taken to develop the software/system
 No. of modules in the software/system (i.e. Associate/Core/Maintenance)
 Pick one priority  High / Medium / Low.

2. Scope/Approach/Estimation:

Scope: what is to be tested and what is not to be tested.

Eg: a matrix marking, for each module, which test levels apply (U = Unit,
I = Integration, S = System, A = Acceptance), with a check mark per applicable level.

Approach: Test Life Cycle (all the phases of TLC).

Estimation:
LOC (lines of code) / F.P. (functional point) / Resource.
1 F.P. = 10 lines of code.

Example: Input = 1000 LOC

For this 1000 LOC we can estimate the time to complete the whole TLC:

System study               -  5 days  - no division
Scope/Approach/Estimation  -  2 days  - no division
Test plan                  -  2 days  - no division
Test case design           - 10 days  - yes, to divide
   (1000 LOC = 300 test cases = 10 days)
Test case review           -  5 days  - yes, to divide
   (= 1/2 of test case design)
Test case execution        - 10 days  - yes, to divide
Defect handling            -  6 days  - yes, to divide
   (30 TC = 1 defect = 5 hours for tracking,
    so 300 TC = 10 defects = 50 hours = 6 days)
GAP analysis               -  5 days  - no division
-------------------------------------------------------------------------------------
Total no. of days required = 45 man days (for one round of manual testing)
-------------------------------------------------------------------------------------

One round of manual testing              = 45 man days
Project management = 20/100 × 45 days    =  9 days
Content management (data storage,
management of project & tools)
                   = 10/100 × 45 days    =  4.5 days
Buffer                                   = 10 days
-------------------------------------------------------------------------------------
Total for one resource                   ≈ 68 days
-------------------------------------------------------------------------------------

If we have 4 resources: 68/4 = 17 days. (A small sketch of this arithmetic follows.)
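
The same arithmetic as a small Python sketch; the ratios come from the worked example above (the exact total is 68.5 days, which the example rounds to 68):

    phase_days = {
        "System study": 5,
        "Scope/Approach/Estimation": 2,
        "Test plan": 2,
        "Test case design": 10,      # 1000 LOC ~ 300 test cases
        "Test case review": 10 / 2,  # half of the design effort
        "Test case execution": 10,
        "Defect handling": 6,        # 300 TC -> ~10 defects -> ~50 hrs
        "GAP analysis": 5,
    }

    manual = sum(phase_days.values())  # 45 man-days
    project_mgmt = 0.20 * manual       # 9 days
    content_mgmt = 0.10 * manual       # 4.5 days
    buffer_days = 10
    total = manual + project_mgmt + content_mgmt + buffer_days

    print(f"One round of manual testing: {manual:.0f} man-days")
    print(f"Total for one resource     : {total:.1f} days")
    print(f"With 4 resources           : {total / 4:.1f} days")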

3. Test Plan Design:

Test Plan:

The test plan includes all the areas:

1. Who are the client & their details? And also the company & its details
where the testing takes place.
2. Reference documents, like BRS, SRS, & DFD etc.
3. Scope of the project.
4. Project architecture & data flow diagrams.
5. Test strategy.
6. Deliverables.
7. Schedules.
8. Milestones.
9. Risk/Mitigation/Contingency.
10. Testing requirements.
11. Assumptions.
12. Test environment / project.
13. Defects.
14. Escalation process.

Assume we are preparing the test plan for Jobs4testing.

1. Client: Thatavarti Technologies.

Company: Thatavarti Technologies.

Here we write the details about the company.

2. Reference documents: BRS, SRS, & SDS.
3. Scope:
Overview: Here in Jobs4testing we have three modules.
Module 1: Aspirant Module: the aspirant uploads his resume.
Module 2: Employer Module: the employer uploads jobs.
Module 3: Selection Process Module: here HR people select each
resume and conduct the interviews (select the quality resumes).

Finally, the main aim of this application is to find quality test resources for
companies, and at the same time Thatavarti wants to improve its business by using it.

Take the case where we assume we have two releases.

Release 1: In Release 1 we test Module 1 & Module 3, but we are not testing
Module 2.

We do unit testing, integration tests & system testing. After that we find bugs,
if any, and fix them. Then we retest after fixing.

Release 2: In Release 2 we test Module 2.

We test only Module 2, and also do regression testing to check whether attaching
Module 2 affects Module 1 & Module 3.

4. Project Architecture:
Here we represent the application in a pictorial format by using dataflow diagrams,
activity diagrams & E-R diagrams.

5. Test Strategy:
The test strategy explains the application's test factors & test types.

These are the requirements that must be fulfilled before doing any testing:

Pre Condition | Start Criteria | Stop Criteria | Pause | Suspension

Pre Condition: specifies the requirements to do a particular test.
Start Criteria: the criteria selected to start a particular test.
For example, in Thatavarti training, the selection criterion is the candidate's
communication skills.
Stop Criteria: in the same example, when resources have acquired good enough
knowledge, per the standards, to be selected in an interview.

Pause: if there is any problem in conducting test case execution, we may
stop for some time.
Suspension: we suspend test case execution if any requirements are not
fulfilled.

Deliverables: Test case execution reports.

6. Resources/Responsibilities/Roles:

We specify each resource's name, role & responsibilities, and give a clear picture
of the team doing the testing.
Example:

Resource                        Role           Responsibility
Name: Krishna                   Test Engineer  Preparation of test cases,
Contact ID: [email protected]                 design, execution & test reports

(Diagram: Test approach/process/life cycle – the test plan drives the test
strategy, which maps test factors onto the phases of testing.)

7. Deliverables / 8. Schedules / 9. Milestones:

For each deliverable we maintain a schedule and a milestone (e.g. the system study
document: Jan 01-Jan 07, milestone Jan 07):

a) System study document
b) Understanding documents
c) Issues document
d) Test plan document
e) Test case documents
f) Test case review documents
g) Defect reports
h) Traceability matrix
i) Functional coverage document
j) Test reports

Schedule slippage: the time difference between the actual and the required time to
deliver a particular document is considered schedule slippage.

This time difference shows its effect on the milestone; this is called milestone
slippage.

Actually, in real time, working Saturdays & Sundays can cover this milestone
gap.

10. Risk/Contingency/Mitigation:

Risk: any unexpected event which will affect the project.
Contingency: a prevention step taken to overcome the risk.
Mitigation: the solution to cover the risk after it occurs.

Typical risks: broken links, server problems, weak bandwidth, wrong builds,
database down, wrong data, application server down, integration failures,
resource problems, unexpected events, and other general risks.

Example: a project where the release engineer did not upload the required build.

11. Training: Training will be given to the resources if they do not have the required
skills.

Example: Consider a mainframe project in the healthcare domain, but the
company has only test resources with knowledge of web applications.

In this case they train their resources in these areas.

12. Assumptions: To test the application we make some assumptions.

Example: Before doing system testing we first do the unit & integration testing.

If these documents are not sent by the client, then we report it to them.

13. Test Environment: specifies the software, hardware and other system details needed
to test the application.
14. Defects: defect report documents.
15. Escalation process: while conducting the test, if any resource gets any doubt or
problem, whom to report to is specified here.

It is nothing but the communication flow from the bottom to the top level in the
testing process.

Note: Test bed: a test bed configuration is identified and planned from hardware and
operating system version and compatibility specifications.
Test data: after identifying the requirements for a test, the test data is created.
The testing team can make the test data, or it can also be provided by the client.

4. Test Case Design: (the heart of testing)

 A test case is a description of what is to be tested, what data to be used and what
actions to be done to check the actual result against the expected result.
 A test case is simply a test with formal steps and instructions.
 Test cases are valuable because they are repeatable, reproducible under the
same/different environments, and easy to improve upon with feedback.

Format of Test Case Design:

Pre Condition    : constraint / condition to be met
Description      : written in the format –
                   #Check: check whether / verify the system displays the expected result page
                   #Action: user clicks on a particular <button> or link
                   #Data: the data used
Data             : data to test with
Expected results : the system should display the page with the details
Actual results   : as expected (or) whatever the system displays
Status           : Pass (or) Fail
Remarks          : comments
Bug Number       : e.g. Bug-01
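
A minimal sketch of this format as a structured Python record, which keeps test cases repeatable and easy to track; the field names mirror the table and the sample values are illustrative:

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        tc_id: str
        precondition: str
        description: str
        data: str
        expected: str
        actual: str = ""
        status: str = "Not run"  # Pass / Fail / Not run
        remarks: str = ""
        bug_number: str = ""     # filled in only when the case fails

    tc = TestCase(
        tc_id="TC-01",
        precondition="User is on the login page",
        description="#Check whether the system displays the home page; "
                    "#Action: user clicks <Login> with valid credentials",
        data="username=konda, password=<valid>",
        expected="System should display the home page with user details",
    )
    tc.actual, tc.status = "As expected", "Pass"
    print(tc)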

Techniques to write a test case:

 Boundary value analysis:
By using boundary value analysis we take the upper & lower boundary values and
check only those values.
 Equivalence class partitions:
We check the attributes/parameters of functionalities.
 Error guessing:
Testing against the specifications.

Use case:

Format:
1. Description: the description of the use case.
2. Actors: the actors actually involved in using this use case.
3. Pre condition
4. User Action & System Response
- Typical flow
- Normal flow
- Exceptional flow
5. Post condition
6. Specific requirements
7. Business validations
8. Parking lot

5. Test case items are:

 TC no.
 Pre-condition
 Description
 Expected output
 Actual output
 Status
 Remarks

6. Test Case Review:

Review means re-verification of the test cases. The following are included in the review
format.

First Time Right (FTR)

TYPES OF REVIEWS:
 Peer-to-peer review  same level
 Team lead review
 Team manager review

REVIEW PROCESS:

1. Take a demo of the functionality.
2. Go through the use case / functional specification.
3. Examine the test cases & find the gap between test cases vs. use cases.
4. Submit the review report.

Functional coverage:

To check test coverage we design a functional coverage document.

Eg: when the source is a use case.

For example, consider J4T as 100%. It has three modules: assume module 1 covers 40%,
module 2 covers 30%, and module 3 covers 30%. Take module 1, the aspirant module, and
consider it has 20 use cases, so each use case covers 2%. Moreover, each use case has a
typical flow, an alternate flow, and an exceptional flow, with each flow covering some 0.9%.
Based on each test case's coverage we can say how much of the testing is covered.

Application name : Jobs4testing (100%)
Module name      : Aspirant (33%)
Use case name    : New user registration (3.3%)
Flows            : Typical flow (1.1%), Alternate flow (1.1%), Exceptional flow (1.1%)
Test case        : T1.3-T1.12 (check whether it is completed or not)
Test execution   : Not completed / Not started / Pending / In progress
Eg: When the source is the SRS, coverage is based on modularity:

Application name : J4t.com
Module name      : Aspirant Module
Sub modules      : Login Module (sub1, sub2, level 1, level 2)
Functionality    : TT login, SSt login, general user
TC ID            : (per functionality)

7. Test Case Execution:

Test case execution mainly includes 3 things:

i. Input:
 Test cases
 Test data
 Review comments
 SRS
 BRS
 System availability
 Data availability
 Database
 Review doc
ii. Process: Test it.

iii. Output:
 Raise the defect
 Take a screen shot & save it.

8. Defect Handling:

Identify the following things in defect handling:
 Defect No./Id.
 Description
 Origin TC id
 Severity
o Critical
o Major
o Medium
o Minor
o Cosmetic
 Priority
o High
o Medium
o Low
 Status

The flow of defect handling is as follows (see the sketch below):

 Raise the defect

 Review it internally

 Submit it to the developer

We have to declare the severity of the defect & afterwards declare the priority.

According to the priority, the defect is taken up for fixing.
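
A minimal sketch of this flow in Python; the status names and the advance helper are assumptions, not a real defect tracker's API:

    ALLOWED_STATUSES = ["Raised", "Reviewed", "Assigned", "Fixed", "Closed"]

    def advance(defect):
        """Move a defect to the next status in the handling flow."""
        i = ALLOWED_STATUSES.index(defect["status"])
        if i < len(ALLOWED_STATUSES) - 1:
            defect["status"] = ALLOWED_STATUSES[i + 1]
        return defect

    defect = {
        "id": "DEF-001",
        "description": "Submit button does nothing on the policy page",
        "origin_tc_id": "TC-01",
        "severity": "Major",  # Critical / Major / Medium / Minor / Cosmetic
        "priority": "High",   # High / Medium / Low
        "status": "Raised",
    }

    advance(defect)  # internal review
    advance(defect)  # submitted/assigned to the developer
    print(defect["id"], "->", defect["status"])  # DEF-001 -> Assigned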

9. GAP Analysis:

Finding the difference between the client requirements & the application
developed.

Deliverables:
 Test plan
 Test scenarios
 Defect reports

BRS vs. SRS.
SRS vs. test cases.
TC vs. defects.
Defect is open / closed.

TEST PLAN DESIGN:

What is a Test Plan: A software project test plan is a document that describes the objectives,
scope, approach & focus of a software testing effort. The completed document helps
people outside the test group understand the "why & how" of product validation.

WHAT IS A DEFECT:
In computer technology, a defect is a coding error in a computer program. It is
defined by saying that "a software error is present when the program does not do what its
end user reasonably expects it to do."

WHO CAN REPORT A DEFECT:

Anyone who is involved in the software development lifecycle, or who is using the
software, can report a defect. In most cases defects are reported by the testing team.

A short list of people expected to report bugs:

 Testers / QA engineers
 Developers
 Technical support
 End users
 Sales and marketing engineers

TYPES OF DEFECTS:
 Cosmetic flaw
 Data corruption
 Data loss
 Documentation issue.
 Incorrect operation.
 Installation problem.
 Missing feature.
 Slow performance
 Unexpected behavior
 Unfriendly behavior

HOW TO DECIDE THE SEVERITY OF THE DEFECT:

Severity level: High
Description: A defect occurred due to the inability of a key function to perform. The
problem causes the system to hang or drops the user out of the system.
Response / turnaround time: The defect should be responded to within 24 hrs & the
situation should be resolved before test exit.

Severity level: Medium
Description: A defect occurred that severely restricts the system, such as the inability
to use a major function of the system. There is no acceptable workaround, but the
problem does not inhibit the testing of other functions.
Response / turnaround time: A response or action plan should be provided within
3 working days.

Severity level: Low
Description: A defect occurred that places a minor restriction on a function that is not
critical. There is an acceptable workaround for the defect.
Response / turnaround time: A response or action plan should be provided within
5 working days.

DEFECT SEVERITY Vs. DEFECT PRIORITY:

Severity: how much the defect affects the application.

Priority:
 The relative importance of the defect – how fast the developer has to take up the defect.
 The general rule for fixing defects depends on the severity: all high-severity
defects should be fixed first.
 This may not be the same in all cases; sometimes, even though the severity of a bug is
high, it may not be taken up as high priority.
 At the same time, a low-severity bug may be considered high priority.

What kind of testing should be considered?

1. BLACK BOX TESTING:

Not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.

2. WHITE BOX TESTING:

Based on knowledge of the internal logic of the application's code. Tests are
based on coverage of code statements, branches, paths, conditions, loops, etc.

3. INTEGRATION TESTING:

Testing of combined parts of an application to determine whether they function
together correctly.

4. FUNCTIONAL TESTING:

Black box type of testing. This type of testing should be done by testers. This
does not mean that the programmers should not check that their code works
before releasing it.

5. REGRESSION TESTING:

It can be difficult to determine how much re-testing is needed, especially near


the end of the development cycle. Automated testing tools can be especially
useful for this type of testing.

6. SYSTEM TESTING:

Black box type testing that is based on the overall requirements specifications;
it covers all combined parts of a system.

7. ACCEPTANCE TESTING:

Final testing based on the specifications of the end-user or customer, or based on
use by end-users / customers over some limited period of time.

8. RECOVERY TESTING:

Testing how well a system recovers from crashes, hardware failures or other
catastrophic (sudden calamity) problems.

9. SECURITY TESTING:

How well the system protects against unauthorized internal or external access.

10. COMPATIBILITY TESTING:
Testing how well software performs in a particular hardware / software / network
etc. environment.

11.ALPHA TESTING:

Testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing.
Typically done by end-users or others, not by programmers or testers.

12.BETA TESTING:

Testing when development and testing are essentially completed and final
bugs and problems need to be found before the final release. Typically done by end-
users or others, not by programmers or testers.

13. SANITY TESTING:

Done before testing proper: is the application stable or not? Before we write test
cases for the product, we check whether the build released by the development team is
good enough to conduct complete testing on.

14. SMOKE TESTING:

After testing, checking whether the major & medium or critical functions are closed or not.

15. MONKEY TESTING:

Testing like a monkey – with no proper approach, taking any function and testing it.
Covering only the main activities during testing is called monkey testing (e.g., if one
day is given for testing).

16. MUTANT TESTING:

We inject a defect into the application and test whether it gets caught.

Eg: a login screen (User Name: KONDA, Password: *************, [OK]).

17. BIG BANG TESTING: (informal testing)

A single stage of testing after completion of the entire coding is called big
bang testing (no reviews, i.e. direct system testing).

18. BIG BANG THEORY:

An approach for integration, checking the errors between modules or sub-
modules.

19. AD-HOC TESTING:

Testing done in a shortcut way, not following the sequential order mentioned in the
test cases or test plan.

20. PATH TESTING:

Checking every possible condition with at least one navigation of the flow.

SOFTWARE QUALITY:

 Meets customer requirements.
 Meets customer expectations.
 Lowest possible cost.
 Time to market.

BRS:
Specifies the needs of the customer; the total business logic document.
SRS:
Specifies the functional specifications to develop.
HLD:
High-level design document; specifies the interconnection of modules.
LLD:
Specifies the internal logic of sub-modules.

TESTING TEAM:

Quality Control → Quality Analyst → Test Manager → Test Lead → Test Engineers

Quality: Quality means meeting requirements first time, on time & every time.

Factors:
1. Product transition factors
- Reusability
- Interoperability
- Portability
2. Product operational factors
- Correctness
- Efficiency
- Usability
- Reliability
- Integrity
3. Product revision factors
- Maintainability
- Flexibility
- Testability

Quality Assurance: adhering to agreed-upon rules, like quality standards (CMM/ISO/Six
Sigma).
Quality Control: specified by the following:
 Inspection (sudden check)
 Walkthroughs & reviews (formal approaches)
- Here both parties are aware of the object. These are also called verification
& static testing.

Above all, we do testing to find bugs. (This is also called dynamic testing.)

REVIEWS DURING ANALYSIS:

 Conducted by business analysts.
 Verify completeness and correctness of the BRS & SRS:
 Are they the right requirements?
 Are they complete?
 Are they reasonable?
 Are they achievable?
 Are they testable?

REVIEWS DURING DESIGN:
 Conducted by designers.
 Verify completeness and correctness of the HLD & LLD:
 Is the design good?
 Is the design complete?
 Is the design possible?
 Does the design meet the requirements?

WHY DOES S/W HAVE BUGS:

 Programming errors -- programmers, like anyone else, can make mistakes.
 Changing requirements.
 Poorly documented code -- it's tough to maintain and modify code that is badly
written or poorly documented; the result is bugs.
 Software development tools -- visual tools, class libraries, compilers, scripting tools
etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

WHAT IS VERIFICATION & VALIDATION:

VERIFICATION:
Typically involves reviews and meetings to evaluate (estimate, calculate)
documents, plans, code, requirements and specifications. This can be done with checklists,
issue lists, walkthrough & inspection meetings.
VALIDATION:
Typically involves actual testing, and takes place after verifications are completed.

SEVERITY:
The relative impact on the system,
i.e. how far the application is affected by the defect (low, medium, high, critical).

PRIORITY:
The relative importance of the defect
(i.e. the preference given to the defect: low, medium, high).

Which life cycle method is followed in your organization?

Now we are using the V model, and we also include some other methods, like
prototype and spiral, within a single application.

What is software quality?

Quality s/w is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable.

SEI:
Software Engineering Institute.
Initiated by the U.S. defense department to help improve software development
processes.

CMM:
Capability maturity model developed by the SEI. It’s a model of 5 levels of
organizational maturity that determine effectiveness in delivering quality software.

ANSI --- American National Standards Institute

Will automated testing tools make testing easier?

Possibly. For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or ongoing long-term projects, they can be valuable.

WEB TEST TOOLS:

To check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and a web site's interactions are secure.

WHAT MAKES A GOOD TEST ENGINEER?

A good test engineer has a "test to break" attitude (approach, manner), an ability to
take the point of view of the customer, a strong desire for quality, and attention to detail.

WHAT'S THE ROLE OF DOCUMENTATION IN QA?

Critical. QA practices should be documented so that they are repeatable:
specifications, designs, business rules, inspection reports, configurations, code changes,
test plans, etc.

WHATS A TEST CASE ?

A test case is a document that describes an input action or event and an expected
response, to determine if a feature of an application is working correctly.

HOW CAN IT BE KNOWN WHEN TO STOP TESTING?

This can be difficult to determine. Common factors in deciding when to stop are:
 Deadlines (release deadlines, testing deadlines etc.)
 TCs completed with a certain percentage passed
 Test budget depleted (used up)
 Bug rate falls below a certain level
 Beta or alpha testing period ends

WHAT CAN BE DONE IF REQUIREMENTS ARE CHANGING CONTINUOUSLY:

 Use rapid prototyping whenever possible, to help customers feel sure of their
requirements and minimize changes.
 The project's initial schedule should allow some extra time corresponding with the
possibility of changes.
 Focus less on detailed test plans and test cases and more on ad-hoc testing.

WHAT IS THE DIFFERENCE BETWEEN A PRODUCT AND A PROJECT?

PRODUCT:
Developing software without interaction with a client before the
product release.
PROJECT:
Developing software based on a client's needs or requirements.

WHAT IS A TEST PROCEDURE ?

Execution of one or more test cases.

WHAT ARE THE DEFECT PARAMETERS ?

There are 5 parameters.


 Source
 Error Description
 Status
 Priority
 Severity

WHAT IS A TRACEABILITY MATRIX?

It maps the test requirements to the test case IDs, to check whether the coverage
is fulfilled or not. (A minimal sketch follows below.)
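
A minimal sketch in Python: map each requirement ID to the test case IDs that cover it and flag the gaps; all IDs are illustrative:

    requirements = ["REQ-01", "REQ-02", "REQ-03"]
    coverage = {
        "REQ-01": ["TC-01", "TC-02"],
        "REQ-02": ["TC-03"],
        # REQ-03 has no covering test cases yet
    }

    for req in requirements:
        tcs = coverage.get(req, [])
        print(f"{req}: {', '.join(tcs) if tcs else 'NOT COVERED'}")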

WHAT IS A TEST STRATEGY?

Applying the types of testing techniques that will expose the maximum number of bugs.

WHAT IS CONFIGURATION MANAGEMENT?

It is version control. It covers the processes used to control, co-ordinate and
track the requirements, documentation, problems faced, change requests, designs and
the tools used – which changes were made, and who made them.

WHEN DO YOU START WRITING TEST CASES?

Once the requirements are frozen, we begin writing test cases.

TESTING TECHNIQUE:
Way of executing and preparing the test cases.

TESTING METHODOLOGIES:
Way of developing the test.

WHAT'S THE DIFFERENCE BETWEEN IST & UAT?

Particulars      IST                           UAT
Acronym          Integration System Testing    User Acceptance Test
Baseline doc's   Functional Specification      Business Requirements
Location         Off site                      On site
Data             Simulated                     Live data
Purpose          Validation & verification     User needs
