Test Factor Test Technique Matrix
SOFTWARE QUALITIES
Meeting customer requirements and meeting customer expectations are called technical qualities; cost of
purchase and time to release are called non-technical qualities.
Definition:
1. Meet Customer Requirements (MCR): in terms of functionality.
2. Meet Customer Expectations (MCE): in terms of performance/usability/capability, etc.
3. Cost of Purchase (CP): borne by the customer.
4. Time to Release (TR): decided by the development organization.
REQUIREMENT GATHERING
DESIGN
CODING
TESTING
1. WATERFALL MODEL: used when the customer requirements are clear and constant.
2. PROTOTYPE MODEL: used when the customer requirements are ambiguous; the S/W organization
develops a sample model first, which guides development of the real S/W.
3. SPIRAL MODEL: used when the requirements keep enhancing, version by version.
4. AGILE MODEL: used when the customer requirements change suddenly and frequently.
Note 1: All S/W development models are derived from the waterfall (linear sequential) model.
Note 2: All of the above S/W development models maintain a single stage of testing, and that too by the same
development people.
SQA: Monitoring and measuring the strength of the development process is called SQA.
SQC: Validating the S/W product with respect to the customer requirements and expectations is
called SQC.
BRS: The BRS defines the requirements of the customer to be developed as new software. This
document is also known as the CRS/URS.
Siddiq 2
SRS: The software requirements specification defines the functional requirements to be developed and
the system requirements to be used.
REVIEW: Determining the completeness and correctness of documents by responsible people, through
walkthroughs, inspections and peer reviews.
Walkthrough: checking a document from the first line to the last line.
Inspection: searching a document for a specific fault or factor.
Peer review: comparing one document with another document, point by point and word by word.
HLD: Designing the overall architecture of the system from root module to leaf module.
Ex: login, ATM, chat. HLD is also known as architectural design or external design.
LLD: The LLD defines the internal architecture of a corresponding module (or functionality). LLDs are also
known as internal design documents.
PROTOTYPE: A sample model of the software is called a prototype. It consists of interfaces (screens)
without functionality.
PROGRAM: A set of executable statements that performs processing and
displays outputs.
WHITE BOX, GLASS BOX OR OPEN BOX TESTING TECHNIQUES: These are program-based
testing techniques. Responsible people
use these techniques to verify the internal structure of the corresponding program.
BLACK BOX TESTING: A system-level testing technique. Responsible people use this technique to
validate external functionality.
BUILD: An executable (.exe) form of the system is called a build, i.e. the final integrated set of all
modules.
V-MODEL: V stands for verification and validation. This model defines a conceptual mapping
between development stages and testing stages.
The V-model consists of multiple stages of the development process, each embedded with
multiple stages of the testing process. Following this model, most organizations maintain a separate
testing team only for the system testing phase, because this stage is the bottleneck of software
development. After system testing, the organization plans to release the S/W to the customer site.
1. Requirements gathering
2. Analysis 2A. Review
3. Design 3A. Review
4. Coding 4A. White box testing
5. System testing (or) black box testing
6. Maintenance
7. Test S/W changes.
I. REVIEW IN ANALYSIS
Generally the software development process starts with requirements gathering and analysis. In this phase
business analysts develop the BRS/SRS documents. To check the completeness and correctness of these documents,
the same business analysts conduct a review, in which the below factors are considered.
After completion of analysis & review, designers develop the HLD & LLDs. To verify the completeness
and correctness of those documents, the same designers conduct a review meeting. In this review they
concentrate on the below factors.
1. Basis Path Testing: The programmer verifies whether the program is running or not. In basis
path testing, programmers follow the below procedure to test a complete program:
Draw a flow diagram for the program.
Calculate the number of independent paths in the program (cyclomatic complexity).
Run the program more than one time to cover all independent paths in the
program.
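As an illustration only (the function and input values below are hypothetical, not from the notes), the basis path procedure can be sketched in Python: a function with two decision points has cyclomatic complexity V(G) = decisions + 1 = 3, so three inputs exercise all independent paths.

```python
# Hypothetical example: a function with two decision points.
# Cyclomatic complexity V(G) = number of decisions + 1 = 3,
# so three independent paths must be exercised.
def classify(n):
    if n < 0:          # decision 1
        return "negative"
    if n == 0:         # decision 2
        return "zero"
    return "positive"

# One input per independent path covers basis path testing.
paths = [(-5, "negative"), (0, "zero"), (7, "positive")]
all_paths_pass = all(classify(n) == expected for n, expected in paths)
```

Running the program once per path, as the procedure above says, gives full path coverage for this small example.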
2. Control Structure Testing: Programmers concentrate on the correctness and completeness of
the corresponding program's outputs. They check every statement, including 'if'
conditions, loops and memory allocations.
3. Program Technique Testing: Programmers verify the execution speed of the corresponding
program. In this testing, programmers take the help of 'monitors' and 'profilers'. If the
program speed is not good, then programmers make changes in the structure of the
program without disturbing the functionality.
4. Mutation Testing:
[Diagram: the same test is run on the original program (passed) and on deliberately changed copies;
a changed copy that still passes reveals an incomplete test.]
Mutation means a change in a program. Programmers make deliberate changes in programs and run the
test on each changed copy separately. In this way programmers verify the completeness and correctness of their tests on the program.
TESTING TERMINOLOGY
1. Testing Strategy: It is a document that defines the required testing approach to be
followed by the testing people.
2. Test Plan: It is a document that provides work allocation in terms of schedule.
3. Test Case: It defines a test condition to validate functionality in terms of completeness and
correctness.
4. Test Log: It defines the result of a test case, in terms of passed or failed, after executing the test
case on the application build.
5. Error - Defect (Issue) - Bug: a) A mistake in coding is called an ERROR. b) This mistake,
found by a test engineer during testing, is called a DEFECT/ISSUE. c) A defect/issue reviewed &
accepted by the development team for resolution is called a BUG.
6. Re-Testing: It is also known as data-driven (or iterative) testing. Test engineers repeat the same
test on the same application build with multiple I/P values. This type of test repetition is called Re-
Testing.
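Re-testing can be sketched as one test condition repeated with many input values (the PIN rule below is a hypothetical example, not from the notes):

```python
import re

# Hypothetical rule: a PIN is exactly 4 digits.
def is_valid_pin(value):
    return re.fullmatch(r"[0-9]{4}", value) is not None

# Re-testing: the same test repeated with multiple I/P values.
inputs   = ["1234", "0000", "12a4", "123", "12345", ""]
expected = [True,   True,   False,  False, False,   False]
results  = [is_valid_pin(v) for v in inputs]
data_driven_pass = results == expected
```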
7. Regression Testing: The re-execution of selected test cases on a modified build, to ensure that a bug fix
works without any side effects, is called regression testing.
[Diagram: initial build, test cases, test reporting, sign-off]
2. Business issues: cost allocation between the development process and the testing process.
3. Test approach: the list of test factors (testing issues) to be applied by the testing team on the
corresponding S/W build. This selection depends on the requirements of that S/W build, the scope of
the requirements, and the risks involved in our project testing.
4. Roles & responsibilities: the names of the jobs in the testing team & their responsibilities.
5. Communication & status reporting: the required negotiation between every two consecutive
jobs in our testing team.
6. Test automation and testing tools: the purpose of automation & the tools available in our
organization.
7. Defect reporting & tracking: the required negotiation between the testing team &
the development team to review and resolve defects during testing.
8. Testing measurements and metrics: to estimate quality, capability & status, the testing team
uses a set of measurements & metrics.
9. Risks & assumptions: the expected list of problems and the solutions to overcome them.
11. Training plan: the required number of training sessions for the testing team to understand
the customer requirements (or business logic).
12. Test deliverables: names of the test documents to be prepared by the testing team during testing. Ex: test
plans, test cases, test logs, defect reports and summary reports.
13. Test factors & testing issues: to define quality S/W, the testing team uses 15 issues (topics).
Not all of these topics are mandatory in every project.
TEST PLAN
A test factor indicates a testing issue or topic. For each topic selected for our project, the testing team
follows a set of testing techniques.
In the above example the project manager / test manager finalized 9 testing topics (issues) to be applied
by the testing team on the S/W build.
After completion of program development and unit testing, programmers connect the programs to form a
complete software build. In this integration of programs, programmers verify the interfaces between every
2 programs or modules.
There are four types of approaches to integrate modules:
1. Top-Down Approach: In this approach programmers interconnect the main module to sub modules. In
place of under-construction sub modules, programmers use temporary programs called
stubs (or called programs).
[Diagram: Main → Sub, Stub (a stub stands in for the under-construction sub module)]
2. Bottom-Up Approach: In this approach programmers interconnect sub modules without using the under-
construction main module. In place of that under-construction main module, programmers use a
temporary program called a driver (or calling program).
[Diagram: Driver → Sub, Sub (a driver stands in for the under-construction main module)]
3. Hybrid Approach: In this approach we combine the top-down and bottom-up approaches;
it is also known as the sandwich approach.
[Diagram: Driver → Sub → Stub (drivers and stubs used together)]
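A sketch of stubs and drivers in Python (the module names and canned values are hypothetical): in top-down integration a stub stands in for an unfinished sub module, while in bottom-up integration a driver stands in for the unfinished main module.

```python
# Top-down: the real main module calls a stub in place of an
# under-construction sub module.
def sub_module_stub(request):
    return "stub-response"          # canned answer, no real logic

def main_module(sub=sub_module_stub):
    return "main:" + sub("request")

# Bottom-up: a throwaway driver calls the finished sub module
# because the real main module is still under construction.
def real_sub_module(data):
    return data.upper()

def driver():
    return real_sub_module("order")
```

The stub and the driver are both thrown away once the real modules are complete.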
After completion of integration testing, the development people release the S/W build to the separate testing
team. This separate testing team validates the S/W build with respect to the customer requirements. In
this level of testing the separate testing team uses black box testing techniques.
These techniques are classified into three categories.
1. Usability / Accessibility Testing: Generally, system test execution starts with usability
testing. During this test, test engineers validate the user-friendliness of every screen in the application build.
Usability testing consists of two sub-techniques.
A) User Interface Testing: During this test, test engineers apply the below 3 factors to every screen of our
application window.
B) Manual Support Testing: During this test, test engineers study the help of the application build to
estimate its context sensitivity. Generally, the technical writers of the company develop user manuals before
releasing the S/W to the customer. For this reason manual support testing comes into the
picture at the end of system testing.
USABILITY TESTING
[Flow: receive S/W from developers → UI testing → manual support testing]
2. Functional Testing:
It is the mandatory testing level in system testing. During functional testing, test engineers concentrate
on meeting the customer requirements.
Functional testing is classified into two sub-techniques.
A) Functionality Testing:-
1) GUI Coverage or Behavioral
2) Error- Handling-Coverage
3) I/P domain Coverage
4) Manipulation Coverage or O/P Coverage
5) Back-End Coverage
6) Order of Functionality Coverage
During this test, test engineers verify whether the functionality is working correctly or not. In this testing,
test engineers construct the below coverages.
B) Sanitation Testing: Also known as GARBAGE testing. During this test, test engineers find
extra functionality in the application build with respect to the customer requirements.
B) Configuration Testing: Also known as H/W compatibility testing. During this test, test
engineers run the application build with various technologies and H/W devices to
estimate H/W compatibility.
Example: different printers, different networks, different topologies.
C) Recovery Testing: Also known as reliability testing. During this test, test
engineers validate whether the application build changes from abnormal status back to normal
status or not.
D) Inter-System Testing: Also known as interoperability or end-to-end testing.
During this test, test engineers validate whether our application co-exists with other S/W
applications to share common resources.
E) Security Testing: Also known as penetration testing. During this test, test engineers
validate the below 3 factors.
1. Authorization
2. Access control
3. Encryption/decryption
In authorization testing, test engineers validate whether the application accepts valid users and rejects invalid
users or not.
In access control testing, test engineers validate the permissions of users to utilize the application.
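The first two security factors can be sketched as follows (the user table, password and permission set are hypothetical, for illustration only): authorization decides who may log in; access control decides what a logged-in user may do.

```python
# Hypothetical user store and permission map (illustration only).
USERS = {"alice": "S3cret"}
PERMISSIONS = {"alice": {"view", "deposit"}}

def authorize(user, password):
    # Authorization: accept valid users, reject invalid users.
    return USERS.get(user) == password

def can_perform(user, action):
    # Access control: check the user's permissions for an action.
    return action in PERMISSIONS.get(user, set())
```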
In encryption/decryption testing, test engineers try to trace the CIPHER text back to the original text.
In security testing, the authorization and access control tests are the responsibility of the application testers, but
encryption/decryption is conducted by separate security testing people.
F) Data Volume Testing: Also known as storage testing or memory testing. During this test, test
engineers find the peak limit of data handled by our application build. Example: MS-Access technology
databases support a 2 GB database maximum.
I) Load Testing: Load means the number of concurrent users accessing the application at the same
time. Ex: websites such as Yahoo mail or G-mail.
Execution of the application under the customer-expected configuration and load is called load testing.
After completion of system testing, project management concentrates on user acceptance testing
to collect feedback from real customers or model customers. There are 2 ways to conduct user
acceptance tests.
Project management declares a release team with developers, test engineers and H/W engineers.
This release team conducts port testing (or deployment testing, or release testing). During this test the
release team observes the below factors.
Complete installation
Overall functionality
I/P devices handling
O/P device handling
Secondary storage devices handling
O/S error handling
Co-Existence with other S/W application
After completion of the above observations, the release team gives training on the software at the customer
site and comes back.
During utilization of the S/W, customer-side people send change requests to the organization. A
responsible team in the organization handles these changes. This responsible team is also known as the
Change Control Board (CCB).
CHANGE REQUEST
Generally every testing process is planned to conduct complete system testing with respect to
project requirements. Testing team may not be able to conduct complete testing due to risks or
challenges.
Ex: lack of time, knowledge, resources, documents, etc.
AD-HOC TESTING: Due to the above risks, the testing team plans to follow informal testing methods.
A) Monkey Testing: During this style of testing, testing people concentrate only on the main
activities of the S/W, due to lack of time for testing. This style of testing is also
known as chimpanzee (or random) testing.
B) Buddy Testing: In this style of testing, test engineers are grouped with developers,
due to lack of time for testing. A buddy means a group of a programmer & a tester.
C) Exploratory Testing: Generally the testing team conducts system testing depending
on the functionality and system requirements as in the SRS. If the SRS is incomplete with respect to the
requirements, then test engineers depend on past experience, discussions with others,
similar projects, browsing etc. to collect the complete requirements. This style of testing
is called Exploratory Testing.
D) Pair Testing: In this style of testing, junior test engineers are grouped with senior test
engineers to share testing knowledge. This style of testing is called PAIR
Testing.
E) Defect Seeding (De-bugging): The development people add known bugs to the code and release it to the testing
team. This type of defect seeding/feeding is useful to estimate the efficiency of the testing
team.
TESTING PLANNING
After completion of test strategy finalization, the test lead concentrates on test planning.
In this stage the test lead categorizes people, develops the system test plan & divides that plan into
module test plans. The plan defines 'What to test', 'How to test', 'When to test', and 'Who is to test'.
[Flow: project plan, test strategy and development documents (SRS) → testing team formation →
identify tactical risks → prepare test plan → review test plan]
A) Testing Team Formation: Generally, the test planning task starts with testing team formation. In this
stage the test lead depends on the below factors.
Project size
Lines of code (or functionality)
Availability of test engineers
Test time/duration
Test environment
Case Study 1:
Client/server, website, ERP projects: 3 to 5 months of system testing
System S/W (networking, mobile): 7 to 9 months of system testing
Machine-critical S/W (e.g. satellite): 12 to 15 months of system testing
B) Identify tactical risks: After completion of testing team formation, the test lead concentrates on
risk analysis & assumptions with respect to the formed testing team.
Example:
C) Prepare Test Plan: After completion of team formation & risk analysis, the test lead concentrates
on the test plan document. In this stage the test lead uses the IEEE 829 test plan document format.
Format:
1. Test plan ID: the title of the test plan document, for future reference.
2. Introduction: About project.
3. Test Item: List of modules in project.
4. Feature to be tested: List of modules (or) Function to be tested.
5. Features not to be tested: list of modules which were already tested in previous versions.
6. Approach:-List of testing technique to be applied on modules ( from test strategy)
7. Test deliverables: Required testing document to be prepared by testing team.
8. Test environment: Required H/W, S/W to test modules.
9. Entry criteria: Test engineers are able to start test execution only after:
a. test cases are developed & reviewed
b. the test environment is established
c. the S/W build is received from development.
10. Suspension criteria: Sometimes test engineers stop test execution, due to:
a. the testing environment not working
b. too many defects reported and pending at the development side (quality gap).
(What to test: items 3 to 5. How to test: items 6 to 11. Who is to test: items 12 to 13. When to test:
item 14.)
12. Staff & training needs: the selected test engineers' names & the required number of training sessions.
13. Responsibilities: the mapping between the names of test engineers & the requirements.
14. Schedule: dates and times of the project.
15. Risks & assumptions: list of expected risks and the assumptions to overcome them.
16. Approvals: signatures of the project manager (or test manager) and the test lead.
D) Review Test Plan: After completion of test plan document preparation, the test lead conducts a
review meeting to estimate the completeness and correctness of the document. In this review the
testing team members of that project are also involved.
In the review meeting the testing people depend on the below factors.
a. Requirements-based coverage (what to test)
b. Testing-technique-based coverage (how to test)
c. Risk-based coverage (who is to test and when to test).
TEST DESIGN
After completion of test planning the corresponding selected test engineers concentrate on test
design, test execution, and test reporting.
Generally the selected test engineers design test cases for every project. In test design every test
engineer studies all the requirements of the project and prepares test cases for selected requirement
only with respect to test plan. In this test design, test engineers use three types of test case design
methods to prepare test cases for responsible requirements.
1. Function and systems specification based test design.
2. Use cases based test case design.
3. Application build test case design based.
Every test case defines a unique test condition, so every test case is self-standing and clear. To
ensure understandability, test engineers start every test case with the English words 'verify' or 'check'.
Every test case is traceable to the requirements.
[Diagram: BRS → SRS → test cases]
From the above diagram, test engineers prepare the maximum number of test cases depending on the functional and
system requirements as in the SRS. For this type of test case, test engineers follow the below approach.
Functional specification 1
A login process involves user ID & password authorization. The user ID accepts alphanumerics in upper
case, 4 to 16 characters long.
The password accepts alphabets in upper case, 4 to 8 characters long.
Test case 1:
Verify user ID Value
Test case 2:
Verify password value
Test case 3:
Verify login Operation
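The three login test cases above can be sketched as executable checks (a hypothetical implementation of the rules in the specification: user ID of 4-16 upper-case alphanumerics, password of 4-8 upper-case alphabets):

```python
import re

# TC1: verify user ID value (upper-case alphanumeric, 4-16 chars).
def valid_user_id(value):
    return re.fullmatch(r"[A-Z0-9]{4,16}", value) is not None

# TC2: verify password value (upper-case alphabets, 4-8 chars).
def valid_password(value):
    return re.fullmatch(r"[A-Z]{4,8}", value) is not None

# TC3: verify login operation (both values must be valid).
def login_accepted(user_id, password):
    return valid_user_id(user_id) and valid_password(password)
```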
Decision Table
Functional specification 2
An insurance application offers different types of policies. When a user selects a type, the insurance
system asks the age of that user. The age value should be greater than 16 years and less than 80
years. Prepare test case titles and scenarios.
Functional specification 3
A door opens when a person comes in front of the door, and the door closes when that person goes
inside. Prepare test case titles for this scenario.
TC3: verify door operation when the person is standing in the middle of the door.
Functional specification 4
A computer shutdown operation: prepare test case titles or scenarios.
TC1: verify shutdown option selection using the Start menu.
TC2: verify shutdown option selection using Alt+F4.
TC3: verify shutdown operation.
TC4: verify shutdown operation when a process is running.
TC5: verify shutdown using the power ON/OFF button.
Functional specification 5
In a shopping application users purchase different types of items. In a purchase order, the system allows
users to select an item number and enter a quantity up to 10; the purchase order returns the total amount along with one
item's price. Prepare test cases or scenarios.
TC1: verify the selection of item numbers
TC2: verify the quantity value
BVA (size)              ECP (type)
Min    1   pass         Valid:   0-9
Max    10  pass         Invalid: a-z, A-Z,
Min-1  0   fail                  special characters,
Min+1  2   pass                  blank field
Max-1  9   pass
Max+1  11  fail
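The boundary value analysis rows above can be turned into an executable sketch (the `accept_quantity` validator is hypothetical, implementing the valid range 1 to 10 from the specification):

```python
# Hypothetical validator for the quantity field (valid range 1-10).
def accept_quantity(q):
    return isinstance(q, int) and 1 <= q <= 10

# Boundary value analysis: min, max, min-1, min+1, max-1, max+1.
bva_expected = {1: True, 10: True, 0: False, 2: True, 9: True, 11: False}
bva_results  = {q: accept_quantity(q) for q in bva_expected}
```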
Functional specification 6
Washing machine operation: prepare test case titles or scenarios.
TC1. verify power supply
TC2. verify door open
TC3. verify water filling with detergent
TC4. verify clothes filling
TC5. verify door closing
TC6. verify door closing with clothes overflow
TC7. verify washing settings
TC8. verify washing operation
TC9. verify washing operation with low voltage
TC10. verify washing operation with clothes overload inside
TC11. verify washing operation with the door opened in the middle of the process
TC12. verify washing operation with lack of water
TC13. verify washing operation with water leakage
TC14. verify washing operation with improper settings
TC15. verify washing operation with a machine problem
Functional specification 7
In an e-banking application, users connect to the bank server through an internet connection. In this
application users fill the below fields to connect to the bank server.
Password: 6-digit number.
Area code: 3-digit number.
Prefix: 3-digit number that does not start with 0 or 1.
Suffix: 6-digit alphanumeric.
Commands: cheque deposit, money transfer, mini statement, and bill pay.
Prepare test case titles or scenarios.
TC1: verify password value
BVA (size)                     ECP (type)
Min = Max = 6 digits   pass    Valid:   0-9
Min-1, 5 digits        fail    Invalid: a-z, A-Z,
Min+1, 7 digits        fail             special characters,
Max+1, 7 digits        fail             blank field
Max-1, 5 digits        fail
Functional specification 8
A computer restart operation: prepare test case titles or scenarios.
Functional specification 9
Money withdrawal from ATM machine. Prepare test case titles.
1. Verify card insertion
2. Verify card insertion at a wrong or improper angle
3. Verify card insertion with an invalid account card
4. Verify PIN number entry
5. Verify operation when a wrong PIN number is entered 3 times
6. Verify language selection
7. Verify account type selection
8. Verify operation when an invalid account type is selected with respect to the inserted card.
9. Verify withdrawal option selection
10. Verify amount entry
11. Verify withdrawal operation: correct amount, right receipt, and ability to take back the card.
12. Verify withdrawal operation with wrong denominations in the amount
13. Verify withdrawal operation when the requested amount is greater than the available balance
14. Verify withdrawal operation with lack of amount in the ATM.
15. Verify withdrawal operation when the requested amount is greater than the day limit
16. Verify withdrawal operation when the current transaction number is greater than the day limit on the
number of transactions
17. Verify withdrawal operation when there is a network problem
18. Verify cancel after insertion of card
19. Verify cancel after entry of pin number.
20. Verify cancel after selection language.
21. Verify cancel after selection of account type.
22. Verify cancel after entry of amount.
Test case documentation: After completion of test case scenario selection, test engineers document the
test cases with complete information. In this test case document, test engineers use the IEEE 829 format
(IEEE: Institute of Electrical and Electronics Engineers).
Data matrix:
10. Data matrix: input, ECP (type), BVA (size).
11. Test case pass/fail criteria: when the test case is treated as passed and when it is treated as failed.
Note 1: The above 11-field test case format is not mandatory, because some field values are common to
most test cases and some field values are easy to remember or derive.
Note 2: Generally, test cases cover objects and operations. If a test case covers an object's values,
then test engineers prepare a data matrix.
Note 3: If a test case covers an operation or execution, then test engineers prepare a test procedure from
base state to end state.
Functional specification 10
A login process allows a user ID and password to authorize users. The user ID takes alphanumerics
in lower case, 4 to 16 characters long. The password object accepts alphabets in lower case, 4 to 8
characters long. Prepare the test case documents.
Document 1
1. Test case ID: TC_login_arjun_1
2. Test case name: verify user ID value
3. Test suite ID: TS_Login
4. Priority: P0
5. Precondition: the user ID object takes values from the keyboard.
6. Data matrix:
I/P object: user ID
ECP (type)  Valid: a-z, 0-9   Invalid: A-Z, special characters, blank
BVA (size)  Min: 4   Max: 16   (sample values: man, sasidhar)
Document 2
1. Test case ID: TC_login_arjun_2
2. Test case name: verify password value
3. Test suite ID: TS_Login
4. Priority: P0
5. Precondition: the password object takes values from the keyboard.
6. Data matrix:
I/P object: password
ECP (type)  Valid: a-z   Invalid: 0-9, A-Z, special characters, blank
BVA (size)  Min: 4   Max: 8   (sample values: man, sasidhar)
Document 3
1. Test case ID: TC_login_arjun_3
2. Test case name: verify login operation
3. Test suite ID: TS_Login
4. Priority: P0
5. Precondition: the user ID and password objects take values from the keyboard.
6. Test procedure:
Step  Action                  I/P required         Expected
1     Focus to login window   none                 user ID object focused
2     Fill fields             user ID & password   'OK' button
3     Click 'OK'              valid-valid          next window
                              valid-invalid        error message
                              invalid-valid        -- do --
                              valid-blank          -- do --
                              blank-valid          -- do --
Use-Case-Based: The other method for test case selection is use-case-based test case design. This
method is preferable for outsourcing testing companies. Generally most testers prepare test cases
depending on the functional and system specifications in the corresponding project SRS; sometimes the testing people
prepare test cases depending on use cases also. Use cases are more elaborate and understandable than
functional and system specifications.
[Diagram: BRS → SRS → HLD → LLDs → coding (build); the test cases depend on the SRS/use cases]
From the above diagram, the testing team receives use cases from project management to prepare test cases.
Every use case describes functionality with all required information. Every use case follows a
standard format, unlike a theoretical functional specification.
FORMATS:
1. Use case name: The name of use case for future reference.
2. Use case description: Summary of functionality
3. Actors: Name of actors, who participate in corresponding function
4. Related use cases: Name of selected use cases, which have dependency with other use cases
5. Pre conditions: list of necessary tasks to be done before starting this functionality.
6. Activity flow diagram: The graphical notation of corresponding functionality.
7. Primary scenario: the step-by-step actions to perform the corresponding functionality
8. Alternative scenario: an alternative list of actions to perform the same functionality
9. Post conditions: specifies the exit point of the corresponding functionality
10. User interface mockup: model screen or prototype
11. Special requirements: list of rules to be followed, if any.
From the above use case format, project management provides all functionality documents with complete
details. Depending on those use cases, test engineers prepare test case titles and then document them using the IEEE 829
format.
Application-Build-Based: Generally the test engineers prepare test cases depending on the functional and system specifications or
use cases. After completion of maximum test case selection, test engineers prepare some test cases
depending on the application build received from the development team. These new test cases
concentrate on the usability of the screens in the application. They cover ease of use, look
and feel, speed of the interface, and user manual correctness.
Note: Generally the test engineers prepare the maximum number of test cases depending on the functional and system
specifications in the SRS. The remaining test cases are prepared using the application build, because the
functional and system specifications do not provide complete information about every small issue in
our project. Sometimes the testing people use use cases instead of the functional and system
specifications in the SRS.
Review Test Cases: After completion of test case selection and documentation, the test lead conducts a
review meeting along with the test engineers. In this review the test lead concentrates on the completeness and
correctness of the test cases. In coverage analysis, the test lead considers 2 types of factors:
requirements-based test case coverage and testing-technique-based test case coverage.
After completion of this review meeting, test engineers concentrate on test execution.
IV. TEST EXECUTION
In test execution, test engineers concentrate on test case execution, defect reporting and tracking. In this
stage the testing team conducts a small meeting with the development team for version controlling and
establishment of the test environment.
1. Version control: During test execution developers assign unique version numbers to software
builds after performing required changes. This version numbering system should be understandable
to the testing people. For build version controlling, the development people use version control
software. Example: VSS (Visual SourceSafe).
Level 0 Basic functionality
Level 1 Total functionality
Level 2 Selected test cases W.R.T. Modification
Level 3 Selected test cases W.R.T. Bug density
After receiving a stable build from the development team, test engineers execute all test cases
sequentially, either manually or with automation. In manual test execution, test engineers compare the
test case's specified expected values & the build's actual values. During test execution, test engineers
prepare a test log. This document consists of 3 types of entries.
1. Passed: all expected values of the test case are equal to the actual values of the build.
2. Failed: any one expected value differs from any one actual value of the build.
3. Blocked: a dependent test case's execution is postponed to the next cycle, due to wrong parent
functionality.
[Diagram: all test cases in queue → execution → passed (close) / failed (report defect) / blocked (skip);
levels 0 and 1.]
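A test log entry can be sketched as a small record builder (the field names are hypothetical) implementing the three result types above:

```python
# Build one test-log entry from expected vs. actual values.
def log_entry(case_id, expected, actual, blocked=False):
    if blocked:
        status = "blocked"   # postponed due to wrong parent functionality
    elif expected == actual:
        status = "passed"    # expected value equals actual value
    else:
        status = "failed"    # expected value differs from actual value
    return {"case": case_id, "status": status}
```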
Case 1: If the development team resolves a bug with severity high, then test engineers re-execute all P0, all
P1, and carefully selected maximum P2 test cases on the modified build, with respect to the modifications
mentioned in the release note.
Case 2: If the development team resolves a bug with severity medium, then test engineers re-execute all P0,
carefully selected P1, and some P2 test cases on the modified build, w.r.t. the modifications
mentioned in the release note.
Case 3: If the development team resolves a bug with severity low, then test engineers re-execute some
P0, P1 and P2 test cases on that modified build, w.r.t. the modifications mentioned in the release note.
Case 4: If the test team receives a modified build due to sudden changes in customer requirements, then test
engineers re-execute all P0, all P1, and carefully selected maximum P2 test cases on the modified build, w.r.t.
the modifications mentioned in the release notes.
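The four cases can be summarized as a selection rule (the exact fractions below are illustrative assumptions; the notes only say "all", "carefully selected", and "some"):

```python
# Select regression test cases by resolved-bug severity.
# p0, p1, p2 are lists of test cases at priorities P0, P1, P2.
def regression_suite(severity, p0, p1, p2):
    if severity in ("high", "requirements-change"):
        # all P0, all P1, carefully selected maximum P2
        return p0 + p1 + p2[: max(1, len(p2) * 3 // 4)]
    if severity == "medium":
        # all P0, carefully selected P1, some P2
        return p0 + p1[: max(1, len(p1) * 3 // 4)] + p2[: len(p2) // 2]
    # low severity: some P0, P1 and P2 test cases
    return p0[: len(p0) // 2] + p1[: len(p1) // 2] + p2[: len(p2) // 4]
```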
V. TEST REPORTING
During level-1 and level-2 test execution, test engineers report mismatches between the test cases'
expected values and the build's actual values as defect reports to the development team. In test reporting,
developers receive defect reports from the testing team in a standard format. This format is followed by
every test engineer during test execution to report defects.
27
Siddiq 28
IEEE 829 defect report format:
11. Priority: importance of the defect to resolve in terms of the customer (high, medium, low)
12. Detected by: the name of the test engineer
13. Detected on: date and time of defect reporting
14. Assigned to: name of the responsible person on the development side to receive the defect report
15. Suggested fix (optional): reasons to accept and resolve this defect
Resolution type: After receiving a defect report from the testing team, the responsible
development people conduct a review meeting and then send a resolution to the responsible testing
team.
1. Enhancement: the reported defect is rejected because it is related to future
requirements of the customer.
2. Duplicate: the reported defect is rejected because it is similar to a previously accepted
defect.
3. H/W limitation: the reported defect is rejected because it arose due to a limitation of H/W
devices.
4. S/W limitation: the reported defect is rejected because it arose due to a limitation of the
corresponding S/W technology.
5. Not applicable: the reported defect is rejected because it has improper meaning.
6. Functions as designed: the reported defect is rejected because the coding is correct
W.R.T. the design documents.
7. Need more information: the reported defect is neither accepted nor rejected, but the developers
request more information to understand the defect.
8. Unreproducible: the reported defect is neither accepted nor rejected, but the developers
require the correct procedure to reproduce that defect.
9. No plan to fix it: the reported defect is neither accepted nor rejected, but the development
people require some more extra time.
10. Open: the reported defect is accepted and the development people are ready to resolve it through
changes in coding.
11. Deferred: the reported defect is accepted but postponed to a future release due to low severity
and low priority.
12. User direction: the reported defect is accepted, but the developer provides some valid
information about that defect to customer-site people through the application build screens.
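The twelve resolution types fall into three outcomes, which a tracker could encode as a simple lookup. This grouping just restates the list above; it is not part of the IEEE 829 format itself.

```python
# Summary of the twelve resolution types by outcome: rejected,
# pending (neither accepted nor rejected), or accepted.

RESOLUTION_OUTCOME = {
    "Enhancement": "rejected",
    "Duplicate": "rejected",
    "H/W limitation": "rejected",
    "S/W limitation": "rejected",
    "Not applicable": "rejected",
    "Functions as designed": "rejected",
    "Need more information": "pending",
    "Unreproducible": "pending",
    "No plan to fix it": "pending",
    "Open": "accepted",
    "Deferred": "accepted",
    "User direction": "accepted",
}

print(sum(1 for v in RESOLUTION_OUTCOME.values() if v == "rejected"))  # 6
```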
[Figure: defect reporting and resolution flow between test engineer, test lead, test manager, PM, and programmer, with numbered communication steps 1–10]
Note: Generally test engineers judge the severity and priority of a defect during reporting; the
development people generally assign low priority and low severity.
[Figure: defect life cycle — a defect moves from New, via hand-offs between the testing team and the PM (transitions numbered 1–8), to Closed, and may be Reopened]
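The life cycle in the figure can be sketched as a set of allowed state transitions. The notes name only New, Closed, and Reopen; the intermediate states and the exact transition set below are assumptions for illustration.

```python
# Minimal sketch of a defect life cycle as a transition table.
# Intermediate states (Open, Fixed, Rejected, Deferred) are assumed.

TRANSITIONS = {
    "New": {"Open", "Rejected", "Deferred"},
    "Open": {"Fixed"},
    "Fixed": {"Closed", "Reopen"},   # closed if the retest passes, else reopened
    "Reopen": {"Fixed"},
    "Rejected": set(),
    "Deferred": set(),
    "Closed": {"Reopen"},
}

def can_move(state, new_state):
    """True if the defect may move directly from state to new_state."""
    return new_state in TRANSITIONS.get(state, set())

print(can_move("New", "Open"))
```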
Types of defects: Generally black box testers find the following types of defects during system
testing:
1. User interface defects (low severity): Example 1: spelling mistake (low severity and high priority).
Example 2: improper right alignment (low severity and low priority).
2. Boundary related defects (medium severity): Example 1: an object is not taking a valid type of
value as input (medium severity and high priority). Example 2: an object is taking an invalid type also
(medium severity and low priority).
3. Error handling bugs (medium severity): Example 1: does not return an error message to prevent a
wrong operation on the build (medium severity and high priority). Example 2: returns an error message but
it is complex to understand (medium severity and low priority).
4. Calculation bugs (high severity): Example 1: a dependent input is wrong (application show-stopper)
(high severity and high priority). Example 2: the final output is wrong (module show-stopper) (high
severity and low priority).
5. Race condition bugs (high severity): Example 1: deadlock or hang (application show-stopper) (high
severity and high priority). Example 2: does not run on other customer-expected platforms (high
severity and low priority).
6. Load condition bugs (high severity): Example 1: does not allow multiple users (application
show-stopper) (high severity and high priority). Example 2: does not allow the customer-expected load (high
severity and low priority).
7. H/W related bugs (high severity): Example 1: does not activate a required H/W device (application
show-stopper) (high severity and high priority). Example 2: does not support all customer-expected
H/W technologies (high severity and low priority).
8. ID control bugs (medium severity): Example: wrong logo, logo missing, copyright window
missing, wrong version number, development and testing people's names missing, etc.
9. Version control bugs (medium severity): Example: invalid differences between the old version build
and the current version build.
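The default severities assigned to the nine defect types above can be collected into a simple lookup. This merely restates the classification from the notes; a real defect tracker would let the tester override it per defect.

```python
# Default severity per defect type, as classified in the notes above.

DEFAULT_SEVERITY = {
    "user interface": "low",
    "boundary related": "medium",
    "error handling": "medium",
    "calculation": "high",
    "race condition": "high",
    "load condition": "high",
    "H/W related": "high",
    "ID control": "medium",
    "version control": "medium",
}

print(DEFAULT_SEVERITY["calculation"])  # high
```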
TEST CLOSURE
After completion of reasonable cycles of test execution, the test lead concentrates on test closure to estimate
the completeness and correctness of test execution and of bug resolution. In a review meeting, the test lead
considers the following factors to review the testing team's responsibility.
2. Defect density:
Module name    No. of defects
A              20%
B              20%
C              40%  (needs regression)
D              20%
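The defect-density check above can be sketched as flagging any module whose share of defects exceeds a threshold; here module C (40%) would be flagged for regression. The 25% threshold is an assumption for illustration.

```python
# Flag high defect density modules for regression testing.
# Percentages are the per-module shares of total defects from the table above.

defects = {"A": 20, "B": 20, "C": 40, "D": 20}
THRESHOLD = 25  # assumed cut-off; a project would set its own

needs_regression = [module for module, pct in defects.items() if pct > THRESHOLD]
print(needs_regression)  # ['C']
```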
3. Analysis of deferred (postponed) defects: whether the deferred defects are reasonable to postpone or not.
After completion of the closure review, the testing team concentrates on postmortem testing (final
regression testing or pre-acceptance testing), if required.
[Figure: final regression flow — select the high defect density module → effort estimation → test reporting → plan regression test]
4. User acceptance testing: After completion of testing and its reviews, project management
concentrates on user acceptance testing to collect feedback from real customers or model customers. There
are two ways to conduct UAT: α-testing and β-testing.
5. Sign off: After completion of user acceptance testing and modifications, project management declares a
release team and a CCB (Change Control Board). In both teams a few developers and test engineers are involved
along with the project manager. In the sign-off stage the testing team submits all prepared testing documents
to the project manager:
Test strategy
Test plans
Test case titles / test scenarios
Test case documents
Test logs
Test defect reports
The combination of all the above documents is also known as the Final Test Summary Report (FTSR).
Requirements Traceability Matrix
It is a document. Its creation and updating are done by the test lead. This document starts with
test planning and ends with test closure.
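A minimal sketch of a Requirements Traceability Matrix: map each requirement to the test cases that cover it, so uncovered requirements stand out. The requirement and test-case IDs below are hypothetical.

```python
# Minimal RTM sketch: requirement -> covering test cases.

rtm = {
    "REQ-01": ["TC_01", "TC_02"],
    "REQ-02": ["TC_03"],
    "REQ-03": [],                # not yet covered by any test case
}

uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # ['REQ-03']
```

In practice the test lead keeps this matrix in a spreadsheet or test-management tool and updates it as test cases are added or retired.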
I. Quality assessment measurement: measurements used by the project manager or test manager
during testing (monthly once).
a) Stability
b) Sufficiency (requirements coverage)
II. Test management measurement:
a) Test status
b) Delays in delivery (defect ageing)
c) Test efficiency
III. Process capability measurement: this measurement is used by the project manager and test
manager to improve the testing team's effort.
A test engineer executes test cases without using any third-party testing tool; this style of test
execution is called manual testing.
A test engineer executes test cases with the help of a testing tool; this style of test execution is
called test automation. Test engineers prefer automation over manual testing for test repetition and
compatibility. Two types of testing tools are available in the market: functionality testing tools and
load/stress (performance) testing tools.
Example:
LOAD/STRESS TESTING TOOLS: LoadRunner, Rational LoadTest, SilkPerformer, etc.