Software Testing Unit 2


Unit 2

2.1 TEST CASE DESIGN STRATEGIES USING BLACK BOX APPROACH

Black box testing involves examining the functionality of a system without understanding its internal structures or workings. Here are several test case design strategies commonly used with the black box approach:

Equivalence Partitioning

This technique is also known as Equivalence Class Partitioning (ECP). In this technique, input values to the system or application are divided into different classes or groups based on the similarity of their outcome.

Hence, instead of using each and every input value, we can use any one value from each group/class to test the outcome. This way, we can maintain test coverage while reducing the amount of rework and, most importantly, the time spent.

For Example:

For example, suppose an "AGE" text field accepts only numbers from 18 to 60. There will then be three classes or groups.

Two invalid classes will be:

a) Less than or equal to 17.

b) Greater than or equal to 61.

A valid class will be anything between 18 and 60.

We have thus reduced the test cases to only three, based on the formed classes, while still covering all the possibilities. Testing with any one value from each class is sufficient to test the above scenario.
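As a rough illustration, the sketch below picks one representative value from each class and checks it against a hypothetical validate_age function (the function name is an assumption; only the 18 to 60 limits come from the example above):

# Equivalence partitioning sketch for an AGE field that accepts 18 to 60.
# validate_age is a hypothetical validator written only for this example.

def validate_age(age):
    return 18 <= age <= 60

# One representative value per equivalence class.
test_values = {
    "invalid: <= 17": (10, False),
    "valid: 18-60":   (35, True),
    "invalid: >= 61": (70, False),
}

for name, (value, expected) in test_values.items():
    status = "PASS" if validate_age(value) == expected else "FAIL"
    print(f"{status}  {name}  input={value}")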
Boundary Value Analysis

As the name suggests, in this technique we focus on the values at the boundaries, since many applications are found to have a high number of issues at the boundaries.

A boundary refers to values near the limit where the behavior of the system changes. In boundary value analysis, both valid and invalid inputs are tested to uncover issues.

For Example:

If we want to test a field where values from 1 to 100 should be accepted, then we
choose the boundary values: 1-1, 1, 1+1, 100-1, 100, and 100+1. Instead of using all
the values from 1 to 100, we just use 0, 1, 2, 99, 100, and 101.
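A minimal sketch of the same idea in code, assuming a hypothetical accepts_value function for a field that takes 1 to 100:

# Boundary value analysis sketch for a field that accepts 1 to 100.
# accepts_value is a hypothetical validator used only for illustration.

def accepts_value(n):
    return 1 <= n <= 100

# Boundary values: min-1, min, min+1, max-1, max, max+1.
boundary_cases = [
    (0, False), (1, True), (2, True),
    (99, True), (100, True), (101, False),
]

for value, expected in boundary_cases:
    status = "PASS" if accepts_value(value) == expected else "FAIL"
    print(f"{status}  input={value}  expected_accepted={expected}")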

Decision Table Testing

As the name itself suggests, wherever there are logical relationships like:

if (condition == true)
    action1;        /* executed when the condition is true  */
else
    action2;        /* executed when the condition is false */

Then a tester will identify two outputs (action1 and action2) for two conditions (True and False). Based on the probable scenarios, a decision table is drawn up to prepare a set of test cases.

For Example:

Take the example of XYZ bank, which offers an interest rate of 10% to male senior citizens and 9% to everyone else.
In this example, condition C1 (the customer is male) has two values, true and false, and condition C2 (the customer is a senior citizen) also has two values, true and false. The total number of possible combinations is therefore four. In this way we can derive test cases using a decision table.
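As a small sketch, the four combinations of C1 and C2 can be enumerated and checked against a hypothetical interest_rate function; the 10%/9% rule comes from the example above, everything else is an assumption made for illustration:

# Decision table sketch for the XYZ bank example.
# C1 = customer is male, C2 = customer is a senior citizen.
# interest_rate is a hypothetical implementation of the stated rule.

from itertools import product

def interest_rate(is_male, is_senior):
    return 10 if (is_male and is_senior) else 9

# Each column of the decision table becomes one test case.
for is_male, is_senior in product([True, False], repeat=2):
    expected = 10 if (is_male and is_senior) else 9
    actual = interest_rate(is_male, is_senior)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}  C1(male)={is_male}  C2(senior)={is_senior}  rate={actual}%")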

State Transition Testing

State Transition Testing is a technique used to test the different states of the system under test. The state of the system changes depending upon conditions or events. The events trigger state changes, which become scenarios that a tester needs to test.

A systematic state transition diagram gives a clear view of the state changes, but it is effective mainly for simpler applications. More complex projects lead to more complex transition diagrams, thereby making the technique less effective.

For Example:
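Consider a hypothetical login screen that locks the account after three consecutive failed attempts: each attempt is an event, and the system moves from the first attempt through the second and third to either Logged In or Locked. Each path through the diagram becomes a scenario to test. The sketch below is a minimal illustration of that state machine; the state names, events, and three-attempt limit are assumptions made only for this example.

# State transition sketch for a hypothetical login that locks after 3 failures.
# States, events, and the transition table are assumptions for illustration only.

TRANSITIONS = {
    ("ATTEMPT_1", "fail"):    "ATTEMPT_2",
    ("ATTEMPT_2", "fail"):    "ATTEMPT_3",
    ("ATTEMPT_3", "fail"):    "LOCKED",
    ("ATTEMPT_1", "success"): "LOGGED_IN",
    ("ATTEMPT_2", "success"): "LOGGED_IN",
    ("ATTEMPT_3", "success"): "LOGGED_IN",
}

def next_state(state, event):
    # Terminal states (LOCKED, LOGGED_IN) ignore further events.
    return TRANSITIONS.get((state, event), state)

def run(events):
    # One test scenario per path through the diagram.
    state = "ATTEMPT_1"
    for event in events:
        state = next_state(state, event)
    return state

assert run(["success"]) == "LOGGED_IN"
assert run(["fail", "success"]) == "LOGGED_IN"
assert run(["fail", "fail", "fail"]) == "LOCKED"
print("all state transition scenarios passed")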

Error Guessing


This is a classic example of Experience-Based Testing.

In this technique, the tester uses his/her experience of the application's behavior and functionality to guess the error-prone areas. Many defects can be found with error guessing by targeting the places where developers most commonly make mistakes.

A few common mistakes that developers often forget to handle (a small sketch of checks for some of these follows the list):

 Divide by zero.
 Handling null values in text fields.
 Accepting the Submit button without any value.
 File upload without attachment.
 File upload with less than or more than the limit size.
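As a rough sketch, a couple of these guesses can be turned into quick checks against a hypothetical divide function and a hypothetical submit_form handler; both names and behaviours are assumptions made only for illustration:

# Error guessing sketch: probing typical developer oversights.
# divide and submit_form are hypothetical functions used only for illustration.

def divide(a, b):
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

def submit_form(text_value):
    # Reject missing or empty values instead of failing later.
    if text_value is None or text_value.strip() == "":
        return "error: value required"
    return "ok"

# Guess 1: divide by zero should be rejected, not crash the application.
try:
    divide(10, 0)
    print("FAIL  divide by zero was silently accepted")
except ValueError:
    print("PASS  divide by zero handled")

# Guess 2: submitting with a null or empty text field should be rejected.
print("PASS" if submit_form(None) != "ok" else "FAIL", "null value handled")
print("PASS" if submit_form("   ") != "ok" else "FAIL", "empty submit handled")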

2.2 Equivalence Partitioning Method

The Equivalence Partitioning Method is also known as Equivalence Class Partitioning (ECP). It is a black-box software testing technique that divides the input domain into classes of data, and with the help of these classes of data, test cases can be derived.
In equivalence partitioning, equivalence classes are evaluated for the given input conditions. Whenever an input is given, the type of input condition is checked, and for this input condition an equivalence class represents or describes a set of valid or invalid states.
Guidelines for Equivalence Partitioning:
 If a range condition is given as an input, then one valid and two invalid equivalence classes are defined.
 If a specific value is given as an input, then one valid and two invalid equivalence classes are defined.
 If a member of a set is given as an input, then one valid and one invalid equivalence class is defined.
 If a Boolean value is given as an input condition, then one valid and one invalid equivalence class is defined.
Example-1:
Let us consider the example of a college admission process. A college gives admission to students based upon their percentage.
Consider a percentage field that accepts a percentage only between 50% and 90%; anything more or less is not accepted, and the application will redirect the user to an error page. If the percentage entered by the user is less than 50% or more than 90%, the equivalence partitioning method will treat it as an invalid percentage. If the percentage entered is between 50% and 90%, the equivalence partitioning method will treat it as a valid percentage.

Example 2:
Let us consider the example of an online shopping site. On this site, each product has a specific product ID and product name. We can search for a product either by the product name or by the product ID. Here, we consider a search field that accepts only a valid product ID or product name.
Let us consider a set of products with their product IDs, where the user wants to search for Mobiles. Below is a table of some products with their product IDs.

Product        Product ID
Mobiles        45
Laptops        54
Pen Drives     67
Keyboard       76
Headphones     34

If the product ID entered by the user is invalid, the application will redirect the customer or user to an error page. If the product ID entered by the user is valid, e.g. 45 for Mobiles, the equivalence partitioning method will treat it as a valid product ID.
Example-3:
Let us consider an example from a software application. A function of the application accepts only a particular number of digits, neither greater nor less than that particular number.
Consider an OTP field that accepts only a 6-digit number; anything greater or less than six digits will not be accepted, and the application will redirect the customer or user to an error page. If the OTP entered by the user is fewer or more than six digits, the equivalence partitioning method will treat it as an invalid OTP. If the OTP entered is exactly six digits, the equivalence partitioning method will treat it as a valid OTP.
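A minimal sketch of this length-based partitioning, assuming a hypothetical is_valid_otp check written only for this example:

# Equivalence partitioning sketch for a 6-digit OTP field.
# is_valid_otp is a hypothetical validator used only for illustration.

def is_valid_otp(otp):
    return otp.isdigit() and len(otp) == 6

# One representative value from each equivalence class.
cases = {
    "invalid: fewer than 6 digits": ("1234", False),
    "valid: exactly 6 digits":      ("123456", True),
    "invalid: more than 6 digits":  ("1234567", False),
}

for name, (otp, expected) in cases.items():
    status = "PASS" if is_valid_otp(otp) == expected else "FAIL"
    print(f"{status}  {name}  input={otp!r}")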
2.3 Cause Effect Graphing

Cause-effect graphing is a technique in which a graph is used to represent combinations of input conditions. The graph is then converted to a decision table to obtain the test cases. The cause-effect graphing technique is used because boundary value analysis and equivalence class partitioning do not consider combinations of input conditions; since there may be some critical behavior to be tested when certain combinations of input conditions occur, cause-effect graphing is used.
Steps used in deriving test cases using this technique are:
1. Division of the specification:
Since it is difficult to work with cause-effect graphs of large specifications, as they are complex, the specification is divided into small workable pieces which are then converted into cause-effect graphs separately.
2. Identification of causes and effects:
This involves identifying the causes (distinct input conditions) and effects (output conditions) in the specification.
3. Transforming the specification into a cause-effect graph:
The causes and effects are linked together using Boolean expressions to obtain a cause-effect graph. Constraints are also added between causes and effects where applicable.
4. Conversion into a decision table:
The cause-effect graph is then converted into a limited-entry decision table (see Decision Table Testing earlier in this unit).
5. Deriving test cases:
Each column of the decision table is converted into a test case (a small sketch of this conversion appears at the end of this section).
Basic Notations used in Cause-effect graph:
Here c represents cause and e represents effect.
The following notations are always used between a cause and an effect:
1. Identity Function: if c is 1, then e is 1; else e is 0.
2. NOT Function: if c is 1, then e is 0; else e is 1.
3. OR Function: if c1 or c2 or c3 is 1, then e is 1; else e is 0.
To represent some impossible combinations of causes or impossible combinations of
effects, constraints are used. The following constraints are used in cause-effect
graphs:
1. Exclusive constraint or E-constraint: This constraint exists between causes. It states that either c1 or c2 can be 1, i.e., c1 and c2 cannot be 1 simultaneously.
2. Inclusive constraint or I-constraint: This constraint exists between causes. It states that at least one of c1, c2 and c3 must always be 1, i.e., c1, c2 and c3 cannot be 0 simultaneously.
3. One and Only One constraint or O-constraint: This constraint exists between causes. It states that one and only one of c1 and c2 must be 1.
4. Requires constraint or R-constraint: This constraint exists between causes. It states that for c1 to be 1, c2 must be 1, i.e., it is impossible for c1 to be 1 and c2 to be 0.
5. Mask constraint or M-constraint: This constraint exists between effects. It states that if effect e1 is 1, then effect e2 is forced to be 0.
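As a rough illustration of steps 4 and 5 above, the sketch below takes a tiny hypothetical graph with two causes joined by OR and one effect, enumerates the cause combinations, and prints each column as a prospective test case; the causes and effect named here are invented only for this example:

# Cause-effect graphing sketch: two causes joined by OR driving one effect.
# The causes/effect below are hypothetical, invented only for illustration.
#   c1 = "valid user id entered", c2 = "valid email entered"
#   e1 = "account lookup is performed"  (e1 = c1 OR c2)

from itertools import product

def effect_e1(c1, c2):
    return int(c1 or c2)   # OR function from the notation above

# Each combination of causes becomes one column of the decision table,
# and each column becomes one test case.
print("c1 c2 | e1")
for c1, c2 in product([1, 0], repeat=2):
    print(f" {c1}  {c2} |  {effect_e1(c1, c2)}")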

2.4 Error Guessing in Software Testing

In software testing, error guessing is a method in which experience and skill play an important role: possible bugs and defects are guessed in areas where formal testing might not reach them. That is why it is also called experience-based testing, and it has no specific method of testing. Although it is not a formal way of performing testing, it is still important because it sometimes resolves issues that would otherwise remain open.

 Determine Which Areas Are Ambiguous: Error guessing is a technique testers use to find unclear or poorly specified areas of the software. By making informed guesses, testers can highlight areas of the application whose requirements need more explanation or elaboration.
 Experience-Based Testing: Testers use their expertise and understanding of typical software pitfalls to anticipate possible flaws. This method works especially well for complicated or inadequately documented systems, where experience is essential for identifying flaws.
 Testing Based on Risk: By enabling testers to concentrate on software
components that pose a high risk, it is consistent with a risk-based testing
methodology. Based on their assessment of regions where flaws have a higher
chance of affecting the system, testers prioritize their testing efforts.
 Early Error Identification: Early defect detection during testing is made
possible by error guessing. By addressing faults at an early stage, testers can
contribute to the overall quality of the product by identifying potential issues
before the execution of formal test cases.
Factors used in error guessing:
1. Lessons learned from past releases.
2. Experience of testers.
3. Historical learning.
4. Test execution report.
5. Earlier defects.
6. Production tickets.
7. Normal testing rules.
8. Application UI.
9. Previous test results.
Error guessing is one of the popular techniques of testing. Even though it is not a precise approach to performing testing, it makes the testing work simpler and saves a lot of time, and when it is combined with other testing techniques we get better results. For this kind of testing, it is essential to have skilled and experienced testers.

2.5 COMPATIBILITY TESTING IN SOFTWARE TESTING

Compatibility testing in software testing ensures that the software or application functions as expected across different environments, devices, operating systems, browsers, and network configurations. Here's an overview of compatibility testing:

Types of Compatibility Testing:

 Hardware Compatibility: Ensures the software runs on various hardware configurations like different processors, memory capacities, and peripherals.
 Operating System Compatibility: Tests the software on different operating
systems such as Windows, macOS, Linux, iOS, and Android.
 Browser Compatibility: Ensures the software works correctly on different
web browsers like Chrome, Firefox, Safari, Edge, and Internet Explorer.
 Mobile Device Compatibility: Tests the software on different mobile devices
like smartphones and tablets running on various platforms such as iOS and
Android.
 Network Compatibility: Checks how the software performs under different
network conditions like varying bandwidths and latency.
 Backward Compatibility: Ensures that newer versions of the software can
still interact with older versions or data formats.
 Localization and Internationalization Compatibility: Verifies if the
software functions correctly in different languages, cultures, and regions.

Key Activities in Compatibility Testing:

 Identifying Target Environments: Determine the specific platforms, devices, browsers, and configurations that need to be tested.
 Creating Test Environment: Set up a test environment that mirrors the real-
world conditions as closely as possible.
 Test Case Design: Design test cases covering different combinations of environments, configurations, and devices (a small sketch of such a matrix follows this list).
 Execution and Validation: Execute the test cases and validate the behavior of
the software across various environments.
 Bug Reporting and Tracking: Document and report any compatibility issues
found during testing, track them to resolution, and verify fixes.
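As a small sketch of the test case design activity, a compatibility matrix can be generated by combining the target environments; the platforms, browsers, and screen widths listed below are assumptions chosen only to illustrate the idea:

# Compatibility test matrix sketch: combine target environments into test cases.
# The operating systems, browsers, and screen widths are illustrative assumptions.

from itertools import product

operating_systems = ["Windows 11", "macOS", "Ubuntu Linux"]
browsers = ["Chrome", "Firefox", "Edge"]
screen_widths = [1920, 1366, 375]   # desktop, laptop, mobile viewport

matrix = list(product(operating_systems, browsers, screen_widths))
print(f"{len(matrix)} environment combinations to cover:")
for os_name, browser, width in matrix:
    print(f"- {os_name} / {browser} / {width}px viewport")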

2.6 DOMAIN TESTING USING WHITEBOX APPROACH TO TEST DESIGN


Domain testing, also known as data domain testing, focuses on testing the
functionality of a software application based on the input data domain. In contrast to
black box testing, white box testing involves examining the internal structure and
logic of the software. Here's how domain testing can be approached using the white
box testing technique:

Identify Input Domains: Begin by identifying the input domains of the software.
These are the sets of valid and invalid inputs that the software can accept.

Partition the Input Domains: Divide the input domains into meaningful partitions.
Each partition represents a subset of inputs that should exhibit similar behavior. For
example, if an input field accepts integers, partitions could include positive integers,
negative integers, and zero.
Exercise Each Partition: Design test cases to exercise each partition of the input
domain. This ensures that the software behaves correctly for different categories of
input data. For example, if testing a function that calculates the square root of a
number, test cases might include valid inputs such as positive numbers and invalid
inputs such as negative numbers.
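Following the square-root example, a minimal sketch with one test case per partition, assuming a hypothetical safe_sqrt function that rejects negative inputs:

# Domain testing sketch: one test case per partition of a square-root input domain.
# safe_sqrt is a hypothetical function written only for this illustration.
import math

def safe_sqrt(x):
    if x < 0:
        raise ValueError("negative input is outside the valid domain")
    return math.sqrt(x)

# Partitions: positive numbers, zero, negative numbers.
assert safe_sqrt(9) == 3.0          # valid partition: positive
assert safe_sqrt(0) == 0.0          # valid partition: zero
try:
    safe_sqrt(-4)                   # invalid partition: negative
    print("FAIL  negative input was accepted")
except ValueError:
    print("PASS  all partitions behaved as expected")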

Boundary Value Analysis (BVA): Apply boundary value analysis to the partitions
to identify critical boundary values. Test cases should include inputs at the boundaries
of each partition as well as values immediately above and below these boundaries. For
example, if testing a function that accepts integers from 1 to 100, test cases might
include inputs such as 1, 50, 100, and 101.

Examine Code Paths: Analyze the internal structure of the software to identify code
paths that correspond to different input domains. Ensure that each code path is
exercised by at least one test case.

Use Control Flow Testing: Use control flow testing techniques, such as statement
coverage or branch coverage, to ensure that each statement and branch of the code is
executed during testing. This helps uncover any logic errors or inconsistencies in the
code.
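A small sketch of branch coverage, assuming a hypothetical classify function: the two test inputs together execute both the true and false outcomes of each decision.

# Control flow testing sketch: choosing inputs so every branch is executed.
# classify is a hypothetical function written only for this illustration.

def classify(n):
    if n < 0:                 # branch A (true) / branch B (false)
        category = "negative"
    else:
        category = "non-negative"
    if n % 2 == 0:            # branch C (true) / branch D (false)
        parity = "even"
    else:
        parity = "odd"
    return category, parity

# Two inputs are enough to take every branch at least once:
# -3 exercises branches A and D, 4 exercises branches B and C.
assert classify(-3) == ("negative", "odd")
assert classify(4) == ("non-negative", "even")
print("branch coverage achieved with 2 test cases")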

Apply Data Flow Analysis: Analyze how data flows through the software and design
test cases to ensure that data is processed correctly at each step. This involves tracing
the flow of data from input to output and verifying that it is transformed and
manipulated as expected.

Integration Testing: Perform integration testing to verify that different components of the software interact correctly with each other. This includes testing how data is exchanged between modules and ensuring that interfaces are implemented correctly.
