The Testing Techniques
The testing types described above are performed using two widely used testing techniques.
Black-Box testing technique:
This technique is used for testing based solely on analysis of requirements
(specification, user documentation, etc.). Also known as functional testing.
White-Box testing technique:
This technique is used for testing based on analysis of internal logic (design,
code, etc.), though expected results still come from the requirements. Also known as Structural
testing.
Black Box and White Box testing
Introduction
Test Design refers to understanding the sources of test cases, test coverage, how to
develop and document test cases, and how to build and maintain test data. There are 2
primary methods by which tests can be designed and they are:
BLACK BOX & WHITE BOX
Black-box test design treats the system as a literal "black-box", so it doesn't explicitly
use knowledge of the internal structure. It is usually described as focusing on testing functional
requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
White-box test design allows one to peek inside the "box", and it focuses specifically on
using internal knowledge of the software to guide the selection of test data. It is used to detect
errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer the terms
"behavioral" and "structural". Behavioral test design is slightly different from black-box test design
because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice,
it hasn't proven useful to use a single test design method. One has to use a mixture of different
methods so as not to be hindered by the limitations of any particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether!
Black box testing
Black Box Testing is testing without knowledge of the internal workings of the item being
tested. For example, when black box testing is applied to software engineering, the tester would
only know the "legal" inputs and what the expected outputs should be, but not how the program
actually arrives at those outputs. It is because of this that black box testing can be considered
testing with respect to the specifications, no other knowledge of the program is necessary. For this
reason, the tester and the programmer can be independent of one another, avoiding programmer
bias toward his own work. For this testing, test groups are often used.
Though centered around the knowledge of user requirements, black box tests do not necessarily
involve the participation of users. Among the most important black box tests that do not involve
users are functionality testing, volume tests, stress tests, recovery testing, and benchmarks.
Additionally, there are two types of black box test that involve users, i.e. field and laboratory tests.
In the following, the most important aspects of these black box tests are described briefly.
ACCEPTANCE TESTING
Testing to verify a product meets customer-specified requirements. A customer usually does this
type of testing on a product that is developed externally.
BLACK BOX TESTING
Testing without knowledge of the internal workings of the item being tested. Tests are usually
functional.
COMPATIBILITY TESTING
Testing to ensure compatibility of an application or Web site with different browsers, OSs, and
hardware platforms. Compatibility testing can be performed manually or can be driven by an
automated functional or regression test suite.
CONFORMANCE TESTING
Verifying implementation conformance to industry standards. Producing tests for the behavior of
an implementation to be sure it provides the portability, interoperability, and/or compatibility a
standard defines.
FUNCTIONAL TESTING
Validating an application or Web site conforms to its specifications and correctly performs all its
required functions. This entails a series of tests which perform a feature by feature validation of
behavior, using a wide range of normal and erroneous input data. This can involve testing of the
product's user interface, APIs, database management, security, installation, networking, etc.
Testing can be performed on an automated or manual basis using black box or white box
methodologies.
INTEGRATION TESTING
Testing in which modules are combined and tested as a group. Modules are typically code modules,
individual applications, client and server applications on a network, etc. Integration Testing follows
unit testing and precedes system testing.
LOAD TESTING
Load testing is a generic term covering Performance Testing and Stress Testing.
PERFORMANCE TESTING
Performance testing can be applied to understand your application or WWW site's scalability, or to
benchmark the performance in an environment of third party products such as servers and
middleware for potential purchase. This sort of testing is particularly useful to identify
performance bottlenecks in high use applications. Performance testing generally involves an
automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional
load conditions.
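The idea of driving the same operation at different load levels can be sketched with Python's timeit module; the lookup function and the data sizes below are purely illustrative, not a real benchmark harness:

```python
import timeit

# Hypothetical operation whose scalability we want to understand.
def lookup(data, key):
    return key in data

# Measure the same operation under normal, peak, and exceptional data
# sizes, as an automated performance suite might.
for size in (1_000, 100_000, 1_000_000):
    data = set(range(size))
    elapsed = timeit.timeit(lambda: lookup(data, size - 1), number=10_000)
    print("n=%8d  10k lookups: %.4fs" % (size, elapsed))
```

Comparing the timings across sizes is what reveals (or rules out) a scalability bottleneck.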
REGRESSION TESTING
Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of
each new release of a product or Web site. Such testing ensures reported product defects have been
corrected for each new release and that no new quality problems were introduced in the
maintenance process. Though regression testing can be performed manually an automated test
suite is often used to reduce the time and resources needed to perform the required testing.
SMOKE TESTING
A quick-and-dirty test that the major functions of a piece of software work without bothering with
finer details. Originated in the hardware testing practice of turning on a new piece of hardware for
the first time and considering it a success if it does not catch on fire.
STRESS TESTING
Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements to determine the load under which it fails and how. A graceful degradation under
load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed
using the same process as Performance Testing but employing a very high level of simulated load.
SYSTEM TESTING
Testing conducted on a complete, integrated system to evaluate the system's compliance with its
specified requirements. System testing falls within the scope of black box testing, and as such,
should require no knowledge of the inner design of the code or logic.
UNIT TESTING
Functional and reliability testing in an Engineering environment. Producing tests for the behavior
of components of a product to ensure their correct behavior prior to system integration.
WHITE BOX TESTING
Testing based on an analysis of the internal workings and structure of a piece of software. Includes
techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass
Box Testing.
1. Black Box Testing (also called Functional Testing) is testing that ignores the internal
mechanism of a system or component and focuses solely on the outputs generated in response to
selected inputs and execution conditions.
2. White Box Testing (also called Structural Testing and Glass Box Testing) is testing that takes
into account the internal mechanism of a system or component.
With black box testing, the software tester does not (or should not) have access to the
source code itself. The code is considered to be a “Big Black Box” to the tester who can’t see inside
the box. The tester knows only that information can be input into the black box, and the black
box will send something back out. Based on the requirements knowledge, the tester knows what to
expect the black box to send out and tests to make sure the black box sends out what it’s supposed
to send out.
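This input/output view can be sketched as follows; the apply_discount function and its discount rule are hypothetical stand-ins for some specified behavior:

```python
import unittest

# Hypothetical function under test: the spec says orders of 100 or more
# receive a 10% discount. The black box tester sees only this contract.
def apply_discount(total):
    return total * 0.9 if total >= 100 else total

class BlackBoxDiscountTest(unittest.TestCase):
    # Each test checks only that a given input yields the specified output;
    # no assumption is made about how apply_discount computes it.
    def test_below_threshold_no_discount(self):
        self.assertEqual(apply_discount(50), 50)

    def test_at_threshold_gets_discount(self):
        self.assertEqual(apply_discount(100), 90.0)

if __name__ == "__main__":
    unittest.main()
```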
Alternatively, white box testing focuses on the internal structure of the software code. The
white box tester (most often the developer of the code) knows what the code looks like and writes
test cases by executing methods with certain parameters. In the language of V&V, black box testing
is often used for validation (are we building the right software?) and white box testing is often used
for verification (are we building the software right?).
1. Unit Testing
Unit testing is the testing of individual hardware or software units or groups of related units. Using
white box testing techniques, testers (usually the developers creating the code implementation)
verify that the code does what it is intended to do at a very low structural level. For example, the
tester will write some test code that will call a method with certain parameters and will ensure that
the return value of this method is as expected. Looking at the code itself, the tester might notice that
there is a branch (an if-then) and might write a second test case to go down the path not executed
by the first test case. When available, the tester will examine the low-level design of the code;
otherwise, the tester will examine the structure of the code by looking at the code itself. Unit testing
is generally done within a class or a component.
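The branch-driven test case selection described above might look like this in Python's unittest; the classify method and its threshold are hypothetical:

```python
import unittest

# Hypothetical method under test: contains one if-then branch.
def classify(temperature):
    if temperature > 30:
        return "hot"
    return "mild"

class ClassifyUnitTest(unittest.TestCase):
    def test_hot_branch(self):
        # Drives execution down the if-branch.
        self.assertEqual(classify(35), "hot")

    def test_mild_branch(self):
        # Second test case, added after inspecting the code, to cover
        # the path the first test case did not execute.
        self.assertEqual(classify(20), "mild")

if __name__ == "__main__":
    unittest.main()
```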
2. Integration testing
Integration test is testing in which software components, hardware components, or both are
combined and tested to evaluate the interaction between them. Using both black and white box
testing techniques, the tester (still usually the software developer) verifies that units work together
when they are integrated into a larger code base. Just because the components work individually,
that doesn’t mean that they all work together when assembled or integrated. For example, data
might get lost across an interface, messages might not get passed properly, or interfaces might not
be implemented as specified. To plan these integration test cases, testers look at high- and low-level
design documents.
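A minimal sketch of such an integration test, using two hypothetical units that exchange data across an interface:

```python
import unittest

# Two hypothetical units that were already unit-tested in isolation.
def parse_price(text):          # unit A: "12.50" -> 1250 (cents)
    dollars, cents = text.split(".")
    return int(dollars) * 100 + int(cents)

def format_receipt(cents):      # unit B: 1250 -> "$12.50"
    return "$%d.%02d" % (cents // 100, cents % 100)

class PriceIntegrationTest(unittest.TestCase):
    def test_round_trip_across_interface(self):
        # Verifies that data is not lost or mangled when unit A's
        # output is fed into unit B across their shared interface.
        self.assertEqual(format_receipt(parse_price("12.50")), "$12.50")

if __name__ == "__main__":
    unittest.main()
```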
3. Functional and system testing
Using black box testing techniques, testers examine the high-level design and the customer
requirements specification to plan the test cases to ensure the code does what it is intended to do.
Functional testing involves ensuring that the functionality specified in the requirement
specification works. System testing involves putting the new program in many different
environments to ensure the program works in typical customer environments with various
versions and types of operating systems and/or applications. System testing is testing conducted on
a complete, integrated system to evaluate the system's compliance with its specified requirements.
Because system test is done with a full system implementation and environment, several classes of
testing can be done that can examine non-functional properties of the system. It is best when
function and system testing are done from an unbiased, independent perspective (e.g., not by the
programmer).
Stress Testing – testing conducted to evaluate a system or component at or beyond the limits of its
specification or requirement [11]. For example, if the team is developing software to run cash
registers, a non-functional requirement might state that the server can handle up to 30 cash
registers looking up prices simultaneously. Stress testing might occur in a room of 30 actual cash
registers running automated test transactions repeatedly for 12 hours. There also might be a few
more cash registers in the test lab to see if the system can exceed its stated requirements.
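A stress test of this kind can be sketched by simulating concurrent clients with threads; the price-lookup service and the limits below are illustrative, not the cash-register system itself:

```python
import threading

# Hypothetical price-lookup service; the stated requirement is that it
# must handle 30 simultaneous cash registers.
PRICES = {"sku-1": 199, "sku-2": 499}
errors = []

def register_session(n_lookups=1000):
    # One simulated cash register issuing repeated price lookups.
    for _ in range(n_lookups):
        if PRICES.get("sku-1") != 199:
            errors.append("wrong price")

# Run 35 registers at once -- slightly beyond the stated limit of 30 --
# to see whether and how the system degrades.
threads = [threading.Thread(target=register_session) for _ in range(35)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("errors:", len(errors))
```

In a real stress test the "service" would be the deployed system and the load would run for hours, but the structure (many concurrent clients, a check for wrong or lost responses) is the same.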
Usability Testing – testing conducted to evaluate the extent to which a user can learn to operate,
prepare inputs for, and interpret outputs of a system or component. While stress testing can be
and often is automated, usability testing is done by human-computer interaction specialists who
observe humans interacting with the system.
4. Acceptance testing
After functional and system testing, the product is delivered to a customer and the customer runs
black box acceptance tests based on their expectations of the functionality. Acceptance testing is
formal testing conducted to determine whether or not a system satisfies its acceptance criteria (the
criteria the system must satisfy to be accepted by a customer) and to enable the customer to
determine whether or not to accept the system. These tests are often pre-specified by the customer
and given to the test team to run before attempting to deliver the product. The customer reserves
the right to refuse delivery of the software if the acceptance test cases do not pass. However,
customers are not trained software testers. Customers generally do not specify a “complete” set of
acceptance test cases. Their test cases are no substitute for creating your own set of
functional/system test cases. The customer is probably very good at specifying at most one good
test case for each requirement. As you will learn below, many more tests are needed. Whenever
possible, we should run customer acceptance test cases ourselves so that we can increase our
confidence that they will work at the customer location.
5. Regression testing
Throughout all testing cycles, regression test cases are run. Regression testing is selective retesting
of a system or component to verify that modifications have not caused unintended effects and that
the system or component still complies with its specified requirements [11]. Regression tests are a
subset of the original set of test cases. These test cases are re-run often, after any significant
changes (bug fixes or enhancements) are made to the code. The purpose of running the regression
test case is to make a “spot check” to examine whether the new code works properly and has not
damaged any previously-working functionality by propagating unintended side effects. Most often,
it is impractical to re-run all the test cases when changes are made. Since regression tests are run
throughout the development cycle, there can be white box regression tests at the unit and
integration levels and black box tests at the integration, function, system, and acceptance test levels.
6. Beta testing
When an advanced partial or full version of a software package is available, the development
organization can offer it free to one or more (and sometimes thousands of) potential users, or beta
testers. These users install the software and use it as they wish, with the understanding that they
will report any errors revealed during usage back to the development organization. These users are
usually chosen because they are experienced users of prior versions or competitive products.
Unit Testing
In computer programming, unit testing is a procedure used to validate that individual units of
source code are working properly. A unit is the smallest testable part of an application. In
procedural programming a unit may be an individual program, function, procedure, etc., while in
object-oriented programming, the smallest unit is a method, which may belong to a base/super
class, abstract class or derived/child class.
Ideally, each test case is independent from the others; mock objects and test harnesses can be used
to assist testing a module in isolation. Unit testing is typically done by developers and not by
Software testers or end-users.
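The use of mock objects to isolate a unit can be sketched with Python's unittest.mock; the rate-service dependency here is hypothetical:

```python
import unittest
from unittest.mock import Mock

# Hypothetical unit under test: depends on an external rate service.
def price_in_eur(usd, rate_service):
    return usd * rate_service.usd_to_eur()

class PriceUnitTest(unittest.TestCase):
    def test_conversion_in_isolation(self):
        # A mock stands in for the real rate service, so the unit is
        # exercised without any network or external dependency.
        service = Mock()
        service.usd_to_eur.return_value = 0.5
        self.assertEqual(price_in_eur(10, service), 5.0)
        service.usd_to_eur.assert_called_once()

if __name__ == "__main__":
    unittest.main()
```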
Testing methods
Software testing methods are traditionally divided into white- and black-box testing. These two
approaches are used to describe the point of view that a test engineer takes when designing test
cases.
White box testing is when the tester has access to the internal data structures and algorithms,
including the code that implements them. White box testing methods include:
API testing (application programming interface) - testing of the application using public and private
APIs
Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can
create tests to cause all statements in the program to be executed at least once)
Fault injection methods - improving the coverage of a test by introducing faults to test code paths
Test coverage
White box testing methods can also be used to evaluate the completeness of a test suite that was
created with black box testing methods. This allows the software team to examine parts of a system
that are rarely tested and ensures that the most important function points have been tested.[21]
One common measure is statement coverage, which reports the number of lines executed by the tests.
Black box testing treats the software as a "black box"—without any knowledge of internal
implementation. Black box testing methods include: equivalence partitioning, boundary value
analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing
and specification-based testing.
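Two of these methods, equivalence partitioning and boundary value analysis, can be sketched against a hypothetical validation rule:

```python
# Hypothetical rule under test: ages 18..65 inclusive are valid.
def is_valid_age(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition
# (below the range, inside the range, above the range).
assert is_valid_age(10) is False
assert is_valid_age(40) is True
assert is_valid_age(70) is False

# Boundary value analysis: test at and on either side of each boundary,
# where off-by-one defects typically hide.
for age, expected in [(17, False), (18, True), (19, True),
                      (64, True), (65, True), (66, False)]:
    assert is_valid_age(age) is expected
print("all partitions and boundaries behave as specified")
```

Both methods are purely black box: test values are chosen from the stated rule, not from the code.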
Advantages and disadvantages: The black box tester has no "bonds" with the code, and a tester's
perception is very simple: the code must have bugs. Using the principle, "Ask and you shall receive,"
black box testers find bugs where programmers do not. On the other hand, black box testing has
been said to be "like a walk in a dark labyrinth without a flashlight," because the tester doesn't
know how the software being tested was actually constructed. As a result, there are situations when
(1) a tester writes many test cases to check something that could have been tested by only one test
case, and/or (2) some parts of the back-end are not tested at all.
Therefore, black box testing has the advantage of "an unaffiliated opinion", on the one hand, and the
disadvantage of "blind exploring", on the other.[24]
Grey box testing (American spelling: gray box testing) involves having knowledge of internal data
structures and algorithms for purposes of designing the test cases, but testing at the user, or black-
box level. Manipulating input data and formatting output do not qualify as grey box, because the
input and output are clearly outside of the "black-box" that we are calling the system under test.
This distinction is particularly important when conducting integration testing between two
modules of code written by two different developers, where only the interfaces are exposed for test.
However, modifying a data repository does qualify as grey box, as the user would not normally be
able to change the data outside of the system under test. Grey box testing may also include reverse
engineering to determine, for instance, boundary values or error messages.
Testing levels
Tests are frequently grouped by where they are added in the software development process, or by
the level of specificity of the test.
Unit testing
Unit testing refers to tests that verify the functionality of a specific section of code, usually at the
function level. In an object-oriented environment, this is usually at the class level, and the minimal
unit tests include the constructors and destructors.[25]
These types of tests are usually written by developers as they work on code (white-box style), to
ensure that the specific function is working as expected. One function might have multiple tests, to
catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality
of a piece of software, but rather is used to assure that the building blocks the software uses work
independently of each other.
Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces between
components against a software design. Software components may be integrated in an iterative way
or all together ("big bang"). Normally the former is considered a better practice since it allows
interface issues to be localised more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated
components (modules). Progressively larger groups of tested software components corresponding
to elements of the architectural design are integrated and tested until the software works as a
system.[26]
System testing
System testing tests a completely integrated system to verify that it meets its requirements.[27]
System integration testing verifies that a system is integrated to any external or third party systems
defined in the system requirements.
Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it
seeks to uncover software regressions, or old bugs that have come back. Such regressions occur
whenever software functionality that was previously working correctly stops working as intended.
Typically, regressions occur as an unintended consequence of program changes, when the newly
developed part of the software collides with the previously existing code. Common methods of
regression testing include re-running previously run tests and checking whether previously fixed
faults have re-emerged. The depth of testing depends on the phase in the release process and the
risk of the added features. It can range from complete, for changes added late in the release or
deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are
early in the release or deemed to be of low risk.
Acceptance testing
A smoke test is used as an acceptance test prior to introducing a new build to the main testing
process, i.e. before integration or regression.
Acceptance testing performed by the customer, often in their lab environment on their own
hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part
of the hand-off process between any two phases of development.
Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an
independent test team at the developers' site, before the software goes to beta testing.
Beta testing
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are
released to a limited audience outside of the programming team. The software is released to groups
of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta
versions are made available to the open public to increase the feedback field to a maximal number
of future users.
Non-functional testing
Special methods exist to test non-functional aspects of software. In contrast to functional testing,
which establishes the correct operation of the software (correct in that it matches the expected
behavior defined in the design requirements), non-functional testing verifies that the software
functions properly even when it receives invalid or unexpected inputs. Software fault injection, in
the form of fuzzing, is an example of non-functional testing. Non-functional testing, especially for
software, is designed to establish whether the device under test can tolerate invalid or unexpected
inputs, thereby establishing the robustness of input validation routines as well as error-handling
routines. Various commercial non-functional testing tools are linked from the software fault
injection page; there are also numerous open-source and free software tools available that perform
non-functional testing.
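As a small illustration of fuzzing, the sketch below feeds thousands of random strings to a parser and checks that it always fails gracefully; the parse_pair function is hypothetical:

```python
import random

# Hypothetical parser under test; a robust implementation should reject
# malformed input with a controlled result, never crash.
def parse_pair(text):
    try:
        a, b = text.split(",")
        return int(a), int(b)
    except ValueError:
        return None   # controlled rejection of invalid input

# Fuzzing: generate many random strings and verify the parser either
# returns a result or rejects the input gracefully.
random.seed(0)
alphabet = "0123456789,-abc \t"
for _ in range(10_000):
    s = "".join(random.choice(alphabet) for _ in range(random.randint(0, 12)))
    result = parse_pair(s)
    assert result is None or isinstance(result, tuple)
print("parser survived 10,000 fuzz inputs")
```

Real fuzzers mutate inputs more cleverly and watch for crashes and hangs, but the principle of bombarding the input-validation and error-handling routines is the same.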
Performance testing is executed to determine how fast a system or sub-system performs under a
particular workload. It can also serve to validate and verify other quality attributes of the system,
such as scalability, reliability and resource usage. Load testing is primarily concerned with testing
that the system can continue to operate under a specific load, whether that be large quantities of
data or a large number of users. This is generally referred to as software scalability. The related
load testing activity, when performed as a non-functional activity, is often referred to as endurance testing.
Volume testing is a way to test functionality. Stress testing is a way to test reliability. Load testing is
a way to test performance. There is little agreement on what the specific goals of load testing are.
The terms load testing, performance testing, reliability testing, and volume testing, are often used
interchangeably.
Stability testing
Stability testing checks whether the software can continuously function well over an acceptable
period of time. This activity of non-functional software testing is often referred to as load (or
endurance) testing.
Usability testing
Usability testing is needed to check if the user interface is easy to use and understand.
Security testing
Security testing is essential for software that processes confidential data to prevent system
intrusion by hackers.
Internationalization and localization
Testing is needed to verify the internationalization and localization aspects of software, for which
a pseudolocalization method can be used. It verifies that the application still works, even after it
has been translated into a new language or adapted for a new culture (such as different currencies
or time zones).
Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail, in order to test its
robustness.