Black Box Testing
Black box testing ignores the internal mechanism of a program and focuses solely on
the outputs generated in response to selected inputs. It is because of this that black
box testing can be considered testing with respect to the specifications; no other
knowledge of the program is necessary. For this reason, the tester and the
programmer can be independent of one another, avoiding any bias the programmer
may have toward his or her own work.
So-called "functionality testing" is central to most testing exercises. Its primary
objective is to assess whether the program does what it is supposed to do, i.e.
whether it meets the user-specified requirements. There are different approaches to
functionality testing. One is to test each program feature or function in sequence.
The other is to test module by module, i.e. to test each function where it is first
called.
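As a minimal sketch of the first approach - exercising each feature in sequence - the
unittest example below tests the features of a hypothetical calculator one by one;
all names are invented for illustration:

    import unittest

    # Hypothetical application under test; these functions stand in
    # for real product features and are invented for illustration.
    def add(a, b):
        return a + b

    def divide(a, b):
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b

    class FeatureSequenceTests(unittest.TestCase):
        """Exercise each program feature in sequence."""

        def test_add_feature(self):
            self.assertEqual(add(2, 3), 5)

        def test_divide_feature(self):
            self.assertAlmostEqual(divide(7, 2), 3.5)

        def test_divide_rejects_zero(self):
            # The specification, not the code, says this must fail.
            with self.assertRaises(ZeroDivisionError):
                divide(1, 0)

    if __name__ == "__main__":
        unittest.main()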
Advantages:
• Tests are performed from the user's point of view, against the specification
rather than the implementation
• The tester needs no knowledge of the implementation or of the programming
language used
• The tester and the programmer are independent of each other
Disadvantages:
• Only a small number of possible inputs can actually be tested; to test every
possible input stream would take nearly forever
• Without clear and concise specifications, test cases are hard to design
• There may be unnecessary repetition of test inputs if the tester is not
informed of test cases the programmer has already tried
Most testing-related research has been directed toward glass box testing.
Black box testing attempts to derive sets of inputs that will fully exercise all the
functional requirements of a system. It is not an alternative to white box
testing. One systematic black box technique is cause-effect graphing, which
proceeds as follows:
• Causes (input conditions) and effects (actions) are listed for a module and an
identifier is assigned to each.
• A cause-effect graph is developed.
• The graph is converted to a decision table.
• The decision table rules are converted into test cases.
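As a hedged sketch of the final step, the decision table can be executed directly as
a test driver. The login rules below are hypothetical and serve only to illustrate
the technique:

    # Decision-table-driven test: each row pairs a combination of
    # causes (input conditions) with its expected effect (action).
    # The login rules here are hypothetical.
    def login(valid_user, valid_password):
        if valid_user and valid_password:
            return "grant access"
        if valid_user:
            return "prompt for password"
        return "reject"

    DECISION_TABLE = [
        # (valid_user, valid_password) -> expected effect
        ((True, True), "grant access"),
        ((True, False), "prompt for password"),
        ((False, True), "reject"),
        ((False, False), "reject"),
    ]

    for causes, expected in DECISION_TABLE:
        actual = login(*causes)
        assert actual == expected, f"{causes}: got {actual!r}, want {expected!r}"
    print("all decision-table rows passed")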
Upon receipt of the development builds, SQA team members walk through the GUI
and either update the existing hard copy of the product roadmaps or create new hard
copy. This is then passed on to the Tools engineer to automate for new builds and
regression testing. Defects are entered into the bug-tracking database for
investigation and resolution.
Features & Functions - SQA test engineers, working from the team's definition,
exercise the product features and functions accordingly. Defects in feature/function
capability are entered into the defect tracking system and are communicated to the
team. Features are expected to perform as specified, and their functionality should
be oriented toward ease of use and clarity of objective.
Tests are planned around new features, and regression tests are exercised to validate
that existing features and functions are enabled and performing in a manner
consistent with prior releases. SQA tests manually first, using the exploratory
testing method, and then plans more exhaustive testing and automation. Regression
tests are exercised which consist of running developed test cases against the
product to validate field input, boundary conditions and so on. Automated tests
developed for prior releases are also used for regression testing.
Documentation is reviewed and returned to the team prior to Beta. SQA not only
verifies technical accuracy, clarity and completeness; they also provide editorial
input on consistency, style and typographical errors.
Functionality Testing
The first step in functionality testing is to become familiar with the program itself,
and with the program's desired behavior. For this the tester should have a clear
understanding of the documentation, such as the program's functional specification
or user manual. Once a program's expected functionality has been defined, test cases
or test procedures can be created that exercise the program in order to compare
actual behavior against expected behavior. Testing the program's functionality then
involves the execution of any test cases that have been created. Certain portions of
a functionality testing effort can also be automated; whether automation is
worthwhile depends on several factors and should be discussed with a qualified
engineer.
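A minimal sketch of such a test case, assuming a hypothetical fahrenheit_to_celsius
unit whose expected outputs come from the functional specification:

    # Expected behavior comes from the specification; the converter
    # itself is a hypothetical unit under test.
    def fahrenheit_to_celsius(f):
        return (f - 32) * 5.0 / 9.0

    # (input, expected output) pairs taken from the specification.
    SPEC_CASES = [
        (32, 0.0),     # freezing point of water
        (212, 100.0),  # boiling point of water
        (-40, -40.0),  # the scales cross at -40
    ]

    for given, expected in SPEC_CASES:
        actual = fahrenheit_to_celsius(given)
        assert abs(actual - expected) < 1e-9, (
            f"{given}F: expected {expected}C, got {actual}C")
    print("actual behavior matches expected behavior for all cases")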
5. Check the ‘Open file’ feature (it must open the correct file extensions, and
incorrect file types should give error messages)
6. Check ‘Graph’ features
7. If there are logins, enter invalid login information for each field (automated
in the sketch after this list)
8. Check error messages for clarity and whether they come up when they are
supposed to
9. In the presence of a database, check that all connections through the
application are valid when accessing data (error messages like “could not
connect to database” should not appear)
10. Modify data files (like add extra special characters) to make sure the
application gives correct error messages
11. For administrative features, make sure only administrators of the application
may access the features
12. Check by adding duplicate records
13. Delete all records to check that doing so does not crash the application
14. Check for compatibility with MS Office applications (e.g. copy and paste)
15. Click all buttons to make sure all of them are functioning appropriately
16. Check the ‘save’ feature (it should not overwrite an existing file without
permission, should save to the correct directory, and must create the correct
extension)
17. Check options/settings
18. Check international units are converted correctly
19. Make sure there are no spelling errors
20. Check for valid date formats
21. Make sure windows are properly minimized, maximized and resized
22. Check whether keyboard shortcuts are working properly
23. Check that right mouse clicks show correct pop up menus
24. If hardware/software keys are present, check that the application works as
intended both with and without the keys
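Checklist item 7 is the kind of check that is straightforward to automate. The sketch
below assumes a hypothetical authenticate function and invented error messages:

    # Automated version of checklist item 7: invalid login data for
    # each field must be rejected with a clear error message.
    # `authenticate` and its messages are hypothetical stand-ins.
    def authenticate(username, password):
        if not username:
            return (False, "Username is required")
        if not password:
            return (False, "Password is required")
        if (username, password) != ("admin", "s3cret"):
            return (False, "Invalid username or password")
        return (True, "")

    INVALID_CASES = [
        ("", "s3cret"),      # missing username
        ("admin", ""),       # missing password
        ("admin", "wrong"),  # wrong password
    ]

    for user, pwd in INVALID_CASES:
        ok, message = authenticate(user, pwd)
        assert not ok, f"login unexpectedly succeeded for {user!r}"
        assert message, "a clear error message must be shown"
    print("all invalid-login cases rejected with messages")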
Compatibility Testing
Regression Testing
Regression testing is testing the module in which a bug was identified earlier, along
with the impacted areas, to ensure that the fix has not introduced any further
defects.
The purpose of regression testing is to ensure that previously detected and fixed
issues really are fixed, they do not reappear, and new issues are not introduced into
the program as a result of the changes made to fix the issues.
Regression testing is in general a black box testing strategy in which previously
written test cases that have exposed bugs are re-executed to check whether
previously fixed faults have re-emerged. All the tests that have exposed bugs are
collected into a test suite and are re-run whenever changes are made to the program
to fix any bug. But this is a tedious process, as after every compilation it is
difficult to go through the process of retesting all the test cases repeatedly. To
make this process simpler, regression testing is automated using testing tools.
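A hedged sketch of that idea with the standard unittest module: each previously
exposed bug gets a permanent test, and the whole suite is rerun after every change.
The bug numbers and the parse_quantity function are invented for illustration:

    import unittest

    # Hypothetical function that once contained bugs; each test below
    # pins a previously fixed defect so it cannot re-emerge unnoticed.
    def parse_quantity(text):
        text = text.strip()        # bug #101: leading spaces crashed parsing
        if not text:
            raise ValueError("empty quantity")  # bug #102: empty input misbehaved
        return int(text)

    class RegressionSuite(unittest.TestCase):
        def test_bug_101_leading_whitespace(self):
            self.assertEqual(parse_quantity("  42"), 42)

        def test_bug_102_empty_input_raises_cleanly(self):
            with self.assertRaises(ValueError):
                parse_quantity("")

    # Rerun the entire suite whenever the program changes.
    if __name__ == "__main__":
        unittest.main()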
Once an issue in the tracking database has been fixed, it is reassigned back to the
tester for final resolution. The tester can then either reopen the issue, if it has
not been satisfactorily addressed, or close the issue if it has, indeed, been fixed.
Performance Testing
The goal of performance testing is not to find bugs, but to eliminate bottlenecks and
establish a baseline for future regression testing.
For example, for a Web application, you need to know at least two things: the
expected load in terms of concurrent users or HTTP connections, and an acceptable
response time.
Load testing:
Load testing is usually defined as the process of exercising the system under test by
feeding it the largest tasks it can operate with. Load testing is sometimes called
volume testing, or longevity/endurance testing.
Examples of volume testing: editing a very large document in a word processor, or
sending a very large job to a printer.
Although performance testing and load testing can seem similar, their goals are
different. Performance testing uses load testing techniques and tools for
measurement and benchmarking purposes, and uses various load levels, whereas
load testing operates at a predefined load level, usually the highest load the
system can accept while still functioning properly.
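A minimal load-testing sketch using only the Python standard library; the URL and
user count are placeholders, and a real effort would use a dedicated load-testing
tool:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8000/"   # placeholder system under test
    CONCURRENT_USERS = 50            # the predefined load level

    def one_request(_):
        """Time a single request, simulating one concurrent user."""
        start = time.perf_counter()
        with urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            timings = sorted(pool.map(one_request, range(CONCURRENT_USERS)))
        print(f"median response: {timings[len(timings) // 2]:.3f}s")
        print(f"worst response:  {timings[-1]:.3f}s")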
Stress testing:
Stress testing is a form of testing used to determine the stability of a given
system or entity; it is designed to exercise the software in abnormal situations.
Stress testing attempts to find the limits at which the system will fail through
abnormal quantity or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources or
by taking resources away from it (in which case it is sometimes called negative
testing). The main purpose behind this madness is to make sure that the system fails
and recovers gracefully -- this quality is known as recoverability.
Stress testing does not set out simply to break the system; rather, it allows
observers to see how the system reacts to failure. Stress testing observes for the
following: Does the system save its state or does it crash suddenly? Does it just
hang and freeze or does it fail gracefully? Is it able to recover from the last good
state on restart?
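In miniature, that observation can be scripted: feed a component an abnormal
quantity of input and verify that it fails in a controlled, diagnosable way and then
recovers. The bounded queue below is a hypothetical stand-in for a real resource:

    # Miniature stress test against a hypothetical bounded resource:
    # overwhelm it, confirm it fails gracefully, then confirm recovery.
    class BoundedQueue:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []

        def push(self, item):
            if len(self.items) >= self.capacity:
                raise OverflowError("queue full")  # graceful, diagnosable failure
            self.items.append(item)

    queue = BoundedQueue(capacity=1000)
    failed_at = None
    for i in range(10_000):              # abnormal quantity of inputs
        try:
            queue.push(i)
        except OverflowError:
            failed_at = i
            break

    assert failed_at == 1000, "should fail exactly at its stated limit"
    queue.items.clear()                  # simulate recovery
    queue.push("next job")               # the system accepts work again
    print("failed at its limit and recovered gracefully")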
Web Testing
• Functionality
• Performance
• Usability
• Server side interface
• Client side compatibility
• Security
Functionality:
In testing the functionality of a web site, the following should be tested:
• Links
Internal links
External links
Mail links
Broken links (a link-checking sketch follows this list)
• Forms
Field validation
Functional chart
Error message for wrong input
Optional and mandatory fields
• Database
• Cookies
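A hedged sketch of automated link checking with the Python standard library; the
start page URL is a placeholder, and mail links are skipped because they need a
different kind of check:

    from html.parser import HTMLParser
    from urllib.error import URLError
    from urllib.parse import urljoin
    from urllib.request import urlopen

    START = "http://localhost:8000/"   # placeholder site under test

    class LinkCollector(HTMLParser):
        """Collect href targets from anchor tags on one page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    if __name__ == "__main__":
        with urlopen(START, timeout=10) as resp:
            page = resp.read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(page)
        for href in collector.links:
            if href.startswith("mailto:"):
                continue                 # mail links need a different check
            target = urljoin(START, href)
            try:
                urlopen(target, timeout=10).close()
            except URLError as err:      # HTTPError is a subclass of URLError
                print(f"BROKEN: {target} ({err})")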
Performance:
Performance testing can be applied to understand a web site's scalability, or to
benchmark its performance in an environment of third-party products, such as
servers and middleware, being considered for potential purchase.
• Connection speed:
• Load
• Stress
o Continuous load
o Performance of memory, CPU, file handling etc.
Usability:
Usability testing is the process by which the human-computer interaction
characteristics of a system are measured, and weaknesses are identified for
correction.
Usability can be defined as the degree to which a given piece of software assists the
person sitting at the keyboard to accomplish a task, as opposed to becoming an
additional impediment to such accomplishment. The broad goal of usable systems is
often assessed using several criteria:
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance
Security:
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection
Test Planning: Analyzing a project to determine the kinds of testing needed, the
kinds of people needed, the scope of testing (including what should and should not
be tested), the time available for testing activities, the initiation criteria for
testing, the completion criteria and the critical success factors of testing.
Test Tool Usage: Knowing which tools are most appropriate in a given testing
situation, how to apply the tools to solve testing problems effectively, how to
organize automated testing, and how to integrate test tools into an organization.
Test Execution: Performing various kinds of tests, such as unit testing, system
testing, UAT, stress testing and regression testing. This can also include how to
determine which conditions to test and how to evaluate whether the system under
test passes or fails. Test execution can often be dependent on your unique
environment and project needs, although basic testing principles can be adapted to
test most projects.
Risk analysis: Understanding the nature of risk, how to assess project and software
risks, how to use the results of a risk assessment to prioritize and plan testing, and
how to use risk analysis to prevent defects and project failure.
Test Measurement: Knowing what to measure during a test, how to use the
measurements to reach meaningful conclusions and how to use measurements to
improve the testing and development processes.
Design Validation
Data Validation
What types of data will require validation? What parts of the feature will use what
types of data? What are the data types that test cases will address? Etc. A small
validation sketch follows.
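A minimal sketch of such table-driven validation tests; the validate_age rule and
its limits are hypothetical:

    # Table-driven data validation tests; the rule under test
    # (validate_age) and its limits are hypothetical.
    def validate_age(value):
        if not isinstance(value, int) or isinstance(value, bool):
            raise TypeError("age must be an integer")
        if not 0 <= value <= 130:
            raise ValueError("age out of range")
        return value

    CASES = [
        (25, True),     # typical valid value
        (0, True),      # lower boundary
        (130, True),    # upper boundary
        (-1, False),    # just below the boundary
        (131, False),   # just above the boundary
        ("25", False),  # wrong data type
    ]

    for value, should_pass in CASES:
        try:
            validate_age(value)
            outcome = True
        except (TypeError, ValueError):
            outcome = False
        assert outcome == should_pass, f"unexpected result for {value!r}"
    print("all data validation cases behaved as expected")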
API Testing
What level of API testing will be performed? What is the justification for taking
this approach (or, if no API testing is being performed, for taking none)?
Content Testing
Is your area/feature/product content based? What is the nature of the content? What
strategies will be employed in your feature/area to address content related issues?
Low-Resource Testing
What resources does your feature use? Which are used most, and are most likely to
cause problems? What tools/methods will be used in testing to cover low resource
(memory, disk, etc.) issues?
Setup Testing
How is your feature affected by setup? What are the necessary requirements for a
successful setup of your feature? What is the testing approach that will be employed
to confirm valid setup of the feature?
Interoperability
How will this product interact with other products? What level of knowledge does it
need to have about other programs -- “good neighbor”, program cognizant, program
interaction, and fundamental system changes? What methods will be used to verify
these capabilities?
Integration Testing
Go through each area in the product and determine how it might interact with other
aspects of the project. Start with the ones that are obviously connected, but try
every area to some degree. There may be subtle connections you do not think about
until you start using the features together. The test cases created with this
approach may duplicate the modes and objects approaches, but there are some areas
which do not fit in those categories and might be missed if you do not check each
area.
Compatibility: Clients
Are there non-standard, but widely practiced, uses of your protocols that might
cause incompatibilities?
Compatibility: Servers
Beta Testing
What is the beta schedule? What is the distribution scale of the beta? What are the
entry criteria for beta? How does testing plan to utilize the beta for feedback on
this feature? What problems do you anticipate discovering in the beta? Who is
coordinating the beta, and how?
Environment/System – General
Are there issues regarding the environment, system, or platform that should get
special attention in the test plan? What are the run-time modes and options in the
environment that may cause differences in the feature? List the components of
critical concern here. Are there platform- or system-specific compliance issues that
must be maintained?
Configuration
Are there configuration issues regarding hardware and software in the environment
that may need special attention in the test plan? Some of the classical issues are
machine and BIOS types, printers, modems, video cards and drivers, special or
popular TSRs, memory managers, networks, etc. List those types of configurations
that will need special attention.
User Interface
List the items in the feature that explicitly require a user interface. Is the user
interface designed such that a user will be able to use the feature satisfactorily?
Which part of the user interface is most likely to have bugs? How will the interface
testing be approached?
Performance
How fast and how much can the feature do? Does it do enough fast enough? What
testing methodology will be used to determine this information? What criteria will
be used to indicate acceptable performance? If this is a modification of an existing
product, what are the current metrics? What are the expected major bottlenecks and
performance problem areas on this feature?
Scalability
Is the ability to scale and expand this feature a major requirement? What parts of
the feature are most likely to have scalability problems? What approach will testing
use to define the scalability issues in the feature?
Stress Testing
How does the feature do when pushed beyond its performance and capacity limits?
How is its recovery? What is its breakpoint? What is the user experience when this
occurs? What is the expected behavior when the client reaches stress levels? What
testing methodology will be used to determine this information? What area is
expected to have the most stress related problems?
Volume Testing
Volume testing differs from performance and stress testing insofar as it focuses
on doing volumes of work in realistic environments, durations, and configurations.
Run the software as an expected user would - with certain other components running,
or for so many hours, or with data sets of a certain size, or with a certain expected
number of repetitions.
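A tiny illustration of the difference: the volume run below uses a realistically
large data set and a realistic number of repetitions rather than overload levels.
The word_count routine is a hypothetical unit under test:

    # Volume-testing sketch: realistic volumes, not overload.
    def word_count(text):
        return len(text.split())

    LARGE_DOCUMENT = "lorem ipsum " * 500_000   # about one million words

    for _ in range(10):                         # expected daily repetitions
        assert word_count(LARGE_DOCUMENT) == 1_000_000
    print("volume run completed without failure or degradation")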
International Issues
Confirm localized functionality, that strings are localized, and that code pages are
mapped properly. Assure the program works properly on localized builds, and that
international settings in the program and environment do not break functionality.
How are localization and internationalization being done on this project? List those
parts of the feature that are most likely to be affected by localization. State the
methodology used to verify international sufficiency and localization.
Robustness
How stable is the code base? Does it break easily? Are there memory leaks? Are
there portions of code prone to crash, save failure, or data corruption? How good is
the program’s recovery when these problems occur? How is the user affected when
the program behaves incorrectly? What is the testing approach to find these problem
areas? What is the overall robustness goal and criteria?
Error Testing
How does the program handle error conditions? List the possible error conditions.
What testing methodology will be used to evoke and determine proper behavior for
error conditions? What feedback mechanism is being given to the user, and is it
sufficient? What criteria will be used to define sufficient error recovery?
Usability
What are the major usability issues on the feature? What is testing’s approach to
discover more problems? What sorts of usability tests and studies have been
performed, or will be performed? What is the usability goal and criteria for this
feature?
Accessibility
Is the feature designed in compliance with accessibility guidelines? Could a user with
special accessibility requirements still be able to utilize this feature? What are the
criteria for acceptance on accessibility issues on this feature? What is the testing
approach to discover problems and issues? Are there particular parts of the feature
that are more problematic than others?
User Scenarios
What real world user activities are you going to try to mimic? What classes of users
(i.e. secretaries, artists, writers, animators, construction workers, airline pilots,
shoemakers, etc.) are expected to use this program, and doing which activities? How
will you attempt to mimic these key scenarios? Are there special niche markets that
your product is aimed at (intentionally or unintentionally) where mimicking real
user scenarios is critical?
Boundaries and Limits
Are there particular boundaries and limits inherent in the feature or area that
deserve special mention here? What is the testing methodology to discover problems
handling these boundaries and limits?
Operational Issues
Backup
Identify all files representing data and machine state, and indicate how those will be
backed up. If it is imperative that service remains running, determine whether or not
it is possible to back up the data and still keep services or code running.
Recovery
If the program goes down, or must be shut down, are there steps and procedures
that will restore program state and get the program or service operational again? Are
there holes in this process that may leave a service or state deficient? Are there
holes that could cause loss of data? Mimic as many states of loss of service as are
likely to happen, and go through the process of successfully restoring service.
Archiving
Archival is different from backup. Backup is when data is saved in order to restore
service or program state. Archive is when data is saved for retrieval later. Most
archival and backup systems piggyback on each other's processes.
Is archival of data going to be considered a crucial operational issue on your feature?
If so, is it possible to archive the data without taking the service down? Is the data,
once archived, readily accessible?
Monitoring
Does the service have adequate monitoring messages to indicate status,
performance, or error conditions? When something goes wrong, are messages
sufficient for operational staff to know what to do to restore proper functionality?
Are there "heartbeat" counters that indicate whether or not the program or service is
working? Attempt to mimic the scenario of an operational staff trying to keep a
service up and running.
Upgrade
Does the customer likely have a previous version of your software, or some other
software? Will they be performing an upgrade? Can the upgrade take place without
interrupting service? Will anything be lost (functionality, state, data) in the upgrade?
Does it take unreasonably long to upgrade the service?
Migration
Is there data, script, code or other artifacts from previous versions that will need to
be migrated to a new version? Testing should create an example of installation with
an old version, and migrate that example to the new version, moving all data and
scripts into the new format.
List here all data files, formats, or code that would be affected by migration, the
solution for migration, and how testing will approach each.
Code Coverage
How much focus will be placed on code coverage? What tools and methods will be
used to measure the degree to which testing coverage is sufficiently addressing all
of the code?
Metrics related to software error detection ("Testing") in the broad sense, grouped
into the following categories:
General metrics that may be captured and analyzed throughout the product life
cycle
Test planning is one of the keys to successful software testing. A test plan can be
defined as a document that describes the scope, approach, resources and schedule
of intended test activities. The main purpose of preparing a test plan is that
everyone concerned with the project is synchronized with regard to the scope,
deliverables, deadlines and responsibilities for the project.
The complete document will help the people outside the test group understand the
“WHY” and “HOW” of the product validation.
Test planning can and should occur at several levels. The first plan to consider is the
Master Test Plan. The purpose of the Master Test Plan is to orchestrate testing at all
levels (unit, integration, system, acceptance, beta, etc.). The Master Test Plan is to
testing what the Project Plan is to the entire development effort.
The goal of test planning is not to create a long list of test cases, but rather to deal
with the important issues of testing strategy, resource utilization, responsibilities,
risks, and priorities.
Purpose:
This section should contain the purpose of preparing the test plan.
Scope:
This section should talk about the areas of the application, which are to be tested by
the QA team and specify those areas, which are definitely out of the scope.
Test approach:
This would contain details on how the testing is to be performed and whether any
specific strategy is to be followed.
Entry criteria:
This section explains the various steps to be performed before the start of testing,
i.e. the pre-requisites.
Resources:
This lists out the people who would be involved in the project and their
designations, etc.
Roles and responsibilities:
This talks about the tasks to be performed and the responsibilities assigned to the
various members in the project.
Exit criteria:
This contains tasks like bringing down the system or server, restoring the system to
the pre-test environment, database refresh, etc.
Schedules/ Milestones:
This section deals with the final delivery date and the various milestone dates to be
met in the course of project.
Environmental needs:
This section contains the details of the system/server required to install the
application or perform the testing, the specific s/w that needs to be installed on
the system to get the application running or to connect to the database, connectivity
related issues, etc.
Risks & mitigation plans:
This section should list out all the possible risks that can arise during the testing
and the mitigation plans that the QA team plans to implement in case the risk
actually turns into a reality.
Tools to be used:
This would list out the testing tools or utilities that are to be used in the project.
Deliverables:
This section contains the various deliverables that are due to the client at various
points of time, i.e. daily, weekly, start of project, end of project, etc. These
could include test plans, test procedures, test matrices, status reports, test
scripts, etc. Templates for all of these can also be attached.
Annexure:
This section contains the embedded documents or links to documents which have
been or will be used in the course of testing, e.g. templates used for reports, test
cases, etc. Reference documents can also be attached here.
Sign off:
This section contains the mutual agreement between the client and the QA team,
with both leads/managers signing off their agreement on the test plan.
Acceptance Testing:
Accessibility Testing:
Ad Hoc Testing:
Ad-hoc testing is an interactive testing process in which developers invoke
application units explicitly.
Agile Testing:
Testing practice for projects using agile methodologies, treating development as the
customer of testing and emphasizing a test-first design paradigm. See also Test
Driven Development.
Automated Testing:
• Testing employing software tools which execute tests without manual intervention.
Can be applied in GUI, performance, API, etc. testing.
• The use of software to control the execution of tests, the comparison of actual
outcomes to predicted outcomes, the setting up of test preconditions, and other test
control and test reporting functions.
Basis Path Testing:
A white box test case design technique that uses the algorithmic flow of the program
to design tests.
Beta Testing:
Bottom Up Testing:
An approach to integration testing where the lowest level components are tested
first, then used to facilitate the testing of higher-level components. The process is
repeated until the component at the top of the hierarchy is tested.
Boundary Testing:
Tests which focus on the boundary or limit conditions of the software being tested.
(Some of these tests are stress tests.)
Branch Testing:
Testing in which all branches in the program source code are tested at least once.
Breadth Testing:
A test suite that exercises the full functionality of a product but does not test
features in detail.
Compatibility Testing:
Testing whether software is compatible with other elements of a system with which it
should operate, e.g. browsers, Operating Systems, or hardware.
Concurrency Testing:
Multi-user testing geared towards determining the effects of accessing the same
application code, module or database records. Identifies and measures the level of
locking, deadlocking and use of single thread code and locking semaphores.
Conversion Testing:
Testing of programs or procedures used to convert data from existing systems for
use in replacement systems.
Data Driven Testing:
Testing in which the action of a test case is parameterized by externally defined
data values, maintained as a file or spreadsheet. A common technique in Automated
Testing.
Dependency Testing:
Depth Testing:
Dynamic Testing :
Endurance Testing :
Checks for memory leaks or other problems that may occur with prolonged
execution.
End-to-End testing :
Exhaustive Testing :
Testing which covers all combinations of input values and preconditions for an
element of the software under test.
Gorilla Testing :
Gray Box Testing :
A combination of Black Box and White Box testing methodologies: testing a piece of
software against its specification but using some knowledge of its internal workings.
Integration Testing :
Installation Testing :
Localization Testing :
This term refers to testing software that has been adapted ("localized") for a
specific locality.
Loop Testing :
Monkey Testing :
Testing a system or an application on the fly, i.e. just a few tests here and there
to ensure the system or application does not crash.
Negative Testing :
Testing aimed at showing software does not work. Also known as "test to fail".
N+1 Testing:
Path Testing:
Testing in which all paths in the program source code are tested at least once.
Performance Testing:
Positive Testing:
Testing aimed at showing software works. Also known as "test to pass". See also
Negative Testing.
Recovery Testing:
Confirms that the program recovers from expected or unexpected events without
loss of data or functionality. Events can include shortage of disk space, unexpected
loss of communication, or power out conditions.
Regression Testing:
Sanity Testing:
Scalability Testing:
Security Testing:
Testing which confirms that the program can restrict access to authorized personnel
and that the authorized personnel can access the functions available to their security
level.
Smoke Testing:
Soak Testing:
Running a system at high load for a prolonged period of time. For example, running
several times more transactions in an entire day (or night) than would be expected in
a busy day, to identify any performance problems that appear after a large number
of transactions have been executed.
Static Testing:
Storage Testing:
Testing that verifies the program under test stores data files in the correct directories
and that it reserves sufficient space to prevent unexpected termination resulting
from lack of space. This is external storage as opposed to internal storage.
Stress Testing:
Structural Testing:
System Testing:
Testing that attempts to discover defects that are properties of the entire system
rather than of its individual components.
Thread Testing:
Top Down Testing:
An approach to integration testing where the component at the top of the component
hierarchy is tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components. The process is
repeated until the lowest level components have been tested.
Usability Testing:
Testing the ease with which users can learn and use a product.
Unit Testing:
The testing done to show whether a unit satisfies its functional specification or its
implemented structure matches the intended design structure.
Volume Testing:
Testing which confirms that any values that may become large over time can be
accommodated by the program and will not cause the program to stop working or
degrade its operation in any manner.
Workflow Testing:
Scripted end-to-end testing which duplicates specific workflows that are expected
to be utilized by the end-user.
Nowadays automated testing tools are used more often than ever before to ensure
that applications are working properly prior to deployment. That's particularly
important today, because more applications are written for use on the Web—the
most public of venues. If a browser-based application crashes or performs
improperly, it can cause more problems than a smaller, local application. But for
many IT and quality assurance managers, the decision of which testing tools to use
can cause confusion.
The first decision is which category of tool to use— one that tests specific units of
code before the application is fully combined, one that tests how well the code is
working as envisioned, or one that tests how well the application performs under
stress. And once that decision is made, the team must wade through a variety of
choices in each category to determine which tool best meets its needs.
Functional-Testing Tools
Automated Testing is automating the manual testing process currently in use. The
real use and purpose of automated test tools is to automate regression testing. This
means that you must have or must develop a database of detailed test cases that
are repeatable, and this suite of tests is run every time there is a change to the
application to ensure that the change does not produce unintended consequences.
Win Runner provides a relatively simple way to design tests and build reusable
scripts without extensive programming knowledge. Win Runner captures, verifies,
and replays user interactions automatically, so you can identify defects and ensure
that business processes work flawlessly upon deployment and remain reliable. Win
Runner supports more than 30 environments, including Web, Java, and Visual Basic.
It also provides solutions for leading ERP and Customer Relationship Management
(CRM) applications.
Astra Quick Test: This functional-testing tool is built specifically to test Web-based
applications. It helps ensure that objects, images, and text on Web pages function
properly and can test multiple browsers. Astra Quick Test provides record and
playback support for every ActiveX control in a Web browser and uses checkpoints to
verify specific information during a test run.
Silk Test is specifically designed for regression and functionality testing. This
tool tests both mainframe and client/server applications. It is the leading
functional testing product for e-business applications. It also provides facilities
for rapid test customization and automated infrastructure development.
Rational Suite Test Studio is a full suite of testing tools. Its functional-testing
component, called Rational Robot, uses automated regression as the first step in the
functional testing process. The tool records and replays test scripts that recognize
objects through point-and-click processes. The tool also tracks, reports, and charts
information about the developer's quality assurance testing process and can view
and edit test scripts during the recording process. It also enables the developer to
use the same script to test an application in multiple platforms without modifications.
QACenter is a full suite of testing tools. One functional tool considered part of
QACenter is QARun, which automates the creation and execution of test scripts,
verifies tests, and analyzes test results. A second functional tool under the QACenter
is Test Partner, which uses visual scripting and automatic wizards to help test
applications based on Microsoft, Java, and Web-based technologies. Test Partner
offers fast record and playback of application test scripts, and provides facilities for
testers without much programming experience to create and execute tests.
White Box Testing
White box testing approaches examine the program structure and derive test data
from the program logic. White-box testing strategies include designing tests such
that every line of source code is executed at least once, or requiring every
function to be individually tested.
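A minimal white-box sketch: the tests below are derived by reading the branch
structure of a hypothetical discount function, so that every branch executes at
least once:

    # White-box sketch: test cases are designed from the program
    # logic itself, so every branch of this hypothetical function
    # is executed at least once.
    def discount(order_total, is_member):
        if order_total <= 0:
            raise ValueError("order total must be positive")
        percent = 10 if is_member else 0   # member / non-member branches
        if order_total > 100:              # large-order branch
            percent += 5
        return order_total * (100 - percent) // 100

    # One case per branch, chosen by reading the source above.
    assert discount(50, is_member=True) == 45    # member, small order
    assert discount(50, is_member=False) == 50   # non-member, small order
    assert discount(200, is_member=True) == 170  # member, large order
    try:
        discount(0, is_member=False)             # error branch
    except ValueError:
        pass
    print("every branch exercised at least once")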
Disadvantages:
• Expensive
• Cases omitted in the code could be missed