Software Testing


Software Testing

Introduction

Aliona Ganta (Grozavu)
23.07.2012
Agenda

What is Testing?
Why Testing?
Fundamental Test Processes
Types of Testing
Black Box Techniques
White box Techniques
Testing in SDLC
When to stop Testing?
Conclusion

What is Testing?

Testing is the process of analyzing a software product to detect the differences between existing and required
conditions (that is, defects/errors/bugs) and to evaluate the features of the software product.

Purpose of testing:

Verification: testing of items, including software, for conformance and consistency by evaluating the results
against pre-specified requirements. [Verification: Are we building the system right?]

Validation: the process of checking that what has been specified is what the user actually wanted.
[Validation: Are we building the right system?]

Error detection: testing should intentionally attempt to make things go wrong, to determine whether things
happen when they shouldn't, or things don't happen when they should.

IN YOUR ZONE
Why Testing?

Quality Control & Assurance

Verification & Validation

Compatibility & Interoperability.

Fundamental Test Processes

Planning and Control

Planning is the activity of verifying the mission of testing, defining the objectives of testing, and specifying
the test activities needed to meet those objectives and that mission.

Control is the ongoing activity of comparing actual progress against the plan, and reporting the status,
including deviations from the plan.

Analysis and Design

The activity where general testing objectives are transformed into tangible test conditions and test cases.

Implementation and Execution

Test procedures or scripts are specified by combining the test cases in a particular order and including any
other information needed for test execution; the environment is set up and the tests are run.

Evaluating Exit Criteria and Reporting

Test execution is assessed against the defined objectives.

Test Closure Activities

Collect data from completed test activities to consolidate experience, testware, facts and numbers.

Types of Testing

Functional Testing - done using the functional specifications provided by the client, or design specifications
such as use cases provided by the design team; it ensures that each component of the application under test
meets the functional requirements documented during the project development lifecycle.

Non-Functional Testing - ensures that each component of the application under test meets the non-functional
requirements.

Testing related to changes - confirming that defects have been fixed (re-testing) and looking for unintended
changes (regression testing).

Types of Testing - Functional Testing

Functional testing types:

Static - reviews, walkthroughs, inspections. The purpose of static testing of requirements is to assure that
requirements are:

o Clear and specific, with no ambiguity
o Testable, with some evaluation criteria for each requirement
o Complete, without any contradictions

Dynamic - testing of the dynamic behavior of code. Dynamic testing is divided into:

o White-box testing - the tester has access to the internal data structures and algorithms, including the
code that implements them.

o Black-box testing - testing without any knowledge of the internal system implementation. Black-box
testing methods include: equivalence partitioning, boundary value analysis, traceability matrix, exploratory
testing and specification-based testing.

Types of Testing - Non-Functional Testing

Performance - objectives are to determine whether the application under test can support the expected
workload, and to find and report any bottlenecks. Performance sub-types:

Simple performance tests

Load

Stress

Security - evaluates the application under test against industry security standards and security requirements.

Usability - assesses learnability, efficiency and memorability in application usage.

Accessibility - makes sure the system is compliant with accessibility guidelines and recommendations.

Compatibility - verifies compatibility across various hardware and software environments.

Types of Testing - Testing related to changes

Regression Testing - the repeated testing of an already tested program, after modification, to discover any
defects introduced or uncovered as a result of the change(s).

Re-testing (Confirmation Testing) - done to make sure that the test cases which failed in the last execution
pass after the defects behind those failures have been fixed.

Black Box Techniques

Black-box techniques (which include specification-based and experienced-based techniques) are a way to
derive and select test conditions or test cases based on an analysis of the test basis documentation and the
experience of developers, testers and users, whether functional or non-functional, for a component or system
without reference to its internal structure.

Equivalence partitioning

Boundary value analysis

Decision table testing

State transition testing

Use case testing

Black Box Techniques

Equivalence partitioning - inputs to the software or system are divided into groups that are expected to
exhibit similar behavior, so they are likely to be processed in the same way.

Example:

Test cases for an input box accepting numbers between 1 and 1000, using equivalence partitioning:

1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case.
Any other value between 1 and 1000 would give the same result, so one test case for valid input data is
sufficient.

2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data
test case.

3) Input data with any value greater than 1000, representing the third (invalid) input class.
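The three input classes above can be sketched as one test each against a hypothetical validator (`accepts_number` is an assumed name, not from the slides):

```python
# Hypothetical validator (assumed): accepts integers from 1 to 1000 inclusive.
def accepts_number(value):
    return 1 <= value <= 1000

# One representative test per equivalence class:
assert accepts_number(500)       # class 1: valid values, 1..1000
assert not accepts_number(0)     # class 2: invalid values below 1
assert not accepts_number(1001)  # class 3: invalid values above 1000
```

Any other representative from the same class (say 7 instead of 500) should give the same verdict, which is why one test per class suffices.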

Boundary value analysis - the maximum and minimum values of a partition are its boundary values. A
boundary value of a valid partition is a valid boundary value; the boundary of an invalid partition is an
invalid boundary value.

Example:

1) Test cases with test data exactly on the boundaries of the input domain, i.e. values 1 and 1000 in our case.

2) Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
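A minimal boundary-value sketch for the same hypothetical 1-1000 validator (`accepts_number` is an assumed name); note that 999, just below the upper edge, is still inside the valid partition:

```python
# Hypothetical validator (assumed): accepts integers from 1 to 1000 inclusive.
def accepts_number(value):
    return 1 <= value <= 1000

assert accepts_number(1)      # lower boundary: valid
assert accepts_number(1000)   # upper boundary: valid
assert not accepts_number(0)  # just below the lower edge: invalid
assert accepts_number(999)    # just below the upper edge: still valid
```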

Black Box Techniques

Decision table testing - decision tables are a good way to capture system requirements that contain logical
conditions, and to document internal system design. They may be used to record complex business rules that
a system is to implement.

Example: if the amount of the order is greater than $35, subtract 5% of the order; if the amount is greater
than or equal to $20 and less than or equal to $35, subtract a 4% discount; if the amount is less than $20, do
not apply any discount.
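The discount rules can be sketched as a function plus one test per decision-table rule (the function name, and the choice of returning the discount amount to subtract, are assumptions, not from the slides):

```python
def discount(amount):
    """Return the discount to subtract, per the discount rules above."""
    if amount > 35:
        return amount * 0.05   # rule 1: order over $35 -> 5%
    elif 20 <= amount <= 35:
        return amount * 0.04   # rule 2: $20 to $35 inclusive -> 4%
    else:
        return 0.0             # rule 3: under $20 -> no discount

# One test case per rule of the decision table:
assert round(discount(40), 2) == 2.0  # 5% of $40
assert round(discount(25), 2) == 1.0  # 4% of $25
assert discount(10) == 0.0            # no discount
```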

Black Box Techniques

State transition testing - a system may exhibit a different response depending on current conditions or
previous history (its state). In this case, that aspect of the system can be shown as a state transition
diagram. It allows the tester to view the software in terms of its states, transitions between states, the
inputs or events that trigger state changes (transitions), and the actions which may result from those
transitions.
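As a sketch, states, events and transitions can be captured in a table, and each transition (plus an invalid event) exercised by a test. The login states here are illustrative, not from the slides:

```python
# Illustrative state machine: a minimal login flow.
# Keys are (current state, event); values are the resulting state.
TRANSITIONS = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_in", "logout"): "logged_out",
}

def next_state(state, event):
    # An event with no defined transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

# State transition tests cover each valid transition and an invalid event:
assert next_state("logged_out", "login_ok") == "logged_in"
assert next_state("logged_in", "logout") == "logged_out"
assert next_state("logged_in", "login_ok") == "logged_in"  # no such transition
```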

Black Box Techniques

Use case testing - tests can be specified from use cases or business scenarios. A use case describes
interactions between actors, including users and the system, which produce a result of value to a system
user. Each use case has preconditions, which need to be met for the use case to work successfully. Each use
case terminates with postconditions, which are the observable results and final state of the system after the
use case has been completed.
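A use-case-style test establishes the precondition, drives the interaction, and asserts the postcondition. This sketch uses a hypothetical cash-withdrawal use case (`Account` and its methods are assumptions, not from the slides):

```python
class Account:
    """Hypothetical system under test for a 'withdraw cash' use case."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = Account(balance=100)  # precondition: an account with sufficient funds
account.withdraw(40)            # interaction producing a result of value
assert account.balance == 60    # postcondition: observable final state
```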

White Box Techniques

Statement testing and coverage - statement coverage is the assessment of the percentage of executable
statements that have been exercised by a test case suite. Statement testing derives test cases to execute
specific statements, normally to increase statement coverage.

Decision testing and coverage - decision coverage is the assessment of the percentage of decision outcomes
(e.g. the True and False options of an IF statement) that have been exercised by a test case suite. Decision
testing derives test cases to execute specific decision outcomes, normally to increase decision coverage.
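The difference between the two measures can be shown with a hypothetical function whose IF has no ELSE branch (the example is assumed, not from the slides):

```python
def cap(value, limit):
    # An IF with no ELSE: the False outcome executes no extra statement.
    if value > limit:
        value = limit
    return value

# This one test exercises every statement (100% statement coverage)...
assert cap(15, 10) == 10
# ...but only the True outcome of the decision. Full decision coverage
# also needs a test that takes the False outcome:
assert cap(5, 10) == 5
```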

Testing in SDLC

Waterfall - is a sequential design process, often used in software development processes, in which progress is
seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis,
Design, Construction, Testing, Production/Implementation and Maintenance.

V-model - represents a software development process which may be considered an extension of the waterfall
model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to
form the typical V shape. The V-Model demonstrates the relationships between each phase of the
development life cycle and its associated phase of testing.

Agile - Agile software development is a group of software development methods based on iterative and
incremental product development; it allows teams to get things done at the right time, maximizing the value
of what is delivered. It promotes adaptive planning, evolutionary development and delivery, a time-boxed
iterative approach, and encourages rapid and flexible response to change.

Testing in SDLC - V-model/Waterfall

Component (Unit) Testing - searches for defects in, and verifies the functioning of, software (e.g. modules,
programs, objects, classes) that is separately testable.

Integration Testing - tests interfaces between components, and interactions with different parts of a system,
such as the operating system, file system, hardware, or interfaces between systems.

System Testing - concerned with the behavior of a whole system/product as defined by the scope of a
development project or programme; investigates both functional and non-functional requirements of the
system.

Acceptance Testing - often the responsibility of the customers or users of a system; the goal is to establish
confidence in the system, parts of the system, or specific non-functional characteristics of the system.

Release Testing - seeing if the new or changed system will work in the existing business environment.

Testing in SDLC - Agile

Plan
o Define and estimate testing tasks for each user story in the sprint planning
o Analyse user stories and acceptance criteria to identify test objectives/test conditions

Execute
o Design and execute manual and automated tests (including regression)
o Prepare test data
o Raise defects, validate defects
o Regression testing

Demo
o Meet to decide the relevant cases to be presented
o Prepare the sprint demo
o Demonstrate the user stories and get acceptance from product owners

Retrospective
o Assess what went well
o Assess what needs improving
o Make necessary adjustments to the testing process

When to stop Testing?

All tests for the System test phase have been run and coverage targets achieved

All tests and results are recorded

All Critical, High and Medium issues are fixed, tested and closed

No more than 20 Low priority issues remain outstanding

Metrics are collected and recorded

The System Test Completion document has been accepted by the UAT Manager and Delivery
Manager.

Conclusion

Testing is an Art!

"It's more about good enough


than it is about right or
wrong." (James Bach)

Thank you!

Aliona Ganta (Grozavu) | Software Test Engineer

[email protected]
Tel +373 (0)68 015 032 | Skype en_agrozavu
