Testing


 My Expectations:

 Overview of different types of testing


 Where and how they fit into the overall SDLC
 Manual Testing using Test Cases / Scripts
 Automating unit testing using JUnit Framework
 Test Coverage Analysis

 Your Expectations ?

 Software testing is a critical element of software quality assurance and is the ultimate
process for ensuring the correctness of the product.
 A quality product enhances the customer's confidence and improves the business
economics (for the client, and in turn for consulting companies, developers, testers etc)
 Testing is the process of executing a program with the intent of finding errors
 Testing is:
 Process of demonstrating that errors are not present
 Done to demonstrate that a program performs its intended functions
correctly
 Process of establishing confidence that a program does what it is supposed
to do

Types of Testing

Activities Based:
 Unit Testing
 Functional Testing
 Regression Testing
 Functional Regression
 Unit Regression
 Break Testing / Monkey Testing
 Usability Testing
 UI Testing
 OS/Browser Testing
 Performance Testing
Project Phase Based:
 Build Testing
 Integration Testing
 Stabilization Testing
 User Acceptance Testing
 Pilot / Beta
Types of Testing … (continued)

[Project timeline diagram, Nov through Jul: Design, Integration / Development, Stabilization,
UAT, Pilot / Beta, Live and Warranty Support phases, with Build Testing, Unit Testing,
Functional Testing, Regression Testing (both unit and functional level) and Performance
Testing overlaid on the phases in which they occur. Diagram not to scale.]

Unit Testing

 Test at a ‘unit / building block’ level


 At a single class level in OO terms
 Written by developers during implementation
 Unit tests are written to test the fundamental building blocks of the system from
the inside out
 Can be manual or automated
 Helps developers in certifying that their code is written to a given specification
 Gives a safety net to developers – reduces the risk of breaking the application
because of a code change
 Cost of fixing bugs increases exponentially as time goes by… Unit testing helps
in bringing this cost down

Functional Testing

 Treats the system as a black box and ensures that the software works as a whole
 Tests the system by providing known inputs and verifying that it returns the
expected outputs
 Tests are done at a scenario / flow level
 Done post implementation by a QA team
 Can be manual or automated
 The ‘scenarios’ can be at different levels (depending on the phase of the project)
 Interaction of all layers in a given module
 Interaction between two / three related modules
 Business flows / scenarios involving multiple modules in the system
Regression Testing

 Bugs have a way of creeping into systems, and there are many ways and many
reasons why they occur
 When a particular bug gets fixed….there is always a chance that this particular fix
could potentially break some other code (either in the same module or other
related module)
 This could happen because the developer who fixes a bug might not
understand all the intricacies of the system
 Or developer could be careless while fixing
 The testing done by a developer after fixing a bug, to ensure that he/she has not
'introduced' a bug elsewhere, is called Regression Testing
 Note, regression testing can be done at a 'Unit' level and/or at a 'Functional' level
 It is usually a combination of these two types of tests

Build Testing

 Testing that is done right after a regular build is completed and released to a 'test'
environment is called Build Testing
 Build testing may include unit testing / functional testing / regression testing
 Testing done by the Infrastructure / CM team, before releasing the new build to
others, is called a 'Sanity Test'
 The purpose of this test is to ensure that the build is done correctly and
there are no configuration issues
 The Infra team does some sanity tests like logging into the site, checking
that the app can access the DB, browsing to see if the main pages are
accessible, testing if emails are being sent etc
Break Testing

 Also called 'Monkey Testing'
 Usually done without any script / scenario telling the tester what to test (very
similar to 'free hand' testing)
 Idea of this testing is to ‘monkey around’ to try and break the system by any
means possible
 Do things that you do not expect to do in a regular flow
 Enter huge amounts of data into text fields
 Change the query string parameters and their values
 In a wizard, go back and forth using browser back button multiple times
 Go to ‘random’ links from one module to the other
Usability Testing
 Usability testing is a means for measuring how well people can use some human-
made object (such as a web page, a computer interface, a document, or a device)
for its intended purpose, i.e. usability testing measures the usability of the object.
 Usability testing usually involves a controlled experiment to determine how well
people can use the product
 Rather than showing users a rough draft and asking, "Do you understand this?",
usability testing involves watching people trying to use something for its intended
purpose
 For example, to test the attachment function of an e-mail program, a scenario
would describe a situation where a person needs to send an e-mail attachment, and
ask him or her to undertake this task.
 http://en.wikipedia.org/wiki/Usability_testing

UI and OS/Browser Testing

UI Testing:
 User Interface Testing tests the system to ensure that it meets all the UI
specifications/standards
 Just like code has coding standards, applications do have UI standards for
the look and feel of the site
 CSS, font size, colours, lengths, button style, language etc

OS / Browser Testing:
 Testing the application on different OS / Browser combinations to ensure that the
app works from a functionality as well as a look and feel perspective

Performance Testing

 Performance parameters are defined as NFRs (Non Functional Requirements)
 Parameters that define how 'fast' a given page should be served back to the user
when there is an 'expected load' on the system
 There are many different types of performance tests:
 Load Testing
 Stress Testing
 Endurance / Soak Testing
Phase Based Testing
[Same project timeline diagram as shown earlier: project phases from Nov through Jul with
the phase-based tests (Build Testing, Integration, Stabilization, UAT, Pilot / Beta) and the
activity-based tests (Unit, Functional, Regression, Performance) overlaid on the phases in
which they occur. Diagram not to scale.]

Why so many types of testing?


V-Model: each phase on the left (development) arm of the V is verified by the corresponding
testing phase on the right arm:
 Requirements Gathering (User Requirements) is verified by User Acceptance Testing
 Business Design (Functional Specification) is verified by Functional Testing
 Tech Design (Technical Specification) is verified by Integration Testing
 Detailed Tech Design (Program Specification) is verified by Unit Testing
 Development (CODE) forms the bottom of the V

Manual Testing

How is manual testing done?

 Free hand testing:


 The tester is assumed to know the system (at least at a high level) so that
he/she can decide what to test
 Done without any 'structure' being provided, either in terms of the flow of the
app or what test data to use etc
 The tester opens new bugs for any problems found in the system

 Script based testing:


 The tester is provided with a 'script' that tells exactly what to do, what test
data to enter and what to expect from the system after each step
 The tester then captures his/her findings in the script
 Bugs get opened as they are found
 Feedback about the script itself / suggested changes to the script are also
captured

Test Scripts

 The general term used is ‘Test Scripts’ and any given test script might have
multiple ‘Test Cases’

Functional Verification Scripts (FVS):


 Usually written at a ‘screen’ level
 Used to verify that a completed screen and all its underlying components are
working as per the specifications
Scenario Scripts:
 These are scripts written to test the app across screens and multiple modules
 They are used to test if the built app handles a 'business scenario' correctly, i.e. as
designed
End to End Scripts:
 They are scripts written to test the app across a given set of applications, i.e. the
interaction of one application with another new / existing application

JUNIT FRAMEWORK

EXERCISE 1

 Writing a java class to manually test the following classes that have been
provided:
 IAccount.java
 Account.java
 AccountInsufficientFundsException.java
 How was the unit test?
 What are its drawbacks?

 How do we improve on it? ... use the JUnit Framework.

What is JUnit ?

Open source regression testing framework for Java, written by Erich Gamma and Kent
Beck
 Part of XUnit family (HTTPUnit, CppUnit, Cactus etc)
 Consists of
 Base classes for definition of tests and test suites
 Tools for running tests (graphical and text-based)
 Used to write and run repeatable unit tests for Java objects (not EJBs, JSPs,
servlets)

 Supported by many IDEs and build tools


 Eclipse, Visual Age, JDeveloper, ANT

Why JUnit ?

 JUnit tests allow you to write code faster while increasing quality
 JUnit is elegantly simple
 JUnit tests check their own results and provide immediate feedback (Green / Red
Status Bar)
 JUnit tests can be composed into a hierarchy of test suites
 Writing JUnit tests is inexpensive
 JUnit tests increase the stability of software
 JUnit tests are developer tests
 JUnit tests are written in Java
 JUnit is free !!

What does JUnit Provide?


 The framework provides:
 Base classes for the definition of test cases and suites
 The ability to execute these tests
 Assertions
 Aggregating tests (suites)
 Reporting results

 Philosophy :
 Let developers write unit tests
 Make it easy and painless
 So that they test early and test often !!!

Assertions in the Java Programming Language
 An assertion is a statement that enables you to test your assumptions about your
program
 Each assertion contains a boolean expression that you believe will be true when
the assertion executes.
 If it is not true, the JVM throws an AssertionError
 By verifying that the boolean expression is indeed true, the assertion confirms
your assumptions about the behavior of your program, increasing your confidence
that the program is free of errors.
 Experience has shown that writing assertions while programming is one of the
quickest and most effective ways to detect and correct bugs.
 As an added benefit, assertions serve to document the inner workings of your
program, enhancing maintainability.
 Syntax:
 assert <boolean expression>
 OR
 assert <boolean expression> : <string expression>

 In the first form, you only get an AssertionError, whereas in the second, the
AssertionError that is generated carries the <string expression> as its detail message
 You have to enable assertions via a JVM argument:
 “-enableassertions” or “-ea”

 Demo: AssertionDemo.java (a sketch of what such a demo might look like follows below)

 http://java.sun.com/j2se/1.4.2/docs/guide/lang/assert.html
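 A minimal sketch of an assertion demo along the lines of the AssertionDemo.java referenced
above (the class name and scenario here are illustrative, not the actual file handed out):

    public class AssertionDemo {

        // Returns the discount percentage for a customer of the given age.
        static int discountFor(int age) {
            // Assumption about our own input: age should never be negative here
            assert age >= 0 : "age must not be negative, got " + age;
            return (age >= 60) ? 20 : 0;
        }

        public static void main(String[] args) {
            System.out.println(discountFor(65));   // prints 20
            System.out.println(discountFor(-5));   // AssertionError when run with -ea
        }
    }

 Run with 'java -ea AssertionDemo' to see the AssertionError; without -ea the assert
statements are skipped entirely.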

TestCase

 Any JUnit test should extend TestCase (provided by the framework)


 Naming Convention: Test<ClassName>.java
 Each single test is contained in a single method
 Naming Convention: test<MethodName>
 A single test case can contain multiple testXYZ() methods
 Ideally every public method in the class that is being tested should have
1..n testXYZ() methods
 Each testXYZ() method should have at least one assertXYZ() method call (see the
skeleton below)
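 A minimal JUnit 3 skeleton illustrating these conventions (the Calculator class is purely
illustrative and would normally live in its own file):

    import junit.framework.TestCase;

    // Illustrative class under test
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // Naming convention: Test<ClassName>
    public class TestCalculator extends TestCase {

        // Each single test lives in its own testXYZ() method
        public void testAddPositiveNumbers() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));      // at least one assertXYZ() per test
        }

        public void testAddNegativeNumbers() {
            Calculator calc = new Calculator();
            assertEquals(-5, calc.add(-2, -3));
        }
    }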

assertXYZ()

 assertEquals(double expected, double actual, double delta)


 assertEquals(java.lang.String message, double expected, double actual,
double delta)
 assertEquals(float expected, float actual, float delta)
 assertEquals(java.lang.String message, float expected, float actual, float delta)

 assertEquals(java.lang.Object[] expecteds, java.lang.Object[] actuals)


 assertEquals(java.lang.String message, java.lang.Object[] expecteds,
java.lang.Object[] actuals)

 assertEquals(java.lang.Object expected, java.lang.Object actual)


 assertEquals(java.lang.String message, java.lang.Object expected,
java.lang.Object actual)
 assertFalse(boolean condition)
 assertFalse(java.lang.String message, boolean condition)

 assertTrue(boolean condition)
 assertTrue(java.lang.String message, boolean condition)

 assertNotNull(java.lang.Object object)
 assertNotNull(java.lang.String message, java.lang.Object object)

 assertNull(java.lang.Object object)
 assertNull(java.lang.String message, java.lang.Object object)

 assertNotSame(java.lang.Object unexpected, java.lang.Object actual)


 assertNotSame(java.lang.String message, java.lang.Object unexpected,
java.lang.Object actual)

 assertSame(java.lang.Object expected, java.lang.Object actual)


 assertSame(java.lang.String message, java.lang.Object expected,
java.lang.Object actual)

 fail()
 fail(java.lang.String message)
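 A few of these variants in use (the values are illustrative):

    import junit.framework.TestCase;

    public class TestAssertVariants extends TestCase {

        public void testAssertVariants() {
            // Without a message: failure output shows only expected vs. actual
            assertEquals(0.333, 1.0 / 3.0, 0.001);          // double overload needs a delta

            // With a message: the description is shown on failure, which aids debugging
            assertEquals("one third, within tolerance", 0.333, 1.0 / 3.0, 0.001);

            String s = "JUnit";
            assertNotNull("string should be initialised", s);
            assertSame("both references point to the same object", s, s);
            assertTrue("length should be 5", s.length() == 5);
        }
    }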

EXERCISE 2

 Creating the first TestCase to test the Account.java class
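 A possible shape for this test; the Account API used below (a constructor taking an opening
balance, deposit(), withdraw(), getBalance()) is an assumption, so adjust it to the actual
classes provided in the exercise:

    import junit.framework.TestCase;

    public class TestAccount extends TestCase {

        public void testDepositIncreasesBalance() {
            Account account = new Account(100.0);      // assumed constructor
            account.deposit(50.0);                     // assumed method
            assertEquals("balance after deposit", 150.0, account.getBalance(), 0.001);
        }

        public void testWithdrawDecreasesBalance() throws Exception {
            Account account = new Account(100.0);
            account.withdraw(40.0);                    // assumed method, may throw
            assertEquals("balance after withdrawal", 60.0, account.getBalance(), 0.001);
        }
    }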


Errors vs. Failures

 Assertions are used to check for expected outcomes; both the success and the failure
paths in the actual code should therefore be anticipated and tested appropriately
 Positive vs. Negative testing
 Failed assertions will result in JUnit failures
 Uncaught errors/exceptions result in JUnit errors
 Errors are unanticipated problems (within actual code or in a test case) resulting
in uncaught exceptions being propagated to a JUnit test method
 Both failures and errors cause the test to fail (see the example after this list);
how a failed test within a test suite is handled is up to the developer
 Ideally there should be zero errors and failures in the entire test suite
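 A quick illustration of the difference (both test methods are deliberately broken):

    import junit.framework.TestCase;

    public class TestFailureVsError extends TestCase {

        // Reported as a FAILURE: an assertion did not hold
        public void testReportedAsFailure() {
            assertEquals("2 + 2 should be 4", 4, 2 + 3);
        }

        // Reported as an ERROR: an unanticipated exception escaped the test method
        public void testReportedAsError() {
            String s = null;
            assertEquals(5, s.length());    // NullPointerException -> JUnit error
        }
    }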
Fixtures

 Using a test fixture avoids duplicating the test code that is necessary to initialize
and clean up objects that are common to each test within a given TestCase

 A test fixture is useful if you have two or more tests that use a common set of
objects
 Used to ensure there are no side effects between tests
 Enforces the test independence rule: test execution order is not guaranteed
 Override setUp() method to initialize the variables
 Override tearDown() to release any permanent resources allocated in setUp()
 File pointers, DB connections, sockets etc (see the fixture sketch after this list)
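 A sketch of a fixture, illustrated here with a plain ArrayList so the example is
self-contained (in the exercises the same pattern applies to the Account object):

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    public class TestListFixture extends TestCase {

        private List items;                      // common object used by every test

        protected void setUp() throws Exception {
            items = new ArrayList();             // fresh state before EACH test method
            items.add("first");
        }

        protected void tearDown() throws Exception {
            items = null;                        // release file handles, DB connections etc here
        }

        public void testAddIncreasesSize() {
            items.add("second");
            assertEquals("size after add", 2, items.size());
        }

        public void testRemoveEmptiesList() {
            items.remove("first");
            assertTrue("list should be empty", items.isEmpty());
        }
    }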

EXERCISE 3

 Removing redundant set up operations in TestAccount1.java

EXERCISE 4

 Handling exceptions that are thrown in the code by writing positive and negative
test cases (the usual idiom is sketched below)
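 The common JUnit 3 idiom for this exercise, again assuming the Account /
AccountInsufficientFundsException API from the earlier sketch:

    import junit.framework.TestCase;

    public class TestAccountExceptions extends TestCase {

        // Positive test: the exception must NOT be thrown, so let it propagate as a test error
        public void testWithdrawWithSufficientFunds() throws Exception {
            Account account = new Account(100.0);           // assumed API
            account.withdraw(50.0);
            assertEquals("remaining balance", 50.0, account.getBalance(), 0.001);
        }

        // Negative test: the exception MUST be thrown, otherwise the test fails
        public void testWithdrawWithInsufficientFunds() {
            Account account = new Account(100.0);
            try {
                account.withdraw(500.0);
                fail("Expected AccountInsufficientFundsException was not thrown");
            } catch (AccountInsufficientFundsException expected) {
                // expected path: optionally assert on the exception message here
            }
        }
    }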

Test Suites

 TestCase instances can be composed into TestSuite hierarchies that automatically


invoke all the testXYZ() methods defined in each TestCase instance
 The composite behavior exhibited by the TestSuite allows you to assemble test
suites of test suites of tests, to an arbitrary depth, and run all the tests
automatically and uniformly to yield a single pass or fail status (see the example below).
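 A typical JUnit 3 suite, assuming the hypothetical TestCase classes sketched in the
earlier examples:

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class AllTests {

        public static Test suite() {
            TestSuite suite = new TestSuite("All account tests");
            // addTestSuite() picks up every testXYZ() method in each TestCase class
            suite.addTestSuite(TestAccount.class);
            suite.addTestSuite(TestAccountExceptions.class);
            // Suites can also nest other suites, to an arbitrary depth:
            // suite.addTest(OtherModuleTests.suite());
            return suite;
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(suite());    // run the whole hierarchy from the command line
        }
    }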

EXERCISE 5

 Creating a Test Suite to include multiple TestCases


JUnit – Best Practices

 One Test Case for each Java class, and each public method in the class should
have 1…n testXYZ() methods
 TestClass naming: Test<className>.java
 Test method naming convention: testXYZ()
 Test each class in isolation. Each test should test only one class, not indirectly its
collaborators.
 Do not use the constructor of the Test Case to set up data for the tests…use the
setUp() method
 Each test method should be independent of the others. One should not assume a
certain 'state' that will be built by another test method
 Each test method should test only one specific flow through a method… if the test
method tests multiple flows, then it is really not unit testing
 Tests should be self verifying …. each and every test should run automatically
and should assert ….no manual intervention
 Be careful while writing test cases that changes the state of data, which might
affect other test cases
 Changing System data by a test case
 Do not load data from hard-coded locations on a filesystem
 e.g. new FileInputStream("C:\\TestData\\dataSet1.dat") in the setUp() method
(see the classpath-based sketch after this list)
 As far as possible, use the assertXYZ() methods with the message option, so that
you get a good user friendly message in case of an error
 Running the entire suite of test cases for any given system should not take
hours….keep them simple and make them fast
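 For the file-loading point above, one way to avoid hard-coded paths is to load test data
from the classpath (the file location below is illustrative):

    import java.io.InputStream;
    import junit.framework.TestCase;

    public class TestWithClasspathData extends TestCase {

        private InputStream data;

        protected void setUp() throws Exception {
            // Resolved against the classpath, so it works on any machine / OS
            data = getClass().getResourceAsStream("/testdata/dataSet1.dat");
        }

        protected void tearDown() throws Exception {
            if (data != null) {
                data.close();
            }
        }

        public void testDataSetIsPresent() {
            assertNotNull("dataSet1.dat should be on the test classpath", data);
        }
    }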

JUnit – What to Test and What not to?

 It is not necessary to test the setters and getters of a class … otherwise we
would be testing the Java compiler
 Every public method should be tested
 Test positive case (success scenario)
 Test negative case (i.e error conditions…and business exception being
thrown)
 Every exit point within a method should be tested
 If a method has a switch with 3 case branches… then you need at least
3 test methods (see the sketch after this list)
 Exit points could be either positive or negative
 Always test for boundary conditions
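 For instance (the Shipping class below is purely illustrative), one test per exit point
of the switch:

    import junit.framework.TestCase;

    // Illustrative class under test: one exit point per case branch
    class Shipping {
        double costFor(char zone) {
            switch (zone) {
                case 'A': return 5.0;
                case 'B': return 7.5;
                default:  return 10.0;    // third exit point
            }
        }
    }

    public class TestShipping extends TestCase {

        public void testCostForZoneA() {
            assertEquals(5.0, new Shipping().costFor('A'), 0.001);
        }

        public void testCostForZoneB() {
            assertEquals(7.5, new Shipping().costFor('B'), 0.001);
        }

        public void testCostForUnknownZone() {
            assertEquals(10.0, new Shipping().costFor('Z'), 0.001);
        }
    }
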
Limitations of JUnit

 Can only test Java Objects


 Cannot test server-side objects like servlets, EJBs, JSPs
 Use frameworks like Cactus or Mock Objects
 Cannot test framework-based code like Struts
 Use a framework like Struts JUnit

 The good news is that all the other frameworks mentioned above are built on top of
JUnit… so if you know JUnit then you already have a head start in understanding the
other frameworks

Test Coverage

 To talk about test coverage, you must have two things:


1. A software product
2. An automated test suite used to verify that the product is working correctly

 Test coverage provides a measure of how well your test suite actually tests the
product, i.e. if each and every line of the actual code that has been written is
executed by running the entire test suite, then you have 100% coverage

 Procedure-Level Test Coverage:


 Probably the most basic form of test coverage is to measure what
procedures were and were not executed during the test suite
In Java this translates to which classes and which of their methods were
executed during the test run
 Line-Level Test Coverage:
 The basic measure of a dedicated test coverage tool is tracking which lines
of code are executed, and which are not. This result is often presented in a
summary at the procedure, file, or project level giving a percentage of the
code that was executed

 The higher the % of test coverage, the better the 'quantity' of testing, and hopefully
the quality also increases
 Test coverage tools give you these metrics… using which you can decide whether you
need to write more test cases, which parts / modules you need to write them for, and
which parts of the code are being tested multiple times

Java Test Coverage Tools


 Open Source Code Coverage Frameworks:
 Coverlipse (Exercise - Eclipse Plugin … given to you -
http://coverlipse.sourceforge.net/howto.php)
 EMMA (Demo using ECLEMMA - updates from the internet -
http://www.eclemma.org/installation.html)
 NoUnit
 Jester
 InsECT
 Hensel
 GroboCodeCoverage
 Jcoverage/gpl
 Cobertura

Resources

 JUnit
 http://www.junit.org/index.htm

 Cactus
 http://jakarta.apache.org/cactus

 Code Coverage Analysis


 http://www.bullseye.com/coverage.html

 EMMA – Java Code Coverage Tool


 http://emma.sourceforge.net/

JUnit – TextUI and SwingUI

 Installation
 unzip the junit.zip file
add junit.jar to the CLASSPATH. For example:
set CLASSPATH=%CLASSPATH%;INSTALL_DIR\junit3\junit.jar
 Testing
Test the installation by using either the batch or the graphical TestRunner tool to run the
tests that come with this release. All the tests should pass OK.
 for the batch TestRunner type:
java junit.textui.TestRunner junit.samples.AllTests
 for the graphical TestRunner type:
java junit.awtui.TestRunner junit.samples.AllTests
 for the Swing based graphical TestRunner type:
java junit.swingui.TestRunner junit.samples.AllTests
 Notice:
 The tests are not contained in the junit.jar but in the installation directory
directly. Therefore make sure that the installation directory is on the class
path
 Important:
Don't install the junit.jar into the extension directory of your JDK
installation.
 If you do so, the test classes on the file system will not be found.
