Software Testing Framework: Document Version: 2.0
Table of Contents
Revision History
Testing Framework
1.0 Introduction
1.2 Traditional Testing Cycle
4.3.2 Bottom-Up Integration
4.4 Smoke Testing
4.5 System Testing
4.5.1 Recovery Testing
4.5.2 Security Testing
4.5.3 Stress Testing
4.5.4 Performance Testing
4.5.5 Regression Testing
4.6 Alpha Testing
4.7 User Acceptance Testing
4.8 Beta Testing
5.0 Metrics
9.0 Deliverables
Revision History
Version No.  Date               Author    Notes
1.0          August 6, 2003     Harinath  Initial document creation and posting on the web site.
2.0          December 15, 2003  Harinath  Renamed the document to Software Testing Framework V2.0; modified the structure of the document; added the Testing Models section; added SBT and ET testing types.
The next version of this framework will include Test Estimation Procedures and more Metrics.
Testing Framework
Through experience, practitioners have determined that there tend to be around 30 defects per 1000 lines of code. If testing does not uncover close to 30 defects, the logical conclusion is that the test process was not effective.
1.0 Introduction
Testing plays an important role in today's System Development Life Cycle. During testing, we follow a systematic procedure to uncover defects at various stages of the life cycle.

This framework is aimed at giving the reader the various Test Types, Test Phases, Test Models and Test Metrics, and at guiding how to perform effective testing on a project. All the definitions and standards mentioned in this framework are existing ones; I have not altered any definitions, but wherever possible I have tried to explain them in simple words. The framework, approach and suggestions, however, are drawn from my own experience.

My intention with this framework is to help Test Engineers understand the concepts of testing and the various techniques, and to apply them effectively in their daily work. This framework is not for publication or for monetary distribution. If you have any queries, suggestions for improvement, or points you find missing, kindly write back to me.
(Fig A: the traditional testing cycle, ending in the Test and Maintenance phases.)
In the above diagram (Fig A), the Testing phase comes after the Coding is complete and before the product is launched and goes into maintenance.
5 of 25
But the recommended test process involves testing in every phase of the life cycle (Fig B). During the requirements phase, the emphasis is on validation, to determine that the defined requirements meet the needs of the project. During the design and program phases, the emphasis is on verification, to ensure that the design and programs accomplish the defined requirements. During the test and installation phases, the emphasis is on inspection, to determine that the implemented system meets the system specification. The table below describes the life cycle verification activities.

Life Cycle Phase    Verification Activities
Requirements        Determine verification approach. Determine adequacy of requirements. Generate functional test data.
Design              Determine consistency of design with requirements. Determine adequacy of design. Generate structural and functional test data.
Program (Build)     Determine consistency with design. Determine adequacy of implementation. Generate structural and functional test data for programs.
Test                Test application system.
Installation        Place tested system into production.
Maintenance         Modify and retest.
Throughout the entire lifecycle, neither development nor verification is a straight-line activity. Modifications or corrections to a structure at one phase will require modifications or re-verification of structures produced during previous phases.
Design Reviews
Code Walkthroughs
Code Inspections: formal analysis of the program source code to find defects, as defined by meeting the system design specification.
2.1.1 Reviews
The focus of a Review is on a work product (e.g. a Requirements document, Code etc.). After the work product is developed, the Project Leader calls for a Review. The work product is distributed to the personnel involved in the review. The main audience for the review should be the Project Manager, the Project Leader and the Producer of the work product. Major reviews include the following:
1. In-Process Reviews
2. Decision-Point or Phase-End Reviews
3. Post-Implementation Reviews

Let us discuss the above reviews in brief. As per statistics, reviews uncover over 65% of the defects while testing uncovers around 30%, so it is very important to maintain reviews as part of the V&V strategies.

In-Process Review
An In-Process Review looks at the product during a specific time period of the life cycle, such as a single activity. It is usually limited to a segment of a project, with the goal of identifying defects as work progresses, rather than at the close of a phase or even later, when they are more costly to correct.

Decision-Point or Phase-End Review
This review looks at the product for the main purpose of determining whether to continue with planned activities. It is held at the end of each phase, in a semiformal or formal way. Defects found are tracked through resolution, usually by way of the existing defect tracking system. The common phase-end reviews are the Software Requirements Review, the Critical Design Review and the Test Readiness Review.
The Software Requirements Review is aimed at validating and approving the documented software requirements in order to establish a baseline and identify analysis packages. The Development Plan, Software Test Plan and Configuration Management Plan are some of the documents reviewed during this phase. The Critical Design Review baselines the detailed design specification; test cases are reviewed and approved. The Test Readiness Review is performed when the appropriate application components are nearing completion. This review determines the readiness of the application for system and acceptance testing.
Post-Implementation Review
These reviews are held after implementation is complete, to audit the process based on actual results. Post-Implementation Reviews are also known as Postmortems; they are held to assess the success of the overall process after release and to identify any
opportunities for process improvement. They can be held three to six months after implementation, and are conducted in a formal format.

There are three general classes of reviews:
1. Informal or Peer Review
2. Semiformal or Walk-Through
3. Formal or Inspection

A Peer Review is generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported. These reviews occur on an as-needed basis throughout each phase of a project.

2.1.2 Inspections
A knowledgeable individual called a moderator, who is not a member of the team or the author of the product under review, facilitates inspections. A recorder, who records the defects found and actions assigned, assists the moderator. The meeting is planned in advance, material is distributed to all the participants, and the participants are expected to attend the meeting well prepared. The issues raised during the meeting are documented and circulated among the members present and the management.

2.1.3 Walkthroughs
The author of the material being reviewed facilitates a Walk-Through. The participants are led through the material in one of two formats: either the presentation is made without interruptions and comments are made at the end, or comments are made throughout. In either case, the issues raised are captured and published in a report distributed to the participants. Possible solutions for uncovered defects are not discussed during the review.
(Residue of the test-phase table; the recoverable cells are listed below.)

Test phases: Integration Testing; System Testing; Installation Testing; Beta Testing.
Performed by: Test Engineers; Users (beta testing). Environment: production environment.
Deliverables: software unit ready for testing with other system components; portions of the system ready for testing with other portions of the system; tested computer system, based on what was specified to be developed; stable application.
Explanations: testing of the computer system to make sure it will work in the system regardless of what the system requirements indicate; testing of the computer system during the installation at the user place; testing of the application after the installation at the client place; "... before rolling out to the UAT".
3.1.6 Loop Testing
Loop Testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: simple loops, concatenated loops, nested loops, and unstructured loops.

3.1.6.1 Simple Loops
The following set of tests can be applied to simple loops, where n is the maximum number of allowable passes through the loop (see the sketch after this section):
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n, and n+1 passes through the loop.

3.1.6.2 Nested Loops
If we extended the test approach for simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases. Instead:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops at typical values.
4. Continue until all loops have been tested.

3.1.6.3 Concatenated Loops
Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.

3.1.6.4 Unstructured Loops
Whenever possible, this class of loops should be redesigned to reflect the use of structured programming constructs.
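To make the simple-loop guidelines concrete, here is a minimal sketch in Python using the standard unittest module. The function moving_sum and the bound MAX_PASSES are hypothetical stand-ins, not part of the original framework; the point is only to show one test per loop-count class (0, 1, 2, m < n, and n-1, n, n+1 passes).

    import unittest

    MAX_PASSES = 10  # hypothetical maximum number of allowable passes through the loop (n)

    def moving_sum(values):
        """Hypothetical unit under test: sums at most MAX_PASSES values in a simple loop."""
        total = 0
        for i, v in enumerate(values):
            if i >= MAX_PASSES:   # the loop is bounded by n = MAX_PASSES
                break
            total += v
        return total

    class SimpleLoopTests(unittest.TestCase):
        def check(self, passes):
            # Each check drives the loop for a specific number of passes.
            self.assertEqual(moving_sum([1] * passes), min(passes, MAX_PASSES))

        def test_skip_loop_entirely(self):
            self.check(0)                   # 1. skip the loop entirely

        def test_one_and_two_passes(self):
            self.check(1)                   # 2. only one pass
            self.check(2)                   # 3. two passes

        def test_typical_m_passes(self):
            self.check(MAX_PASSES // 2)     # 4. m passes, where m < n

        def test_boundary_passes(self):
            for passes in (MAX_PASSES - 1, MAX_PASSES, MAX_PASSES + 1):
                self.check(passes)          # 5. n-1, n and n+1 passes

    if __name__ == "__main__":
        unittest.main()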
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.

3.2.3 Boundary Value Analysis
BVA is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the edges of the class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well. The guidelines for BVA are similar in many respects to those provided for equivalence partitioning (see the sketch after this section).

3.2.4 Comparison Testing
There are situations in which independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. These independent versions form the basis of a black box testing technique called Comparison Testing or back-to-back testing.

3.2.5 Orthogonal Array Testing
The orthogonal array testing method is particularly useful in finding errors associated with region faults, an error category associated with faulty logic within a software component.
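As an illustration of the equivalence partitioning and boundary value analysis guidelines above, here is a minimal sketch in Python. The input rule (an integer quantity in the range 1 to 100) and the function accepts_quantity are hypothetical examples, not taken from the original text.

    # Hypothetical input rule: "quantity" must be an integer in the range 1..100.
    def accepts_quantity(quantity):
        """Hypothetical unit under test: True when quantity lies within 1..100."""
        return isinstance(quantity, int) and 1 <= quantity <= 100

    # Equivalence partitioning: for a range, one valid and two invalid classes.
    equivalence_cases = [
        ("valid: inside the range", 50, True),
        ("invalid: below the range", -5, False),
        ("invalid: above the range", 500, False),
    ]

    # Boundary value analysis: values at and around the edges of the valid class.
    boundary_cases = [(f"boundary {v}", v, ok) for v, ok in
                      [(0, False), (1, True), (2, True), (99, True), (100, True), (101, False)]]

    for name, value, expected in equivalence_cases + boundary_cases:
        assert accepts_quantity(value) is expected, name
    print("all equivalence and boundary cases passed")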
(Residue of a table of system test techniques and examples; the recoverable cells are listed below.)

Examples: prove system requirements; unchanged system segments function; error introduced into the test; manual procedures developed; intersystem parameters changed; file reconciliation; "... acceptable level".
Parallel testing: the old system and the new system are run and the results compared to detect unplanned differences.
(Residue of a figure/table mapping test phases to their reference and deliverable documents: Architecture Design; Design Document; Functional Specification Document; Software Requirement Specification; Performance Criteria; Unit, Integration, System and Regression Test Case documents; Performance Test Cases and Scenarios.)
4.5.5. Regression Testing
Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. Regression may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools. The regression test suite contains three different classes of test cases (combined as in the sketch below):
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.
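The following minimal sketch in Python shows one way the three classes could be combined into a regression suite. The test IDs, the test_index mapping and the select_regression_tests helper are hypothetical illustrations, not part of the original framework.

    # Hypothetical mapping from test case IDs to the software functions they exercise.
    test_index = {
        "TC-001": {"login"},
        "TC-010": {"search"},
        "TC-025": {"checkout"},
        "TC-031": {"checkout", "payment"},
        "TC-040": {"payment"},
    }

    def select_regression_tests(changed_functions, representative_sample):
        """Representative sample plus every test touching a changed function."""
        affected = {tc for tc, funcs in test_index.items() if funcs & changed_functions}
        return sorted(representative_sample | affected)

    # Example: the 'payment' component was changed in this release.
    print(select_regression_tests({"payment"}, {"TC-001", "TC-010"}))
    # -> ['TC-001', 'TC-010', 'TC-031', 'TC-040']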
5.0 Metrics
Metrics are the most important responsibility of the Test Team. Metrics allow for a deeper understanding of the performance of the application and its behavior; the fine-tuning of the application can be enhanced only with metrics. In a typical QA process there are many metrics that provide information. The following can be regarded as the fundamental metric:

IEEE Std 982.2-1988 defines a Functional or Test Coverage Metric. It can be used to measure test coverage prior to software delivery, and provides a measure of the percentage of the software tested at any point during testing. It is calculated as follows:

Function Test Coverage = FE / FT

where FE is the number of test requirements that are covered by test cases that were executed against the software, and FT is the total number of test requirements (a brief worked example follows the release criteria below).

Software Release Metrics
The software is ready for release when:
1. It has been tested with a test suite that provides 100% functional coverage, 80% branch coverage, and 100% procedure coverage.
2. There are no level 1 or level 2 severity defects.
3. The defect finding rate is less than 40 new defects per 1000 hours of testing.
4. The software reaches 1000 hours of operation.
5. Stress testing, configuration testing, installation testing, naïve user testing, usability testing, and sanity testing have been completed.
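A brief worked example of the Function Test Coverage metric, using hypothetical figures:

    # Worked example of Function Test Coverage = FE / FT (numbers are hypothetical).
    FT = 120   # total number of test requirements
    FE = 96    # test requirements covered by test cases executed against the software
    print(f"Function Test Coverage = {FE / FT:.0%}")   # -> 80%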
IEEE Software Maturity Metric
IEEE Std 982.2-1988 defines a Software Maturity Index that can be used to determine the readiness for release of a software system. This index is especially useful for assessing release readiness when changes, additions, or deletions are made to existing software systems. It also provides a historical index of the impact of changes. It is calculated as follows:

SMI = (Mt - (Fa + Fc + Fd)) / Mt

where SMI is the Software Maturity Index value, Mt is the number of software functions/modules in the current release, Fc is the number of functions/modules that contain changes from the previous release, Fa is the number of functions/modules that are additions to the previous release, and Fd is the number of functions/modules that were deleted from the previous release.

Reliability Metrics
Perry offers the following equation for calculating reliability:

Reliability = 1 - (number of errors (actual or predicted) / total number of lines of executable code)

This reliability value is calculated for the number of errors during a specified time interval. Three other metrics can be calculated during extended testing or after the system is in production:

MTTFF (Mean Time To First Failure) = the number of time intervals the system is operable until its first failure
MTBF (Mean Time Between Failures) = (sum of the time intervals the system is operable) / (number of failures in the time period)
MTTR (Mean Time To Repair) = (sum of the time intervals required to repair the system) / (number of repairs during the time period)
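Worked examples of these metrics, using hypothetical figures:

    # Software Maturity Index: SMI = (Mt - (Fa + Fc + Fd)) / Mt  (all numbers hypothetical)
    Mt, Fa, Fc, Fd = 200, 10, 24, 6        # modules in release / added / changed / deleted
    print(f"SMI = {(Mt - (Fa + Fc + Fd)) / Mt:.2f}")           # -> 0.80 (approaches 1.0 as the product stabilises)

    # Reliability = 1 - errors / total executable lines of code
    errors, executable_loc = 150, 50_000
    print(f"Reliability = {1 - errors / executable_loc:.4f}")  # -> 0.9970

    # MTBF = (sum of operable time intervals) / (number of failures in the period)
    operable_hours, failures = 900, 3
    print(f"MTBF = {operable_hours / failures:.0f} hours")     # -> 300 hours

    # MTTR = (sum of repair time intervals) / (number of repairs in the period)
    repair_hours, repairs = 12, 3
    print(f"MTTR = {repair_hours / repairs:.0f} hours")        # -> 4 hours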
(Fig: V Model — the development phases Specification, Architecture, Detailed Design and Coding on the left leg are paired with System Tests, Integration Tests and Unit Tests on the right leg.)

The diagram is self-explanatory. For an easy understanding, look at the following table:

SDLC Phase          Test Phase
1. Requirements     1. Build Test Strategy.  2. Plan for Testing.  3. Acceptance Test Scenario Identification.
2. Specification    1. System Test Case Generation.
3. Architecture     1. Integration Test Case Generation.
4. Detailed Design  1. Unit Test Case Generation.
(Fig: W Model — the development leg runs through Requirements, Specification, Architecture, Detailed Design and Coding; the verification legs pair these with Requirements Review, Specification Review, Architecture Review, Code Walkthrough, Unit Testing, Integration Testing, Regression Round 1 and System Testing.)
The W model depicts that Testing starts from day one of the initiation of the project and continues till the end. The following table illustrates the phases of activities that happen in the W model:

SDLC Phase           The first V               The second V
1. Requirements      1. Requirements Review    1. Build Test Strategy.  2. Plan for Testing.  3. Acceptance (Beta) Test Scenario Identification.
2. Specification     2. Specification Review   1. System Test Case Generation.
3. Architecture      3. Architecture Review    1. Integration Test Case Generation.
4. Detailed Design   4. Design Review          1. Unit Test Case Generation.
5. Coding            5. Code Walkthrough       1. Execute Unit Tests.

The remaining second-V activities then follow in order: Execute Integration Tests; Regression Round 1; Execute System Tests; Regression Round 2; Performance Tests; Regression Round 3; Performance/Beta Tests.
In the second V, I have mentioned Acceptance/Beta Test Scenario Identification. This is because the customer might want to design the Acceptance Tests; in that case, as the development team executes the Beta Tests at the client place, the same team can identify the scenarios. Regression rounds are performed at regular intervals to re-test the defects that have been raised and fixed.

6.3 The Butterfly Model
For testing software products, it is preferable to follow the Butterfly Model. The following picture depicts the test methodology.
Fig: Butterfly Model
In the Butterfly Model of test development, the left wing of the butterfly depicts Test Analysis, the right wing depicts Test Design, and the body of the butterfly depicts Test Execution. How this happens exactly is described below.

Test Analysis
Analysis is the key factor that drives any planning. During the analysis, the analyst does the following:
- Verify that each requirement is tagged in a manner that allows correlation of the tests for that requirement to the requirement itself (establish test traceability).
- Verify traceability of the software requirements to system requirements.
- Inspect for contradictory requirements.
- Inspect for ambiguous requirements.
- Inspect for missing requirements.
- Check to make sure that each requirement, as well as the specification as a whole, is understandable.
- Identify one or more measurement, demonstration, or analysis method that may be used to verify the requirement's implementation (during formal testing).
- Create a test sketch that includes the tentative approach and indicates the test's objectives.

During Test Analysis, the required documents are carefully studied by the test personnel and the final Analysis Report is documented. The following documents are usually referred to:
1. Software Requirements Specification.
2. Functional Specification.
3. Architecture Document.
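As one illustration of the traceability checks in the analysis list above, the following minimal sketch in Python flags requirements that have no correlated test cases. The requirement and test IDs are hypothetical.

    # Hypothetical requirement-to-test traceability check.
    requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}
    test_coverage = {
        "TC-101": {"REQ-001"},
        "TC-102": {"REQ-002", "REQ-003"},
    }

    covered = set().union(*test_coverage.values())
    print("Requirements with no correlated tests:", sorted(requirements - covered))
    # -> ['REQ-004']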
The Analysis Report consists of the understanding of the application, the functional flow of the application, the number of modules involved, and the effective test time.

Test Design
The right wing of the butterfly represents the act of designing and implementing the test cases needed to verify the design artifact as replicated in the implementation. Like test analysis, it is a relatively large piece of work. Unlike test analysis, however, the focus of test design is not to assimilate information created by others, but rather to implement procedures, techniques, and data sets that achieve the test's objective(s). The outputs of the test analysis phase are the foundation for test design: each requirement or design construct has had at least one technique (a measurement, demonstration, or analysis) identified during test analysis that will validate or verify that requirement, and the tester must now implement the intended technique.

Software test design, as a discipline, is an exercise in the prevention, detection, and elimination of bugs in software. Preventing bugs is the primary goal of software testing. Diligent and competent test design prevents bugs from ever reaching the implementation stage. Test design, with its attendant test analysis foundation, is therefore the premier weapon in the arsenal of developers and testers for limiting the cost associated with finding and fixing bugs.

During Test Design, based on the Analysis Report, the test personnel develop the following:
1. Test Plan.
2. Test Approach.
3. Test Case documents.
4. Performance Test Parameters.
5. Performance Test Plan.
Test Execution
Any test case should adhere to the following principles (see the sketch after this section):
1. Accurate - tests what its description says it will test.
2. Economical - has only the steps needed for its purpose.
3. Repeatable - gives consistent results, no matter who executes it or when.
4. Appropriate - is apt for the situation.
5. Traceable - the functionality it tests can be easily found.

During the Test Execution phase, keeping to the project and test schedule, the designed test cases are executed. The following documents are handled during the test execution phase:
1. Test Execution Reports.
2. Daily/Weekly/Monthly Defect Reports.
3. Person-wise defect reports.

After the Test Execution phase, the following documents are signed off:
1. Project Closure Document.
2. Reliability Analysis Report.
3. Stability Analysis Report.
4. Performance Analysis Report.
5. Project Metrics.
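The following minimal sketch in Python shows how the five principles can be made explicit in a test case record; the field names and example values are hypothetical, not part of the original framework.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        case_id: str                    # Traceable: the functionality under test is easy to find
        requirement_id: str             # Traceable: links the case back to a requirement
        description: str                # Accurate: states exactly what the case tests
        steps: list = field(default_factory=list)   # Economical: only the steps needed
        expected_result: str = ""       # Repeatable: the same expected outcome for any executor

    login_case = TestCase(
        case_id="TC-LOGIN-001",
        requirement_id="REQ-AUTH-004",
        description="Valid user credentials allow login to the dashboard.",
        steps=["Open the login page", "Enter a valid user name and password", "Click Login"],
        expected_result="The dashboard page is displayed for the logged-in user.",
    )
    print(login_case.case_id, "->", login_case.requirement_id)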
Defect Classification
This section defines a defect Severity Scale framework for determining defect criticality, and the associated defect Priority Levels to be assigned to errors found in software.
The defects can be classified as follows:

Classification   Description
Critical         There is a functionality block; the application is not able to proceed any further.
Major            The application is not working as desired; there are variations in the functionality.
Minor            There is no failure reported due to the defect, but it certainly needs to be rectified.
Cosmetic         Defects in the User Interface or Navigation.
Suggestion       A feature which can be added for betterment.
Priority Level of the Defect
The priority level describes the time within which the defect should be resolved. The priority levels can be classified as follows:

Classification    Description
Immediate         Resolve the defect with immediate effect.
At the Earliest   Resolve the defect at the earliest, on priority at the second level.
Normal            Resolve the defect.
Later             Could be resolved at a later stage.
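A minimal sketch in Python of how these severity and priority classifications might be recorded against a defect; the enum and field names are hypothetical.

    from enum import Enum

    class Severity(Enum):
        CRITICAL = "Functionality block; the application cannot proceed any further"
        MAJOR = "Application not working as desired; variations in functionality"
        MINOR = "No failure reported, but the defect needs to be rectified"
        COSMETIC = "Defect in the user interface or navigation"
        SUGGESTION = "Feature that can be added for betterment"

    class Priority(Enum):
        IMMEDIATE = "Resolve the defect with immediate effect"
        AT_THE_EARLIEST = "Resolve on priority at the second level"
        NORMAL = "Resolve the defect"
        LATER = "Could be resolved at a later stage"

    defect = {
        "id": "DEF-0042",
        "summary": "Checkout button unresponsive on the payment page",
        "severity": Severity.CRITICAL,
        "priority": Priority.IMMEDIATE,
    }
    print(defect["id"], defect["severity"].name, defect["priority"].name)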
(Residue of a responsibilities table: activities owned by the Test Lead, Senior Test Engineers, and Test Engineers.)
9.0 Deliverables
The Deliverables from the Test team would include the following:
1. Test Strategy.
2. Test Plan.
3. Test Case Documents.
4. Defect Reports.
5. Status Reports (Daily/Weekly/Monthly).
6. Test Scripts (if any).
7. Metric Reports.
8. Product Sign-off Document.