Software Testing Techniques
Shivkumar et al., International Journal of Advanced Research in Computer Science and Software Engineering 2 (10), October 2012, pp. 433-438

Failure: The inability of a system or component to perform its required function within the specified performance requirement.
Error: The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.
Specification: A document that specifies, in a complete, precise, verifiable manner, the requirements, design, behaviour, or other characteristics of a system or component, and often the procedures for determining whether these provisions have been satisfied.

We observe errors, which can often be associated with failures. But the ultimate cause of the fault is often very hard to find.

II. OBJECTIVES
A good test case is one that has a high probability of finding an as yet undiscovered error. A good test is not redundant. A successful test is one that uncovers a yet undiscovered error. A good test should be best of breed, and should be neither too simple nor too complex. Testing checks whether the system does what it is expected to do, whether it is fit for purpose, and whether it meets the requirements and can be executed successfully in the intended environment. Testing is executing a program with the intent of finding an error.

Testing Type | Specification                            | General Scope                | Opacity              | Who does it?
Unit         | Low-level design: actual code            | Classes                      | White box            | Programmer
Integration  | Low-level design: high-level design      | Multiple classes             | White box; black box | Programmer
Function     | High-level design                        | Whole product                | Black box            | Independent tester
System       | Requirement analysis                     | Whole product in environment | Black box            | Independent tester
Acceptance   | Requirement analysis                     | Whole product in environment | Black box            | Customer
Beta         | Ad hoc                                   | Whole product in environment | Black box            | Customer
Regression   | Changed documentation; high-level design |                              |                      |

III. TESTING CYCLE
1. Requirements study
The testing cycle starts with the study of the client's requirements. Understanding the requirements is essential for testing the product.
2. Test Case Design and Development
Component identification, test specification design, and test specification review.
3. Test Execution and Evaluation
Code review, test execution, and performance and simulation.
4. Test Closure
Test summary report, project de-brief, and project documentation.
5. Test Process Analysis
Analysis is done on the reports, and the application's performance is improved by implementing new technology and additional features.
IV. LEVELS OF TESTING
1. Unit Testing
The most micro-scale of testing: tests done on particular functions or code modules. Requires knowledge of the internal program design and code. Done by programmers, not by testers.
Objectives: To test the function of a program or unit of code such as a program or module; to test internal logic; to verify internal design; to test path and condition coverage; to test exception conditions and error handling.
When: After modules are coded.
Who: Developer.
Input: Internal application design, master test plan, unit test plan.
Output: Unit test report.
Methods: White box testing techniques, test coverage techniques.
Tools: Debuggers, re-structurers, code analyzers, path/statement coverage tools.
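As a minimal sketch of unit testing in this spirit (the `apply_discount` function and its tests are hypothetical, invented for illustration), each test isolates one behaviour of a single unit, including the exception path noted in the objectives above:

```python
# Hypothetical unit under test: a small, self-contained function.
def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid input."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or percent")
    return price * (100 - percent) / 100

# Unit tests: typical value, boundary value, and error handling.
def test_typical_value():
    assert apply_discount(200, 10) == 180

def test_boundary_zero_percent():
    assert apply_discount(50, 0) == 50

def test_error_handling():
    # Exception conditions are part of the unit's contract, too.
    try:
        apply_discount(50, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Such tests are typically collected and run by a framework such as unittest or pytest; plain asserts keep the sketch dependency-free.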
2. Integration Testing
Incremental integration testing is the continuous testing of an application as new functionality is added; the application's functional aspects must be independent enough to work separately before completion of development. Done by programmers or testers. Integration testing proper involves building a system from its components and testing it for problems that arise from component interactions. Top-down integration: develop the skeleton of the system and populate it with components. Bottom-up integration: integrate infrastructure components, then add functional components. To simplify error localization, systems should be integrated incrementally.
Objectives: To technically verify proper interfacing between modules and within sub-systems.
When: After modules are unit tested.
Who: Developers.
Input: Internal and external application design, master test plan, integration test plan.
Output: Integration test report.
Methods: White and black box techniques, problem/configuration management.
Tools: Debuggers, re-structurers, code analyzers.
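A bottom-up integration sketch (both components below are hypothetical, invented for illustration): the infrastructure component is built and exercised first, the functional component is then integrated on top of it, and the test targets the interface between the two, which is where interaction faults surface:

```python
class InMemoryStore:
    """Infrastructure component: a minimal key-value store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class OrderService:
    """Functional component built on the store; the integration point under test."""
    def __init__(self, store):
        self.store = store
    def place_order(self, order_id, item):
        if self.store.get(order_id) is not None:
            raise ValueError("duplicate order id")
        self.store.put(order_id, item)
        return order_id

def test_service_and_store_integration():
    store = InMemoryStore()
    service = OrderService(store)
    service.place_order("o1", "book")
    assert store.get("o1") == "book"      # data flowed across the interface
    try:
        service.place_order("o1", "pen")  # duplicate must be rejected
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Integrating one component at a time this way keeps error localization simple, as the text recommends.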
3. Functional Testing
Black box testing geared to the functional requirements of an application. Done by testers.
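As a small functional example using only the standard library, the test cases below are derived purely from the stated requirement for leap years (divisible by 4, except centuries unless divisible by 400), with no reference to how `calendar.isleap` is implemented:

```python
import calendar

# Black box tests: one case per requirement clause, chosen without
# looking at the implementation.
assert calendar.isleap(2012) is True    # divisible by 4
assert calendar.isleap(2011) is False   # not divisible by 4
assert calendar.isleap(1900) is False   # century not divisible by 400
assert calendar.isleap(2000) is True    # century divisible by 400
```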
4. System Testing
Objectives: To verify that the system components perform control functions; to perform inter-system tests; to demonstrate that the system performs both functionally and operationally as specified; to perform appropriate types of tests relating to transaction flow, installation, reliability, regression, etc.
When: After integration testing.
Who: Development team and users.
Input: Detailed requirements and external application design, master test plan, system test plan.
Output: System test report.
Methods: Problem/configuration management.
Tools: Recommended set of tools.
5. Acceptance Testing
Objectives: To verify that the system meets the user requirements.
When: After system testing.
Who: User / end user.
Input: Business needs and detailed requirements, master test plan, user acceptance test plan.
Output: User acceptance test report.
Methods: Black box techniques, problem/configuration management.
Tools: Compare, keystroke capture and playback, regression testing.
6. Beta Testing
Relied on too heavily by large vendors, like Microsoft. Early adopters are allowed easy access to a new product on the condition that they report errors to the vendor. A good way to stress-test a new system.
7. End-to-End Testing
Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use.
8. Regression Testing
Re-testing after fixes or modifications of the software or its environment.
9. Sanity Testing
An initial effort to determine whether a new software version is performing well enough to accept it for a major testing effort.
10. Load Testing
Testing an application under heavy loads, e.g. testing a web site under a range of loads to determine when the system's response time degrades or fails.
11. Stress Testing
Testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database, etc. The term is often used interchangeably with load and performance testing.
12. Performance Testing
Testing how well an application complies with performance requirements.
13. Install/Uninstall Testing
Testing of full, partial, or upgrade install/uninstall processes.
14. Recovery Testing
Testing how well a system recovers from crashes, hardware failures, or other problems.
15. Compatibility Testing
Testing how well software performs in a particular hardware/software/OS/network environment.
16. Loop Testing
A white box technique that focuses on the validity of loop constructs. Four classes of loops can be defined: (1) simple loops, (2) nested loops, (3) concatenated loops, and (4) unstructured loops.
17. Recovery Test
Confirms that the system recovers from expected or unexpected events without loss of data or functionality, e.g. shortage of disk space, unexpected loss of communication, or power-out conditions.

V. TEST PLAN
A]. Purpose of Preparing a Test Plan
To validate the acceptability of a software product, and to help people outside the test group understand the why and how of product validation. A test plan should be thorough enough (overall coverage of the tests to be conducted) and useful and understandable by people both inside and outside the test group.
B]. Scope
The areas to be tested by the QA team. Also specify the areas that are out of scope (screens, database, mainframe processes, etc.).
C]. Test Approach
Details on how the testing is to be performed, including any specific strategy to be followed (such as configuration management).

VI. TESTING METHODOLOGIES AND TYPES

Here, I have considered the two testing methodologies mentioned above: 1. black box testing and 2. white box testing.

1. BLACK BOX TESTING
No knowledge of the internal design or code is required; tests are based on requirements and functionality.
1.1 Black Box Testing Technique
Black box testing attempts to find incorrect or missing functions, interface errors, errors in data structures or external database access, performance errors, and initialization and termination errors.
1.2 Black Box / Functional Testing
Based on requirements and functionality, not on any knowledge of internal design or code; covers all combined parts of a system; tests are data driven.

2. WHITE BOX TESTING
Knowledge of the internal program design and code is required; tests are based on coverage of code statements, branches, paths, and conditions.
2.1 White Box Testing Technique
Exercise all independent paths within a module at least once; exercise all logical decisions on their true and false sides;
execute all loops at their boundaries and within their operational bounds; exercise internal data structures to ensure their validity.
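This checklist can be made concrete with a short sketch (the `sum_positive` function is hypothetical, invented for illustration): the tests take the one decision on both its true and false sides, and run the loop for zero, one, and several iterations:

```python
# Hypothetical unit: sums only the positive values in a list.
def sum_positive(values):
    total = 0
    for v in values:          # loop under test
        if v > 0:             # decision under test
            total += v
    return total

# Decision exercised on its true and false sides:
assert sum_positive([5]) == 5        # v > 0 true
assert sum_positive([-5]) == 0       # v > 0 false

# Loop executed at its boundaries and within operational bounds:
assert sum_positive([]) == 0             # zero iterations
assert sum_positive([1]) == 1            # exactly one iteration
assert sum_positive([1, -2, 3, 4]) == 8  # typical run, several iterations
```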
2.2 White Box Testing / Structural Testing
Based on knowledge of the internal logic of an application's code; based on coverage of code statements, branches, paths, and conditions; tests are logic driven.
2.3 Other White Box Techniques
Statement coverage: execute all statements at least once. Example:

    A + B
    If (A = 3) Then
        B = X + Y
    End-If
    While (A > 0) Do
        Read (X)
        A = A - 1
    End-While-Do

Decision coverage: execute each decision direction at least once. Example:

    If A < 10 or A > 20 Then
        B = X + Y
    End-If

Condition coverage: execute each decision with all possible outcomes at least once. Example:

    A = X
    If (A > 3) or (A < B) Then
        B = X + Y
    End-If-Then
    While (A > 0) and (Not EOF) Do
        Read (X)
        A = A - 1
    End-While-Do

VII. CONCLUSION
Testing can show the presence of faults in a system; it cannot prove there are no remaining faults. Component developers are responsible for component testing; system testing is the responsibility of a separate team. Integration testing is testing increments of the system; release testing involves testing a system to be released to a customer. Use experience and guidelines to design test cases in defect testing. Interface testing is designed to discover defects in the interfaces of composite components. Equivalence partitioning is a way of discovering test cases - all cases in a partition should behave in the same way. Structural analysis relies on analysing a program and deriving tests from this analysis. Test automation reduces testing costs by supporting the test process with a range of software tools.
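As a brief illustration of equivalence partitioning (the `grade` validator below is hypothetical, invented for illustration): the input domain splits into four partitions, two valid and two invalid, and one representative per partition is tested, on the assumption that all cases in a partition behave alike:

```python
# Hypothetical validator whose input domain partitions cleanly.
def grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# One representative per equivalence partition:
assert grade(75) == "pass"    # valid partition: 50..100
assert grade(20) == "fail"    # valid partition: 0..49
for bad in (-5, 120):         # invalid partitions: < 0 and > 100
    try:
        grade(bad)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```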
2012, IJARCSSE All Rights Reserved