
International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR), ISSN 2249-6831, Vol. 3, Issue 3, Aug 2013, 327-340. © TJPRC Pvt. Ltd.

SOFTWARE RELIABILITY USING SOFTWARE METRICS AND SOFTWARE FAULT ANALYSIS


INDU SHARMA¹ & PARVEEN BANO²

¹Department of Computer Science and Engineering, PDM College of Engineering, Bahadurgarh, Haryana, India

²Assistant Professor, PDM College of Engineering, Bahadurgarh, Haryana, India

ABSTRACT
Software measurement is necessary for achieving the basic management objectives of prediction, progress, and process improvement. Measurement permeates everyday life and is an essential part of every scientific and engineering discipline. It allows the acquisition of information that can be used for developing theories and models, and for devising, assessing, and using methods and techniques. Software measurement is a way to track the development process. As Grady states, "Without such measures for managing software, it is difficult for any organization to understand whether it is successful, and it is difficult to resist frequent changes of strategy."

KEYWORDS: Reliability-Relevant Software Metrics, Software Reliability Growth Models, Proposed Model, Implementation Model, Existing Model, Research Design Algorithm, Weightage Assignment to Risk, Calculation of Different Risk Factors

INTRODUCTION
Software has become so essential to humans in their daily lives that today it is difficult to imagine living without devices controlled by software. Software reliability and quality have become the primary concerns during software development. It is difficult to produce fault-free software because of problem complexity, the complexity of human behavior, and resource constraints. Failure is an unavoidable phenomenon in all technological products and systems. System failures caused by software failure are very common and result in undesirable consequences that can adversely affect both the reliability and the safety of the system. IEEE defines software reliability as the probability of a software system or component performing its intended function under the specified operating conditions over a specified period of time. In other words, it is the probability of failure-free software operation for a specified period of time in a specified environment. To achieve high software reliability, the number of faults in delivered code should be kept to a minimum.

Software Reliability Includes the Following Parameters


Software Measurement: Software measurement is necessary for achieving the basic management objectives of prediction, progress, and process improvement.

Software Metrics: Metrics used to evaluate software processes, products, and services are referred to as software metrics.

Software Defect: A defect (or fault, or bug) is the result of an entry of erroneous information into software. Defects are raised when expected and actual test results differ.


Key Issues in Software Reliability


The nature of software engineering requires measurements to be made, since more rigorous methods for production planning, monitoring, and control are needed; otherwise the risk level of software projects may become immense, and software production may slip out of industrial control. This results in damage to both software producers (e.g., higher costs, schedule slippage) and users (e.g., poor-quality products, late product delivery, and high prices). To be effective, and to make proper use of the resources devoted to it, software measurement should deal with important development issues, i.e., measurement should be of industrial interest. In this regard, software measurement may serve several purposes, depending on the knowledge available about a process or product.

Software Measurement
Measurement is introduced by information technology organizations to better understand, evaluate, control, and predict software processes. Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules. Measures fall into two categories: (a) direct measures, which cover software process attributes (e.g., cost and effort applied) and product attributes (e.g., lines of code (LOC) produced, execution speed, and defect rate); and (b) indirect measures, which cover product functionality, complexity, efficiency, reliability, maintainability, and many others.

Software Measurement Process


The software measurement process must be an objective, orderly method for quantifying, assessing, adjusting, and ultimately improving the development process. Data are collected based on known development issues, concerns, and questions. The measurement process is used to assess quality, progress, and performance throughout all life cycle phases. It includes collecting and receiving actual data (not just graphs or indicators) and analyzing those data. The key components of an effective measurement process are: clearly defined software development issues and the measures (data elements) needed to provide insight into those issues; processing of collected data into graphical or tabular reports (indicators) to aid in issue analysis; and analysis of indicators to provide insight into development issues.

Software Development
Software engineering is an engineering discipline concerned with all aspects of software production. It is a systematic approach to the analysis, design, assessment, implementation, testing, and maintenance of software, and it is concerned with theories, methods, and tools for software development. The software development process transforms a user's needs into software, and this process can integrate a subset of the following phases.

Requirement Phase: The first main step in software development is gathering the requirements. The software requirements can change depending on the software product being developed. After analysis of the system requirements, the next step is to analyze the software requirements.


Design Phase: The design phase is an important phase of the system development life cycle. In this phase the database design, the choice of architecture, the functional specification, and the low-level and high-level design documents are produced.

Coding Phase: On the basis of the design documents, the coding is done. Small modules of the software are prepared and then combined together.

Testing Phase: Software that is not tested properly may be unreliable or of bad quality. In this phase the developed system is tested thoroughly, and reports are prepared on whether the system contains bugs or errors.

Software Testing
Software testing is a process of error detection that provides customers or stakeholders with information about the quality of the product or service under test. It validates and verifies that a software product meets the requirements that guided its design and development.

Testing Methods
White-Box Testing: In white-box testing the tester has access to the internal data structures and algorithms, including the code that implements them; the tester can test the whole code.

Black-Box Testing: Black-box testing treats the software as a black box, without any knowledge of its internal implementation.

Specification-Based Testing: Specification-based testing aims to test the functionality of software according to the applicable requirements.

Reliability
Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment. It is also an important factor affecting system reliability. Software reliability differs from hardware reliability in that it reflects design perfection rather than manufacturing perfection.

There are Two Types of Reliability


Hardware Reliability: Hardware reliability is the ability of hardware to perform its functions for some period of time without failure. It is expressed as MTBF, which means mean time between failures. Hardware reliability is described by the bath-tub curve.

Software Reliability: Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment. It is also an important factor affecting system reliability, and it differs from hardware reliability in that it reflects design perfection rather than manufacturing perfection.
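During the flat, constant-failure-rate portion of the bath-tub curve, hardware reliability is commonly modelled as R(t) = exp(-t / MTBF). A minimal sketch (the values are illustrative, not from the paper):

```python
import math

def reliability(t_hours, mtbf_hours):
    """Probability of surviving t_hours without failure, assuming a
    constant failure rate of 1/MTBF (exponential model)."""
    return math.exp(-t_hours / mtbf_hours)

# A unit with MTBF = 10,000 hours operated for 1,000 hours survives
# with probability exp(-0.1), roughly 0.905.
r = reliability(1000, 10000)
```

Note that at t = MTBF the survival probability is only exp(-1) ≈ 0.37, not 0.5; MTBF is a mean, not a median.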

Software Reliability Metrics


Product Metrics: Product metrics measure the size of the software, for example as total lines of code (LOC) or thousand lines of code (KLOC). The LOC count includes only the executable lines of code, not comments.

Project Management Metrics: Project management metrics cover the management of the software project; if a project is well managed, the resulting software tends to be of good quality.

Process Metrics: The quality of the product is largely determined by the process used to build it. Process metrics are used to estimate, monitor, and improve reliability, and also to improve the quality of the software.

Fault and Failure Metrics: Faults are found during the testing process; a failure occurs when the system stops working. The main purpose of fault and failure metrics is to determine whether the software is failure-free. Failure metrics are based mainly on customer reports of failures found after release of the software.
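One standard way product and fault metrics are combined (not spelled out above, but conventional) is defect density, the number of defects per KLOC. A small illustrative sketch:

```python
def defect_density(defects_found, loc):
    """Defects per thousand lines of code (KLOC).

    `loc` is the executable line count described under Product Metrics;
    the example figures below are invented for illustration."""
    return defects_found / (loc / 1000.0)

# 45 defects found in a 30,000-line product -> 1.5 defects per KLOC.
d = defect_density(45, 30000)
```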

There are Two Types of Software Reliability Models


Prediction Models: Prediction models use historical data. In the software development life cycle they are mainly used prior to the development or test phases, and can be applied as early as the concept phase; they predict reliability at some future time.

Estimation Models: Estimation models use data from the current software development effort. They are usually used later in the life cycle, after some data have been collected, and are not typically used during the early development phases.

Reliability Growth Models


There are six software reliability models, each used for different purposes. These models are explained below.

Jelinski-Moranda Model: The Jelinski-Moranda model, proposed in 1972, is one of the earliest software reliability models. It is a time-between-failures model. It assumes there are N software faults at the start of testing, that failures occur purely at random, and that all faults contribute equally to causing a failure during testing. It also assumes that the fix time is negligible and that the fix for each failure is perfect, so the product's failure rate improves by the same amount at each fix. The hazard function at the time of the i-th failure is

Z(t_i) = φ[N - (i - 1)]

where N is the number of software defects at the beginning of testing and φ is a proportionality constant (the failure rate contributed by each fault).
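The Jelinski-Moranda hazard function is simple enough to compute directly. A sketch, with illustrative values of N and φ that are not from the paper:

```python
def jm_hazard(i, n_faults, phi):
    """Jelinski-Moranda hazard rate before the i-th failure:
    Z(t_i) = phi * (N - (i - 1)).
    Each (assumed perfect) fix lowers the rate by exactly phi."""
    return phi * (n_faults - (i - 1))

# With N = 100 initial faults and phi = 0.01, the rate starts at 1.0
# and decreases linearly by 0.01 per fix.
rates = [jm_hazard(i, 100, 0.01) for i in (1, 51, 100)]
```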


Littlewood's Model: It is similar to the Jelinski-Moranda model, but it assumes that different faults have different sizes; large defects tend to be detected and fixed earlier. As the number of errors is driven down by the progress of testing, so is the average error size, causing a law of diminishing returns in debugging.

Goel-Okumoto Imperfect Debugging Model: The J-M model assumes that fix time is negligible and that the fix for each failure is perfect, i.e., perfect debugging. In practice this is not always the case: in the process of fixing a defect, new defects may be injected. This model proposes imperfect debugging to overcome that assumption. The hazard function during the interval between the (i-1)-st and the i-th failure is

Z(t_i) = λ[N - p(i - 1)]

where N is the number of faults at the start of testing, p is the probability of imperfect debugging, and λ is the failure rate per fault.

Goel-Okumoto Non-Homogeneous Poisson Process Model: The NHPP model, proposed in 1979, is concerned with modeling the number of failures observed in given testing intervals. The cumulative number of failures observed by time t, N(t), is modeled as a non-homogeneous Poisson process with a time-dependent failure rate.

Musa-Okumoto Logarithmic Poisson Execution Time Model: In this model the number of failures observed by a certain time is also assumed to follow an NHPP, but its mean value function is different: it reflects the observation that later fixes have a smaller effect on software reliability than earlier ones. The model consists of two components, an execution time component and a calendar time component. Its mean value function is

μ(τ) = (1/θ) ln(λ₀θτ + 1)

where λ₀ is the initial failure intensity and θ is the rate of reduction in the normalized failure intensity per failure.

The Delayed S and Inflection S Models: Studying the defect removal process, Yamada observed that a testing process consists not only of a defect detection process but also of a defect isolation process: because of the time needed for failure analysis, a significant delay can occur between the first observation of a failure and the time it is reported. The delayed S-shaped reliability growth model covers such a process, in which the observed growth curve of the cumulative number of detected defects is S-shaped. The model is based on the non-homogeneous Poisson process but uses a different mean value function to reflect the delay in failure reporting:

m(t) = k[1 - (1 + γt)e^(-γt)]

where t is time, γ is the error detection rate, and k is the total number of defects (the total cumulative defect rate).
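The two mean value functions above can be sketched directly; the parameter values in the comments are illustrative, not from the paper:

```python
import math

def musa_okumoto_mean(tau, lam0, theta):
    """Musa-Okumoto expected failures by execution time tau:
    mu(tau) = (1/theta) * ln(lam0 * theta * tau + 1)."""
    return math.log(lam0 * theta * tau + 1.0) / theta

def delayed_s_mean(t, k, gamma):
    """Delayed S-shaped expected defects detected by time t:
    m(t) = k * (1 - (1 + gamma*t) * exp(-gamma*t))."""
    return k * (1.0 - (1.0 + gamma * t) * math.exp(-gamma * t))

# Both start at 0; Musa-Okumoto grows logarithmically (diminishing
# returns), while the delayed S curve saturates at k total defects.
mu = musa_okumoto_mean(100.0, lam0=10.0, theta=0.05)
m = delayed_s_mean(50.0, k=100.0, gamma=0.1)
```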


RELATED WORK
In 2011, Zigmund Bluvband described two advanced analytical models for obtaining accurate software reliability predictions. The first model can capture some specific features of the software testing process and is based on the well-known S-shaped Ohba model; this advanced model is applicable only to non-rare-bug testing. In 2011, Akira Hada presented various models that estimate reliability for a future profile with increased stress, using current observations to develop a model of future reliability. In 2011, Kuei-Chen Chiu noted that over the last two decades various software reliability growth models (SRGMs) have been proposed, and that there has been a gradual but marked shift in the balance between software reliability and software testing cost in recent years. In 2011, Shuanqi Wang proposed two novel SRGMs that incorporate the effect of test coverage by using failure data and test coverage simultaneously: one is continuous in testing time, and the other is discrete with respect to the number of executed test cases rather than testing time. Sultan Aljahdali, in "Improved Software Reliability Prediction through Fuzzy Logic Modeling", presented a new approach to software reliability assessment using fuzzy logic. A series of fuzzy logic models associated with different time segments can be used directly as a piecewise linear model for reliability assessment and problem identification, producing meaningful results early in the testing process. The model was applied to three different applications, using the normalized root mean square error as the evaluation criterion; the results show that the adopted fuzzy model has good predictive capability. In "Predicting the Reliability of Software Systems Using Fuzzy Logic", Aljahdali explored the use of fuzzy logic to build an SRGM: the proposed fuzzy model consists of a collection of linear sub-models joined together smoothly using fuzzy membership functions. Khalaf Khatatneh, in "Software Reliability Modeling Using Soft Computing Technique", explored a model for software reliability prediction implemented with the fuzzy logic technique and applied to a custom set of test data; the model is characterized as a growth reliability model.

APPROACHES USED
A number of different approaches have been used to detect and remove faults in software. We use several of them here: the software defects and object-oriented approach, the software reliability growth model approach, and the proposed model, implementation, and existing model approaches.

Software Defects and Object-Oriented Approach: Object-oriented design is becoming more popular in software development environments, and object-oriented design metrics are an essential part of them. The main objective of analyzing these metrics is to improve the quality of the software; detection and removal of defects prior to customer delivery is extremely important in software development. Below we first discuss the Chidamber and Kemerer metrics suite, then the methodology and analysis used to answer the questions mentioned above [1998].

CK Metrics Suite [2003-2009]


Weighted Methods per Class (WMC): WMC measures the complexity of a class and is a predictor of how much time and effort are required to develop and maintain it. A large number of methods also means a greater potential impact on derived classes, since the derived classes inherit the methods of the base class. A high WMC leads to more faults, which increases the density of bugs and decreases quality [2009].

Depth of Inheritance Tree (DIT): DIT is the length of the maximum path from a node to the root of the inheritance tree. The deeper a class is in the hierarchy, the more methods and variables it is likely to inherit, making it more complex; high depth indicates greater design complexity, and a system with many inheritance layers can be hard to understand [2009].

Number of Children (NOC): NOC equals the number of immediate child classes derived from a base class, and it measures the breadth of a class hierarchy. A high NOC can indicate several things: high reuse of the base class, a base class that may require more testing, improper abstraction of the parent class, etc. [2009].

Coupling Between Objects (CBO): Two classes are coupled when methods declared in one class use methods or instance variables defined by the other class. Multiple accesses to the same class are counted as one access; only method calls and variable references are counted. An increase in CBO indicates that the reusability of a class will decrease, and high coupling has been found to indicate fault proneness; thus the CBO value for each class should be kept as low as possible [2009].
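DIT and NOC can be computed mechanically from a class hierarchy. A sketch using Python introspection on a hypothetical hierarchy (the class names are invented for illustration):

```python
def dit(cls):
    """Depth of Inheritance Tree: length of the longest path from cls
    up to the root of the hierarchy (object has depth 0)."""
    return max((dit(base) for base in cls.__bases__), default=-1) + 1

def noc(cls):
    """Number of Children: count of immediate subclasses."""
    return len(cls.__subclasses__())

# Hypothetical hierarchy: Shape -> {Circle, Square}, Square -> RoundedSquare.
class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass
class RoundedSquare(Square): pass
```

Here noc(Shape) is 2 (Circle and Square), and dit(RoundedSquare) is 3 (RoundedSquare -> Square -> Shape -> object).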

Software Reliability Model


The candidate models are those described above: the Jelinski-Moranda model, Littlewood's model, the Goel-Okumoto imperfect debugging model, and the Goel-Okumoto non-homogeneous Poisson process model.

Model Implementation: There are many models for measuring software reliability, and we cannot use all of them at the same time. For example, in the design phase we can use early prediction models or architecture-based models. These models have further sub-models, more than 20 in all, and we cannot use all of them for the prediction of software reliability.

How the Algorithm Assigns Weights: The weights are assigned on the basis of the input desired by the model, the nature of the project, the output desired by the user, testing, and validation.

Precautions: To choose the best model for finding software reliability, first carefully identify the SDLC phase, then assign the weights properly; if the weights are assigned incorrectly, a wrong result will be obtained. Determine the nature of the project, the input desired by the model, the output desired by the user, and the testing process properly. For example, if three of four inputs are available but we assign weights to only two of them, a wrong result will be obtained.

How to Choose a Model: There are many software reliability models, and we cannot use all of them. The model is chosen on the basis of the SDLC phase; for example, if the SDLC phase is the implementation phase, we can use an early prediction model or an architecture-based model.
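The selection step itself reduces to picking the candidate with the highest total weight. A minimal sketch; the model names and weight values are hypothetical stand-ins for the paper's standard and relevant weights:

```python
def select_model(weights):
    """Return the reliability model whose summed weight is highest.

    `weights` maps model name -> total weight accumulated from the
    SDLC phase, available inputs, and desired outputs."""
    return max(weights, key=weights.get)

# Hypothetical weights for one project:
candidates = {
    "early prediction model": 0.6,
    "architecture-based model": 0.8,
    "reliability growth model": 0.5,
}
best = select_model(candidates)
```

Note the precaution above: if some inputs are left unweighted, the totals in `candidates` are wrong and so is the selection.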

RESEARCH DESIGN
The research design consists of the following steps:

Generate the test sequence for the software project.
Define the fault parameters that can influence the test and the reliability.
Define the fault categories and subcategories.
Assign a weightage to each subcategory.
Define the number of faults corresponding to each test case.
Define the fuzzy rule base for the software faults.
Estimate the software risk based on the defined rule set.
Analyze the results.

Weightage Assignment to Risk: Here we have divided all kinds of faults into different categories and assigned each kind of fault a specific weightage between 0 and 1, where 0 represents the least critical fault and 1 represents the most critical fault.

Table 1
Fault   Weightage
F1      .1
F2      .2
F3      .3
F4      .4
F5      .5
F6      .6
F7      .7
F8      .8
F9      .9
F10     1

Existing Model: According to the existing model, the fault occurrence in a test case is defined by the weightage sum of all its faults. This sum value represents the test case criticality.

Table 2
Test Case   Faults              Weight                    Category
T1          F1,F2,F3            .1+.2+.3 = .6             Low Risk
T2          F1,F2,F3,F4         .1+.2+.3+.4 = 1           Low Risk
T3          F1,F2,F3,F6,F7      .1+.2+.3+.6+.7 = 1.9      Low Risk
T4          F1,F6,F9            .1+.6+.9 = 1.6            Medium Risk
T5          F1,F2,F3            .1+.2+.3 = .6             Medium Risk
T6          F1,F2,F6,F9         .1+.2+.6+.9 = 1.8         Medium Risk
T7          F1,F3,F4,F8,F10     .1+.3+.4+.8+1 = 2.6       Critical
T8          F1,F3,F4,F5,F8      .1+.3+.4+.5+.8 = 2.1      Critical
T9          F1,F3,F10           .1+.3+1 = 1.4             Critical
T10         F8,F9,F10           .8+.9+1 = 2.7             Critical

Proposed Model: In the proposed model, the fault relationship is described in terms of associations between faults. The association is expressed with two operators, AND and OR: the AND operator represents the occurrence of all faults, and OR represents the occurrence of any fault.
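The existing model's criticality computation is just the weighted sum from Table 2. A sketch using the Table 1 weightages (the function name is ours):

```python
# Weightages from Table 1: F1 = .1, F2 = .2, ..., F10 = 1.
FAULT_WEIGHT = {"F%d" % i: i / 10 for i in range(1, 11)}

def criticality(faults):
    """Existing model: a test case's criticality is the sum of the
    weightages of all the faults it raises (Table 2)."""
    return round(sum(FAULT_WEIGHT[f] for f in faults), 2)

t1 = criticality(["F1", "F2", "F3"])    # row T1: .1+.2+.3 = 0.6
t10 = criticality(["F8", "F9", "F10"])  # row T10: .8+.9+1 = 2.7
```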


Table 3
Test Case   Faults                                    Weight
T1          F1&F2 = max(F1,F2)                        0.2
T2          F6||F7 = min(F6,F7)                       0.6
T3          ((F3&F5)||F8) = min(max(F3,F5),F8)        0.5
T4          ((F1||F6)&F9)                             0.9
T5          F1&F2&F3&F4&F5                            0.5
T6          F6||F8||F9                                0.6
T7          F1||F2                                    .1
T8          F1||F2||F3                                .1
T9          F1&F2&F3                                  .3
T10         F8&F9&F10                                 1
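Table 3 scores an AND association with max() and an OR association with min(). A sketch reproducing two of its rows (helper names are ours):

```python
FAULT_WEIGHT = {"F%d" % i: i / 10 for i in range(1, 11)}
W = FAULT_WEIGHT

def AND(*weights):
    """All faults occur together; Table 3 scores this as max()."""
    return max(weights)

def OR(*weights):
    """Any one fault occurs; Table 3 scores this as min()."""
    return min(weights)

# Row T3: ((F3 & F5) || F8) = min(max(.3, .5), .8) = 0.5
t3 = OR(AND(W["F3"], W["F5"]), W["F8"])
# Row T4: ((F1 || F6) & F9) = max(min(.1, .6), .9) = 0.9
t4 = AND(OR(W["F1"], W["F6"]), W["F9"])
```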

MATLAB IMPLEMENTATION
MATLAB is a package that has been purpose-designed to make computations easy, fast, and reliable. It is installed on machines run by Bath University Computing Services (BUCS), which can be accessed in the BUCS PC Labs as well as from the PCs in the Library; these are SUN computers running the UNIX operating system. MATLAB started life in the 1970s as a user-friendly interface to certain clever but complicated programs for solving large systems of equations.

Features: MATLAB is an interactive system for doing numerical computations. A numerical analyst called Cleve Moler wrote the first version of MATLAB in the 1970s; it has since evolved into a successful commercial software package. MATLAB makes use of highly respected algorithms, so you can be confident about your results. Powerful operations can be performed using just one or two commands, and you can build up your own set of functions for a particular application.

Calculation of Different Risk Factors


Low Risk

function factor1 = LowRisk(Factor)
    a = .3; b = .5;
    if (Factor < a)
        factor1 = 1;
    elseif (Factor >= a && Factor < b)
        factor1 = (b - Factor) / (b - a);
    else
        factor1 = 0;
    end


High Risk

function factor = HighRisk(Factor)
    a = .5; b = .8; c = 1;
    if (Factor == b)
        factor = 1;
    elseif (Factor >= a && Factor < b)
        factor = (Factor - a) / (b - a);
    elseif (Factor >= b && Factor <= c)
        factor = (c - Factor) / (c - b);
    else
        factor = 0;
    end

Medium Risk

function factor = MediumRisk(Factor)
    a = .2; b = .5; c = .8;
    if (Factor == b)
        factor = 1;
    elseif (Factor >= a && Factor < b)
        factor = (Factor - a) / (b - a);
    elseif (Factor >= b && Factor < c)
        factor = (c - Factor) / (c - b);
    else
        factor = 0;
    end

Relation and Test Cases between Modules
The relations considered between modules are operations, attributes, and nested components.
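For readers without MATLAB, the three membership functions above can be sketched in Python as a direct translation under the same breakpoints (the Python function and parameter names are ours):

```python
def low_risk(x, a=0.3, b=0.5):
    """Translation of LowRisk: full membership below a, ramping to 0 at b."""
    if x < a:
        return 1.0
    if a <= x < b:
        return (b - x) / (b - a)
    return 0.0

def medium_risk(x, a=0.2, b=0.5, c=0.8):
    """Translation of MediumRisk: triangular membership peaking at b."""
    if x == b:
        return 1.0
    if a <= x < b:
        return (x - a) / (b - a)
    if b <= x < c:
        return (c - x) / (c - b)
    return 0.0

def high_risk(x, a=0.5, b=0.8, c=1.0):
    """Translation of HighRisk: triangular membership peaking at b."""
    if x == b:
        return 1.0
    if a <= x < b:
        return (x - a) / (b - a)
    if b <= x <= c:
        return (c - x) / (c - b)
    return 0.0
```

A risk score from Table 2 or Table 3 can then be classified by evaluating all three memberships and taking the largest.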

CONCLUSIONS AND FUTURE SCOPE


Conclusions: We conclude that a model selection algorithm can be used to decide which software reliability model should be applied. There are many models for measuring software reliability, and we cannot use all of them at the same time: for example, in the design phase we can use early prediction models or architecture-based models, and these models have more than 20 sub-models in all, which cannot all be used for the prediction of software reliability. We have designed an algorithm to select among the software reliability models. The algorithm assigns standard weights and relevant weights, and the model with the highest weight is selected. Finally, the algorithm is validated with an example showing that the model with the highest weight is indeed the one selected.

Future Scope: On the basis of this algorithm, a database should be developed to store information about the different models, with the most commonly used models stored in it. If, after applying the algorithm, the results are already stored in the database, it is easy for the user to know which model is best to use.

REFERENCES
1. Sultan Aljahdali, "Improved Software Reliability Prediction through Fuzzy Logic Modeling".
2. Sultan Aljahdali, "Predicting the Reliability of Software Systems Using Fuzzy Logic".
3. Khalaf Khatatneh, "Software Reliability Modeling Using Soft Computing Technique", European Journal of Scientific Research, ISSN 1450-216X.
4. Guo Junhong, "Software Reliability Nonlinear Modeling and Its Fuzzy Evaluation", 4th WSEAS Int. Conf. on Non-Linear Analysis, Non-Linear Systems and Chaos.
5. Sultan H. Aljahdali, "Employing Four ANN Paradigms for Software Reliability Prediction: An Analytical Study", ICGST-AIML Journal, ISSN 1687-4846.
6. Ajeet Kumar Pandey, "Fault Prediction Model by Fuzzy Profile Development of Reliability Relevant Software Metrics", International Journal of Computer Applications (0975-8887).
7. K. Krishna Mohan, "Selection of Fuzzy Logic Mechanism for Qualitative Software Reliability Prediction".
8. Jie Yang, "Managing Knowledge for Quality Assurance: An Empirical Study".
9. Michael R. Lyu, "Optimal Allocation of Test Resources for Software Reliability Growth Modeling in Software Development", IEEE Transactions on Reliability, 0018-9529/02, © 2002 IEEE.


10. A. Yadav, "Critical Review on Software Reliability Models", International Journal of Recent Trends in Engineering.
11. Sultan H. Aljahdali, "Employing Four ANN Paradigms for Software Reliability Prediction: An Analytical Study", ICGST-AIML Journal, ISSN 1687-4846.
12. Michael R. Lyu, "Optimization of Reliability Allocation and Testing Schedule for Software Systems".
13. J. O. Omolehin, "Graphics to Fuzzy Elements in Appraisal of an In-House Software Based on Inter-Failure Data Analysis", African Journal of Mathematics and Computer Science Research.
14. P. C. Jha, "A Fuzzy Approach for Optimal Selection of COTS Components for Modular Software System under Consensus Recovery Block Scheme Incorporating Execution Time", TJFS: Turkish Journal of Fuzzy Systems (eISSN 1309-1190).
15. Qiuying Li, "A Software Reliability Evaluation Method", 2011 International Conference on Computer and Software Modeling, IPCSIT.
16. N. Raj Kiran, "Software Reliability Prediction by Soft Computing Techniques", The Journal of Systems and Software.
17. B. Hailpern, "Software Debugging, Testing, and Verification".
18. E. A. Zanaty, "Improving Fuzzy Algorithms for Automatic Magnetic Resonance Image Segmentation".
19. Kevin K. F. Yuen, "Evaluating Software Quality of Vendors Using Fuzzy Analytic Hierarchy Process", Proceedings of the International MultiConference of Engineers and Computer Scientists 2008 (IMECS 2008).
