
Master in Business Administration (MBA), SEM III, MI0033 Software Engineering, 4 Credits, Assignment Set 1 (60 Marks)

Answer the following (6 x 10 = 60 Marks)


1. Discuss the CMM 5 Levels for Software Process.

Ans 1: The Capability Maturity Model (CMM) for Software describes the principles and practices underlying software process maturity. It is intended to help software organizations improve the maturity of their software processes along an evolutionary path from ad hoc, chaotic processes to mature, disciplined ones. The CMM is organized into five maturity levels:

1) Initial. The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.

2) Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

3) Defined. The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.

4) Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.

5) Optimizing. Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

2. Discuss the Waterfall model for Software Development.

Ans 2: Software products are oriented towards customers like any other engineering products. A product is either driven by the market or it drives the market. Customer Satisfaction was the main aim in the 1980s, Customer Delight is today's logo, and Customer Ecstasy is the buzzword of the new millennium. Products that are not customer oriented have no place in the market, even if they are designed using the best technology. The front end of the product is as crucial as the internal technology of the product. A market study is necessary to identify a potential customer's need. This process is also called market research. The already existing needs and the possible future needs are combined together for study. A lot of assumptions are made during the market study. Assumptions are very important factors at the start of a product's development, and unrealistic assumptions can cause the entire venture to nosedive. Although assumptions are conceptual, there should be an effort to develop tangible assumptions in order to move towards a successful product.

Once the market study is done, the customer's need is given to the Research and Development department to develop a cost-effective system that can potentially solve the customer's needs better than the competitors. Once the system is developed and tested in a hypothetical environment, the development team takes control of it. The development team adopts one of the software development models to develop the proposed system and delivers it to the customer. The basic popular models used by many software development firms are as follows:

A) System Development Life Cycle (SDLC) Model
B) Prototyping Model
C) Rapid Application Development (RAD) Model
D) Component Assembly Model

A) System Development Life Cycle (SDLC) Model: This is also called the Classic Life Cycle Model, the Linear Sequential Model, or the Waterfall Method. This model has the following activities:

1. System/Information Engineering and Modeling
2. Software Requirements Analysis
3. Systems Analysis and Design
4. Code Generation
5. Testing
6. Maintenance

1) System/Information Engineering and Modeling: Because software development is part of a larger process, work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when software must interface with other elements such as hardware, people and other resources. The system is the essential prerequisite for the existence of software in any entity. In some cases, for maximum output, the system should be re-engineered and spruced up. Once the ideal system is designed according to requirements, the development team studies the software requirements for the system.

2) Software Requirements Analysis: Software requirements analysis is also known as the feasibility study. In this phase, the development team visits the customer and studies their system requirements. They examine the need for possible software automation in the given system. After the feasibility study, the development team provides a document that holds the specific recommendations for the candidate system. It also contains personnel assignments, costs of the system, the project schedule and target dates. The requirements analysis and information gathering process is intensified and focused specifically on software. To understand what type of programs are to be built, the system analyst must study the information domain for the software as well as understand the required function, behavior, performance and interfacing. The main purpose of the requirements analysis phase is to find the need and to define the problem that needs to be solved.

3) System Analysis and Design: In the System Analysis and Design phase, the overall

software structure and its layout are defined. In the case of client/server processing technology, the number of tiers required for the package architecture, the database design, the data structure design, etc., are all defined in this phase. After the design part, a software development model is created. Analysis and design are very important in the whole development cycle; any fault in the design phase can be very expensive to fix later in the software development process. In this phase, the logical system of the product is developed.

4) Code Generation: In the Code Generation phase, the design must be translated into a machine-readable form. If the design of the software product is done in a detailed manner, code generation can be achieved without much complication. Programming tools such as compilers, interpreters and debuggers are used to generate the code, and different high-level programming languages such as C, C++, Pascal and Java are used for coding. The right programming language is chosen according to the type of application.

5) Testing: After the code generation phase, software testing begins. Different testing methods are available to detect the bugs that were introduced during the previous phases. A number of testing tools and methods are already available for this purpose.

6) Maintenance: Software will definitely undergo change once it is delivered to the customer. There are many reasons for change. Change can happen because of some unexpected input values into the system. In addition, changes in the system directly affect the software's operation. The software should be implemented to accommodate changes that may occur during the post-development period.

3. Explain the Different types of Software Measurement Techniques.

Ans 3: Software Testing Types:

Black box testing - Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing - This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. Internal software and code workings should be known for this type of testing. Tests are based on coverage of code statements, branches, paths and conditions.

Unit testing - Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses (a minimal example appears after this list of testing types).

Incremental integration testing - A bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing - Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per requirements or not. Black-box type testing geared to the functional requirements of an application.

System testing - The entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications and covers all combined parts of a system.

End-to-end testing - Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications or systems as appropriate.

Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes in initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Regression testing - Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.

Acceptance testing - Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

Load testing - A performance test to check system behavior under load. Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing - The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, for example input beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing - A term often used interchangeably with stress and load testing; checks whether the system meets performance requirements. Different performance and load tools are used for this.

Usability testing - A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user may get stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing - Tests full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - Can the system be penetrated by any hacking technique? Testing how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.

Compatibility testing - Testing how well software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.

Comparison testing - Comparison of a product's strengths and weaknesses with previous versions or other similar products.

Alpha testing - An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.

Beta testing - Testing typically done by end users or others; the final testing before releasing the application for commercial purposes.
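To make the unit-testing idea above concrete, here is a minimal, hypothetical sketch using Python's built-in unittest module. The apply_discount function and its business rules are invented purely for illustration; the point is that the tests are written by someone who knows the function's internal logic and boundary conditions.

```python
import unittest


def apply_discount(price, percent):
    """Return the price after applying a percentage discount.

    Raises ValueError for inputs that make no business sense.
    """
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percentage")
    return round(price * (1 - percent / 100.0), 2)


class ApplyDiscountTest(unittest.TestCase):
    # Unit tests exercising typical, boundary and invalid inputs.

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percentage_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```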

4. Explain the COCOMO Model & Software Estimation Technique.

Ans 4: The COCOMO cost estimation model is used by thousands of software project managers, and is based on a study of hundreds of software projects. Unlike other cost estimation models, COCOMO is an open model, so all of the details are published, including:

The underlying cost estimation equations
Every assumption made in the model (e.g. "the project will enjoy good management")
Every definition (e.g. the precise definition of the Product Design phase of a project)
The costs included in an estimate, which are explicitly stated (e.g. project managers are included, secretaries aren't)

Because COCOMO is well defined, and because it doesn't rely upon proprietary estimation algorithms, Costar offers these advantages to its users:

COCOMO estimates are more objective and repeatable than estimates made by methods relying on proprietary models

COCOMO can be calibrated to reflect your software development environment, and to produce more accurate estimates

Costar is a faithful implementation of the COCOMO model that is easy to use on small projects, and yet powerful enough to plan and control large projects. Typically, you'll start with only a rough description of the software system that you'll be developing, and you'll use Costar to give you early estimates about the proper schedule and staffing levels. As you refine your knowledge of the problem, and as you design more of the system, you can use Costar to produce more and more refined estimates.

Costar allows you to define a software structure to meet your needs. Your initial estimate might be made on the basis of a system containing 3,000 lines of code. Your second estimate might be more refined so that you now understand that your system will consist of two subsystems (and you'll have a more accurate idea about how many lines of code will be in each of the subsystems). Your next estimate will continue the process -- you can use Costar to define the components of each subsystem. Costar permits you to continue this process until you arrive at the level of detail that suits your needs.

One word of warning: it is so easy to use Costar to make software cost estimates that it is possible to misuse it -- every Costar user should spend the time to learn the underlying COCOMO assumptions and definitions from Software Engineering Economics and Software Cost Estimation with COCOMO II.

Introduction to the COCOMO Model
The most fundamental calculation in the COCOMO model is the use of the Effort Equation to estimate the number of Person-Months required to develop a project. Most of the other COCOMO results, including the estimates for Requirements and Maintenance, are derived from this quantity.

Source Lines of Code
The COCOMO calculations are based on your estimates of a project's size in Source Lines of Code (SLOC). SLOC is defined such that:

Only source lines that are DELIVERED as part of the product are included -- test drivers and other support software are excluded
SOURCE lines are created by the project staff -- code created by applications generators is excluded
One SLOC is one logical line of code
Declarations are counted as SLOC
Comments are not counted as SLOC
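The definition above is stated in terms of logical lines; as a rough, purely hypothetical illustration, the sketch below counts physical, non-comment lines of C-like code, which is the simpler of the two SLOC variants discussed later in this answer. Real SLOC counters handle logical statements, strings and comment edge cases far more carefully.

```python
def count_sloc(source: str) -> int:
    """Naive physical-line approximation of SLOC for C-like code.

    Counts non-blank lines that are not pure comments; declarations
    count, comments do not.
    """
    sloc = 0
    in_block_comment = False
    for line in source.splitlines():
        stripped = line.strip()
        if in_block_comment:
            if "*/" in stripped:
                in_block_comment = False
            continue
        if not stripped or stripped.startswith("//"):
            continue
        if stripped.startswith("/*"):
            if "*/" not in stripped:
                in_block_comment = True
            continue
        sloc += 1
    return sloc


example = """
int total = 0;          // a declaration counts
/* a comment block
   does not count */
for (int i = 0; i < n; i++) {
    total += i;
}
"""
print(count_sloc(example))   # 4
```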

The original COCOMO 81 model was defined in terms of Delivered Source Instructions (DSI), which are very similar to SLOC. The major difference between DSI and SLOC is that a single Source Line of Code may be several physical lines. For example, an "if-then-else" statement would be counted as one SLOC, but might be counted as several DSI.

The Scale Drivers
In the COCOMO II model, some of the most important factors contributing to a project's duration and cost are the Scale Drivers. You set each Scale Driver to describe your project; these Scale Drivers determine the exponent used in the Effort Equation.

The 5 Scale Drivers are:


Precedentedness
Development Flexibility
Architecture / Risk Resolution
Team Cohesion
Process Maturity

Note that the Scale Drivers have replaced the Development Mode of COCOMO 81. The first two Scale Drivers, Precedentedness and Development Flexibility, actually describe much the same influences as the original Development Mode did.

Cost Drivers
COCOMO II has 17 cost drivers: you assess your project, development environment, and team to set each cost driver. The cost drivers are multiplicative factors that determine the effort required to complete your software project. For example, if your project will develop software that controls an airplane's flight, you would set the Required Software Reliability (RELY) cost driver to Very High. That rating corresponds to an effort multiplier of 1.26, meaning that your project will require 26% more effort than a typical software project. COCOMO II defines each of the cost drivers and the Effort Multiplier associated with each rating. Check the Costar help for details about the definitions and how to set the cost drivers.

COCOMO II Effort Equation
The COCOMO II model makes its estimates of required effort (measured in Person-Months, PM) based primarily on your estimate of the software project's size (as measured in thousands of SLOC, KSLOC):

Effort = 2.94 * EAF * (KSLOC)^E

where EAF is the Effort Adjustment Factor derived from the Cost Drivers, and E is an exponent derived from the five Scale Drivers.

As an example, a project with all Nominal Cost Drivers and Scale Drivers would have an EAF of 1.00 and an exponent, E, of 1.0997. Assuming that the project is projected to consist of 8,000 source lines of code, COCOMO II estimates that 28.9 Person-Months of effort is required to complete it:

Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months

Effort Adjustment Factor
The Effort Adjustment Factor in the effort equation is simply the product of the effort multipliers corresponding to each of the cost drivers for your project. For example, if your project is rated Very High for Complexity (effort multiplier of 1.34), and Low for Language & Tools Experience (effort multiplier of 1.09), and all of the other cost drivers are rated to be Nominal (effort multiplier of 1.00), the EAF is the product of 1.34 and 1.09:

Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46
Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months

COCOMO II Schedule Equation
The COCOMO II schedule equation predicts the number of months required to complete your software project. The duration of a project is based on the effort predicted by the effort equation:

Duration = 3.67 * (Effort)^SE

where Effort is the effort from the COCOMO II effort equation, and SE is the schedule equation exponent derived from the five Scale Drivers.

Continuing the example, and substituting the exponent of 0.3179 that is calculated from the scale drivers, yields an estimate of just over a year, and an average staffing of between 3 and 4 people:

Duration = 3.67 * (42.3)^0.3179 = 12.1 months
Average staffing = (42.3 Person-Months) / (12.1 Months) = 3.5 people

The SCED Cost Driver
The COCOMO cost driver for Required Development Schedule (SCED) is unique, and requires a special explanation. The SCED cost driver is used to account for the observation that a project developed on an accelerated schedule will require more effort than a project developed on its optimum schedule. A SCED rating of Very Low corresponds to an Effort Multiplier of 1.43 (in the COCOMO II.2000 model) and means that you intend to finish your project in 75% of the optimum schedule (as determined by a previous COCOMO estimate). Continuing the example used earlier, but assuming that SCED has a rating of Very Low, COCOMO produces these estimates:

Duration = 75% * 12.1 Months = 9.1 Months
Effort Adjustment Factor = EAF = 1.34 * 1.09 * 1.43 = 2.09
Effort = 2.94 * (2.09) * (8)^1.0997 = 60.4 Person-Months
Average staffing = (60.4 Person-Months) / (9.1 Months) = 6.7 people

Notice that the calculation of duration isn't based directly on the effort (the number of Person-Months); instead it is based on the schedule that would have been required for the project assuming it had been developed on the nominal schedule. Remember that the SCED cost driver means "accelerated from the nominal schedule".
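The arithmetic above is easy to reproduce. The short Python sketch below simply re-implements the effort and schedule equations exactly as stated in this answer, using the coefficients 2.94 and 3.67 and the example exponents 1.0997 and 0.3179; these values belong to the worked example here and are not a general calibration for every project.

```python
def cocomo2_effort(ksloc, eaf=1.0, e=1.0997):
    """COCOMO II effort equation: Person-Months = 2.94 * EAF * KSLOC^E."""
    return 2.94 * eaf * ksloc ** e


def cocomo2_duration(effort_pm, se=0.3179):
    """COCOMO II schedule equation: Months = 3.67 * Effort^SE."""
    return 3.67 * effort_pm ** se


# Example 1: all Nominal cost drivers and scale drivers, 8 KSLOC.
effort = cocomo2_effort(8)                       # ~28.9 Person-Months
print(f"Nominal effort: {effort:.1f} PM")

# Example 2: Very High Complexity (1.34) and Low Language & Tools
# Experience (1.09), as in the worked example above.
eaf = 1.34 * 1.09                                # 1.46
effort = cocomo2_effort(8, eaf)                  # ~42.3 Person-Months
months = cocomo2_duration(effort)                # ~12.1 months
print(f"Adjusted effort: {effort:.1f} PM over {months:.1f} months, "
      f"average staff {effort / months:.1f}")
```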

There are many models for software estimation available and prevalent in the industry. Researchers have been working on formal estimation techniques since the 1960s. Early work in estimation was typically based on regression analysis or on mathematical models borrowed from other domains, while work during the 1970s and 1980s derived models from the historical data of various software projects. Among the many estimation models, expert estimation, COCOMO, Function Point Analysis and derivatives of function points such as Use Case Points and Object Points are the most commonly used. While Lines of Code (LOC) is the most commonly used size measure for 3GL programming and estimation of procedural languages, IFPUG FPA, originally invented by Allan Albrecht at IBM, has been adopted by most of the industry as an alternative to LOC for sizing the development and enhancement of business applications. FPA provides a measure of functionality based on the end user's view of the application software's functionality. Some of the commonly used estimation techniques are as follows:

Lines of Code (LOC): A formal method to measure size by counting the number of lines of code. Source Lines of Code (SLOC) has two variants - physical SLOC and logical SLOC. Because the two measures can vary significantly, care must be taken when comparing results from two different projects, and a clear guideline must be laid out for the organization.

IFPUG FPA: A formal method to measure the size of business applications. It introduces a complexity factor for size, defined as a function of inputs, outputs, queries, external interface files and internal logical files.

Mark II FPA: Proposed and developed by Charles Symons, and useful for measuring size for functionality in real-time systems where transactions have embedded data.

COSMIC Full Function Point (FFP): Proposed in 1999 and compliant with ISO 14143. Applicable for estimating business applications that have data-rich processing, where complexity is determined by the capability to handle large chunks of data, and real-time applications where functionality is expressed in terms of logic and algorithms.

Quick Function Point (QFP): Derived from FPA and uses expert judgment. Mostly useful for arriving at a ballpark estimate for budgetary and marketing purposes, or where a go/no-go decision is required during the project selection process.

Object Points: Best suited for estimating customizations. Based on a count of raw objects, the complexity of each object and weighted points.

COCOMO 2.0: Based on COCOMO 81, which was developed by Barry Boehm. The model is based on the motivations of software reuse, application generators, economies or diseconomies of scale and process maturity, and helps estimate effort for sizes calculated in terms of SLOC, FPA, Mark II FP or any other method.

Predictive Object Points: Tuned towards estimation of object-oriented software projects. Calculated based on weighted methods per class, the count of top-level classes, the average number of children, and the depth of inheritance.

Estimation by Analogy: The cost of a project is computed by comparing the project to a similar project in the same domain. The estimate is accurate if similar project data is available.

The estimation methods mentioned above use various factors that affect productivity or size, based on system characteristics. COCOMO I uses 15 productivity factors, while COCOMO II uses 23; IFPUG FPA uses 14 General System Characteristics to arrive at the adjusted function point count. Some of these methods are tuned to early estimation during the proposal, project selection or budget estimation phases, while others are fairly detailed. Selection of an estimation approach is based on the availability of historical data, the availability of trained estimators, the availability of tools, and the schedule and cost constraints on the estimation itself. Each estimation technique has its own advantages and disadvantages; however, the selection of a particular approach is based on the goal of the estimation.
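As an illustration of how the 14 General System Characteristics mentioned above feed into an adjusted function point count, the sketch below applies the standard IFPUG value adjustment factor, VAF = 0.65 + 0.01 x TDI, where TDI is the total degree of influence of the 14 ratings. The unadjusted count and the ratings are made-up numbers for a hypothetical project, not data from any real estimate.

```python
def adjusted_function_points(unadjusted_fp, gsc_ratings):
    """Apply the IFPUG value adjustment factor to an unadjusted count.

    gsc_ratings: 14 General System Characteristic ratings, each 0..5.
    VAF = 0.65 + 0.01 * TDI, so the adjustment ranges from 0.65 to 1.35.
    """
    if len(gsc_ratings) != 14 or any(not 0 <= r <= 5 for r in gsc_ratings):
        raise ValueError("expected 14 ratings, each between 0 and 5")
    tdi = sum(gsc_ratings)
    vaf = 0.65 + 0.01 * tdi
    return unadjusted_fp * vaf


# Hypothetical project: 320 unadjusted FP, mostly average (3) ratings.
ratings = [3] * 10 + [4, 4, 2, 2]               # TDI = 42, VAF = 1.07
print(adjusted_function_points(320, ratings))   # 342.4
```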

5. Write a note on Myths of Software.

Ans 5: Software Myths

Myths about software startups are hard to bust, since so few financial statistics about private companies are available. Take, for example, the prevailing assumption that a typical software company will achieve profit margins of 15% to 20% three to five years after raising its first round of VC. That's a projection many VCs have heard from entrepreneurs seeking funding. But how many companies actually achieve it? One of the few researchers that has collected data to test such assumptions is Sand Hill Group, which just released a study based on a survey of software company CEOs.

I was introduced to Sand Hill Group about five years ago by Rick Sherlund, the software analyst at Goldman Sachs. He had just come back from Enterprise, an annual software conference that Sand Hill organizes, and he was impressed by the number of heavyweight CEOs in attendance. With little fanfare, the event had become one of the industry's must-attend confabs. That's largely because of M.R. Rangaswami, Sand Hill's founder and a former marketing executive at erstwhile software maker Baan. Over 20 years in the business, Rangaswami has built up quite a network, which he taps for conferences and surveys.

Of the software executives questioned in Sand Hill's latest survey, 86% were from private companies, and 65% worked at companies with less than 100 employees. In other words, a lot of the respondents were from mid-stage, VC-backed software startups. When asked, "What is your company's budgeted net profit for the next 12 months?" the largest group of respondents, or 25% of the total, said less than 1%. The second largest group, or 23% of the total, said 5% to 9%. Clearly, that's far below the amount you'd expect by relying on conventional wisdom. Rangaswami says it may be time for conventional wisdom to change. "If no one is meeting the 10% to 15% profit assumption, why are your companies working with those numbers?" he says.

One reason why margins may be under pressure is the growing popularity of delivering software as a service over the Internet. Traditional software licensing is still the primary selling method for 54% of Sand Hill's respondents. But software-as-a-service and subscription licensing are the chief methods for 21% and 14%, respectively. When companies sell services and subscriptions, they recognize revenue from a sale in chunks over several years rather than in one lump sum. Though that approach creates predictable revenue streams for many years, it can also lower revenues in each quarter, which can cause thinner margins.

Master in Business Administration (MBA), SEM III, MI0033 Software Engineering, 4 Credits, Assignment Set 2 (60 Marks)
Answer the following (6 x 10 = 60 Marks)
1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.

Ans 1: Quality Concepts
It has been said that no two snowflakes are alike. Certainly when we watch snow falling it is hard to imagine that snowflakes differ at all, let alone that each flake possesses a unique structure. In order to observe differences between snowflakes, we must examine the specimens closely, perhaps using a magnifying glass. In fact, the closer we look, the more differences we are able to observe. This phenomenon, variation between samples, applies to all products of human as well as natural creation. For example, if two identical circuit boards are examined closely enough, we may observe that the copper pathways on the boards differ slightly in geometry, placement, and thickness. In addition, the location and diameter of the holes drilled in the boards vary as well. All engineered and manufactured parts exhibit variation. The variation between samples may not be obvious without the aid of precise equipment to measure the geometry, electrical characteristics, or other attributes of the parts. However, with sufficiently sensitive instruments, we will likely come to the conclusion that no two samples of any item are exactly alike.

Quality
Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, provided the product is manufactured according to specifications. Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance. In software development, quality of design encompasses requirements, specifications, and the design of the system. Quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high.

(Figure: Quality Control)

Quality Control
Variation control may be equated to quality control. But how do we achieve quality control? Quality control involves the series of inspections, reviews, and tests used throughout the software process to ensure each work product meets the requirements placed upon it. Quality control includes a feedback loop to the process that created the work product. The combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications. This approach views quality control as part of the manufacturing process. Quality control activities may be fully automated, entirely manual, or a combination of automated tools and human interaction. A key concept of quality control is that all work products have defined, measurable specifications to which we may compare the output of each process. The feedback loop is essential to minimize the defects produced.

Quality Assurance
Quality assurance consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through quality assurance identifies problems, it is management's responsibility to address the problems and apply the necessary resources to resolve quality issues.

Cost of Quality
The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost of quality studies are conducted to provide a baseline for the current cost of quality, identify opportunities for reducing the cost of quality, and provide a normalized basis of comparison. The basis of normalization is almost always dollars. Once we have normalized quality costs on a dollar basis, we have the necessary data to evaluate where the opportunities lie to improve our processes. Furthermore, we can evaluate the effect of changes in dollar-based terms. Quality costs may be divided into costs associated with prevention, appraisal, and failure. Prevention costs include:

Quality planning
Formal technical reviews
Test equipment

2. Explain Version Control & Change Control.

Ans 2: Version Control
A version control system (or revision control system) is a combination of technologies and practices for tracking and controlling changes to a project's files, in particular to source code, documentation, and web pages. If you have never used version control before, the first thing you should do is go find someone who has, and get them to join your project. These days, everyone will expect at least your project's source code to be under version control, and probably will not take the project seriously if it doesn't use version control with at least minimal competence.

The reason version control is so universal is that it helps with virtually every aspect of running a project: inter-developer communication, release management, bug management, code stability and experimental development efforts, and attribution and authorization of changes by particular developers. The version control system provides a central coordinating force among all of these areas. The core of version control is change management: identifying each discrete change made to the project's files, annotating each change with metadata such as the change's date and author, and then replaying these facts to whoever asks, in whatever way they ask. It is a communications mechanism where a change is the basic unit of information.

This section does not discuss all aspects of using a version control system; it is so all-encompassing that it must be addressed topically throughout the book. Here, we will concentrate on choosing and setting up a version control system in a way that will foster cooperative development down the road.

Version Control Vocabulary
This book cannot teach you how to use version control if you've never used it before, but it would be impossible to discuss the subject without a few key terms. These terms are useful independently of any particular version control system: they are the basic nouns and verbs of networked collaboration, and will be used generically throughout the rest of this book. Even if there were no version control systems in the world, the problem of change management would remain, and these words give us a language for talking about that problem concisely.
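To make the idea of "a change as the basic unit of information" concrete, here is a small, purely illustrative Python sketch of a change log that records each change with the kind of metadata described above (author, date, message) and can replay the history to rebuild any revision. It is a toy model, not how any real version control system stores its data.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


@dataclass
class Change:
    """One discrete change, annotated with metadata (a 'commit')."""
    author: str
    message: str
    edits: Dict[str, str]                 # file path -> new content
    date: datetime = field(default_factory=datetime.now)


class ChangeLog:
    """Records changes and replays them to reconstruct any revision."""

    def __init__(self):
        self.history: List[Change] = []

    def commit(self, change: Change) -> int:
        self.history.append(change)
        return len(self.history) - 1      # revision number

    def checkout(self, revision: int) -> Dict[str, str]:
        """Replay changes up to `revision` to rebuild the file set."""
        files: Dict[str, str] = {}
        for change in self.history[: revision + 1]:
            files.update(change.edits)
        return files


log = ChangeLog()
log.commit(Change("alice", "initial import", {"README": "v1"}))
rev = log.commit(Change("bob", "fix typo", {"README": "v2"}))
print(log.checkout(rev))                  # {'README': 'v2'}
```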

Change Control
Change control within quality management systems (QMS) and information technology (IT) systems is a formal process used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of the software. The goals of a change control procedure

usually include minimal disruption to services, reduction in back-out activities, and cost-effective utilization of the resources involved in implementing change. Change control is currently used in a wide variety of products and systems. For Information Technology (IT) systems it is a major aspect of the broader discipline of change management. Typical examples from the computer and network environments are patches to software products, installation of new operating systems, upgrades to network routing tables, or changes to the electrical power systems supporting such infrastructure. Certain portions of the Information Technology Infrastructure Library cover change control.

The Process
There is considerable overlap and confusion between change management, configuration management and change control. The definition below is not yet integrated with the definitions of the others. Certain experts describe change control as a set of six steps (a small illustrative sketch of this workflow follows the step descriptions below):

1. Record / Classify
2. Assess
3. Plan
4. Build / Test
5. Implement
6. Close / Gain Acceptance

Record/Classify
The client initiates the change by making a formal request for something to be changed. The change control team then records and categorizes that request. This categorization would include estimates of importance, impact, and complexity.

Assess
The impact assessor or assessors then make their risk analysis, typically by answering a set of questions concerning risk, both to the business and to the process, and follow this by making a judgment on who should carry out the change. If the change requires more than one type of assessment, the head of the change control team will consolidate these. Everyone with a stake in the change then must meet to determine whether there is a business or technical justification for the change. The change is then sent to the delivery team for planning.

Plan
Management will assign the change to a specific delivery team, usually one with the specific role of carrying out this particular type of change. The team's first job is to plan the change in detail as well as construct a regression plan in case the change needs to be backed out.

Build/Test
If all stakeholders agree with the plan, the delivery team will build the solution, which will then be tested. They will then seek approval and request a time and date to carry out the implementation phase.

Implement
All stakeholders must agree to a time, date and cost of implementation. Following implementation, it is usual to carry out a post-implementation review which would take place at another stakeholder meeting.

Close/Gain Acceptance
When the client agrees that the change was implemented correctly, the change can be closed.
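As a toy illustration only (not part of any standard), the six steps above can be modelled as a simple state machine in which a change request moves forward one step at a time. The class names and the simplifying assumption that a back-out is not modelled are purely hypothetical choices for this sketch.

```python
from enum import Enum, auto


class Step(Enum):
    RECORD = auto()
    ASSESS = auto()
    PLAN = auto()
    BUILD_TEST = auto()
    IMPLEMENT = auto()
    CLOSE = auto()


# Allowed forward transitions for a change request.
NEXT = {
    Step.RECORD: Step.ASSESS,
    Step.ASSESS: Step.PLAN,
    Step.PLAN: Step.BUILD_TEST,
    Step.BUILD_TEST: Step.IMPLEMENT,
    Step.IMPLEMENT: Step.CLOSE,
}


class ChangeRequest:
    def __init__(self, summary: str):
        self.summary = summary
        self.step = Step.RECORD

    def advance(self) -> None:
        if self.step is Step.CLOSE:
            raise ValueError("change request already closed")
        self.step = NEXT[self.step]


cr = ChangeRequest("Upgrade network routing tables")
while cr.step is not Step.CLOSE:
    cr.advance()
print(cr.step)    # Step.CLOSE
```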

3. Discuss the SCM Process.

Ans 3: In software engineering, software configuration management (SCM) is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines. SCM concerns itself with answering the question "Somebody did something, how can one reproduce it?" Often the problem involves not reproducing "it" identically, but with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and of analysing their differences. Traditional configuration management typically focused on the controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed. According to another simple definition: software configuration management is how you control the evolution of a software project.

Terminology
The history and terminology of SCM (which often varies) has given rise to controversy. Roger Pressman, in his book Software Engineering: A Practitioner's Approach, states that SCM "is a set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made." Source configuration management is a related practice often used to indicate that a variety of artefacts may be managed and versioned, including software code, hardware, documents, design models, and even the directory structure itself. Atria (later Rational Software, now a part of IBM) used "SCM" to mean "software configuration management". Gartner and Forrester Research use the term software change and configuration management.

Purposes
The goals of SCM are generally:


Configuration identification - Identifying configurations, configuration items and baselines.
Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
Build management - Managing the process and tools used for builds.
Process management - Ensuring adherence to the organization's development process.
Environment management - Managing the software and hardware that host the system.
Teamwork - Facilitating team interactions related to the process.
Defect tracking - Making sure every defect has traceability back to the source.

4. Explain: i. Software doesn't wear out. ii. Software is engineered and not manufactured.

Ans i. Software doesn't "wear out."

Hardware Failure Rates
The illustration below depicts failure rate as a function of time for hardware. The relationship, often called the "bathtub curve," indicates the typical failure rate of individual components within a large batch. It shows that in, say, a batch of 100 products, a relatively large number will fail early on before settling down to a steady rate. Eventually, age and wear and tear get the better of all of them and failure rates rise again near the end of the product's life. To assist in quality control, many new batches of products are soak tested for maybe 24 hours in a hostile environment (temperature/humidity/variation etc.) to pinpoint those that are likely to fail early in their life; this also highlights any inherent design or production weaknesses.

These early failure rates can be attributed to two things:

Poor or unrefined initial design. Correcting this results in much lower failure rates for successive batches of the product.
Manufacturing defects, i.e. defects in the product brought about by poor assembly or materials during production.

Both types of failure can be corrected (either by refining the design, or by replacing broken components out in the field), which leads to the failure rate dropping to a steady-state level for some period of time. As time passes, however, the failure rates rise again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes and many other environmental maladies. Stated simply, the hardware begins to wear out.

Software Failure Rates
Software is not susceptible to the same environmental problems that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form shown below.

Undiscovered defects in the first engineered version of the software will cause high failure rates early in the life of a program. However, these are corrected (hopefully without introducing other errors) and the curve flattens as shown. The implication is clear: software doesn't wear out. However, it does deteriorate with maintenance, as shown below.

During its life, software will undergo changes, and it is likely that some new defects will be introduced as a result, causing the failure rate curve to spike as shown above. Before the curve can return to the original steady-state failure rate (i.e. before the new bugs have been removed), another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level begins to rise -- the software is deteriorating due to change.

The Software Crisis/Chronic Affliction
In the late 70s and early 80s, software development was still in its infancy, having had less than 30 years to develop as an engineering discipline. However, the cracks in the development process were already beginning to be noticed and software development had reached a crisis. Some would describe it as a chronic affliction, defined as "something causing repeated and long-lasting pain and distress". In effect, the use of software had grown beyond the ability of the discipline to develop it. In essence, the techniques for producing software that had been sufficient in the 50s, 60s and perhaps 70s were no longer sufficient for the 80s and 90s, and there was a need to develop a more structured approach to software analysis, design, programming, testing and maintenance. This can be seen in the graph below (seen earlier), which shows the suitability of software procured by the American Department of Defense.

To understand how this problem arose, compare the situation in the 60s with the situation we have today.

5. Explain the Advantages of the Prototype Model and the Spiral Model in Contrast to the Waterfall Model.

Ans 5: Creating software using the prototype model also has its benefits. One of the key advantages of prototype-modeled software is the time frame of development. Instead of concentrating on documentation, more effort is placed in creating the actual software, so the actual software can be released in advance. The work on prototype models can also be spread among several people, since there are practically no stages of work in this model; everyone works on the same thing at the same time, reducing the man-hours needed to create the software. The work will be even faster and more efficient if developers collaborate more regarding the status of a specific function and develop the necessary adjustments in time for the integration. From the client's point of view, prototypes are useful for understanding the future implementation of the application.

Advantages of prototyping:
1. May provide the proof of concept necessary to attract funding
2. Early visibility of the prototype gives users an idea of what the final system looks like
3. Encourages active participation among users and producer
4. Enables a higher output for the user
5. Cost effective (development costs reduced)
6. Increases system development speed
7. Assists in identifying any problems with the efficacy of earlier design, requirements analysis and coding activities
8. Helps to refine the potential risks associated with the delivery of the system being developed
9. Various aspects can be tested and quicker feedback can be obtained from the user
10. Helps to deliver the product with quality easily
11. User interaction is available during the development cycle of the prototype

Advantages of the Spiral Model
The key is continual development; it is intended to help manage risks. You should not define the entire system in detail at first. The developers should only define the highest-priority features. This type of development relies on developing prototypes and then giving them back to the user for trial. With this feedback the next prototype is created. Define and implement this, then get feedback from users/customers (such feedback distinguishes 'evolutionary' from 'incremental' development). With this knowledge, you should then go back to define and implement more features in smaller chunks, until an acceptable system is delivered. The advantages of using the spiral model are varied: its design flexibility allows changes to be implemented at several stages of the project; the process of building up large systems in small segments makes it easier to do cost calculations; and the client, who will be involved in the development of each segment, retains control over the direction and implementation of the project. In addition, the client's knowledge of the project grows as the project grows, so that they can interface effectively with management.

The Rapid Application Development (RAD) methodology was developed to respond to the need to deliver systems very fast. The RAD approach is not appropriate to all projects - an air traffic control system based on RAD would not instill much confidence. Project scope, size and circumstances all determine the success of a RAD approach. The following categories indicate suitability for a RAD approach.

6. Write a Note on Spiral Model.


ANS: The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects. It should not be confused with the Helical model of modern systems architecture, which uses a dynamic programming approach in order to optimise the system's architecture before design decisions are made by coders that would cause problems.

History
The spiral model was defined by Barry Boehm in his 1986 article "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development. As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The Model
The spiral model combines the idea of iterative development (prototyping) with the systematic, controlled aspects of the waterfall model. It allows for incremental releases of the product, or incremental refinement through each pass around the spiral. The spiral model also explicitly includes risk management within software development. Identifying major risks, both technical and managerial, and determining how to lessen the risk helps keep the software development process under control. The spiral model is based on continuous refinement of key products for requirements definition and analysis, system and software design, and implementation (the code). At each iteration around the cycle, the products are extensions of an earlier product. This model uses many of the same phases as the waterfall model, in essentially the same order, separated by planning, risk assessment, and the building of prototypes and simulations. Documents are produced when they are required, and their content reflects the information necessary at that point in the process. All documents will not be created at the beginning of the process, nor all at the end (hopefully). Like the product they define, the documents are works in progress. The idea is to have a continuous stream of products produced and available for user review.

The spiral lifecycle model allows for elements of the product to be added in when they become available or known. This assures that there is no conflict with previous requirements and design. This method is consistent with approaches that have multiple software builds and releases, and it allows for making an orderly transition to a maintenance activity. Another positive aspect is that the spiral model forces early user involvement in the system development effort. For projects with heavy user interfacing, such as user application programs or instrument interface applications, such involvement is helpful.

Starting at the center, each turn around the spiral goes through several task regions:

Determine the objectives, alternatives, and constraints on the new iteration.
Evaluate alternatives and identify and resolve risk issues.
Develop and verify the product for this iteration.
Plan the next iteration.

Note that the requirements activity takes place in multiple sections and in multiple iterations, just as planning and risk analysis occur in multiple places. Final design, implementation, integration, and test occur in iteration 4. The spiral can be repeated multiple times for multiple builds. Using this method of development, some functionality can be delivered to the user faster than with the waterfall method. The spiral method also helps manage risk and uncertainty by allowing multiple decision points and by explicitly admitting that all of anything cannot be known before the subsequent activity starts.

Applications
The spiral model is mostly used in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military had adopted the spiral model for its Future Combat Systems (FCS) program. The FCS project was cancelled after six years (2003-2009); it had a two-year iteration (spiral). The FCS should have resulted in three consecutive prototypes (one prototype per spiral, every two years). It was cancelled in May 2009. The spiral model thus may suit small (up to $3 million) software applications but not a complicated ($3 billion) distributed, interoperable system of systems. It is also reasonable to use the spiral model in projects where business goals are unstable but the architecture must be realized well enough to provide high loading and stress ability. For example, Spiral Architecture Driven Development is a spiral-based Software Development Life Cycle (SDLC) which shows one possible way to reduce the risk of an ineffective architecture with the help of a spiral model in conjunction with best practices from other models.
