Unit - V SEPM


Unit-V

Software Engineering and Project Management – BTCS402N

Metrics in the process and project domain

Project managers have a wide variety of metrics to choose from. We can classify
the most commonly used metrics into the following groups:

Process Metrics

These are metrics that pertain to Process Quality. They are used to measure the
efficiency and effectiveness of various processes.

Project Metrics

These are metrics that relate to Project Quality. They are used to quantify defects,
cost, schedule, productivity and estimation of various project resources and
deliverables.

Product Metrics

These are metrics that pertain to Product Quality. They are used to measure cost,
quality, and the product’s time-to-market.

Organizational Metrics

These metrics measure the impact of organizational economics, employee


satisfaction, communication, and organizational growth factors of the project.

Software Development Metrics Examples

These metrics enable management to understand the quality of the software, the
productivity of the development team, code complexity, customer satisfaction, agile
process, and operational metrics.
We’ll now take a closer look at the various types of the two most important
categories of metrics – Project Metrics, and Process Metrics.

Schedule Variance: Any difference between the scheduled completion of an activity and the actual completion is known as Schedule Variance.

Schedule variance = ((Actual calendar days – Planned calendar days) + Start variance) / Planned calendar days x 100.

Effort Variance: Difference between the planned outlined effort and the effort
required to actually undertake the task is called Effort variance.

Effort variance = (Actual Effort – Planned Effort)/ Planned Effort x 100.

Size Variance: Difference between the estimated size of the project and the actual
size of the project (normally in KLOC or FP).

Size variance = (Actual size – Estimated size)/ Estimated size x 100.
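The three variance formulas above translate directly into code. The following Python sketch is illustrative only; the function names and the sample figures are assumptions, not part of any standard tool.

```python
# Illustrative sketch of the variance metrics above; names and sample
# figures are assumptions, not from any standard tool.

def schedule_variance(actual_days, planned_days, start_variance):
    """Schedule variance as a percentage of planned calendar days."""
    return (actual_days - planned_days + start_variance) / planned_days * 100

def effort_variance(actual_effort, planned_effort):
    """Effort variance as a percentage of planned effort."""
    return (actual_effort - planned_effort) / planned_effort * 100

def size_variance(actual_size, estimated_size):
    """Size variance (KLOC or FP) as a percentage of the estimate."""
    return (actual_size - estimated_size) / estimated_size * 100

# A task planned for 20 days finished in 23, having started 1 day late:
print(schedule_variance(23, 20, 1))   # 20.0 -> 20% schedule overrun
print(effort_variance(110, 100))      # 10.0 -> 10% extra effort
print(size_variance(12.5, 10.0))      # 25.0 -> 25% larger than estimated
```

A positive result signals an overrun; a negative result means the activity ran ahead of plan or came in under the estimate.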

Requirement Stability Index: Provides visibility into the magnitude and impact of requirements changes.

RSI = (1 – (Number of changed + Number of deleted + Number of added) / Total number of initial requirements) x 100.
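As a minimal sketch of the RSI formula (the function name and the sample counts are illustrative):

```python
# Hedged sketch of the Requirement Stability Index formula above;
# the function name and sample counts are illustrative.
def requirement_stability_index(changed, deleted, added, initial_total):
    """RSI as a percentage; 100 means no requirement churn at all."""
    return (1 - (changed + deleted + added) / initial_total) * 100

# 5 changed, 2 deleted, 3 added, out of 50 initial requirements:
print(requirement_stability_index(5, 2, 3, 50))  # 80.0
```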

Productivity (Project): A measure of output from a related process for a unit of input.

Project Productivity = Actual project size / Actual effort expended in the project.

Productivity (for test case preparation) = Actual number of test cases/ Actual
effort expended in test case preparation.

Productivity (for test case execution) = Actual number of test cases / actual effort
expended in testing.
Productivity (defect detection) = Actual number of defects (review + testing) /
actual effort spent on (review + testing).

Productivity (defect fixation) = actual number of defects fixed / actual effort spent on defect fixation.
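All of the productivity metrics above share the same shape, output per unit of effort, so a single helper covers them. The names and figures below are illustrative.

```python
def productivity(output_units, effort_units):
    """Generic productivity: output produced per unit of effort."""
    return output_units / effort_units

# Project productivity: 40 FP delivered for 160 person-days of effort.
print(productivity(40, 160))   # 0.25 FP per person-day
# Test case preparation: 120 test cases written in 30 person-hours.
print(productivity(120, 30))   # 4.0 test cases per person-hour
```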

Schedule variance for a phase: The deviation between planned and actual
schedules for the phases within a project

Schedule variance for a phase = (Actual Calendar days for a phase – Planned
calendar days for a phase + Start variance for a phase)/ (Planned calendar days for a
phase) x 100

Effort variance for a phase: The deviation between a planned and actual effort for
various phases within the project.

Effort variance for a phase = (Actual effort for a phase – Planned effort for a phase) / (Planned effort for a phase) x 100.

Process Metrics:

Cost of quality: It is a measure of the performance of quality initiatives in an organization, expressed in monetary terms.

Cost of quality = (review + testing + verification review + verification testing + QA + configuration management + measurement + training + rework review + rework testing) / total effort x 100.

Cost of poor quality: It is the cost of implementing imperfect processes and products.

Cost of poor quality = rework effort / total effort x 100.
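Both cost-of-quality metrics reduce to an effort ratio. A hedged sketch, with illustrative numbers:

```python
def cost_of_quality(quality_effort, total_effort):
    """Effort on quality activities (reviews, testing, QA, training,
    rework, ...) as a percentage of total effort."""
    return quality_effort / total_effort * 100

def cost_of_poor_quality(rework_effort, total_effort):
    """Rework effort as a percentage of total effort."""
    return rework_effort / total_effort * 100

# 250 of 1000 person-hours went to quality activities, 125 to rework:
print(cost_of_quality(250, 1000))       # 25.0
print(cost_of_poor_quality(125, 1000))  # 12.5
```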


Defect density: It is the number of defects detected in the software during
development divided by the size of the software (typically in KLOC or FP)

Defect density for a project = Total number of defects/ project size in KLOC or
FP

Review efficiency: Defined as the efficiency in detecting defects in the verification stage.

Review efficiency = (number of defects caught in review / total number of defects caught) x 100.

Testing Efficiency:

Testing efficiency = (1 – defects found in acceptance / total number of testing defects) x 100.

Defect removal efficiency: Quantifies the efficiency with which defects were
detected and prevented from reaching the customer.

Defect removal efficiency = (1 – total defects caught by customer / total number of defects) x 100.

Residual defect density = (total number of defects found by a customer / total number of defects, including customer-found defects) x 100.


2. Software Measurement

A measurement is a manifestation of the size, quantity, amount, or dimension of a particular attribute of a product or process.

Software measurement is a quantified attribute of a characteristic of a software product or the software process. It is a discipline within software engineering.

The software measurement process is defined and governed by ISO standards.


Need for Software Measurement:

Software is measured to:

Assess the quality of the current product or process.

Anticipate future qualities of the product or process.

Enhance the quality of a product or process.

Regulate the state of the project in relation to budget and schedule.

Classification of Software Measurement:

There are 2 types of software measurement:

Direct Measurement: In direct measurement, the product, process, or thing is measured directly using a standard scale.

Indirect Measurement: In indirect measurement, the quantity or quality to be measured is measured using a related parameter, i.e., by use of a reference.

Metrics:

A metric is a measurement of the degree to which a given attribute belongs to a system, product, or process.

There are 4 functions related to software metrics:

Planning

Organizing

Controlling
Improving

Characteristics of software Metrics:

Quantitative:

Metrics must be quantitative in nature, i.e., they can be expressed in values.

Understandable:

The method of computing a metric should be clearly defined, so that the computation is easily understood.

Applicability:

Metrics should be applicable in the initial phases of development of the software.

Repeatable:

The metric values should be the same when measured repeatedly, i.e., consistent in nature.

Economical:

Computation of metric should be economical.

Language Independent:

Metrics should not depend on any programming language.

Classification of Software Metrics:

There are 2 types of software metrics:


Product Metrics:

Product metrics are used to evaluate the state of the product, tracking risks and uncovering prospective problem areas. The team's ability to control quality is also evaluated.

Process Metrics:

Process metrics focus on enhancing the long-term process of the team or organisation.

Project Metrics:

Project metrics describe the project characteristics and execution process. Examples include:

Number of software developers

Staffing pattern over the life cycle of the software

Cost and schedule

Productivity

3. Metrics for Software quality

Software quality metrics are a subset of software metrics that focus on the quality
aspects of the product, process, and project.

These are more closely associated with process and product metrics than with
project metrics.

Software quality metrics can be further divided into three categories −

Product quality metrics

In-process quality metrics


Maintenance quality metrics

Product Quality Metrics

These metrics include the following −

Mean Time to Failure

Defect Density

Customer Problems

Customer Satisfaction

Mean Time to Failure

It is the mean time between successive failures. This metric is mostly used with safety-critical systems such as airline traffic control systems, avionics, and weapons.

Defect Density

It measures the defects relative to the software size expressed as lines of code or
function point, etc. i.e., it measures code quality per unit.

This metric is used in many commercial software systems.

Customer Problems

It measures the problems that customers encounter when using the product.

It contains the customer’s perspective towards the problem space of the software,
which includes the non-defect oriented problems together with the defect problems.

The problems metric is usually expressed in terms of Problems per User-Month (PUM).
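A common formulation of PUM divides the total customer-reported problems in a period by the total user-months in that period; the exact denominator (user-months versus license-months) varies between organizations, so the sketch below is an assumption rather than a fixed standard.

```python
def problems_per_user_month(total_problems, total_user_months):
    """PUM: customer-reported problems per user-month of product use.
    Assumes user-months as the denominator; some organizations use
    license-months instead."""
    return total_problems / total_user_months

def defect_density(total_defects, size):
    """Defects per KLOC (or per FP, if size is given in function points)."""
    return total_defects / size

# 45 problems reported over 3 months by 500 users (1500 user-months):
print(problems_per_user_month(45, 1500))  # 0.03
print(defect_density(120, 60.0))          # 2.0 defects per KLOC
```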
4. Integrating Metrics Within the Software Process

Arguments for Software Metrics

Why is it so important to measure the process of software engineering and the product (software) that it produces?

The answer is relatively obvious. If we do not measure, there is no real way of determining whether we are improving. And if we are not improving, we are lost.

By requesting and evaluating productivity and quality measures, senior management can establish meaningful goals for improvement of the software engineering process.

If the process through which it is developed can be improved, a direct impact on the bottom line can result.

But to establish goals for improvement, the current status of software development
must be understood.

Hence, measurement is used to establish a process baseline from which improvements can be assessed.

The day-to-day rigors of software project work leave little time for strategic
thinking.

Software project managers are concerned with more mundane (but equally important) issues: developing meaningful project estimates, producing higher-quality systems, and getting the product out the door on time.
By using measurement to establish a project baseline, each of these issues
becomes more manageable.

We have already noted that the baseline serves as a basis for estimation.

Additionally, the collection of quality metrics enables an organization to "tune" its software process to remove the "vital few" causes of defects that have the greatest impact on software development.

At the project and technical levels (in the trenches), software metrics provide
immediate benefit.

As the software design is completed, most developers would be anxious to obtain answers to questions such as:

• Which user requirements are most likely to change?

• Which components in this system are most error prone?

• How much testing should be planned for each component?

• How many errors (of specific types) can I expect when testing commences?

Establishing a Baseline

By establishing a metrics baseline, benefits can be obtained at the process, project, and product (technical) levels. Yet the information that is collected need not be fundamentally different.

The same metrics can serve many masters.


The metrics baseline consists of data collected from past software development projects. To be an effective aid in process improvement and/or cost and effort estimation, baseline data must have the following attributes:

data must be reasonably accurate; "guestimates" about past projects are to be avoided;

data should be collected for as many projects as possible;

measures must be consistent; for example, a line of code must be interpreted consistently across all projects for which data are collected;

applications should be similar to the work that is to be estimated; it makes little sense to use a baseline for batch information systems work to estimate a real-time, embedded application.

Metrics Collection, Computation, and Evaluation

Ideally, the data needed to establish a baseline has been collected in an ongoing manner. Sadly, this is rarely the case. Therefore, data collection requires a historical investigation of past projects to reconstruct the required data. Once measures have been collected (unquestionably the most difficult step), metrics computation is possible.

Depending on the breadth of measures collected, metrics can span a broad range
of LOC or FP metrics as well as other quality- and project-oriented metrics.
Finally, metrics must be evaluated and applied during estimation, technical work,
project control, and process improvement.

Metrics evaluation focuses on the underlying reasons for the results obtained and
produces a set of indicators that guide the project or process.

Quality Management:

Software quality management is concerned with ensuring that developed software systems are "fit for purpose." That is, systems should meet the needs of their users, perform efficiently and reliably, and be delivered on time and within budget.

Formalized quality management (QM) is particularly important in teams that are developing large, long-lifetime systems that take several years to develop. These systems are developed for external clients, usually using a plan-based process. For these systems, quality management is both an organizational and an individual project issue:

At an organizational level, quality management is concerned with establishing a framework of organizational processes and standards that will lead to high-quality software. The QM team should take responsibility for defining the software development processes to be used and the standards that should apply to the software and related documentation, including the system requirements, design, and code.

At a project level, quality management involves the application of specific quality processes, checking that these planned processes have been followed, and ensuring that the project outputs meet the defined project standards. Project quality management may also involve defining a quality plan for a project. The quality plan should set out the quality goals for the project and define what processes and standards are to be used.
Software quality management techniques have their roots in methods and techniques
that were developed in manufacturing industries, where the terms quality assurance
and quality control are widely used.

Quality assurance is the definition of processes and standards that should lead to
high-quality products and the introduction of quality processes into the
manufacturing process.

Quality control is the application of these quality processes to weed out products that
are not of the required level of quality.

Both quality assurance and quality control are part of quality management.

Quality management provides an independent check on the software development process. The QM team checks the project deliverables to ensure that they are consistent with organizational standards and goals (Figure 1). They also check process documentation, which records the tasks that have been completed by each team working on this project. The QM team uses documentation to check that important tasks have not been forgotten or that one group has not made incorrect assumptions about what other groups have done.

The QM team in large companies is usually responsible for managing the release
testing process. They are also responsible for checking that the system tests provide
coverage of the requirements and that proper records of the testing process are
maintained.

The QM team should be independent and not part of the software development group
so that they can take an objective view of the quality of the software. They can report
on software quality without being influenced by software development issues.
Fig. 1 Quality management and software development

Software quality is not just about whether the software functionality has been
correctly implemented, but also depends on non-functional system attributes as
shown in Figure 2. These attributes reflect the software dependability, usability,
efficiency, and maintainability. It is not possible for any system to be optimized for
all of these attributes. For example, improving security may lead to loss of
performance. The quality plan should therefore define the most important quality
attributes for the software that is being developed. The plan should also include a
definition of the quality assessment process.

Fig.2 Software Quality Attributes


Fig.3 Process based quality

One may measure the quality of the product and change the process until the required quality level is achieved. Figure 3 illustrates this process-based approach to achieving product quality.

Software Review:

A software review is a systematic inspection of software and its requirements by one or more individuals. The purpose of a software review is to find and resolve errors and defects in the software during the early stages of the software development life cycle. A software review can help improve the quality and performance of the software.

Formal Technical Review (FTR) is a software quality control activity performed by software engineers. It is an organized, methodical procedure for assessing and raising the standard of any technical artifact, including software objects. Finding flaws, making sure standards are followed, and improving the overall quality of the product or document under review are the main objectives of a formal technical review (FTR).

Objectives of formal technical review (FTR)

Defect Identification: Identify defects in technical artifacts by finding and fixing mistakes, inconsistencies, and deviations.
Quality Assurance: To ensure high-quality deliverables, and confirm compliance
with project specifications and standards.

Risk Mitigation: To stop risks from getting worse, proactively identify and manage
possible threats.

Knowledge Sharing: Encourage team members to work together and build a common knowledge base.

Consistency and Compliance: Verify that all procedures, coding standards, and
policies are followed.

Learning and Training: Give team members the chance to improve their abilities
through learning opportunities.

In addition, the purpose of FTR is to enable junior engineers to observe the analysis, design, coding, and testing approach more closely. FTR also promotes backup and continuity, since participants become familiar with parts of the software they might not have seen otherwise. FTR is a class of reviews that includes walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of software. Each FTR is conducted as a meeting and is considered successful only if it is properly planned, controlled, and attended.

Example

Suppose that, during development without FTR, design costs 10 units, coding costs 15 units, and testing costs 10 units, giving a total cost of 35 units before maintenance. If a quality issue caused by the bad design then forces a redesign, the final cost can double to 70 units. That is why FTR is so helpful while developing software.

Software Quality simply means measuring how well software is designed (the quality of design) and how well the software conforms to that design (the quality of conformance). Software quality describes the degree to which a software component meets specified requirements and user or customer needs and expectations. Software Quality Assurance (SQA) is a planned and systematic pattern of activities that are necessary to provide a high degree of confidence regarding the quality of a product. It provides a quality assessment of quality control activities and helps in determining the validity of the data or procedures for determining quality. It generally monitors the software processes and methods used in a project to assure and maintain the quality of the software.

Fig 4 Software quality assurance


Goals of Software Quality Assurance:

Quality assurance consists of a set of reporting and auditing functions.

These functions are useful for assessing and controlling effectiveness and
completeness of quality control activities.

It ensures management of data which is important for product quality.

It also ensures that the software that is developed meets and complies with standard quality assurance practices.

It ensures that end result or product meets and satisfies user and business
requirements.

It identifies defects or bugs and reduces the effect of these defects.

Measures of Software Quality Assurance:

There are various measures of software quality. These are given below:

Reliability –
It includes aspects such as availability, accuracy, and recoverability of the system, so that it continues functioning under specified use over a given period of time. For example, recovery of the system from a shutdown failure is a reliability measure.

Performance –
It means measuring the throughput of the system using system response time, recovery time, and start-up time. It is a type of testing done to measure the performance of the system under a heavy workload in terms of responsiveness and stability.
Functionality –
It indicates that the system satisfies the main functional requirements. It simply refers to the required and specified capabilities of a system.

Supportability –
There are a number of other requirements or attributes that a software system must satisfy. These include testability, adaptability, maintainability, scalability, and so on. These requirements generally enhance the capability to support the software.

Usability –
It is capability or degree to which a software system is easy to understand and used
by its specified users or customers to achieve specified goals with effectiveness,
efficiency, and satisfaction. It includes aesthetics, consistency, documentation, and
responsiveness.

The international set of standards used in the development of quality management systems in all industries is called ISO 9000. ISO 9000 standards can be applied to a range of organizations, from manufacturing through to service industries. ISO 9001, the most general of these standards, applies to organizations that design, develop, and maintain products, including software.

The ISO 9001 standard is a framework for developing software standards. It begins with general quality principles, describes quality processes, and lays out the organizational standards and procedures that should be defined. These should be documented in an organizational quality manual. A major revision of the ISO 9001 standard in 2000 reoriented the standard around nine core processes (Figure 5). If an organization is to be ISO 9001 conformant, it must document how its processes relate to these core processes.
Fig. 5 ISO 9001 core processes

Fig.6 ISO 9001 and quality management


The relationship between ISO 9001, organizational quality manuals, and individual project quality plans is shown in Figure 6.

[Refer: CS615 – Software Engineering I (pace.edu)]

Reactive and proactive risk strategies are different ways of dealing with potential
threats and opportunities in the market.

Reactive risk strategies try to reduce the damage of threats and speed up the recovery
from them but assume that they will happen eventually.

Proactive risk strategies identify possible threats and aim to prevent them from
happening in the first place. Proactive strategies are used for future events, while
reactive strategies are used for current events.

What Is Risk Management?


When developing a risk management program, a company typically goes through the following steps. Those steps begin with a strategic planning exercise with senior management and conclude with a business continuity plan.

Risk Identification
The first step is to identify the risks that might threaten the business model or business continuity. Some of these threats are:
• Operational risk
• Financial risk
• Reputational risk
• Compliance, legal, or regulatory risk

Risk Monitoring
Risks evolve over time. They must be monitored to determine whether they have become better, worse, or disappeared entirely. A blend of internal controls and external providers enables active monitoring of all risk factors, especially when launching new products or entering new markets.

Continuous Improvement
Companies should continuously improve their resilience to risk. Those efforts gradually reduce the risk profile, which assures company stakeholders and customers of the company's long-term prospects. As companies improve at risk management, customer loyalty and retention improve.

All that said, each company needs to find its own approach to risk management. One of the most fundamental issues is whether to take a proactive or reactive approach to risk management.

Proactive and Reactive: Difference

Risk management as a whole, whether proactive or reactive, is concerned with:

• Preventing potential risks from becoming incidents
• Mitigating damage from incidents
• Stopping small threats from worsening
• Continuing critical business functions despite incidents
• Evaluating each incident to solve its root cause

Proactive risk management strategies include:


• Identifying existing risks to the enterprise, business unit, or project
• Developing a risk response
• Prioritizing identified risks according to the magnitude of their threat
• Analyzing risks to determine the best treatment for each
• Implementing controls necessary to prevent hazards from becoming threats or
incidents
• Monitoring the threat environment continuously

Reactive Risk Management


In Reactive Risk Management, the disaster or threat must occur before management responds. In contrast, proactive risk management is about taking preventative measures before the event to decrease its severity.

Organizations should develop reactive risk management plans that can be deployed
after the event – because many times, the unwanted event will happen. If
management hasn’t developed reactive risk management plans, then executives end
up making decisions about how to respond as the event happens; that can be costly
and stressful.

Helping to Withstand Future Risks


The reactive approach learns from past (or current) events and prepares for future
events. For example, businesses can purchase cybersecurity insurance to cover the
costs of a security disruption.

This strategy assumes that a breach will happen at some point. But once that breach
does occur, the business might understand more about how to avoid future violations
and perhaps could even tailor its insurance policies accordingly.

Proactive Risk Management


As the name suggests, proactive risk management means that you identify risks
before they happen and figure out ways to avoid or alleviate the risk. It seeks to
reduce the hazard’s risk potential or (even better) prevent the threat altogether.
A good example is vulnerability testing and remediation. Any organization of
appreciable size is likely to have vulnerabilities in its software that attackers could
find and exploit. So regular testing can find and patch those vulnerabilities to
eliminate that threat.

Allows for More Control Over Risk Management


A proactive management strategy gives you more control over your risk
management. For example, you can decide which issues should be top priorities and
what potential damage you will accept.

Proactive management also involves constantly monitoring your systems, risk processes, cybersecurity, competition, business trends, and so forth. Understanding the level of risk before an event allows you to instruct your employees on how to mitigate it.

A proactive approach, however, implies that each risk is constantly monitored. It also requires regular risk reviews to update your current risk profile and to identify new risks affecting the company. This approach drives management to be constantly aware of the direction of those risks.

What About Predictive Risk Management?


Predictive risk management is about predicting future risks, outcomes, and threats.

Predictive risk management attempts to:

• Identify the probability of risk in a situation based on one or more variables


• Anticipate potential future risks and their probability
• Anticipate necessary risk controls

Five Risk Management Strategies with Examples


Beyond the two main types of risk management strategies, it is worth seeing how companies implement these strategies in the real world. The following real-world examples can guide your risk management strategy.

1. MVP or experiment development


Instead of launching a full product line or entering a new market, companies
can launch products in a lean, iterative fashion- the ‘minimum viable product’
– to a small market subsection. This way, companies can test their products’
operational and financial elements and mitigate the market-related risks
before they launch to a broader audience.

For example, an airline could test facial recognition technology to make


security checks faster, but might want to validate privacy and data security
concerns first by trying it out at one airport before going nationwide.

2. Risk isolation
Companies can isolate potential threats to their business model by separating
specific parts of their infrastructure to protect them from external threats.

For example, some companies might restrict access to critical parts of their
software ecosystem by requiring engineers to work at a specific location
instead of working remotely (which opens the door to potential cyber threats).

3. Risk-reward analysis
Companies may undertake specific initiatives to understand the opportunity
cost of entering a new market or the risk of possibly gaining market share in
a saturated market. Before taking the initiatives at a broader level, the analysis
would help them understand the market forces and their ability to induce or
reduce risks with what-if scenarios.
For example, a direct-to-consumer delivery company might want to project
the anticipated demand for entering the market with faster medical supply
delivery.

4. Data projection
Companies can analyze data with the help of machine learning techniques to
understand specific behavioral or threat patterns in their ways of working.
These data analysis efforts might also help them understand what second-
order effects are lurking because of inefficient processes or lax attention on
certain parts of the business.

For example, a large retailer might use data analysis to find inefficiencies in
its supply chain to reduce last-mile delivery times and get an edge over the
competition.

5. Certification
To stay relevant and retain customer trust, companies could also obtain safety
and security certifications to prove they are a resilient brand that can sustain
and mitigate significant operational risks.

For example, a new fintech company might get certified for PCI-DSS security
standards before scaling in a new market to build trust with its customers.

Risk Mitigation, Monitoring, and Management (RMMM) plan

A risk management technique is usually described in the software project plan, in the form of a Risk Mitigation, Monitoring, and Management (RMMM) plan. In this plan, all work is done as part of risk analysis. The project manager generally uses this RMMM plan as part of the overall project plan.

In some software teams, risk is documented with the help of a Risk Information Sheet (RIS). The RIS is maintained using a database system for easier management of information, i.e., creation, priority ordering, searching, and other analysis. After documentation of the RMMM and the start of the project, risk mitigation and monitoring steps begin.

Risk Mitigation:

It is an activity used to avoid problems (Risk Avoidance). The steps for mitigating risks are as follows.

Finding out the risk.

Removing causes that are the reason for risk creation.

Controlling the corresponding documents from time to time.

Conducting timely reviews to speed up the work.

Risk Monitoring:

It is an activity used for project tracking. It has the following primary objectives.

To check if predicted risks occur or not.

To ensure proper application of risk aversion steps defined for risk.

To collect data for future risk analysis.

To attribute which problems are caused by which risks throughout the project.

Risk Management and planning:

It assumes that the mitigation activity has failed and the risk has become a reality. This task is performed by the project manager when a risk becomes a reality and causes severe problems. If the project manager uses mitigation effectively to remove risks, then the remaining risks are easier to manage. The plan shows the response that will be taken for each risk by a manager. The main output of risk management planning is the risk register, which describes and focuses on the predicted threats to a software project.

Example:

Let us understand RMMM with the help of an example of high staff turnover.

Risk Mitigation:

To mitigate this risk, project management must develop a strategy for reducing turnover. The possible steps to be taken are:

Meet the current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market).

Mitigate those causes that are under our control before the project starts.

Once the project commences, assume turnover will occur and develop
techniques to ensure continuity when people leave.

Organize project teams so that information about each development activity is widely dispersed.

Define documentation standards and establish mechanisms to ensure that documents are developed in a timely manner.

Assign a backup staff member for every critical technologist.

Risk Monitoring:

As the project proceeds, risk monitoring activities commence. The project
manager monitors factors that may provide an indication of whether the risk
is becoming more or less likely. In the case of high staff turnover, the
following factors can be monitored:

General attitude of team members based on project pressures.

Interpersonal relationships among team members.

Potential problems with compensation and benefits.

The availability of jobs within the company and outside it.

Risk Management:

Risk management and contingency planning assumes that mitigation efforts
have failed and that the risk has become a reality. Continuing the example, the
project is well underway, and a number of people have announced that they
will be leaving. If the mitigation strategy has been followed, backup is
available, information is documented, and knowledge has been dispersed
across the team. In addition, the project manager may temporarily refocus
resources (and readjust the project schedule) to those functions that are fully
staffed, enabling newcomers who must be added to the team to "get up to
speed".

Drawbacks of RMMM:

It incurs additional project costs.

It takes additional time.

For larger projects, implementing an RMMM may itself turn out to be another
tedious project.

RMMM does not guarantee a risk-free project; risks may also come up after
the project is delivered.

Software Maintenance:

➢ Software maintenance is an activity in which a program is modified after it
has been put into use. Usually, major changes to the system's architecture
are avoided during maintenance.
➢ Maintenance is a process in which changes are implemented either by
modifying the existing system's architecture or by adding new components
to the system.

Need for maintenance:

➢ Software maintenance is a widely accepted part of the SDLC nowadays.
➢ It stands for all the modifications and updating done after the delivery of
a software product.
➢ There are a number of reasons why modifications are required; some of
them are briefly mentioned below:

1. Market Conditions: Policies that change over time, such as taxation, and
newly introduced constraints, such as bookkeeping requirements, may
trigger the need for modification.
2. Client Requirements: Over time, customers may ask for new features or
functions in the software.
3. Host Modifications: If any of the hardware and/or platform (such as the
operating system) of the target host changes, software changes are needed to
maintain compatibility.
4. Organization Changes: If there is any business-level change at the client
end, such as a reduction in organizational strength, acquisition of another
company, or the organization venturing into a new business, the need to
modify the original software may arise.

Types of Maintenance:

1. Corrective Maintenance: Maintenance for correcting software faults.
2. Adaptive Maintenance: Maintenance for adapting to changes in the
environment (a different system or operating system).
3. Perfective Maintenance: Modifying or enhancing the system to meet new
requirements.
4. Preventive Maintenance: Modifications and updates to prevent future
problems with the software.
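The four types can be illustrated by tagging incoming change requests; the requests below are hypothetical, but each maps to one of the categories above:

```python
from enum import Enum

class MaintenanceType(Enum):
    CORRECTIVE = "fix a fault"
    ADAPTIVE = "adapt to a changed environment"
    PERFECTIVE = "meet new requirements"
    PREVENTIVE = "prevent future problems"

# Hypothetical change requests, each tagged with a maintenance type.
requests = [
    ("Crash when invoice total is zero", MaintenanceType.CORRECTIVE),
    ("Port the service to the new OS release", MaintenanceType.ADAPTIVE),
    ("Add the CSV export the client asked for", MaintenanceType.PERFECTIVE),
    ("Refactor logging before it becomes unmaintainable", MaintenanceType.PREVENTIVE),
]

for title, kind in requests:
    print(f"{kind.name:10s} -> {title}")
```

Classifying requests this way is useful in practice because each type tends to be planned and budgeted differently.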

Reengineering

➢ Software re-engineering means re-structuring or re-writing part or all of
a software system.
➢ It is needed for applications that require frequent maintenance.
➢ Software re-engineering is a software development process carried out
to improve the maintainability of a software system.
➢ Re-engineering is the examination and alteration of a system to
reconstitute it in a new form.
➢ This process encompasses a combination of sub-processes like reverse
engineering, forward engineering, reconstructing etc.

Re-engineering is the reorganizing and modifying of existing software systems to make them more maintainable.

➢ To describe a cost-effective option for system evolution.
➢ To describe the activities involved in the software maintenance process.
➢ To distinguish between software and data re-engineering and to explain
the problems of data re-engineering.

Steps involved in Re-engineering:

✓ Inventory Analysis
✓ Document Reconstruction
✓ Reverse Engineering
✓ Code Reconstruction
✓ Data Reconstruction
✓ Forward Engineering
Re-engineering Cost Factors:

✓ The quality of the software to be re-engineered
✓ The tool support available for re-engineering
✓ The extent of the required data conversion
✓ The availability of expert staff for re-engineering

❑ Advantages of Re-engineering

✓ Reduced Risk: As the software already exists, the risk is lower than in new
software development. Development, staffing, and specification problems are
among the many problems that may arise in new software development.
✓ Reduced Cost: The cost of re-engineering is less than the costs of developing
new software.
✓ Revelation of Business Rules: As a system is re-engineered, business rules
that are embedded in the system are rediscovered.
✓ Better Use of Existing Staff: Existing staff expertise can be maintained and
extended to accommodate new skills during re-engineering.

❑ Disadvantages of Re-engineering:

✓ There are practical limits to the extent of re-engineering.
✓ Major architectural changes or radical reorganizing of the system's data
management have to be done manually.
✓ A re-engineered system is not likely to be as maintainable as a new system
developed using modern software engineering methods.

❑ Re-engineering process activities:

✓ Source code translation: In this phase the code is converted into a new language.
✓ Reverse Engineering: Under this activity the program is analyzed and
understood thoroughly.
✓ Program structure improvement: Restructure automatically for
understandability.
✓ Program modularization: The program structure is reorganized.
✓ Data re-engineering: Finally, clean-up and restructure system data.
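The "program structure improvement" activity above can be shown in miniature. The example below is a made-up function restructured with guard clauses for understandability; the behaviour is unchanged:

```python
# Legacy-style deeply nested logic (before restructuring).
def discount_before(member: bool, total: float) -> float:
    if member:
        if total > 100:
            return 0.10
        else:
            return 0.05
    else:
        return 0.0

# Restructured with guard clauses: same behaviour, flatter structure.
def discount_after(member: bool, total: float) -> float:
    if not member:
        return 0.0
    if total > 100:
        return 0.10
    return 0.05

# Restructuring must preserve behaviour; spot-check a few inputs.
for m in (True, False):
    for t in (50, 150):
        assert discount_before(m, t) == discount_after(m, t)
```

Automated restructuring tools apply transformations of this kind systematically, which is why behaviour-preserving checks like the loop above matter.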

Business Process Re-engineering


Business Process Re-engineering (BPR) is a management strategy aimed at
improving organizational performance by re-designing and optimizing
business processes. BPR is a systematic and radical approach to change,
focused on transforming and streamlining core business processes to achieve
dramatic improvements in quality, efficiency, and customer satisfaction.

BPR involves a comprehensive analysis of existing business processes,
identifying inefficiencies, bottlenecks, and waste, and then developing new
and improved processes that align with the organization’s strategic objectives.
The objective is to eliminate unnecessary steps, reduce cycle time, and
improve overall efficiency, while maximizing the value delivered to
customers.

BPR requires a fundamental shift in the way an organization thinks about its
business processes, emphasizing a customer-centric approach to process
design and management. It involves a collaborative and cross-functional
approach, involving stakeholders from across the organization to ensure that
process improvements are aligned with the organization’s strategic objectives.

The benefits of BPR can include reduced costs, increased productivity,
improved quality, faster time-to-market, and greater customer satisfaction.
However, implementing BPR can also be a complex and challenging process,
requiring significant investment in resources, time, and expertise.

Reverse Engineering

Software Reverse Engineering is a process of recovering the design,
requirement specifications, and functions of a product from an analysis of its
code.

It builds a program database and generates information from this.

The purpose of reverse engineering is to facilitate the maintenance work by
improving the understandability of a system and to produce the necessary
documents for a legacy system.

Goals of Reverse Engineering


• Manage Complexity.
• Recover lost information.
• Detect side effects.
• Produce higher abstraction.
• Facilitate Reuse

Steps of Reverse Engineering

Information Collection: All related information about the software, like
source code, design documents, etc., is collected in this step.
Information Examination: To get familiarity with the system, the
information collected in the first step is studied.

Extraction of the Structure: This step focuses on identifying the structure
of the program, which takes the form of a structure chart, in which every
node corresponds to some routine.

Functionality Recording: Structured notations, such as decision tables, are
used to record the processing performed by every module in the structure
chart.

Data Flow Recording: The information extracted in the third and fourth
steps is used to derive data flow diagrams showing the flow of data through
the processes.

Control Flow Recording: The control structure of the software is recorded
at a high level.

Extracted Design Review: The extracted design document is reviewed
multiple times to make sure it is consistent and correct, and that the design
fully represents the program.

Documenting the Work: The documentation, which includes SRS
documents, design documents, history documents, overview documents, etc.,
is prepared for future use.
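For programs written in a language with a parsing library, the structure-extraction step can be partly automated. A minimal sketch using Python's `ast` module to recover which routines each function calls (the sample source is made up):

```python
import ast

# Hypothetical legacy source whose call structure we want to recover.
source = """
def load():
    pass

def report(data):
    pass

def main():
    data = load()
    report(data)
"""

tree = ast.parse(source)
structure = {}
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # Collect simple-name calls made inside this function.
        calls = [c.func.id for c in ast.walk(node)
                 if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)]
        structure[node.name] = calls

print(structure)  # {'load': [], 'report': [], 'main': ['load', 'report']}
```

Each key/value pair corresponds to one node of the structure chart and its outgoing edges; real reverse-engineering tools add handling for methods, nested functions, and cross-module calls.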

Reverse Engineering Tools

Reverse engineering, if done manually, would consume a lot of time and
human labor, and hence must be supported by automated tools. Some of these
tools are given below:

CIAO and CIA: A graphical navigator for software and web repositories
along with a collection of Reverse Engineering tools.

Rigi: A visual software understanding tool.

Bunch: A software clustering/modularization tool.

GEN++: An application generator to support development of analysis tools
for the C++ language.

PBS: Software Bookshelf tools for extracting and visualizing the architecture
of programs.

Benefits of Reverse Engineering

Exploring existing products: Backward engineering allows you to explore
products that already exist. Evaluating the existing products in the market can
result in innovation and discovery.

Repairing existing products: Companies can repair an existing product
using this engineering technique. This can also help them identify common
errors in a product's design and learn how to fix those for future projects.

Inspiring innovation: Reverse engineering fosters innovation. It helps
engineers connect projects with previous knowledge and develop innovative
ideas.

Reducing product development costs: By understanding how a competitor
manufactures a product, a company can develop cheaper alternative solutions.

Conducting failure analysis: You can use reverse engineering to analyze
why a product did not work as intended. Examining a faulty product through
backward engineering can help you identify its damaged parts and repair
them.

Where Is Reverse Engineering Used?

Automobile Industry: The automobile industry uses backward engineering
for:

• Studying and analyzing competitors
• Digitalizing parts of older vehicle models
• Understanding problems and issues with existing automobiles
• Developing replacement parts
Aerospace Industry: The aerospace industry uses this technique for:

• Developing maintenance parts of an aircraft
• Adding, enhancing and fixing aircraft components
• Conducting aerodynamic analysis
• Manufacturing of tools

Consumer Goods Industry: The consumer goods industry uses this
engineering technique for:

• Developing product prototypes
• Analyzing competitors' products
• Testing and validating conceptual designs
• Documenting different design iterations

Forward Engineering

Forward engineering plays a pivotal role in the development and deployment
of high-quality software solutions. It involves the creation of software from
scratch, starting from the initial concept and progressing through the various
stages of design, development, and implementation.

Forward engineering, also known as the "code-first" or "model-to-code"
approach, is the traditional process of software development that involves
transforming conceptual, logical, and physical designs into executable code.
It encompasses analyzing requirements, creating system models, designing
software architecture, generating code, and ultimately implementing the
software solution. Forward engineering follows a sequential workflow,
ensuring that the development process adheres to the predefined
specifications.

Benefits of Forward Engineering:

Forward engineering offers numerous advantages, making it an indispensable
practice in software engineering. Some key benefits include:

a. Predictability: By following a structured and sequential approach, forward
engineering ensures that software development progresses in a predictable
manner. This allows for better planning, estimation, and resource allocation.

b. Maintainability: The code generated in forward engineering is typically
well-structured, modular, and easy to understand. This enhances the
maintainability of the software solution, enabling future updates, bug fixes,
and enhancements to be implemented efficiently.

c. Scalability: Forward engineering enables software engineers to design and
develop robust and scalable solutions. By considering future requirements
during the development process, the software can easily accommodate
changes and additions without significant rework.

d. Reusability: Forward engineering promotes the creation of reusable
components and modules. This leads to increased productivity, reduced
development time, and improved code quality.

3. Methodologies in Forward Engineering:

Various methodologies can be employed in forward engineering, each
offering its own set of advantages. Some popular methodologies include:

a. Waterfall Model: The waterfall model is a linear sequential approach, where
each stage of development occurs one after another. This method is ideal for
projects with stable and well-defined requirements.

b. Agile Methodology: Agile methodologies, such as Scrum and Kanban,
focus on iterative and incremental development. They prioritize flexibility,
collaboration, and customer feedback, enabling quick adaptation to changing
requirements.

c. DevOps: DevOps is a combination of development and operations,
emphasizing collaboration and automation throughout the software
development lifecycle. It ensures seamless integration between development,
testing, and deployment, leading to faster delivery and higher quality
software.

4. Challenges in Forward Engineering:

While forward engineering offers numerous benefits, it also presents certain
challenges that software engineers must address. Some common challenges
include:

a. Requirement Elicitation: Gathering accurate and complete requirements
from stakeholders can be a complex and iterative process. Clear
communication and active engagement with stakeholders are essential to
mitigate this challenge.

b. Design Complexity: As software solutions become more intricate,
designing a comprehensive and scalable architecture becomes increasingly
challenging. Adopting industry-standard design patterns and architectural
principles can help overcome this obstacle.

c. Quality Assurance: Ensuring the quality of the generated code is crucial for
the success of forward engineering. Implementing effective testing strategies
and employing continuous integration practices are necessary to identify and
fix potential defects early in the development cycle.

5. Best Practices in Forward Engineering:

To ensure successful forward engineering, software engineers should adhere
to best practices, including:

a. Requirements Validation: Validate requirements to ensure clarity,
completeness, and consistency before proceeding with development. This
helps to reduce rework and enhances customer satisfaction.

b. Modularity and Code Reusability: Design software modules that are
modular, loosely coupled, and highly cohesive. Promote code reuse to
improve efficiency and maintainability.
c. Version Control: Utilize version control systems to track changes, manage
collaboration, and facilitate easy rollbacks. This ensures code stability and
traceability throughout the development process.

d. Continuous Integration and Deployment: Automate the process of code
integration, testing, and deployment using tools like Jenkins or GitLab CI/CD.
This enables faster delivery, improved quality, and better team collaboration.
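As an illustration only, a minimal GitLab CI/CD configuration of this kind might run tests on every push and deploy only from the main branch. The job names and the deploy script below are hypothetical:

```yaml
# .gitlab-ci.yml -- illustrative sketch, not a complete pipeline
stages:
  - test
  - deploy

unit_tests:
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest            # any test failure fails the pipeline

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh       # hypothetical deployment script
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

The key design choice is that deployment is gated behind the test stage, so broken code never reaches production automatically.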
