
AMS metrics: Changing the value paradigm

Metrics associated with application management services (AMS)—like the services themselves—often focus on cost reduction and
improvement of back-office services. Especially when it comes to outsourced AMS, metrics often emphasize compliance with service level
agreements (SLAs) that govern the outsourcing relationship and tactical support outcomes.

Now, as organizations undergo technology-driven transformations of business and operating models, IT departments have the
opportunity to elevate the role of AMS through a more strategic approach to using metrics. In fact, effectively designed and executed
metrics can enable transformational shifts, helping CIOs not only manage overall system stability but also free up valuable bandwidth for
IT initiatives aimed at increasing business value.

This point of view takes a closer look at the rationale for and potential benefits of this enhanced approach to AMS metrics.

Many organizations seem to employ metrics just for compliance and not commitment. Metrics must be used rigorously for review as drivers of positive change, and as indicators that help identify deviations from an AMS plan before they become execution failures.
Ashish Mehta, Director Enterprise Applications
Dolby Laboratories, Inc.

A shift in how and why metrics are applied


According to Gartner, annual application development and support investment is roughly 43 percent of total IT
spending. Half (50 percent) of that is for new development projects while the remaining half is for maintenance of
existing systems. By 2023, 40 percent of application modernization work will be contracted under a multiyear AMS
outsourcing arrangement, up from 20 percent in 2019.

Why are these numbers significant? Because while managing and measuring AMS operations has always been
important to the success of CIOs and their teams, it will only grow more so in the future. By reducing the number
of IT resources required for maintenance activities, CIOs can increase emphasis on application enhancements and
enabling new technologies that drive value across the enterprise. Enhanced AMS metrics can provide guideposts
to help an IT organization achieve those objectives.

AMS metrics traditionally have been designed for and consumed by the CIO’s office for decision making. That’s
important, of course, but in an enhanced metrics framework the monitoring of metrics can be shifted to middle
management and tactical teams—the frontline of AMS—so they can interpret the data and directly initiate actions
focused on meeting the transformation objectives established by the business and IT leadership.
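To illustrate how frontline teams might consume metrics directly rather than waiting for a leadership review, the short Python sketch below checks a few hypothetical metric readings against agreed thresholds and flags deviations early, in the spirit of catching them before they become execution failures. The metric names and threshold values are illustrative assumptions, not part of any specific AMS framework.

```python
# Illustrative sketch: flag metric readings that deviate from agreed thresholds
# so frontline AMS teams can act before a deviation becomes an execution failure.
# Metric names and threshold values below are hypothetical examples.

THRESHOLDS = {
    "incident_backlog": 50,          # max open incidents tolerated
    "mean_resolution_hours": 24.0,   # max average time to resolve an incident
    "slo_adherence_pct": 95.0,       # min percentage of tickets meeting SLO
}

def flag_deviations(readings: dict) -> list[str]:
    """Return human-readable warnings for readings outside their thresholds."""
    warnings = []
    for metric, value in readings.items():
        limit = THRESHOLDS.get(metric)
        if limit is None:
            continue
        # slo_adherence_pct is a "higher is better" metric; the others are "lower is better".
        breached = value < limit if metric == "slo_adherence_pct" else value > limit
        if breached:
            warnings.append(f"{metric} = {value} breaches threshold {limit}")
    return warnings

if __name__ == "__main__":
    sample = {"incident_backlog": 72, "mean_resolution_hours": 18.5, "slo_adherence_pct": 93.0}
    for warning in flag_deviations(sample):
        print("ACTION NEEDED:", warning)
```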

A new approach to AMS performance measurement


An enhanced metrics framework aimed at both improving application maintenance and enabling the development of new capabilities has four key levers: productivity, performance, quality, and people (Figure 1).

Figure 1. Levers of enhanced AMS measurement

Productivity metrics
• Number of incidents resolved
• Number of incidents outstanding
• Number of changes released
• Number of changes outstanding
• Number of open problems
• Number of open problems with RCA
• Number of enhancements outstanding
• Number of enhancements released
• Service requests and incident volume trends
• Percentage of enhancements delivered on time
• Percentage of enhancements delivered within budget
• Effort categorization: productivity, planned vs. actual hours, etc.

Performance metrics
• Incident response time
• Mean incident resolution time
• Mean incident resolution time per person
• Mean change turnaround time
• Total SLO violations
• Total release downtime
• Total application downtime
• Total application degraded performance time
• Total actual vs. budgeted costs

Quality metrics
• Percentage of failed changes
• Percentage of incidents reopened
• Percentage of failed releases
• Configuration item quality
• Percentage of incidents or changes without proper documentation
• Number of recurring incidents

People metrics
• Customer satisfaction
• Staff turnover
• Planned vs. unplanned attrition
• Average time to fill positions
• Onsite/offshore headcount
• Percentage of offshore co-located resources
• Bench strength
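To make these definitions concrete, the following Python sketch derives a few of the Figure 1 metrics (number of incidents resolved and outstanding, mean incident resolution time, and mean incident response time) from a simple list of ticket records. The record fields are assumptions for illustration; a real AMS program would pull these values from its ITSM tooling.

```python
# Illustrative sketch: deriving a few of the Figure 1 metrics from ticket records.
# The ticket fields (opened, responded, resolved) are assumed for illustration.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Incident:
    opened: datetime
    responded: Optional[datetime] = None   # first response, if any
    resolved: Optional[datetime] = None    # resolution, if any

def productivity_and_performance(incidents: list[Incident]) -> dict:
    resolved = [i for i in incidents if i.resolved]
    outstanding = [i for i in incidents if not i.resolved]
    resolution_hours = [(i.resolved - i.opened).total_seconds() / 3600 for i in resolved]
    response_hours = [(i.responded - i.opened).total_seconds() / 3600
                      for i in incidents if i.responded]
    return {
        "incidents_resolved": len(resolved),                       # productivity
        "incidents_outstanding": len(outstanding),                 # productivity
        "mean_resolution_hours": (sum(resolution_hours) / len(resolution_hours)
                                  if resolution_hours else 0.0),   # performance
        "mean_response_hours": (sum(response_hours) / len(response_hours)
                                if response_hours else 0.0),       # performance
    }

if __name__ == "__main__":
    tickets = [
        Incident(opened=datetime(2020, 1, 9, 8, 0), responded=datetime(2020, 1, 9, 9, 0),
                 resolved=datetime(2020, 1, 9, 20, 0)),
        Incident(opened=datetime(2020, 1, 10, 7, 0), responded=datetime(2020, 1, 10, 8, 30)),
    ]
    print(productivity_and_performance(tickets))
```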


In this construct, value to the enterprise increases as attention and resources are shifted from tactical support metrics, such as the
“Number of incidents outstanding” and “Incident response time” in the Productivity Metrics box of Figure 1, toward activities that have
relatable business impact, such as “Number of enhancements outstanding” and “Number of enhancements released” in that same box.

While it’s important to track performance with individual metrics, their impact as a whole is a real measure of AMS maturity. That maturity
can be evaluated across two other dimensions—throughput and stability (Figure 2).

Looking at AMS through this lens enables an IT organization to effectively run and evolve its applications portfolio while maximizing overall business value. In this view, the “run” component of an AMS outsourcing engagement is reflected by program “Stability” on the X axis in Figure 2, and business value is reflected by “Throughput” on the Y axis. A key decision for CIOs is what outcomes they want to achieve and then where to target their AMS team’s efforts to move from their current level of maturity to the desired level.

Figure 2. Value increases with enhanced metrics maturity

(Quadrant chart: program Throughput on the Y axis and program Stability on the X axis, each running from low to high, with value increasing toward the upper right. Each quadrant is characterized by its enhancement count, problem count, and engagement levers.)

• Limited performance (low throughput, low stability): engagement levers reflect a random selection of metrics
• Strong operational performance (low throughput, high stability): engagement levers skewed towards stability
• Strong transformational performance (high throughput, low stability): engagement levers skewed towards throughput
• Strong overall performance (high throughput, high stability): engagement levers aligned with program goals
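One way to read Figure 2 is as a simple classification of an AMS program by its throughput and stability. The Python sketch below maps scores onto the four quadrant labels; the 0-to-1 scoring scale and the 0.5 cut-off are illustrative assumptions, not something prescribed by the framework.

```python
# Illustrative sketch of the Figure 2 maturity quadrants.
# Throughput and stability scores are assumed to be normalized to the range 0..1;
# the 0.5 cut-off separating "low" from "high" is an illustrative assumption.

def maturity_quadrant(throughput: float, stability: float, cutoff: float = 0.5) -> str:
    high_throughput = throughput >= cutoff
    high_stability = stability >= cutoff
    if high_throughput and high_stability:
        return "Strong overall performance"           # levers aligned with program goals
    if high_throughput:
        return "Strong transformational performance"  # levers skewed towards throughput
    if high_stability:
        return "Strong operational performance"       # levers skewed towards stability
    return "Limited performance"                      # random selection of metrics

print(maturity_quadrant(throughput=0.7, stability=0.8))  # -> Strong overall performance
```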


Enhanced metrics framework implementation


Deployment of an enhanced metrics framework is a multistep process. Once the appropriate metrics for a specific AMS environment are selected, they should be implemented first across the IT organization and then extended to the AMS partner ecosystem. Of course, with an initiative of this scope, an effective change management program can help address implementation challenges and drive adoption of the changes among all affected stakeholders.
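One lightweight way to operationalize this rollout is to maintain the selected metrics as a shared catalog recording, for each metric, its lever, its owner, and the scope at which it currently applies (the IT organization first, then the AMS partner ecosystem). The Python sketch below shows one possible shape for such a catalog; the field names and example entries are assumptions for illustration only.

```python
# Illustrative sketch: a shared catalog of selected AMS metrics, tracking the lever
# each belongs to, who owns it, and how far the rollout has progressed.
# Field names and example entries are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str
    lever: str          # "productivity", "performance", "quality", or "people"
    owner: str          # role accountable for reviewing the metric
    scope: str          # "it_organization" first, then "ams_partner_ecosystem"

CATALOG = [
    MetricDefinition("Number of incidents resolved", "productivity",
                     "AMS delivery lead", "it_organization"),
    MetricDefinition("Total SLO violations", "performance",
                     "Service manager", "ams_partner_ecosystem"),
    MetricDefinition("Percentage of incidents reopened", "quality",
                     "Quality lead", "it_organization"),
    MetricDefinition("Staff turnover", "people",
                     "Resource manager", "it_organization"),
]

# Example: list metrics not yet extended to the AMS partner ecosystem.
pending = [m.name for m in CATALOG if m.scope != "ams_partner_ecosystem"]
print(pending)
```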

Enhanced metrics in action: Multinational audio and imaging technology company


The client is based in San Francisco, CA, with offices in more than 20 countries. The company’s innovative research and engineering creates breakthrough experiences for billions of people worldwide through collaborations that span businesses and consumers.

The client’s IT footprint includes 10 primary technologies and more than 80 different applications. It extends to 35 countries and includes
cloud applications hosted by a service provider, software as a service (SaaS), and custom applications developed in house. The client’s IT
department provides support for business use of IT applications, development of new applications, enhancement of existing applications,
and day-to-day application support.

Since 2010, the client had outsourced application management and development to a service provider. Performance was measured using
sophisticated metrics that the client developed. However, results indicated subpar performance by the service provider, including:

• Higher than acceptable incident count
• A growing backlog of user requests (new enhancements and features)
• Higher than acceptable team attrition
• Low end-user satisfaction score
• Minimal process documentation and adherence


In 2013, the client engaged Deloitte as its new AMS provider for support across all its software applications, either onsite or through one of
Deloitte’s global delivery centers. Additionally, the engagement included development support services using a mix of functional, technical,
system administration, and testing personnel who work hand in hand with the client to develop, enhance, and support applications.
Altogether, Deloitte assists with three or four major application management and development projects annually, as well as 10 to 15 other
projects of varying scope and duration.

The client’s vision for the program was to optimize day-to-day application maintenance and support so that more resources could be
applied to application development and enhancement as part of the company’s ongoing digital transformation. The client and Deloitte
established a governance program driven by a set of enhanced metrics (Figure 3, checked items) designed to monitor the day-to-day
system performance issues that the client had previously experienced. These metrics were determined in part by the maturity of the client’s IT organization and by the tools available to capture and report them.

Figure 3. Enhanced metrics chosen by the client (checked items)

Productivity metrics
✓ Number of incidents resolved
✓ Number of incidents outstanding
• Number of changes released
• Number of changes outstanding
✓ Number of open problems
✓ Number of open problems with RCA
✓ Number of enhancements outstanding
✓ Number of enhancements released
✓ Service requests and incident volume trends
✓ Percentage of enhancements delivered on time
• Percentage of enhancements delivered within budget
✓ Effort categorization: productivity, planned vs. actual hours, etc.

Performance metrics
✓ Incident response time
✓ Mean incident resolution time
• Mean incident resolution time per person
• Mean change turnaround time
✓ Total SLO violations
• Total release downtime
• Total application downtime
• Total application degraded performance time
✓ Total actual vs. budgeted costs

Quality metrics
• Percentage of failed changes
✓ Percentage of incidents reopened
• Percentage of failed releases
• Configuration item quality
✓ Percentage of incidents or changes without proper documentation
• Number of recurring incidents

People metrics
✓ Customer satisfaction
✓ Staff turnover
✓ Planned vs. unplanned attrition
• Average time to fill positions
✓ Onsite/offshore headcount
• Percentage of offshore co-located resources
• Bench strength

In less than three years, the client saw significant results across several dimensions:

• System stability improvements: the incident backlog (incidents awaiting action) fell by 96 percent, ticket aging was reduced by 98 days, and adherence to service level objectives (incident response and resolution times) increased to 98 percent (a sketch of this adherence calculation follows the list).

• Throughput: resource effort going toward delivery of business enhancements increased to a consistent 60 percent of total resource allocation (Figure 4).

• Team stability: improved to the point that the client now plans for rotation of team members across a 12- to 18-month timeframe.

• Cost efficiency: Operate engagement services were delivered 23 percent under planned budget for the first three years of the engagement.
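For readers who want to see how an adherence figure like the 98 percent above is typically derived, the sketch below computes SLO adherence as the share of incidents whose response and resolution times both fall within their targets. The target values and record fields are illustrative assumptions, not the client’s actual SLA terms.

```python
# Illustrative sketch: SLO adherence as the percentage of incidents that met both
# their response-time and resolution-time targets. Targets below are assumptions.

RESPONSE_TARGET_HOURS = 4.0
RESOLUTION_TARGET_HOURS = 48.0

def slo_adherence(incidents: list[dict]) -> float:
    """incidents: dicts with 'response_hours' and 'resolution_hours' keys."""
    if not incidents:
        return 100.0
    met = sum(
        1 for i in incidents
        if i["response_hours"] <= RESPONSE_TARGET_HOURS
        and i["resolution_hours"] <= RESOLUTION_TARGET_HOURS
    )
    return 100.0 * met / len(incidents)

sample = [
    {"response_hours": 1.5, "resolution_hours": 20.0},
    {"response_hours": 6.0, "resolution_hours": 30.0},  # missed response target
]
print(f"SLO adherence: {slo_adherence(sample):.0f}%")   # -> 50%
```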

Figure 4. Throughput improvement

(Chart: percentage of resource allocation to Enhancement versus Operations work, plotted annually from 2015 through 2020, with linear trend lines for each series.)
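Figure 4 tracks throughput as the share of total resource effort allocated to enhancement work rather than day-to-day operations. The sketch below shows that split computed from hypothetical effort-hour totals; the categories and numbers are assumptions for illustration.

```python
# Illustrative sketch: throughput as the share of effort hours spent on
# enhancement work versus day-to-day operations. Numbers are hypothetical.

effort_hours = {"enhancement": 620.0, "operations": 410.0}

total = sum(effort_hours.values())
throughput_pct = 100.0 * effort_hours["enhancement"] / total
operations_pct = 100.0 * effort_hours["operations"] / total

print(f"Enhancement: {throughput_pct:.0f}% of resource allocation")  # ~60%
print(f"Operations:  {operations_pct:.0f}% of resource allocation")  # ~40%
```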


By deploying enhanced metrics, the client was able not only to stabilize its application management and development program but also to begin delivering much greater value to the organization (Figure 5).

Figure 5. The client’s value improvement through enhanced metrics

(The chart repeats the Figure 2 maturity matrix, with Throughput on the Y axis and Stability on the X axis. The client sat in the "Limited performance" quadrant in 2013 and had moved to the "Strong overall performance" quadrant after two years of the engagement.)


Instead of driving change from the top and reviewing the metrics at the leadership
level, we make sure that each individual in the organization has visibility into
those metrics. Then they are empowered to make application management and
development decisions that help drive the metrics in a positive way. Once the
people on the ground become responsible for the metrics, they understand how
every action or every change they make to support the application will drive the
positive turn of those metrics.
Ashish Mehta, Director Enterprise Applications
Dolby Laboratories, Inc.

A new way forward for AMS programs


While AMS is not new, a focus on enhanced metrics can enable CIOs and those managing IT portfolios to deliver broader enterprise value through improved application performance and cost efficiency. Certainly, new tools and technologies contribute to this transformational change. But enhanced metrics, if strategically chosen and effectively applied, can provide significant clarity about the performance of an AMS program. Carefully managed, the four levers of productivity, performance, quality, and people can enable AMS programs to deliver transformative value above and beyond tactical support services.

Contact us:

Ramit Dayal
Senior Manager
Deloitte Consulting LLP
[email protected]

Ashish Mehta
Director Enterprise Applications
Dolby Laboratories, Inc.
[email protected]
www.linkedin.com/in/ashmehta/

About Deloitte
Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee (“DTTL”), its network of member firms,
and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as “Deloitte Global”)
does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that
operate using the “Deloitte” name in the United States and their respective affiliates. Certain services may not be available to attest clients under the
rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms.

As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed
description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting.

Copyright © 2020 Deloitte Development LLC. All rights reserved.
