iSixSigma-Main Case Study
IT Call Center
Focusing the Project
IT services is a competitive field populated with companies that all deliver important online and
call center support to a variety of customers. Most IT services businesses come to realize that
their clients have choices and, within the same pricing range, they gravitate to the support
organization where the service is best.
In this case study of an IT services business, benchmarking helped quantify what the business
already knew – its competitive position was not totally secure. There are a number of ways the
company might have responded to the challenge. While the company had built up a reasonable
capability in Six Sigma, its management realized improvement was not as simple as forming a
project team and turning them loose on the problem. Senior managers had learned that an
important part of their responsibility as leaders is to find the issues that are well-enough defined
and of a scope to be suitable for a Six Sigma DMAIC project team to take on.
After working through the benchmarks and other data and with the help of a Black Belt, they
were able to distill enough clues and evidence in the top-level industry figures to select a
DMAIC project they could sponsor with facts and supporting data.
Figure 2: Customer Satisfaction for Average Companies, 2001-2003
Figure 4: Relationship of Customer Satisfaction Ratings and New Account Growth in Best-in-
Class Companies
Figure 5: Support Costs Per Call
The comparison of the company's customer satisfaction ratings (73 percent on a standardized 100-point scale) with the "average" companies in the same sector (76 percent) and "best-in-class" competitors (87 percent) showed management it had work to do.
The evidence also supported the important business contention that customer satisfaction (CSat)
can be a driver of new account growth. Figure 4 illustrates that, among best-in-class competitors, customer satisfaction ratings tracked with about 75 percent of the variation in new account growth, as indicated by the R-sq value of the fitted linear regression. Senior
managers knew that the relationship didn’t “prove causality” but, together with their business
sense, they saw this as an indicator that customer satisfaction shows up on the bottom line.
A model was built to check the feasibility of focusing a DMAIC project on call center service
measures (Figure 6). In the figure, the Y, or NewAcct, is new account growth during the
benchmark period (as a percent of sales). The Xs are:
Transfer = Average number of transfers (to different agents and help systems) during a service
call.
Service = Average service time during the call (the time spent getting the answer to the question,
problem solving advice, etc.).
Obviously the company would like to have seen a better model-fit than the 62 percent R-Sq seen
here. Realizing, though, that many factors play into account growth, the senior leadership felt
that the model showed enough linkage to the process factors that pursuit of the project was
feasible.
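To illustrate the kind of model referred to here, the sketch below fits NewAcct against Transfer and Service with ordinary least squares and reports R-squared, the fit statistic the leadership reviewed. The benchmark observations are made up for illustration; they are not the company's data.

```python
# Illustrative sketch (not the company's actual data): fitting a linear model
# NewAcct ~ Transfer + Service and reporting R-squared.
import numpy as np

# Hypothetical benchmark observations: one row per benchmarked company/period.
transfer = np.array([1.2, 0.8, 2.1, 1.7, 0.5, 2.6, 1.1, 1.9])   # avg transfers per call
service  = np.array([6.5, 5.2, 8.1, 7.4, 4.8, 9.0, 6.0, 7.8])   # avg service minutes
new_acct = np.array([3.8, 4.6, 1.9, 2.5, 5.1, 1.2, 3.9, 2.2])   # new account growth, % of sales

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(transfer), transfer, service])
coef, *_ = np.linalg.lstsq(X, new_acct, rcond=None)

predicted = X @ coef
ss_res = np.sum((new_acct - predicted) ** 2)
ss_tot = np.sum((new_acct - new_acct.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print("Intercept, Transfer, Service coefficients:", np.round(coef, 3))
print("R-squared:", round(r_squared, 3))
```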
Since the company’s senior managers had ready access to wait time benchmark data, they
checked their company’s performance against the industry (Figures 7, 8 and 9).
The wait time review indicated that, indeed, the company was behind the industry norms. This,
and the model indication that wait time could be an influential factor in customer satisfaction and
new account growth (Figure 5), helped the senior managers see that a DMAIC team focused on
improvement in this area could be worthwhile.
Figure 7: Call Wait Times for the Company (Median 4.5)
The company could see a strong indication that a DMAIC project to reduce support costs should
be quite doable – and should return significant dollars to the bottom line. Management also could
see that the DMAIC team should look for the improved customer experience connected with
reduced wait times and service times to improve new account growth – bringing dollars to the
top line.
The company assigned a Champion from its leadership team to take responsibility for the new
project and identify a team leader and key team members. The team was given its top level goals
and scope – to reduce support costs while improving new account growth. The work with the
benchmark data was helpful in orienting the team to the project rationale. The team began
working on their project charter.
The Define Phase
The senior leadership of the IT services company completed the important pre-project work and
found an area of the business worthy of attention by a DMAIC (Define, Measure, Analyze,
Improve, Control) project team. The team then began work on understanding and articulating the
project goals, scope and business case.
The DMAIC roadmap called for work in these areas during the Define phase:
D1. Project Charter: Articulating the problem statement, business case, goal statement, project scope and plan.
D2. Customer Requirements: Identifying all the internal and external customers who depend
on the outputs of the process under study, the deliverables and measures connected with those
outputs, and the process steps, process inputs and (as appropriate) the suppliers of those inputs.
D3. High Level Process Map: Showing the flow of information, materials and resources, from
key process inputs, through process steps and decision points, to create the process outputs. The
map describes the flow of what happens within the scope of the target process and it defines the
boundaries of that scope.
Problem Statement: “Competitors are growing their levels of satisfaction with support
customers, and they are growing their businesses while reducing support costs per call. Our
support costs per call have been level or rising over the past 18 months, and our customer
satisfaction ratings are at or below average. Unless we stop – or better, reverse this trend – we
are likely to see compounded business erosion over the next 18 months.”
Business Case: “Increasing our new business growth from 1 percent to 4 percent (or better)
would increase our gross revenues by about $3 million. If we can do this without increasing our
support costs per call, we should be able to realize a net gain of at least $2 million.”
Goal Statement: “Increase the call center’s industry-measured customer satisfaction rating from
its current level (90th percentile = 75 percent) to the target level (90th percentile = 85 percent) by
end of the fourth quarter without increasing support costs.”
The project team also developed its initial set of milestones, tasks, responsibilities, schedule and
communication plan. After reviewing the charter with their Champion, team members began
work on the next step – customer requirements.
In a process like this one, there are internal and external customers and dependencies that are not so obvious. A SIPOC
table (suppliers, inputs, process, outputs and customers) develops a detailed view of all the
important customers, their requirements, and the related process step and supplier dependencies.
SIPOC Table: The team developed a SIPOC table to help identify and report what it learned
about key customers and their requirements. While processes flow in the SIPOC direction, the
thought process used to build the table often begins with the customer, suggesting display in the
COPIS (customer, output, process, inputs and suppliers) direction as shown in Figure 1.
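Because a SIPOC table is simply structured data, the same records can be listed in either direction. The sketch below, with illustrative entries rather than the team's actual table, shows a SIPOC record set displayed in both SIPOC and COPIS order.

```python
# Minimal sketch of a SIPOC record set (entries are illustrative, not the
# team's actual table). Displaying the same rows right-to-left gives the
# COPIS view described above.
SIPOC_COLUMNS = ["Supplier", "Input", "Process", "Output", "Customer"]

rows = [
    {"Supplier": "Phone system vendor", "Input": "Routed call",
     "Process": "Answer and log call", "Output": "Logged case",
     "Customer": "Support agent"},
    {"Supplier": "Support agent", "Input": "Resolution notes",
     "Process": "Research and resolve issue", "Output": "Answer / fix",
     "Customer": "Client caller"},
]

def show(columns):
    for row in rows:
        print(" | ".join(f"{c}: {row[c]}" for c in columns))

show(SIPOC_COLUMNS)            # SIPOC direction (process flow)
show(SIPOC_COLUMNS[::-1])      # COPIS direction (customer-back thinking)
```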
Voice-of-Customer (VOC) Interviews: The SIPOC table highlighted some areas where more
information about the process – as understood by customers and call center staff – would be
helpful. Representative samples of the company’s customers were involved in group interviews.
The following are excerpts from individual customer responses and are representative of the data
gathered. The answers are to the question: “What influences your level of satisfaction with our
services?”
Group 1
A1. “Well, first and foremost I want the correct answer to my questions or issues. It makes me
nuts when I’m told something that turns out later to be wrong or only part of the story.”
A2. “I really like it when the person who answers the phone has a good attitude. Sometimes you
can just tell they wish you’d just go away and stop bothering them with stupid questions.”
A3. “Well I can sure tell you what I don’t like. That voice response thing where they ask you to
make 46 choices (and none match what you want) is a real pain – ends up taking a lot of my time
and then they seem to ignore it all anyway and ask the same questions again. What’s the point?
Sometimes I wonder if they ever asked a real customer to test these things before they put them
in.”
A4. “I’m happy when my call gets answered quickly and the person I talk to knows their stuff
and gives me an answer on the first call. When I have to wait for a call back and talk to someone
else, repeating some of the same stuff – that’s a real drag!”
A5. “I like it when the person on the phone can actually make a decision without putting me on
hold while they get an ok from a supervisor. Seems like they spend 10 minutes to make a $5
decision. That just doesn’t make sense to me. Seems like some control freak is running the
show.”
A6. “Follow-through is what really matters to me. I don’t necessarily expect you’ll always be
able to resolve my issue immediately, but I do expect you’ll call me back in a reasonable amount
of time.”
A7. “My hot button is getting someone who has enough patience to really solve my problem.
Some of this stuff seems pretty technical to me, and I don’t always know even the right question
to ask. I like it when the person on the phone cares enough to get to the real solution, even when
I can’t explain exactly what I need.”
A8. “I don’t want to be transferred around. Last time I called I got transferred four times and
ended up with the same person I started with. I’m too busy to put up with that!”
Group 2
A1. “Our big concern is speed. Our customers demand answers from us, and we in turn rely on
you for some of that. That means you have to be adequately staffed to meet call volume.”
A2. “What we most need from you is people who can answer complicated questions accurately
and quickly – not just the easy stuff, we can do that ourselves.”
A3. “We need you to have immediate access to up-to-date information about all of our accounts
and transactions with all of your branches and locations. It creates huge problems for us when
your records aren’t accurate and timely. We can’t sit on hold for 10 minutes.”
A4. “I call 3 to 4 times a week, and the thing I find most frustrating is the lack of consistency.
Sometimes my call gets answered in 2 minutes, which I can live with, and sometimes it’s 10,
which I can’t. I also notice that there’s a lot of variation in how long it takes to get answers to
very similar issues. I call your competition all the time also, and they’re a lot more consistent.”
Figure 2: Process Map
The process map will be helpful during the Measure phase, as the project team considers how
and where to gather data that will shed light on the root cause of the issues most pertinent to the
project’s goals.
The Measure Phase
Having developed a good understanding of the project's business case, customer requirements (identifying the Y's) and the as-is process, the Six Sigma project team of the IT services business began to focus on the Measure phase. The team identified the measures and data collection plan for gathering the right amount of the right data to drive its learning about the root causes and drivers that impact the project Y's.
M1. Refine the Project Y's: Getting even clearer about how the project's key outcome measure(s) will be defined, measured and reported.
M2. Define Performance Standards for the Y's: Identifying how performance will be measured – usually somewhere on the continuum from capability measures like Cp and Cpk for "variables" data that is normally distributed to percentile or other capability metrics for "attribute" and other data that may be skewed in distribution. (A small illustrative calculation follows this list.)
M3. Identify Segmentation Factors for Data Collection Plan: Starting with the natural segmentation of project Y's and moving through consideration of prospective driving factors (X's), segmentation suggests the packets of data that should be collected in order to compare and contrast segments to shed light on Y root causes and drivers.
M4. Apply Measurement Systems Analysis (MSA): In any project, raw data is gathered and
then converted into measures. That process comprises a “measurement system” that should be
characterized and strengthened in terms of its accuracy and repeatability.
M5. Collect the Data: Gathering data, preserving its meaning and noting any departures from
the discipline put in place under MSA.
M6. Describe and Display Variation in Current Performance: Taking an initial look at the
data for its distribution, extreme values and patterns that suggest special variation.
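As noted in M2, capability for normally distributed variables data is often summarized with Cp and Cpk, while skewed data is better summarized with percentiles. A minimal, illustrative calculation (using simulated values, not the call center's data) is sketched below.

```python
# Hedged sketch of the "variables data" end of the continuum noted in M2:
# Cp/Cpk for a normally distributed measure (values are illustrative).
import numpy as np

def cp_cpk(data, lsl, usl):
    """Return (Cp, Cpk) for a sample against lower/upper spec limits."""
    mu = np.mean(data)
    sigma = np.std(data, ddof=1)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Example: service time in minutes with hypothetical spec limits of 2 to 12.
rng = np.random.default_rng(1)
service_minutes = rng.normal(loc=7.0, scale=1.5, size=200)
print(cp_cpk(service_minutes, lsl=2.0, usl=12.0))
```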
M1. Refine the Project Y's
During this step the team considered exactly how the project Y's would be defined and measured:
Primary – Customer Satisfaction:
1. By industry standard monthly survey.
2. The project will require additional, more frequent, case-by-case customer satisfaction data. A measurement system that tracks with the industry survey will be devised and validated.
Secondary – Support Cost (Per Call): The staff time connected with each call:
- Call answering and discussion
- Case research
- Callback time
will be loaded with a distribution of benefits and infrastructure costs to compute overall support cost per call.
Related / of Interest:
- Days to Close: Time span from call origination through client indication that the issue is closed to their satisfaction.
- Wait Time: Automatically tracked for calls in queue. Summed for calls encountering multiple transfers.
- Transfers: Automatically tracked for each call moved to another extension.
- Service Time: Automatically tracked for time from staff call pickup until hangup or transfer.
M2. Define Performance Standards for the Y(s)
For each project Y, the current baseline and best-estimate target was documented. In some cases,
the team found that good baseline data was unavailable. (Unfortunately that is a common
occurrence in DMAIC projects.)
The team’s Y-to-X trees for support cost and customer satisfaction are shown in Figures 1 and 2.
Input-Output Analysis
Building on the information developed in the SIPOC / COPIS table, the team reviewed process
inputs and outputs, classifying each as “controlled” (with process provisions in place to measure
and influence that input or output, if necessary) or “uncontrolled” (with no such provisions in
place). See Figure 3.
Cause-and-Effect Matrix
To further explore potentially influential factors, the team created a cause-and-effect matrix
(Figure 4). The high scoring items in this analysis were strongly considered for data collection.
The highest scoring, "Available Staff for Transfer," was included. The next highest, "Staff Experience/Training," was not readily available in the historic database. (There had been reluctance to log personal data as part of the ongoing call logging.)
Figure 4: Cause-and-Effect Matrix
M4. Apply Measurement Systems Analysis (MSA)
To prepare for the measures to be collected in the next step, the team reviewed the measurement
systems. In transactional processes, any activity that gathers raw data and converts it into counts,
classifications, numbers or other forms of measure is a “measurement system.” While the
statistical methodology connected with MSA is beyond the scope of this article, Figure 5 depicts
the four questions that are usually posed for measurement systems in transactional processes.
Viewed simply, the intent of MSA is to strengthen a measurement system so that it is suitably
accurate, repeatable, reproducible and stable. A fifth issue, “linearity” (the accuracy of the
system over the range of measured values), is sometimes considered.
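While the full MSA methodology is beyond this article's scope, one simple check is easy to sketch: for an attribute measure such as call type classification, repeatability and reproducibility can be screened as percent agreement. The reviewer data below is made up for illustration.

```python
# A very small slice of MSA for a transactional/attribute measure: checking
# repeatability as the percent of calls a reviewer classifies the same way
# on two passes, and reproducibility as agreement between two reviewers.
# Data and categories are illustrative only.
known_standard   = ["problem", "change", "question", "problem", "question", "change"]

reviewer_a_pass1 = ["problem", "change", "question", "problem", "question", "change"]
reviewer_a_pass2 = ["problem", "change", "question", "change", "question", "change"]
reviewer_b_pass1 = ["problem", "problem", "question", "problem", "question", "change"]

def pct_agree(x, y):
    return 100.0 * sum(a == b for a, b in zip(x, y)) / len(x)

print("Repeatability (A vs A):", pct_agree(reviewer_a_pass1, reviewer_a_pass2))
print("Reproducibility (A vs B):", pct_agree(reviewer_a_pass1, reviewer_b_pass1))
print("Accuracy vs known standard:", pct_agree(reviewer_a_pass1, known_standard))
```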
M6. Describe and Display Variation in Current Performance
A first look at the incoming data gave the team insights about extreme values and about patterns suggesting problems with the measurement system. With that information, the team began to anticipate what the Analyze phase would reveal. The team's question at this point was: How is the Y distributed? The team examined the layout of the measured values – their symmetry and extreme values – which suggested the kinds of graphical and statistical analysis that would be appropriate (Figure 7).
Data on the call center X measures was graphed and charted a number of ways. Figure 8 shows
the variation in customer Wait Times on an Xbar-R control chart. Variation above and below the
chart’s control limits suggested that there were “special causes” in play – worth understanding in
more detail by the team in the Analyze phase.
Figure 8: Xbar-R Chart of Wait Time
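For readers who want to see the mechanics behind a chart like Figure 8, here is a brief, illustrative sketch of how Xbar-R centerlines and control limits are computed. The wait time subgroups below are simulated, not the call center's data.

```python
# Hedged sketch of Xbar-R control limits, using hypothetical wait time
# subgroups (5 sampled calls per day over 20 days, in minutes).
import numpy as np

rng = np.random.default_rng(7)
subgroups = rng.normal(loc=4.5, scale=1.2, size=(20, 5))

xbar = subgroups.mean(axis=1)                               # subgroup means
r = subgroups.max(axis=1) - subgroups.min(axis=1)           # subgroup ranges

A2, D3, D4 = 0.577, 0.0, 2.114       # standard control chart constants for n = 5
xbar_bar, r_bar = xbar.mean(), r.mean()

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

out_of_control = (xbar > ucl_x) | (xbar < lcl_x)
print(f"Xbar chart: center {xbar_bar:.2f}, limits ({lcl_x:.2f}, {ucl_x:.2f})")
print(f"R chart:    center {r_bar:.2f}, limits ({lcl_r:.2f}, {ucl_r:.2f})")
print("Subgroups signaling special causes:", np.where(out_of_control)[0])
```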
The Analyze Phase
Having refined the project’s key outcome measures, defined performance standards for project
Y's, identified segmentation factors and defined measurement systems, the Six Sigma project
team of the IT services business began to focus on the Analyze phase. The DMAIC (Define,
Measure, Analyze, Improve, Control) roadmap called for work in these areas during the Analyze
phase:
A1. Measure Process Capability: Before segmenting the data and “peeling the onion” to look
for root causes and drivers, the current performance is compared to standards (established in step
M2 of the Measure phase).
A2. Refine Improvement Goals: If the capability assessment shows a significant departure
from expectations, some adjustment to the project goals may need to be considered. Any such
changes will, of course, be made cautiously, supported with further data, and under full review
with the project Champion and sponsors.
A3. Identify Significant Data Segments and Patterns: By segmenting the Y data based on the factors (X's) identified during the Measure phase, the team looks for patterns that shed light on what may be causing or driving the observed Y variation.
A4. Identify Possible X’s: Asking why the patterns seen in A3 are as observed highlights some
factors as likely drivers.
A5. Identify and Verify the Critical X’s: To sort out the real drivers from the “likely suspects”
list built in A4, there is generally a shift from graphical analysis to statistical analysis.
A6. Refine the Financial Benefit Forecast: Given the “short list” of the real driving X’s, the
financial model forecasting “how much improvement?” may need to be adjusted.
Figure 1: Distribution Check for Support Costs
While the graphical summary in Figure 1 shows median-based capability to the detail of quartiles
(1st quartile: 25 percent, median: 50 percent, and 3rd quartile: 75 percent), the team applied a
macro to generate a more detailed percentile view, summarized in the list below.
The 90th percentile capability for support cost was $39.44 per call; spread across the call volume, that represented a very costly gap. The results of these and other capability checks, done at the outset of the Analyze phase, are summarized and compared to the established targets in the table below.
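A percentile view of a skewed cost distribution like this can be generated in a few lines; the sketch below uses a simulated support cost distribution, and only the $39.44 figure above comes from the case study.

```python
# Sketch of a percentile capability view for a skewed, per-call cost measure
# (the distribution below is simulated for illustration).
import numpy as np

rng = np.random.default_rng(3)
support_cost = rng.lognormal(mean=3.1, sigma=0.45, size=1000)   # $ per call

for p in (25, 50, 75, 90, 95):
    print(f"{p}th percentile: ${np.percentile(support_cost, p):.2f}")
```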
A2. Refine Improvement Goals
Reviewing the data in the table, the team felt that the project targets were still in line and did not
require a change at that time. Had there been a surprise or a show stopper, that would have been
the time to flag it and determine the right action.
Numerous cuts of the data were reviewed with the goal of shedding light on root causes and
drivers underlying variation in the project Y's. A few of those cuts are summarized in Figures 3 and 4. Figure 3 shows that problems and changes look more expensive to service than other types of calls. Figure 4 reveals an added signature in the pattern: Mondays and Fridays stand out as being more costly.
Figure 3: Multi-Vari for Support Costs by Call Type
Figure 4: Multi-Vari for Support Cost by Call Type and Day of the Week
Why do problems and changes cost more than other call types?
Why are calls processed on Mondays and Fridays more expensive?
Why do transfer rates differ by call type? (higher on problems and changes, lower on other call types)
Why are wait times higher on Mondays and Fridays and on Week 13 of each quarter?
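The kind of segmentation behind Figures 3 and 4 (average support cost cut by call type, then by call type and day of week) can be illustrated with a small pandas summary; the data below is hypothetical.

```python
# Illustrative multi-vari style cut: mean support cost by call type, then by
# day of week and call type (values are made up, not the call center's data).
import pandas as pd

calls = pd.DataFrame({
    "call_type": ["problem", "change", "question", "problem", "question", "change"],
    "day":       ["Mon", "Fri", "Wed", "Mon", "Tue", "Fri"],
    "support_cost": [41.0, 38.5, 22.0, 44.2, 20.5, 39.9],
})

print(calls.groupby("call_type")["support_cost"].mean())          # by call type
print(calls.groupby(["day", "call_type"])["support_cost"].mean()) # day-of-week signature
```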
The team reviewed the fishbone diagrams, Y-to-X trees, and cause-and-effect matrices that it had
built during the Measure phase. At this step, with the benefit of the data and insight gained
during A3, the team was ready to get closer to what was really driving the Y's. Figures 5, 6 and 7 trace the team's thinking as it moved through this step. Figure 5 highlights questions about the driving influence of staff availability – and why it may vary so widely on Mondays and Fridays. Figure 6 highlights the issue of staffing/call volume as a ratio; the initial data had looked at these factors individually. Figure 7 raises questions about several factors that were not measured initially – but the data may suggest these as realistic lower-level X's that should be studied using a sample of follow-on data.
Figure 6: Y-to-X Tree
Figure 7: Cause-and-Effect Matrix
The work represented in the previous figures motivated the next round of analysis, step A5, to
check the deeper relationships hypothesized. As is often the case, the team had identified some
new data that could be useful. Further, it had uncovered some new ways to “torture” the current
data to reveal more root cause insights:
Volume to staffing ratio – Call volume and staffing had not revealed much when looked at
separately. Their ratio may be more interesting.
Web-to-phone issue call traffic ratio – Could be computed from the initial data, potentially revealing more insight (both derived factors are sketched below).
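A small sketch of how those two derived factors might be computed as new columns from data already on hand; the daily figures below are hypothetical.

```python
# Computing the proposed derived factors as new columns (illustrative data).
import pandas as pd

daily = pd.DataFrame({
    "call_volume":   [950, 1220, 990, 1310],
    "staff_on_duty": [38, 39, 40, 38],
    "web_issues":    [210, 180, 240, 170],
    "phone_issues":  [740, 1040, 750, 1140],
})

daily["volume_to_staff"] = daily["call_volume"] / daily["staff_on_duty"]
daily["web_to_phone"] = daily["web_issues"] / daily["phone_issues"]
print(daily)
```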
Figure 8: Multi-Vari with Computed Ranges Overlaid
The team was pleased to see that the key support cost drivers (the delays and interruptions during
call servicing) were the same as those known to drive down customer satisfaction – so a win-win
seemed to be possible.
The Improve Phase
Having identified and verified ways that support cost is driven by staffing ratios, process factors
like transfers and callbacks, and the proportion of phone and web traffic, the Six Sigma project
team of the IT services business began identifying and selecting among solutions. It had entered
the Improve phase. The DMAIC (Define, Measure, Analyze, Improve, Control) roadmap called
for work in these areas during the Improve phase:
I1. Identify Solution Alternatives to Address Critical X’s: Consider solution alternatives from
the possibilities identified earlier and decide which ones are worth pursuing further.
I2. Verify the Relationships Between X's and Y's: What are the dynamics connecting the process X's (inputs, KPIVs) with the critical outputs (CTQs, KPOVs)?
I3. Select and Tune the Solution: Using predicted performance and net value, decide which solution alternative is best.
I4. Pilot / Implement Solution: If possible, pilot the solution to demonstrate results and to
verify no unintended side effects.
I1. Identify Solution Alternatives to Address Critical X's
Work done during the Analyze phase identified several areas of prospective improvement that could deliver project results. The solution alternatives centered on staffing levels, shifting more issue traffic to web self-service, and reducing transfers and callbacks.
The team began to think through how each of these alternatives would work – how it would compare and contrast with the current state, and what the costs, benefits and risks would be, for each CTQ, for each of the following stakeholders:
Business: Net value = ($Benefit – $Cost) and other benefits and risks.
Employees (as appropriate): Working conditions, interesting work and growth opportunities.
To understand and compare each solution alternative, the team realized it would need to describe
and characterize each of them with respect to the key requirements. The characterization work is
the core of step I2, and the population and work-through of the selection matrix is the main
activity in I3. A solution selection matrix (Figure 1), empty of evaluations during I1, formed the
basis of the team’s planning for the characterization work. (The matrix is an extract or simplified
view of the “real thing.”)
Figure 1: Solution Selection Matrix Drives the Characterization Work in I2
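As one illustration of how a matrix like Figure 1 is eventually worked through in I3, the sketch below applies hypothetical criteria weights and ratings. These are placeholders only; the team's matrix was still empty of evaluations at this point.

```python
# Minimal sketch of scoring a solution selection matrix: criteria weights
# times ratings, summed per alternative. All weights and ratings here are
# placeholders, not the team's real evaluations.
criteria_weights = {"net_value": 0.4, "customer_CTQs": 0.3, "risk": 0.2, "time_to_impact": 0.1}

ratings = {   # 1 (poor) to 5 (strong), filled in during I2/I3
    "Add staff":                  {"net_value": 3, "customer_CTQs": 4, "risk": 4, "time_to_impact": 5},
    "Shift traffic to web":       {"net_value": 4, "customer_CTQs": 3, "risk": 3, "time_to_impact": 2},
    "Reduce transfers/callbacks": {"net_value": 4, "customer_CTQs": 4, "risk": 4, "time_to_impact": 3},
}

for alternative, scores in ratings.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{alternative}: weighted score = {total:.2f}")
```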
For each solution alternative, a sub-team worked through a series of comparisons and
characterizations in order to check and quantify the key X-Y relationships that could be exploited
for that alternative. Each group began by determining the magnitude of the potential business
benefit. To do that, it was necessary to know the X-Y relationship, known as the “transfer
function.” If the potential benefit appeared to be significant, then the group had to evaluate how
the improvement might be implemented, and what it would cost. Obviously the alternative
passed if benefits meaningfully exceeded the likely cost of the improvement. If not, it was
eliminated.
The staffing option is an illustration of the process used. To examine the other options, the team
followed similar thought processes, but perhaps applied a somewhat different combination of
tools. To evaluate the staffing option, the team asked this series of questions:
1. Which variables will be impacted, and by how much? Wait time, support cost, volume/staff
(v/s) ratio.
2. How will the benefit be measured? Value of account growth minus the cost of additional staff.
3. What chain of analysis will be used to show the connection between additional staff and
account growth? By definition, staffing drives v/s ratio. Does v/s ratio drive wait time? Does
wait time drive account growth? How many staff might be added? How would that impact wait
time? How much benefit would that staffing level produce? What would it cost?
Using regression analysis with the data collected, the team saw that the variation in wait time seemed in part to be related to the v/s ratio (Figure 2). While this did not prove causality, and there clearly were other influential factors, the team suspected a meaningful connection (subject to stronger proof later) and decided to pursue this clue further.
Next, the team wanted to know if wait time was driving account growth – and, if so, by how
much. The team again applied regression analysis. (Figure 3) It appeared that 61 percent of the
variation in account growth could be attributed to wait time. Again, this was not conclusive
proof, but wait time was a worthy suspect.
To understand the number of staff that might be added or reduced, the team considered each day
separately. The team decided to see what would happen, on paper, if the volume/staff ratio for
each day was adjusted to the overall average (i.e., increase staff on Mondays and Fridays to get
to the average v/s ratio, decrease staff to get to the average v/s ratio on Sundays). The team found
that this meant adding 14 people to the call center staff on Mondays. Combining that with what it had learned about the wait time-v/s ratio connection (Figure 2), the team forecast a 1.18-minute reduction in wait time on Mondays. The calculations below trace the financial impact of that forecast.
The team then evaluated the likely impact of wait time on new account growth using information
from Figure 3.
The accounting department was asked to provide some of the facts needed to find out the
incremental value of the projected new account growth. They reported that there were 1,484,000
accounts and the average annual profit per account was $630. With this information and what the
team already knew, it could calculate the financial impact.
0.037% New Account Growth x 1,484,000 Existing Accounts = 549 New Accounts
549 Accounts x $630 Average Profit Per Account = $345,870 Incremental Annual Profit
Next the team calculated the additional staffing cost and the net benefit to the business.
Staff Cost = 14 People x 8 Hours x $30 Per Hour = $3,360 Per Week x 50 Weeks = $168,000
$345,870 Incremental Profit – $168,000 Staff Cost = $177,870 Project Net Benefit to Business
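The same arithmetic can be laid out as a short calculation. The figures below are taken directly from the numbers above, using the $630 average profit per account reported by accounting.

```python
# Reproducing the staffing-option arithmetic above (figures from the article).
new_account_growth = 0.00037          # 0.037% forecast growth
existing_accounts = 1_484_000
avg_profit_per_account = 630          # dollars per year, per accounting

new_accounts = round(new_account_growth * existing_accounts)          # 549
incremental_profit = new_accounts * avg_profit_per_account            # $345,870

added_staff, hours_per_day, rate, mondays_per_year = 14, 8, 30, 50
staff_cost = added_staff * hours_per_day * rate * mondays_per_year    # $168,000

net_benefit = incremental_profit - staff_cost                         # $177,870
print(new_accounts, incremental_profit, staff_cost, net_benefit)
```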
The team completed a similar quantitative analysis of each of the options. Included among them
were one on web service and one on transfers and callbacks. An improvement summary was
written for each.
Web Service Implementation Summary
Approach: Increase client awareness about web service and help clients see how easy it is to
use. (Figure 4)
Risks: Verify that the web system can handle increased volume. Verify that customer
satisfaction does not slip.
Method: Inserts in upcoming mailings describing the web services and interface, plus an announcement on the phone router switch that answers all calls.
Transfer and Callback Reduction Implementation Summary
Risks: No way to calculate how quickly the training will drive the percentage down. There may
be a learning curve effect in addition to the training. Also making staff available for training is an
issue because training is only done on the first Monday of each month.
Method: Considering risks, the decision was made to train 50 percent of those in need of
training and evaluate the impact in a three-month pilot program. If that worked, the second half
would be trained in the following quarter.
Costs: One day of training for approximately 15 people in the pilot program = cost of training ($750 per student x 15) + cost of payroll (8 hours x $30 x 15) = $14,850. If fully effective immediately, this penciled out to about half of the potential benefit. Discounting for risk, the team projected a first-quarter gross (before costs) benefit of approximately $50,000.
When the team had completed a similar quantitative analysis of each of the options, it prepared a
summary comparison and was ready to make recommendations for the next steps.
The team did not pursue tuning the solution in the context of this project, although it recognized
there might be opportunities to further optimize performance of the web component.
Start with staffing (the "quick fix"). It is the fastest and surest way to stem the erosion of business growth. We recognize it is costly and not highly scalable (to other centers, other languages, etc.). This should be a first step, with the hope that it can be supplanted as the solution elements in the other recommendations reduce staff needs.
Web service percent. Begin right away tracking the call volume and customer satisfaction with
this service mode.
Transfer and callback reduction. Start right away. This is a “no brainer” net benefit that should
work well in parallel with the first two solution elements.
Before moving into the pilot phase, the team decided to meet with one of the Master Black Belts
to get a sanity check on its work to that point.
Although the Master Black Belt’s feedback was a bit sobering, the team still felt it wanted to go
ahead with the pilot program. But it decided to do an interim review with the Champion first,
including briefing him on the MBB’s perspective. Here’s a snippet of the Champion’s reactions
to the review:
“First let me say I think this team has done a fine job so far – you’ve found potentially
substantial savings, you’ve got good supporting factual information, and you’ve pointed out the
risks and uncertainties brought out by your MBB.
“While I don’t at all dispute the issues brought out by the MBB, my perspective is a little
different than his. The CEO has told me in no uncertain terms that he wants our account growth
to match the competition, and soon. He made it clear this is a strategic imperative. Customers
have lots of choices, and we could lose out big time if we don’t turn this around right away. It
has been let slide for too long as it is. So, in spite of the issues raised by the MBB, I’m prepared
to spend some money to pilot this because if it works, we will get quick results. It is evident that
adding staff will help us more quickly than any of the other options. We can always cut back
staff later as these other improvements take hold. Turnover in this area is high anyway, so
reducing staff will be painless when the time comes.”
Planning for the pilot covered:
- Preparation and deployment steps for putting the pilot solution in place.
- Measures in place to track results and to detect unintended side effects.
- Awareness of people issues.
Details of the plan for the Monday staffing pilot program included the following elements:
X's to adjust: Staffing level (add five for the pilot, with the full increment to wait for evidence that the plan works).
Y's to measure for impact and unintended side effects:
o Wait time, v/s ratio, customer satisfaction, transfers, callbacks, service time.
o Compare "new staff" versus "old staff" (hypothesis test).
o Measure monthly to observe learning curve effect, if any.
Measurement system issues: Revise existing sampling plan and data collection process to
distinguish new staff from old staff.
Because the current customer satisfaction sampling gives only 1 data point per month (not
enough to see a change), arrange a special sample – 5 per day for the first 60 days of the pilot
(80 percent from existing staff, 20 percent from new staff).
People and logistics issues: Communicate what is happening and why. Emphasize evaluation is
not of individuals, only overall impact.
The team then conducted the pilot program and evaluated the results. The objective was to do
before/after analysis (hypothesis tests to evaluate significance of outcomes), ask what was
learned, refine the improvement if indicated and confirm or revise the business case.
A number of significant questions needed to be answered from the results of the pilot program. Among the most important questions and answers were:
1. Did the additional staffing, and the resulting change in v/s ratio, impact wait time as expected?
The team looked at the results month by month to see if there was a learning curve effect with
the new staff. There was an effect, but the new staff nearly caught up by the end of the third
month. During the first month, “new staff” calls took seven minutes longer than “old staff” calls.
During the second month, the difference was down to 2.5 minutes. And by the third month, the
difference was about one minute. (Figures 5, 6 and 7)
Figure 5: Two-Sample T-Test for Call Length – Month One New and Old
Figure 6: Two-Sample T-Test for Call Length – Month Two New and Old
Figure 7: Two-Sample T-Test for Call Length – Month Three New and Old
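The month-by-month comparisons in Figures 5, 6 and 7 are two-sample t-tests. A brief, illustrative sketch of that kind of comparison follows, using simulated call length data rather than the pilot's actual measurements.

```python
# Hedged sketch of the two-sample comparison behind Figures 5-7: call length
# for new versus old staff in a given month (data simulated for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
old_staff_calls = rng.normal(loc=12.0, scale=3.0, size=120)   # minutes
new_staff_calls = rng.normal(loc=13.0, scale=3.5, size=60)    # e.g., month three: ~1 min longer

t_stat, p_value = stats.ttest_ind(new_staff_calls, old_staff_calls, equal_var=False)
print(f"difference in means: {new_staff_calls.mean() - old_staff_calls.mean():.2f} min")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```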
2. Did wait time decrease as expected? Wait time was lower by 10 percent – just what was
expected when the staff was increased by 10 percent.
Figure 8: Two-Sample T-Test for Wait Time and New Wait Time
3. Did the new staff have any impact on transfers? New staff had slightly more transfers, but the difference was not statistically significant.
Figure 9: Two-Sample T-Test for Transfers – Month One New and Old
4. Did the new staff have any impact on callbacks? New staff had 1.5 times more callbacks. This was a concern. The team needed to determine whether this was a learning curve issue and, if not, how the additional callbacks could be controlled.
5. What happened to customer satisfaction? New data on customer satisfaction after the pilot
program confirmed the team’s expectation of improvement. The company moved from less than
73 percent to about 74.5 percent.
Figure 11: New Boxplot for Customer Satisfaction
After the pilot program on staffing was complete, the team was ready for the Improve tollgate
review with the project Champion. Here is what the team reported on the staffing issue during
the review: