
The Maintenance Scorecard

Copyright 2005, Industrial Press, Inc., New York, NY


Chapter 5 Fundamentals and Myths


Things should be as simple as possible, but not simpler.
- Albert Einstein

Definitions
The intention of this chapter is to dispel some of the myths and confusion that surround the discipline of asset management and its measurement. There is often a perception that the objectives for asset management, and therefore the best way to measure the fulfillment of those objectives, are obvious. This perception has been created over many years and is the result of two fundamental forces. First, it has been fed by the perception that asset management is a one-size-fits-all discipline, whereby generic solutions can be transferred easily from one company to another or from one industry to another. This perception is behind much of the benchmarking effort carried out throughout the world and is also partly to blame for the reduced level of corporate attention that the area receives. Second, many people working in the consulting areas of asset management do not originate from the field. This is particularly due to the technological revolution of the 1990s, which has led to a large number of experts from fields such as IT, supply chain, or data analysis offering solutions to asset managers. The combined impact of these two forces has been that asset management solutions, and at times decisions, are based on incorrect premises, premises built on limited understanding that have fed the belief that asset management solutions are obvious.

This chapter will focus first on providing explanations and definitions of terms commonly used when developing measurement programs, and second on addressing some of the more common myths in the measurement of the asset management function. To paraphrase the quotation above, asset management should be as simple as possible, but not simpler. When the area of asset management is oversimplified, there can be catastrophic results, not the least of which is a failure to achieve the levels of performance that are open to companies that understand the concepts at the base of this managerial discipline. One of the intentions of this chapter, and indeed of this book, is to look past what is accepted as common sense or good practice to the concepts on which all of this is based, and to determine the real actions and measures needed to manage organizations through a rigorous application of engineering logic.

Throughout the world a range of terms and definitions is used to refer to measurements of performance. This is often a cause of confusion and misunderstanding. As such, it is necessary to define the various terms that are used within the context of the MSC.


An important aspect of these definitions is that they do not refer to particular indicators or measures; they refer to the way in which the indicators are used.

Performance Indicator - This term refers to any indicator measuring the performance of a business process, work team, individual, piece of equipment, or plant in terms of its ability to meet its desired levels of performance. The term is often interchanged with other terms such as metric or measure.

Key Performance Indicator (KPI) - As its name denotes, a key performance indicator is the indicator that represents the overall performance of a particular strategic theme or improvement initiative. For example, a strategic theme of regulatory compliance may have a range of indicators through each of the perspectives of the MSC; however, one of these indicators can be used as a guide to the overall performance of the company within this strategic theme. The term is often used, incorrectly, to describe all indicators. The distinction is important because it highlights clearly the different weighting that needs to be given to differing indicators.

Leading and Lagging Indicators - Understanding which indicators are leading and which are lagging gives managers the ability to calibrate their measurement systems to achieve the best results. For example, when measuring compliance with safety objectives, there is a tendency to use indicators such as the Lost Time Injury Frequency Rate (LTIFR) or other similar measures. This indicator gives management an immediate view of the frequency of incidents that have caused lost time injuries; it is used to evaluate the performance of safety initiatives and to target safety improvement efforts. This measure, although both valid and useful, is reactive in its focus and is an example of a lagging indicator: it is based on waiting for events to happen that signal a call to action. There are particular dangers in using this style of indicator on its own, especially because a lack of incidents does not necessarily indicate that safety management processes are adequate.

An example of a leading indicator in this context can be developed using the principles of functional measurement, first mentioned in Chapter 4 of this book. When developing maintenance strategy, there is invariably a range of routine activities and maintenance tasks put in place to manage the risk of failure to within tolerable levels. The frequency and type of these tasks are determined by the consequences and probability of the failure mode occurring. Therefore, a measure that monitors compliance with these tasks demonstrates that the company is managing risk to within its pre-determined tolerable levels. It also shows forethought in the management of safety incidents involving physical assets. This leading indicator forecasts possibilities rather than merely responding to them and is often used alongside reactive safety indicators.
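As an illustration only, the sketch below pairs a lagging safety indicator with a leading one of the kind described above. The function names, the per-million-hours LTIFR basis, and the sample figures are assumptions for the example, not prescriptions from the text.

```python
# Hypothetical sketch: a lagging indicator (LTIFR) alongside a leading
# indicator (compliance with risk-managing routine maintenance tasks).

def ltifr(lost_time_injuries: int, hours_worked: float) -> float:
    """Lagging: lost time injuries per million hours worked."""
    return lost_time_injuries / hours_worked * 1_000_000

def routine_task_compliance(tasks_completed_on_time: int, tasks_scheduled: int) -> float:
    """Leading: percentage of scheduled routine tasks completed when due."""
    return tasks_completed_on_time / tasks_scheduled * 100

if __name__ == "__main__":
    print(f"LTIFR: {ltifr(3, 450_000):.1f} injuries per million hours")
    print(f"Routine task compliance: {routine_task_compliance(182, 200):.0f}%")
```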


Opposing Indicators - This form of measurement is extremely useful in ensuring that actions and reactions are occurring according to plan. Opposing indicators represent the two sides of any improvement initiative. A classic example of opposing indicators is found in the measurement of production. When a plant raises its production levels, it needs to do so while keeping maintenance costs at either the same or a reduced level; failure to do this can result in reduced profit margins. In this case we expect to see one indicator, production rate, rising, and the second indicator, maintenance unit cost, falling. If this is not the case, then there may be cause for concern. This obviously depends on a range of external factors, such as client demand and the need to clear backlog orders; however, in general the relationship holds. Another example of opposing indicators is unavailability (the inverse of availability) and machinery utilization. If unavailability is reducing and utilization is not rising, then there are issues regarding the manner in which the equipment is being operated. While this could be related to a lack of production requirement, it could also indicate a need to optimize operational procedures to take advantage of increased uptime.

Time-Based Indicators - All performance measurement is related, in some manner, to time. A great many of the performance indicators used in the day-to-day management of maintenance are based on differing interpretations of the use of time. Mean Time Between Failures, availability, utilization, and all of their derivative measures are ways of determining how time was used as it applies to the equipment. If these are recorded and calculated separately, we can find ourselves with a disparity in measurement, that is, a range of performance indications that do not necessarily agree with each other. The intention of this brief section is to detail how data capture can most effectively be done for time-based indicators.

The measurement of time-based indicators is based on the concept that the equipment is only available for a set period of time, during which it can be used in a range of differing ways. Accurate and consistent time-based measurement relies on being able to create a structure of the possible ways a piece of equipment can be used, a structure that will give us the information we require to produce the measures we need. While this can be done manually, using either paper forms or disconnected spreadsheets, it is far easier if there is a central means of electronically capturing this information so that it can be applied through the structure as required. Many of the EAM-level systems on today's market have this or a similar style of capability, and there are also niche availability recording systems that can do the same. While having the software in place to do this sort of information recording matters, the most vital part of the solution is the structure itself. The structure shown in Figure 5.1 is a generic example of how this could be applied.


[Figure 5.1 An Example of a Structure for Capturing Time-Based Indicators. Total Time divides into Scheduled Time and Unscheduled Time; each of these divides into Maintenance Time, Operation Time, and Other. Scheduled maintenance codes include Scheduled Routine (Scheduled Replacement, Scheduled Overhaul, Failure Finding, On-Condition), Scheduled Corrective, Scheduled Modifications, and Opportune Works (opportune preventive/predictive and opportune corrective works); unscheduled maintenance codes include Breakdown and Rework. Operation Time divides into Utilized, Non-Utilized, and Idle.]

The initial step is that of total time, followed by a separation into scheduled and unscheduled usages of time. The depth of the structure is entirely variable, driven only by the requirements of each corporation. Some things that may change the structure include contract arrangements, operating environment, and industry type. At all times, the lowest levels that are defined are the levels used to enter information. For example, if a piece of equipment was out of service for a scheduled replacement, then this would be recorded using the code Scheduled Replacement above. In some instances there is the ability, and the desire, to go to greater levels of granularity; for example, there may exist a code under this level called Replace Gearbox. This is particularly effective where standard equipment types are used. This code would feed up the hierarchical structure, through the code Scheduled Routine, through Maintenance Time, and form a part of the code Scheduled Time. As referred to elsewhere in this book, modern business intelligence software is easily able to drill down through layers of data to reveal greater levels of detail; in this case the structure facilitates the drill-down functionality. Using this sort of approach, if the underlying structure has been adequately defined, a large number of time-based measures can be derived from the same data capture exercise. However, one of the key misconceptions regarding time-based indicators is the belief that they, by themselves, are able to represent the effectiveness of equipment performance.
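To illustrate the roll-up described above, the following is a minimal sketch of accumulating recorded hours up a time-code hierarchy of the kind shown in Figure 5.1. The parent/child mapping and the sample shift log are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: hours recorded against leaf codes feed up the hierarchy,
# so one data capture exercise yields many time-based measures.

PARENT = {
    "Scheduled Replacement": "Scheduled Routine",
    "Scheduled Overhaul": "Scheduled Routine",
    "Scheduled Routine": "Maintenance Time (Scheduled)",
    "Scheduled Corrective": "Maintenance Time (Scheduled)",
    "Maintenance Time (Scheduled)": "Scheduled Time",
    "Utilized": "Operation Time (Scheduled)",
    "Idle": "Operation Time (Scheduled)",
    "Operation Time (Scheduled)": "Scheduled Time",
    "Breakdown": "Unscheduled Time",
    "Scheduled Time": "Total Time",
    "Unscheduled Time": "Total Time",
}

def roll_up(entries):
    """Accumulate hours recorded at leaf codes into every ancestor code."""
    totals = {}
    for code, hours in entries:
        while code is not None:
            totals[code] = totals.get(code, 0.0) + hours
            code = PARENT.get(code)  # None once we pass the top of the tree
    return totals

if __name__ == "__main__":
    shift_log = [("Utilized", 9.0), ("Idle", 1.0),
                 ("Scheduled Replacement", 1.5), ("Breakdown", 0.5)]
    for code, hours in sorted(roll_up(shift_log).items()):
        print(f"{code:32s} {hours:5.1f} h")
```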


This highlights an area where there has been considerable confusion: the difference between under-utilization and under-performance. Although similar, the two are distinctly different, and each points to a different area where improvement initiatives could be made. Under-utilization is almost entirely under the control of the operational part of an organization, and refers to the ability of operations to utilize the equipment during the time that it is available. Although this measure is outside the control of asset management, it does reflect how well available time is being used and provides insight into how to improve it. Under-utilization is almost always due to a conscious management decision of some nature. At times it highlights the need for reduced equipment numbers, or it may indicate a need for better operations or handover processes.

Under-performance, on the other hand, refers to how well the equipment is producing during the time that it is being utilized. This is not a measure of time and needs to take specific account of rates of production. Under-performance can be at the heart of many issues regarding corporate performance. Unlike under-utilization, it may not be associated with a conscious decision to perform in this manner. Under-performance may be the result of poor equipment reliability, reduced levels of throughput, poor product quality, or a range of other factors.

The remainder of this chapter points out some of the myths and misconceptions regarding the measurement of the maintenance function that have invaded the asset management discipline during the past two decades. These myths are currently found in many organizations and bear directly on the application and success of advanced asset management principles. Many of them go to the heart of machine maintenance itself, while others merely help people better understand how to apply a performance measurement regime.
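Before turning to the myths, here is a minimal sketch of the distinction drawn above between utilization (a measure of time) and performance (a measure of output while running). The rated production rate and the sample figures are assumptions for illustration.

```python
# Minimal sketch: under-utilization is about unused available time;
# under-performance is about output while the asset is actually in use.

def utilization(hours_used: float, hours_available: float) -> float:
    """How much of the available time operations actually used the asset."""
    return hours_used / hours_available * 100

def performance(actual_output: float, hours_used: float, rated_rate: float) -> float:
    """How well the asset produced, against its rated rate, while in use."""
    return actual_output / (hours_used * rated_rate) * 100

if __name__ == "__main__":
    print(f"Utilization: {utilization(140, 160):.0f}%")       # time-based view
    print(f"Performance: {performance(5600, 140, 50):.0f}%")  # rate-based view
```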

Myth 1- Metrics as Purely Lagging Activities


____________________________________________________________
Myth: Metrics are useful for monitoring asset and human performance as a continuous improvement tool (lagging, after the event).
Reality: Metrics are a management tool for implementing corporate strategy and ensuring its execution, as well as highlighting continuous improvement opportunities (leading and lagging).
____________________________________________________________

There is an often-quoted maxim within management that what can be measured can be managed. While this is a true statement, it supports the belief that management is a reactive art.


As a result, measurement is often used in a reactive manner, that is, measuring recent performance and then using this as a base for improvement in that particular area. Used in this manner, indicators are lagging management tools: they determine performance after the event. This is common practice and is, without a doubt, a good practice to pursue in any arena of professional activity. Yet in every field of human activity there is always a need to first determine where it is that we wish to go, and then to plan our means of getting there. If we want to get there in the best manner possible, in terms of time, cost, and comfort, then we will put in the effort to plan the journey up front. This level of planning is done in the majority of organizations; when it comes to asset management, however, it is often not done at a corporate level and is left to lower strategic and tactical level operators to determine strategy.

The MSC provides a tool and structure for integrating the corporate planning process for maintenance efforts. It also focuses these strategies into measures and targets that need to be attained. This allows us to use indicators as a proactive rather than a reactive tool. By tying them to corporate objectives, targets, and strategic initiatives, we are able to use them to forecast where a company wishes to go rather than merely using them as performance monitoring tools.

Myth 2- Availability as Effectiveness


____________________________________________________________
Myth: Availability is a good measure of maintenance effectiveness.
Reality: Maintenance effectiveness is a concept that is not, and cannot be, represented by only one indicator.
____________________________________________________________

Availability is a measure of the amount of time a piece of equipment is available for operations. It is possibly the most widely used performance measure in the asset manager's arsenal throughout the world today. It is used to understand equipment performance, set production targets, justify equipment purchases and other expenditure, and serve as the base for a range of other derived measures and practices. However, the true mechanics of this measure are often not understood by those making decisions based on it. The most common misunderstanding is thinking that availability is a measure of the ability of equipment to perform to the standards required of it. When this area is examined, it becomes clear that availability by itself is not an adequate measure of the effectiveness of the equipment's performance.


In order to understand the misunderstandings in this area, it is first necessary to understand what is meant by the term effectiveness. Effectiveness, in terms of asset management, refers to the equipment's ability to do what the users require of it. As stated elsewhere in this book, this takes in a wide range of areas and factors; principal among these is that users fully understand what it is that they require of the machinery. At first glance this question is often treated with the disdain people associate with simplicity. Users generally understand the reason why they purchased the machine and the productivity that is expected of it in order to meet annualized production plans. In a mining organization, for example, the haulage fleet may require an annual availability of 90% in order to achieve the production goals for that year. However, availability in itself is no guarantee of good performance.

Take, for example, the performance of an electric motor. It is not important to understand anything else about its operating context for the sake of this example. The motor is required to operate for a period of 10 hours continuously. During this time it is out of service for 1 hour. As measured by availability, this would give:

Availability = Time available for production (9 hours) / Time required for production (10 hours) x 100 = 90%

In some industries this may be considered an adequate measure of performance; in others it could be woefully inadequate. If this were the result of only a few failure events, then the figure would be an adequate guide. However, if the 1 hour of downtime was caused by 20 failures, each lasting 3 minutes, then the performance of the motor, in terms of failure rate, would have been less than adequate. The failure rate (often expressed as reliability or mean time between failures) of the motor in this case would be:

MTBF = Time in operation (10 hours) / Number of failure events (20) = 0.5 hours
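A minimal sketch reproducing the two calculations above; the list of twenty 3-minute downtime events is simply the example's assumption, and the MTBF follows the text's convention of dividing the required operating time by the number of failures.

```python
# Minimal sketch of the motor example: availability looks healthy while the
# failure rate reveals poor performance.

downtime_events_hours = [0.05] * 20   # twenty failures of 3 minutes each
required_hours = 10.0

total_downtime = sum(downtime_events_hours)          # 1 hour
available_hours = required_hours - total_downtime    # 9 hours

availability = available_hours / required_hours * 100
mtbf = required_hours / len(downtime_events_hours)   # time in operation / failures

print(f"Availability: {availability:.0f}%")   # 90%
print(f"MTBF: {mtbf:.1f} hours")              # 0.5 hours
```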

Therefore, even though the equipment had an availability of 90%, a failure rate of half an hour indicates the poor performance of the motor. It indicates that, on average, the motor could be relied upon to run for only half an hour before failing. This relationship is illustrated in Figure 5.2.


Figure 5.2 Comparing Availability and Reliability

There are further areas where availability fails to be a good measure of equipment effectiveness. As stated previously, effectiveness needs to be based on what users expect their equipment to do. This is often expressed primarily as the production rate; yet this is far from the entire range of expectations that users have of their equipment. For example, a pump that is used to pump hydraulic fluid may have several key expectations of it:

- To pump hydraulic fluid at a predetermined rate and pressure
- To retain all hydraulic fluid so as not to breach environmental regulations
- To relieve the pump when fluid pressure reaches a predetermined dangerous level

In this scenario we see the measure of availability falling short in a number of areas. The first relates to the primary function of the pump. If the pump is not operating at all, it is almost always registered as unavailable. However, if the pump is operating and available but pumping at a reduced rate and/or pressure, then it may not be registered as unavailable. Nor would the measure represent times of reduced capability, such as run-up or run-down times, or other times where performance, and hence effectiveness, would not be as per the primary function requirements. The second area relates to the secondary functions of the pump. The measure does not reflect the fact that the pump may be leaking, or that the relief valve may not be working in the manner required, until the pump is taken out of service and is no longer able to perform its primary function. Yet it may have been functioning at less than the required performance for some time due to leakages.


In summary, availability remains a good measure of equipment performance. However, like every other measurement, it is not a sole source of information; its limitations need to be understood in order for it to be used to maximum effect. Some quick guides to the limitations of availability are listed below.

1. Availability is a partial measure of how equipment fulfills only its primary function. It does not necessarily capture partial or reduced availability.
2. Availability and failure rate together provide a more accurate guide to the ability of machinery to fulfill its primary function.
3. Availability does not give any indication of the effectiveness of the equipment in fulfilling user requirements, particularly secondary functional requirements, until such time as the equipment has been removed from service to repair them.

These limitations have profound implications for other areas where availability is used as a part of, or as a base for, measurement regimes. In particular, this relates to the widely used Overall Equipment Effectiveness (OEE) measure and to the use of availability modeling techniques. In applying both techniques it is necessary to bear in mind what the measure is actually showing the user. In the case of OEE, this means that one of the base components of the measure is, at best, a measure of the ability of equipment to fulfill part of its primary function. The effects of partial availability are often not captured by the remaining elements of the equation.

Myth 3- Levels Where Metrics are Used


____________________________________________________________
Myth: Management uses indicators to monitor the performance of the organization's human and physical resources.
Reality: Indicators are used by all levels of the maintenance organization as a tool to assist in carrying out their daily tasks.
____________________________________________________________

Within various fields of asset management, indicators are often seen only by senior and higher-level managers. Indicators are frequently used by middle management to understand trends and to present information to higher management regarding what is occurring and the performance of various operational or maintenance initiatives. At times companies take communication initiatives, such as posting key performance graphics in various places within the plant, in order to inform people about performance and, hopefully, to create interest.


However, if the indicators that are used are of no use to the functional and tactical levels of the operation, then they will be ignored or, at best, reviewed without an understanding of their full significance. This poses some interesting challenges to companies. If people at the tactical levels are able to find value in daily or weekly indicator reports, and are able to relate these to their personal efforts within the organization, then the capacity to achieve improvements is multiplied. For example, if weekly toolbox meetings focus on the mean time to repair (MTTR) of a particular failure mode on a particular piece of equipment, then this is something that the tactical-level staff have a strong understanding of. As such, the toolbox meeting can be used to drive out initiatives to reduce the MTTR, or even to begin initiatives to eliminate the problem entirely.
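A minimal sketch of the MTTR figure such a toolbox meeting might track; the repair durations are illustrative assumptions rather than data from the text.

```python
# Minimal sketch of mean time to repair (MTTR) for one failure mode on one asset.

repair_hours = [1.5, 2.0, 0.75, 1.25]   # completed repairs of the same failure mode

mttr = sum(repair_hours) / len(repair_hours)
print(f"MTTR: {mttr:.2f} hours over {len(repair_hours)} repairs")
```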

Myth 4- Performance Measures are One Dimensional Only


____________________________________________________________
Myth: Performance measurement is only done via key performance indicators and standard metrics.
Reality: Modern technology has enabled the use of sophisticated graphical analysis as well as one-dimensional indicators.
____________________________________________________________

Indicators, or metrics, are usually expressed as one-dimensional tools used to give a numeric representation of performance in some form or other. This is a traditional viewpoint and one that has its roots in the way that information was previously managed. Prior to the 1980s, graphical displays were not common and as such were not often used in creative ways. Since the beginning of the 1990s, graphing and information analysis software has become ubiquitous; it is on every desktop and most people have at least a passing understanding of how to use or access it. With the advent of business intelligence tools, however, this has been taken to another level entirely. Companies now have the ability not only to graph the results of performance, but also to deploy a range of other tools for analytical assistance (see Figure 5.3). These include the ability to drill down into graphical displays to present further layers of detail, to immediately compare measures and results, and to configure executive alarms or stoplight indications. This is a continually evolving area and one that has resulted in companies finding a large number of new applications for these forms of representation.


[Figure 5.3 Example of Stoplight Indications: indicators such as availability and utilization rated on a scale from Poor to Good.]

One of the key benefits of business intelligence solutions in industry is the ability to reduce large volumes of data to easily understandable measures and graphics. This is often referred to as finding the sweet spot within the data, the cross-section of the captured information that best represents whatever is being measured. This has opened up the field of performance measurement to the application of a great deal of creativity, using data sets and sub-sets to represent improvement opportunities and performance, and comparing these sets in ways that were previously not imagined. In Figure 5.4, time-based measurements have been represented in a way that makes large volumes of information easy to understand. At a glance an operator is able to see how the equipment is being utilized, for what reasons, and where there are gaps in performance.

[Figure 5.4 Example of Sophisticated Analysis Techniques: a stacked percentage breakdown (0-100%) of hours utilised - productive, hours utilised - unproductive, hours idle, hours breakdown, hours scheduled maintenance, and hours unscheduled maintenance.]


These principles can be applied to virtually any area of performance, limited only by the imagination of those developing the indicators. If the underlying information sets are accurate, these principles can be used to present almost real-time comparisons for internal benchmarking purposes. This capability has enabled asset managers to react and make decisions in vastly reduced timeframes and with a previously unheard-of level of confidence in the decision-making information.

For example, a mine site in Asia had configured its CMMS, reporting systems, and business processes to produce almost real-time unit costing information on how the mine was operating from day to day. By recording production tonnages, times, costs, and activities as they occurred, the company was able to produce end-of-shift reports that clearly indicated the unit costs of production during the shift, the variance from the previous shift, and the overall tendencies over time. Due to the inter-related nature of the data, it was also able to immediately pinpoint where costs had ballooned and the underlying reason, whether reduced production or increased maintenance costs. This capacity enabled the company to react almost immediately to correct early signs of rising unit costs, or at least to understand them, based on factual understanding rather than purely on experience and gut feel.
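A minimal sketch of the shift-level unit costing and variance reporting described above; the function name, field names, and figures are illustrative assumptions, not the mine site's actual configuration.

```python
# Minimal sketch: end-of-shift unit cost and variance from the prior shift.

def unit_cost(total_cost: float, tonnes_produced: float) -> float:
    """Cost per tonne produced during a shift."""
    return total_cost / tonnes_produced

night_shift = unit_cost(total_cost=84_000, tonnes_produced=21_000)
day_shift = unit_cost(total_cost=90_000, tonnes_produced=20_000)

variance = (day_shift - night_shift) / night_shift * 100
print(f"Night shift: ${night_shift:.2f}/t, day shift: ${day_shift:.2f}/t "
      f"({variance:+.1f}% variance)")
```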

Myth 5- Proactivity as a Measurable Element


____________________________________________________________
Myth: Proactivity leads to more efficient maintenance activities; therefore, measuring proactivity provides a snapshot view of how well the maintenance effort is performing.
Reality: Proactivity is a source of efficiency improvement; however, it is not easily measured and represents a form of thinking rather than a set of measurable actions.
____________________________________________________________

As the discipline of asset management continues to grow, an increasing level of importance is given to proactive actions. This has become something of a mantra to managers and consultants alike and has led to great leaps in productivity in various organizations. The area of proactivity in asset management is complex and could be the basis for an entire book by itself; however, there is no doubt that a forward-looking philosophy of asset management will achieve more than a backward-looking one.


This rising level of interest in, and application of, proactive thinking has raised the question of measurement. Often the measure applied is a simple one, as detailed below:

Proactive maintenance vs. reactive maintenance

At face value this appears to be a good measure, and one that can easily be used to give a high-level representation of the level of proactivity within the organization. However, when the indicator is reviewed in further detail, there are several gaps in its logic. First, there is a need to understand what truly makes up proactive and reactive maintenance activities. Proactive maintenance is often defined as routine maintenance, that is, the maintenance that we have decided to do to maintain the asset base. Reactive maintenance is often made up of the corrective actions that are performed in response to something occurring. So far so good: the indicator is giving the company a ratio of routine work to correctively performed work. However, when formulating maintenance strategy there is often, depending on the consequences of failure and the configuration of the physical assets, a decision taken not to perform routine maintenance activities at all (run-to-fail). The decision not to apply routine maintenance also represents a form of proactivity, through the use of analytical methods that determine the most cost-effective form of maintaining the asset base. Therefore, when failures of these pieces of equipment or components occur, they cannot be counted as purely reactive actions. Rather, they are planned-for failures that have been determined to be the best strategy for that piece of equipment under its current operating circumstances. When reviewing the indicator above, however, this is not immediately obvious; in fact it creates the image that asset management is out of control when that may not be the case. Seeing a lowering unit cost of maintenance alongside a rising level of reactive work would also cause immense confusion, as it does not seem to follow what is commonly believed.

Another area where confusion can arise in this measurement is in the recording of actions taken in response to a fault being noticed. When performing predictive and detective maintenance, there is always a possibility of finding a fault, or a condition that indicates a fault is not far away. How, then, is the resulting corrective action recorded? Is it a reactive task, because we are responding to something that has failed or is failing, or is it a proactive task, because we are acting on information from our maintenance regimes? Within the MSC these forms of repairs are referred to as proactive repairs and represent part of the proactive approach to asset management. Failure to recognize these maintenance actions has long been a failing of RCM and of the implementation and monitoring of RCM initiatives.
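A minimal sketch of how work orders might be classified so that run-to-fail events and proactive repairs are not lumped in with purely reactive work. The strategy codes, trigger values, and categories are assumptions for illustration, not an MSC or CMMS standard.

```python
# Minimal sketch: classify work orders so planned-for failures and proactive
# repairs are separated from purely reactive work.

from collections import Counter

def classify(work_order: dict) -> str:
    strategy = work_order["strategy"]   # strategy applied to the failure mode
    trigger = work_order["trigger"]     # what raised the work order
    if strategy == "routine":
        return "proactive (routine)"
    if strategy == "run-to-fail":
        return "planned-for failure"    # deliberate strategy, not loss of control
    if trigger in ("predictive finding", "detective finding"):
        return "proactive repair"       # acting on our own maintenance regime
    return "reactive"

if __name__ == "__main__":
    orders = [
        {"strategy": "routine", "trigger": "schedule"},
        {"strategy": "run-to-fail", "trigger": "breakdown"},
        {"strategy": "on-condition", "trigger": "predictive finding"},
        {"strategy": "on-condition", "trigger": "breakdown"},
    ]
    print(Counter(classify(wo) for wo in orders))
```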


Proactivity is a desirable attribute of any functional area; this is particularly true for asset management, where it carries substantial economic and risk management implications. However, measuring proactivity is not easily achieved and will depend on factors such as:

- Coding structures for work orders
- The ability of the CMMS to record the types of strategies that have been applied to particular failure modes
- The number of additional strategic initiatives and the measures applied to these

Proactivity is more than a measurable group of activities. It is a way of thinking and acting that will deliver immense results when it becomes an integral part of the way that business is conducted. Its measurement is complex and requires levels of functionality that are not found in many enterprise-level management systems.

Myth 6- OEE as a Measure of Overall Equipment Performance


____________________________________________________________
Myth: Overall Equipment Effectiveness (OEE) is a measure of the overall effectiveness of equipment, plant, or processes.
Reality: OEE is a useful measure if taken for what it is; however, if misused it can provide indications of performance that are inaccurate and at times dangerous.
____________________________________________________________

Overall equipment effectiveness is one of the more widely used indicators in today's maintenance environment. It is also claimed to be one of the leading methods for optimizing equipment performance and for providing a monitoring mechanism for continuous improvement initiatives. Volumes of work, books, and other technical information have been dedicated to this measure and its application throughout industry, and there can be no doubt regarding the benefits that it has helped companies achieve across many industries in recent times. This apparent success has led many maintenance professionals and corporations to adopt the indicator into their operations without question; it is the most prominent example of influenced metrics existing in the world today. Yet the measure itself, if wrongly applied or wrongly interpreted, can lead corporations to make poor reliability decisions or, worse, dangerous decisions.


OEE is the product of three aspects of equipment performance and normally consists of the following parameters:

OEE = Availability x Production Rate x Quality

The thinking behind this measure is to have an indicator that represents the three key aspects of machine performance in a format that is easy to view and to access, and one that provides information for use in continuous improvement initiatives. However, in the world of reliability engineering, these three aspects of machine performance are not the only aspects that are important to either the maintenance function or the operational function. This goes to the heart of the issues regarding OEE: the claim to be an overall indicator of performance, or even of organizational capability. If any one indicator is truly going to represent the aspects of performance that are important to capital-intensive organizations, or to any maintenance function, then it needs to include, in some fashion, the ability to measure and indicate how the corporation is managing risk, both safety and environmental.

The lack of the correct number of perspectives is one issue; however, using the existing perspectives within OEE can also lead to issues regarding reliability and safety. The indicator is often applied without any limitations on its components. For example, equipment may not need to be available 100% of the time; it may only be needed 90% of the time. Another, more important, issue is that within reliability engineering there is a distinct difference between what equipment is required to do and what its design capacity permits it to do. There is a range of reasons for this. One of the more common is that most equipment contains items that wear over time, often in a random manner (as is the case with complex equipment), but wear is still a consideration. By operating equipment at, or near, its design capacity, companies are reducing the time it takes for the equipment to wear out. Therefore, a piece of equipment with a production rate at or near its design capacity will show a favorable OEE reading, even though this may be reducing the reliability of the machine and, therefore, reducing its overall effectiveness over time. In worst-case scenarios, equipment may be operated beyond its design capacity due to deliberate or accidental overloading of the system. This will produce a yet higher OEE figure and therefore create the perception that the equipment is operating at better than previous levels of performance. What is really happening is that temporary or continual overloading of the equipment reduces its ability to perform in a reliable manner. This is a fundamental principle of reliability engineering that, at best, could lead to early equipment failures or, at worst, to safety incidents.


Overloading equipment within a manufacturing plant may principally affect the reliability and economic life of the equipment; when the same practices occur in a petroleum refinery, the results could be far worse.

This leads to another issue in the area of performance management. The three areas monitored within the OEE indicator often do not have equal value to the organization as a whole, yet all three aspects of the measure are treated as equal under the normal way it is implemented. An increase in production rate is therefore considered to be just as important as an increase in quality. This does not reflect the individual nature of each of the organizations that use the measure and could lead to decisions to focus resources on areas when they could be better utilized elsewhere. Asset management is not a one-size-fits-all management discipline; in fact the complexities of each company, combined with the complexities of each base of installed assets, make asset management uniquely challenging.

OEE is a good measure when it is taken for what it is: a measure of the performance of the equipment in three fundamental areas. In some applications this figure is extremely useful and, if the method of applying it is adapted, can provide a good indication of equipment performance. However, in other industries, such as electrical distribution, this indicator will provide extremely limited and almost inconsequential results. Within this particular industry the combination of availability, production rate, and quality will result in a consistently high figure,23 so much so that the figure will only really change when there has been some form of interruption to services.

It is the opinion of the author that there is, to this date, still no one measure of organizational effectiveness or of overall equipment effectiveness. Issues such as these can only truly be represented by a range of indicators, each of which has been implemented in a logical manner, with a clear understanding of the behaviors that the indicator will drive. This takes us back to the point made in the first chapter of this book. When operating a car, drivers use a range of gauges and indications to ensure that the car is operating as it should. They do not use a combined metric or rely on only one indicator. If this is true at the level of operating a simple automobile, how could it be dramatically different when operating complex and large-scale machinery installations?
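To close the point, here is a minimal sketch of the OEE arithmetic discussed above, showing how running above design capacity inflates the figure even while reliability is being eroded. The sample fractions are assumptions for illustration only.

```python
# Minimal sketch of OEE = availability x production rate x quality, with all
# three factors expressed as fractions of their reference values.

def oee(availability: float, production_rate: float, quality: float) -> float:
    return availability * production_rate * quality

# Running at design capacity.
normal = oee(availability=0.90, production_rate=1.00, quality=0.98)

# Deliberately overloaded: a production rate above design capacity inflates
# OEE even though overloading erodes reliability over time.
overloaded = oee(availability=0.90, production_rate=1.10, quality=0.98)

print(f"OEE at design capacity: {normal:.2%}")
print(f"OEE when overloaded:    {overloaded:.2%}  <- looks better, is not")
```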

Benchmarking and Best Practice


Benchmarking is a term that is often used in the field of asset management; however, at times the application of this method is not in keeping with the requirements of capital-intensive industries.
________________

23 The only real variable in these sorts of equipment is production rate, which is controlled by customer demand.


This section will attempt to clarify some of the myths and misunderstandings around this area. There can be no doubt that, when applied correctly, the principles of benchmarking can add substantial value to a corporation. There can also be no doubt that it can be a large waste of time.
Benchmarking and Best Practices

Xerox Corporation first coined the term benchmarking in 1977-1979, using it to define a management tool for monitoring and ranking their products against those of their competitors. Benchmarking is a continuous process of measurement and comparison of business processes and performance against processes and performance in other organizations. The goal of benchmarking is twofold:

- Identify the strengths and weaknesses of current performance.
- Determine the best practices that have contributed to the success of market leaders.

At its heart, benchmarking is a tool for increasing the ability of an organization to compete within its chosen markets. As such, it fits in with the stated intention of the MSC. While it is important to be aware of what competitors may be doing in an area, merely repeating the performance or actions of competitors is more of a competitive necessity than a competitive advantage. Although the original intent of benchmarking was comparison with other organizations, it can also be used as a tool for the analysis and comparison of different areas within the same organization. It is also used by various industry regulators throughout the world to provide a guide to the pricing and efficiency improvements required of companies under their remit. The goal of benchmarking initiatives needs to be the highlighting of unique practices that provide substantial competitive or strategic benefits; these are often referred to as best practice.

A benchmarking survey conducted by the European Centre for Total Quality Management provides an interesting insight into the results gained from benchmarking implementations. The study involved 227 corporations in 32 different countries and is a good representation of the types of benefits an organization can expect from a benchmarking project. The survey was aimed at evaluating the level of benchmarking maturity reached across different industry fields. A surprising result was the relatively low percentage of companies that reported innovative approaches to business improvement or improvements in quality. The highest benefits were achieved in the areas of:


- Influencing the strategic decision-making process
- Allowing more effective deployment of resources
- Process improvement

Regardless of the strategic benefits that the technique offers, benchmarking cannot be easily and randomly applied. There are several factors within asset management that severely restrict the application of the method. These factors are best grouped under the following headings:

- Determining what to measure and understanding the operational environment
- Determining best practices based solely on the economic benefits gained
- Determining which best practices are generic and able to be applied in any industry
What to Measure and Operational Environment

This is by far the most important point to consider when embarking on a benchmarking project. It is also a key contributing factor to one of the other key issues: finding benchmarking partners. We now operate within an open market; our competitors may come from the United States, Europe, Asia, or Latin America. As such, if we are looking to define practices that drive extraordinary performance, we need to look outside our immediate areas. Even within similar markets, there are often vast differences in the way that organizations do business. While it may be fantastic to realize that we are among the best maintenance practitioners in the manufacturing sector of Ohio, that does not assist us at all when our main competitors come from China. This opens the issue up to a range of potential problems and difficulties in applying benchmarking.

First, we need to know what to measure. A common benchmarking measure in the area of asset management is the percent of overtime worked; for the sake of the example we will look at this measure over a weekly period. The basis of the measure is to be able to point out that the corporations that are market leaders are those able to operate with a percent of overtime worked of, say, 5%. Under closer scrutiny, however, this measure turns out not to be a good comparative guide at all. Almost every corporation in the world today operates with differing contracts of employment. Some companies may pay overtime and use a timesheet system to record it. Other corporations may use a salaried arrangement whereby overtime is neither measured nor recorded; such a company has found that by offering higher salaries it can expect a reasonable amount of overtime from its employees without additional pay. This becomes even more complicated once we look outside familiar territory. In many organizations in the developing world, standard wages are not adequate for most people to live comfortably.


As such, overtime is regularly offered as a means of rewarding good performance and other initiatives. In other organizations, where labor is not a high cost, the cost of overtime is rarely even considered: labor is cheap, so if the work needs to be done, just do it!

This also extends to other measures such as supervisor-to-technician ratios. In some industries this ratio can easily be kept low. In the water distribution industry there is often not a high level of supervision required, particularly when a company manages potentially hundreds of treatment works spread over hundreds of miles. Yet in the gas extraction industry a higher level of supervision is required, for reasons such as safety assurance or managing the flow of workers in reaction to the unexpected events that are still common in some capital-intensive industries. All of these facilities operate in vastly different operational contexts. Some have higher levels of technology to be maintained, some have to deal with higher customer expectations, and others have to deal with highly corrosive materials that require special attention. When we look at these simplistic examples, where does that leave some of the standard measures offered up as a means of benchmarking in a generic fashion? It quickly becomes obvious that many of these measures are not adequate to assist companies in determining any form of competitive advantage. In fact they are only ever relevant when a host of other factors coincide to enable a direct comparison. This also applies to a range of other benchmarking-type measures that do not take into account the differences in operational environments of differing companies. Determining what to measure is heavily dependent on which partners the company is able to find for a benchmarking survey.

Another example of misplaced application of benchmarking can be found in financial indicators. As has been discussed, competition within the global marketplace is becoming more and more open. As such, any benchmarking initiative that uses financial costs as its base can be misleading in the results that it provides. As a side issue, it is important when comparing performance not to confuse low cost with low quality. At the time of writing, India produced the greatest number of engineering graduates, per capita, of any country in the world, giving it a resource of highly skilled professionals available at lower cost than in many developed nations.

An example of erroneous application of benchmarking initiatives is the cost per square foot (or square meter) maintained measure used in facilities management. This measure seems to be applied despite the fact that the facility in question may be:

- An urban shopping mall
- A specialty store in Beverly Hills
- A sports stadium
- A factory
- A plant handling corrosive substances (such as an ammonia plant)


If one market were to be focused on, within which all corporations had similar technician costs, then this could be an applicable measure. Once we go outside familiar borders, this indicator can point to a vast difference in cost per square foot maintained between, for example, facilities managers in the United States and in Mexico. The same goes for all price comparisons: all types of unit cost comparison will have to take lower labor costs into account. However, there is a need for an objective view here. If corporations are truly after increased profitability above all else, which is debatable, then a comparison such as this will show that their wage costs are far too high compared with up-and-coming competitors. As such it may lead to the conclusion that the best option for low-cost production is to:

a) Move the operations to economies where labor costs are lower, or
b) Try to reduce the labor costs within their current place of business.

In fact, benchmarking comparisons such as these contribute to the justification of many of the movements of capital throughout the world, initially from the United States to low-cost Mexico, then from Mexico to China, and so on. If cost comparisons are to be applied, then they need to be applied in a manner that either restricts them to companies with similar local cost factors or manages the comparison in a way that eliminates local cost factors. One possible way is to compare the cost benefits, the percentage of cost reduction, as a measure of improvement initiatives. Even here there are issues. A company may operate in a remote location, exposing itself to high transport costs for parts. After conducting a cost/benefit analysis, it determines that it is more efficient to continue paying high transport costs than to try to manage an extensive maintenance store. As such, this company has a built-in and accepted higher threshold of maintenance costs, which will also distort any benchmarking effort.

Another element of cost comparisons is ensuring that the same definitions are used throughout the group of companies undergoing the benchmarking process. Some firms count as maintenance costs only the costs arising from day-to-day maintenance activities; others may choose to include the costs of modifications or renewals in this category. This issue often raises its head in regulated industries, where government bodies invariably use some form of benchmarking process to establish best-practice unit cost operations. The problem comes in the way that different companies assign and manage their costs. In some instances, what may be deemed capital expenses are treated as part of operational expenditure; in other cases, operational maintenance expenditure may also include building and facilities maintenance.


The final factor in understanding the operational environments of different companies relates to how the equipment is used and operated. Here a vast amount of work needs to be done. This can cover a range of areas, from the environmental conditions in which companies operate to the regulatory environment they operate in. Some regulatory frameworks have been rather ill conceived and actually encourage high costs in operational management in order to fit in with regulatory timetables. For instance, take the example of three electricity transmission companies. To negate the effects of regulation, all use overhead power lines as a means of transmission and all operate within the United States.

Company 1 - A transmission company based in Arizona. The environment is dry, dusty, and hot. The principal effect of the environment is the build-up of dust on the lines. As such, the company is required to employ frequent live-line washing in order to reduce the risk of power loss to its clients.

Company 2 - A transmission company based in Minnesota. The concern for this firm is the build-up of snow and ice around the insulators and lines. In order to minimize the risk of power loss for its clients, this company is forced to take measures substantially different from those of its counterpart in Arizona.

Company 3 - A transmission company based around the forested areas of Raleigh, North Carolina. Here the main failure mode of concern is the growth of trees close to the power lines, so prevention costs go toward tree trimming services.

These three companies perform fundamentally the same tasks, in three vastly different environments, within the same general market. In these cases comparisons of costs would be inaccurate because all have different factors to contend with. In a similar fashion, reliability measures may also be difficult to compare; the Raleigh company may be suffering a high fault incidence due to unusually high rainfall that has enabled rapid growth around its transmission lines, for example.

There can also be no understating of the effects of regulation, or the lack of it, when comparing two companies. The effect changes from country to country. In some countries, large-scale utility companies are still owned by the state, reducing the requirement for cost-effective operation and focusing resources instead on safety and continuity of supply. Yet they may still have practices that are ahead of the market in terms of the results they achieve; this is particularly the case as these sorts of corporations generally have large-scale internal reliability departments.


Looking for Purely Financial Benefits

At a corporate level it is recognized that while profits are paramount, they cannot be achieved at the cost of other areas of risk management. This is the essence of responsible asset management and is a driving factor in the changes that we are seeing across the globe in asset management. As such, any best practices noted in a benchmarking survey must first be analyzed before being adopted. For example, a practice may have become the lowest-cost option by neglecting safety considerations. Alternatively, there may be practices that have lowered the costs of effective safety management and can provide an equal level of benefit to a corporation.
Recognizing Generic as Opposed to Specific Best Practices

Defining best practices often falls into two specific areas. First, there are new ways to go about what we are currently doing. These are generally process-specific practices that can change the efficiency or effectiveness of day-to-day activities. From time to time, however, paradigm-shifting practices appear that have the ability to revolutionize the way that business is done within an industry. These can often be applied across all industries regardless of sector or operating environment. In the recent past, two generic best practices have emerged in engineering. One is RCM, which originated in the U.S. airline industry and is now used across the world in all types of industries. The other, Total Productive Maintenance, originated in the Japanese auto manufacturing industry and is now applied to a wide range of industries throughout the world. Recognition of such practices is often not easy, as is underlined by the rarity with which they have appeared, and many organizations will protect their intellectual capital when it comes to the development of new approaches such as these. However, best practices can also exist on a smaller scale. They may include sending diesel engines off-site as a means of controlling overhaul quality and reducing direct labor costs, or outsourcing the planning of plant turnarounds. They may even include routine detective maintenance tasks on electrical switchgear as a cost-effective means of managing risk exposure.

In summary, benchmarking and the search for best practices can help corporations achieve substantial strategic advantages. However, attempts to execute wide-ranging benchmarking studies are often flawed because they do not take into account the operational environments of each individual business. In these efforts a lot of interesting information is generally developed, but the benefits are doubtful at best. If a benchmarking project is to be successful, then it needs to follow some basic guidelines:

1) What to measure, and how it should be measured, are heavily dependent on the operational environments of each company in the benchmarking effort. Questions should focus on issues such as innovative practices in productivity improvement, safety and risk management improvement, and efficiency improvement.


2) A focus on costs can only truly be effective among companies that share similar local cost factors. Comparisons with companies from low-cost labor economies will distort the results. What may be acceptable is measuring the percentage of cost reduction that results from applying certain initiatives; this steers away from total costs and focuses on improvement only. However, if a true benchmark of lowest-cost production is required, this may be an option.

3) All benchmarking projects, whether internal, external, or defining best in class, need to have at their core the ability to define the unique practices that are the enablers of exemplary performance in a specific process or area. These need to be generic enough to be applied from one industry to another without major alteration.

4) All results of benchmarking projects need to be further analyzed prior to implementation. An irresponsible neglect of safety may be acceptable in some economies while it is not in others. All best practices need to be understood for their full effects, not just their economic effects.

5) Although benchmarking continues to gain ground as a strategic tool for many organizations, it is worth recalling that mimicking our competitors does not necessarily provide us with competitive advantages.

Benchmarking is supported by the MSC. By finding companies that work in the same operational environment, with similar processes and local factors, we can compare our performance with others to determine the best practices that have contributed to higher levels of performance. The measures used within the MSC facilitate this and allow us to see outside our current environment. The MSC also adds an additional perspective to the exercise. Having worked through the corporate planning and strategy sessions, companies are already aware of the types of performance they need to achieve, the types of initiatives they need to take, and the timeframes they believe can be achieved. Implementation of these aspects provides companies with a pre-existing structure into which best practices can be integrated rapidly, and corporate measures and strategies changed to suit almost immediately.
