Asset Management: Measuring Best Practices, Performance

by Scott Sidney, PA Consulting Group

In the current economy, many utilities are struggling to improve company performance and reduce costs. Limited access to funding, combined with reduced revenue, presents challenges, but these challenges can also present opportunities. This is an optimum time for utilities to rethink internal business processes and how they can become successful.

An effective asset-management business model, coupled with best-practice implementation, helps turn chaos into stability. It allows companies to come out of survival mode, become successful and even thrive. Asset management is still evolving as a viable business model. Some companies that have adopted portions of its core principles are having difficulty realizing significant performance-improvement benefits. This might result from not understanding basic asset-management principles, improper application of core practices, a lack of supporting technology or the not-invented-here factor. Conversely, companies that follow the core principles of asset management (fact-based decision-making integrated with risk management) are successful in balancing limited spending with company performance.


To help companies realize improved performance from an asset-management business model, PA Consulting Group defined the relationship between asset-management best practices and company performance. The goal included determining correlations between the implementation of specific best practices and company performance, measured through capital and operations-and-maintenance spending, reliability, safety and operations metrics, among others.

To do this, PA used its benchmarking standard, the Polaris Transmission & Distribution (T&D) Program. It provided access to data from numerous clients and allowed PA to incorporate best-practice questions into the 2008 data questionnaire, linking implementation levels to performance. By analyzing client responses and drawing on the benchmarking team's more than 100 years of combined experience, PA captured more than 150 industry asset-management best practices that maximize the benefits of an asset-management organization and business model regardless of internal structure and external issues.

The team aligned each best practice with the PA asset-management model that the practice supports. Figure 1, developed after working with numerous clients and asset-management experts, defines the core processes necessary for an effective business model. It is not an organization chart, and PA has seen numerous variations in organization alignment within good asset-management organizations. Regardless of where people fall within the structure, core processes and responsibilities remain consistent, as do the best practices within each process.

To collect the appropriate implementation information, PA asked clients participating in the 2008 Polaris benchmarking and performance-improvement program to rate themselves on the implementation level for each best practice using the following scale:

  • Fully implemented (90 to 100 percent implementation);
  • Partially implemented (30 to 90 percent implementation);
  • Implementation underway (0 to 30 percent implementation); and
  • Not implemented (0 percent implementation).
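As a minimal sketch (not PA's actual scoring code), the self-rating scale above can be expressed as a simple classification function. The exact handling of the boundary values at 30 and 90 percent is an assumption, since the ranges in the scale overlap at those points.

```python
def implementation_level(percent: float) -> str:
    """Classify a self-reported best-practice implementation percentage (0-100).

    Boundary handling at 30 and 90 percent is an assumption; the published
    scale overlaps at those values.
    """
    if percent >= 90:
        return "fully implemented"
    if percent >= 30:
        return "partially implemented"
    if percent > 0:
        return "implementation underway"
    return "not implemented"


if __name__ == "__main__":
    for p in (100, 95, 50, 10, 0):
        print(p, "->", implementation_level(p))
```

A utility self-rating 150 practices would apply a function like this once per practice, producing the per-process implementation profile the charts below summarize.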


Results took several forms. To present results in a useful form to clients, two charting methodologies were used: spider, or radar, charts to show implementation level, and bar charts to show the relationship between best-practice implementation and asset-management performance. The spider charts allowed PA to compare best-practice implementation levels by asset-management process with those of the overall benchmarking panel.

In the spider chart for reliability (specifically the System Average Interruption Frequency Index, otherwise known as SAIFI; not shown here), blue indicates the implementation level for best performers, while red shows the same for the overall panel. The scores correspond with PA quartile performance: a score of 4 means not implemented (Q4, the lowest quartile), and a score of 1 means fully implemented (Q1, the first quartile). The tighter the spider pattern, the higher the level of best-practice implementation. Spider charts also let PA superimpose an individual company's best-practice implementation. The numbers surrounding the spider correspond to specific best practices.

The PA team also wanted to evaluate the entire benchmarking panel to determine the overall correlation between best-practice implementation and performance (see Figure 2). Bar charts matching the spider-diagram data were used. The correlation between best-practice implementation and reliability performance varied. After interviewing participants, the team discovered that top-performing companies (A through C) tended to underrate themselves in best-practice implementation, while poor performers (J through M) tended to overrate themselves. The rest of the panel (average performers D through I) tended to be more even in best-practice implementation and performance. Top-performing companies, however, have implemented more best practices than the panel average.

While quartile placement and best-practice linkages are good indicators, the real correlation proof is in the numbers. PA undertook additional analysis to compare actual benchmarking results with best-practice implementation. This analysis evaluated top-performing companies that have implemented a majority of best practices against a variety of numerical metrics from the Polaris T&D Program. The average of the top-performing companies was compared with the average of all benchmarking participants (a variety of municipal and investor-owned utility clients), and the percent variance was determined across a range of metrics. The results confirmed the correlation: implementation of best practices improves company performance.
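The percent-variance comparison described above reduces to simple arithmetic. The sketch below is illustrative only; the metric values are hypothetical and are not PA's benchmarking data.

```python
def percent_variance(top_avg: float, panel_avg: float) -> float:
    """Variance of the top-performer average from the panel average, in percent.

    A negative result means the top performers' average is below the panel
    average, which is favorable for a lower-is-better metric such as SAIFI.
    """
    return (top_avg - panel_avg) / panel_avg * 100.0


# Hypothetical SAIFI values (interruptions per customer per year):
top_performers = [0.8, 0.9, 1.0]
panel = [0.8, 0.9, 1.0, 1.2, 1.4, 1.6]

top_avg = sum(top_performers) / len(top_performers)     # 0.9
panel_avg = sum(panel) / len(panel)                     # 1.15

print(round(percent_variance(top_avg, panel_avg), 1))   # → -21.7
```

Repeating this calculation across each benchmarked metric (spend, reliability, safety, and so on) yields the cross-metric variance profile the analysis relied on.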

Related Findings

In the course of the analysis, the team also discovered:


  • Technology is critical. Virtually every best-practice company has the technology infrastructure—hardware and software—that integrates with and supports the practices. For example, every top performer in reliability has integrated an outage-management system, supervisory control and data acquisition, geographic information system, customer information system, etc. They also tend to use mobile data terminals and have integrated work-management processes or systems. Technology for the sake of technology, however, does not guarantee top performance. Careful and planned integration of technology with business processes and activities helps drive performance improvement. Improperly implemented technology simply allows a user to make more mistakes faster.
  • Performance management is a key component. Most top performers have comprehensive performance-management systems including regular dashboard reporting to measure a variety of metrics that support corporate goals. The old axiom “what gets measured gets done” is true.
  • Top performers tend to perform well across most benchmarked processes—cost, reliability, safety, materials and staffing—although there are differences among business units—transmission, distribution and substations. This tends to indicate the benefits of using a corporate balanced scorecard for performance management and an internal best-practice sharing process.



Scott Sidney is a managing consultant with PA specializing in asset-management assessment and implementation. He has worked with more than 80 global utilities to improve their internal asset-management infrastructure and decision-making processes. Reach him at
