Maximizing grid reliability and minimizing costs with the optimal number of automated switches

Automated Feeder Switch

A two-part series describing the design fundamentals utilities can employ, using existing analysis techniques, to identify the optimal number of automated devices.


This article provides the engineering fundamentals required to answer the question many utility engineers are asking, “What is the optimal number and location of automated switches that should be installed to maximize reliability and minimize costs?”

Part One of the article (Do you know the optimal number and location of automated switches in your distribution system?) addressed methods of evaluating reliability improvements as a function of the automated devices installed on a feeder.

Part Two focuses on comparing lifetime reliability costs with the year-zero installation costs of the system. The first task in identifying lifetime costs is selecting an acceptable failure rate for the life of the project, which can vary depending upon system attributes. Once the monetary impact of lifetime failures is identified, the optimal number of devices is found by summing reliability costs, installation expenditures, and the restoration costs associated with demand-side management and customer-owned distributed energy resources.

Failure Rate

Utility companies with large quantities of feeders can reasonably predict the number of annual failures on a feeder from historical outage data. As part of standard operating procedures, outage causes are recorded and used to analyze system reliability, which allows failure trends that significantly impact reliability to be identified. The process also identifies “poor performing” feeders, notifying the utility company of circuits whose reliability performance is much lower than the system average.

A feeder’s failure rate is determined by analyzing the frequency and cause of outages on the system. Utilities typically possess numerous years (decades in some cases) of outage data to analyze. Outage causes include environmental conditions, system age, equipment type (underground versus overhead), wildlife activity (birds, squirrels, raccoons, snakes, etc.), and vegetation. How frequently those causes recur is typically dictated by the company’s policy on reliability-driven system upgrades.

Sporadic outages due to faults that do not contribute to an overall failure pattern are restored but may not be fully repaired or upgraded to prevent recurrence. Because these random events typically do not impact reliability significantly, they are generally excluded when predicting future failure rates. Even if such an outage is large enough to impact reliability in a given year, its random nature prevents it from being factored into a future failure rate.

Significant weather events that cause an unusually high number of outages throughout a system are usually considered major events and are excluded from reported reliability indices. These events include blizzards, tornados, hurricanes, and floods. As such, major events are typically excluded from determining future failure rates. The exception to this rule is in cases where severe weather events become routine.    

Outages due to repeated causes, even if occurring on different parts of the system, indicate a system-wide deficiency that impacts reliability and customer satisfaction. Failing to implement permanent solutions can lead to further outages and customer complaints. Permanent solutions, often in the form of system upgrades, are implemented rapidly to improve reliability indices and to prevent mandatory oversight by the public utility commission.

The decision to perform a complete system upgrade is made through a risk assessment that compares the monetary impact of future outages with the cost of the upgrade. This assessment, which employs the company’s standard rate of return (ROR), ultimately dictates the company’s acceptable failure rate for the system. Although failure rates may vary over the lifespan of the automated equipment due to equipment aging and changing environmental conditions, utilities should employ the company’s average failure rates based upon its risk thresholds.

The total reliability cost is calculated by multiplying the annual failure rate by the average cost per outage, which yields an average annual outage cost as a function of the number of automated devices, as illustrated in Figure 4.
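The calculation above can be sketched in a few lines of Python. The customer model is a simplifying assumption, not the article's exact data: it assumes n mainline devices split the feeder into n + 1 sections, so an average fault interrupts roughly 1/(n + 1) of the customers, and it prices outages per customer minute out (CMO).

```python
def annual_outage_cost(n_devices, total_customers, failure_rate,
                       avg_duration_min, cmo_cost):
    """Average annual reliability cost as a function of device count.

    Assumes n devices split the mainline into (n + 1) sections, so a
    fault on average interrupts total_customers / (n + 1) customers,
    a simplified stand-in for the customer function from Part One.
    """
    customers_affected = total_customers / (n_devices + 1)
    # cost = outages/yr * customer minutes per outage * $/customer-minute
    return failure_rate * customers_affected * avg_duration_min * cmo_cost

# Illustration: 6000-customer feeder, 0.5 outages/yr, 90-min outages, $1.10/CMO
for n in range(5):
    print(n, round(annual_outage_cost(n, 6000, 0.5, 90, 1.10), 2))
```

As Figure 4 suggests, the curve falls steeply for the first few devices and then flattens, since each additional device shrinks the impacted-customer count by a smaller amount.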

Figure 4: Average annual cost due to outages and installation costs, graphed as a function of automated device quantity.

Installation Cost of Devices

The installation cost of n devices is (cost per device) × n, which increases linearly with each additional device added to the system. Although some manufacturers may offer discounts for large-quantity purchases, the cost characteristic remains essentially linear when the overall cost is normalized to individual devices. Figure 4 illustrates this linear growth alongside the average annual cost of outages.

Lifetime Costs

Unlike most approaches for determining the optimal device count, the evaluation of installing automated devices should incorporate a net present-value cost analysis for the life of the project. Automated devices reduce the number of customers affected by outages for the entire lifespan of the equipment, which can range from 10 to 20 years depending upon type and manufacturer. For an accurate assessment of how automated devices offset reliability costs, the lifetime monetary impact of outages must be compared to the installation costs in year zero.

Lifetime expenditures are calculated by summing the annual reliability costs, computed with the average system failure rate, over the life of the equipment. To compare reliability costs on equal footing with the year-zero cost of installing automated devices, the annual reliability cost for each year of the equipment’s expected life is converted to its net present value using the utility company’s accepted ROR. Annual expenditures, such as the cost of customer minutes out (CMOs), should reflect the natural inflation growth rate for each year.
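The inflate-then-discount conversion can be sketched as follows; the 7% ROR and 2% inflation used in the example call are assumed values for illustration, not figures from the article.

```python
def lifetime_reliability_npv(annual_cost_year1, life_years, ror, inflation):
    """Net present value (year 0) of annual reliability costs over the
    equipment life, escalating CMO-based costs with inflation and
    discounting at the utility's accepted rate of return (ROR)."""
    npv = 0.0
    for year in range(1, life_years + 1):
        cost = annual_cost_year1 * (1 + inflation) ** (year - 1)  # inflate
        npv += cost / (1 + ror) ** year                           # discount
    return npv

# Illustration: $297,000/yr over 20 years, 7% ROR, 2% inflation (assumed)
print(round(lifetime_reliability_npv(297_000, 20, 0.07, 0.02)))
```

Because the ROR exceeds inflation, each later year contributes less to the total, so distant-year outage costs weigh less against the year-zero installation spend.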

Optimal Location

Ideally, the optimal number of automated devices on the mainline is identified by finding the intersection of lifetime reliability costs and installation costs, as illustrated in Figure 4. The goal of utility companies should be to maximize reliability while minimizing costs. Installing more than the optimal number of devices results in installation costs that far exceed the reliability benefits; installing fewer results in reliability costs that could be lowered. The overall cost function is developed by summing the present-value reliability cost function and the installation cost function.


Consider a distribution circuit with an annual failure rate of 0.5 outages per year and an average outage duration of 90 minutes. Assume the project spans 20 years for a circuit serving 6000 customers. Given a cost of $1.10 per CMO, the optimal number of automated devices is 7, which minimizes the lifetime costs of the system, as illustrated in Figure 5A. However, if the failure rate is reduced to 0.1 outages per year, the adjusted cost function identifies an optimal quantity of only 3 devices, as illustrated in Figure 5B. Failing to identify the optimal number of devices can result in expenditures that do not significantly improve reliability.
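A brute-force version of this search is easy to build from the pieces above. The 1/(n + 1) customer model, the $50,000 installed cost per device, and the 7% ROR in this sketch are assumptions chosen for illustration (the article does not state its per-device cost); with these assumed values the search happens to land on device counts matching the example, 7 for the high-failure circuit and 3 for the low-failure one.

```python
def optimal_device_count(total_customers, failure_rate, avg_duration_min,
                         cmo_cost, install_cost, life_years, ror,
                         max_devices=20):
    """Brute-force search for the device count minimizing lifetime cost.

    The customers-per-outage model and cost parameters passed in the
    calls below are illustrative assumptions, not the article's data.
    """
    def lifetime_cost(n):
        annual = (failure_rate * (total_customers / (n + 1))
                  * avg_duration_min * cmo_cost)
        npv = sum(annual / (1 + ror) ** y for y in range(1, life_years + 1))
        return npv + install_cost * n          # reliability NPV + install
    return min(range(max_devices + 1), key=lifetime_cost)

# Article's example circuits: 6000 customers, 90-min outages, $1.10/CMO,
# 20-year life; the 7% ROR and $50,000/device are assumed for illustration.
print(optimal_device_count(6000, 0.5, 90, 1.10, 50_000, 20, 0.07))
print(optimal_device_count(6000, 0.1, 90, 1.10, 50_000, 20, 0.07))
```

An exhaustive scan over 20 candidate counts is trivial computationally, which is why a spreadsheet or a short script suffices for single-feeder studies.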

Selecting a non-optimal number of devices adds expenditures for the life of the automated system. For the circuit characteristics in the previous example, swapping the device counts between the two circuits would add lifetime costs of $190,000 for the 0.5-failures-per-year circuit and $372,000 for the 0.1-failures-per-year circuit.

Figure 5: A) Cost function assuming a failure rate of 0.5 outages per year. B) Cost function assuming a failure rate of 0.1 outages per year.

Even if the device counts were averaged to 5 per circuit, the first circuit would incur $70k of additional costs and the second $64k compared to the optimal design. For an urban area with over 100 distribution feeders, if most circuits are similar to the two in the example, applying an average of 5 devices per circuit could add lifetime expenditures of $6,400,000 to $7,000,000.


Unfortunately, the cost functions illustrated in Figures 5A and 5B ignore the effects of lateral “Fault Location, Isolation, Service Restoration” (FLISR) devices, which can further reduce outage costs if the laterals are connected to adjacent circuits through feeder ties. For these automated laterals, the customers they serve would no longer be considered “impacted customers” in the mainline’s customer function. As illustrated in Figure 6, accounting for the number of lateral devices and their corresponding functionality (interrupt, sectionalize, reclose) results in several variations of the lifetime cost function. Because the goal is now to identify device counts on both the mainline and the laterals, the resulting cost function becomes multi-dimensional: graphed, it would consist of one plane representing the mainline and additional planes representing each evaluated lateral. Determining the optimal solution graphically may therefore be impractical depending upon the number of laterals.
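When a graph is impractical, a numerical search over the multi-dimensional cost function still works. The sketch below does an exhaustive grid search over mainline and lateral device counts; the `toy_cost` surface is a made-up stand-in (its customer-shedding factor and dollar figures are assumptions), so in practice you would substitute the lifetime cost function built from your own feeder data.

```python
from itertools import product

def optimal_main_and_lateral(lifetime_cost, max_main=15, max_lateral=10):
    """Exhaustive grid search over mainline and lateral device counts.

    lifetime_cost(n_main, n_lat) is any lifetime cost function that
    accounts for FLISR laterals removing their customers from the
    mainline's impacted-customer count.
    """
    return min(product(range(max_main + 1), range(max_lateral + 1)),
               key=lambda counts: lifetime_cost(*counts))

# Toy cost surface (illustrative assumptions only): laterals shrink the
# mainline's reliability cost but add their own installation cost.
def toy_cost(n_main, n_lat, base=3_000_000,
             main_install=50_000, lat_install=40_000):
    impacted_fraction = 1.0 / (1 + 0.3 * n_lat)   # laterals shed customers
    reliability = base * impacted_fraction / (n_main + 1)
    return reliability + main_install * n_main + lat_install * n_lat

n_main, n_lat = optimal_main_and_lateral(toy_cost)
print(n_main, n_lat)
```

With two dimensions the full grid is only a few hundred evaluations; for many laterals, the same idea scales by searching each lateral's small option set (interrupt, sectionalize, reclose) in turn.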

Figure 6: Lifetime cost function with two options for installing FLISR devices on laterals that are connected to adjacent circuits.  

Ancillary Costs

The total cost function should reflect the ancillary expenditures required to support the automation system throughout its life. Ancillary costs include, but are not limited to, annual device maintenance, distributed energy resource support, and demand-side management support. These costs should be adjusted annually to reflect inflation.

Distributed energy resources (DER) in the form of energy storage (including electric vehicles) and photovoltaic systems can help utility companies restore power more efficiently during an outage. Relying completely upon utility energy sources for customer restoration may place an operational strain on the system that can lead to future outages, whereas DER devices can support local customers during outage events. The cost to support customer-owned DER equipment during an outage must be accounted for within the annual average outage cost function.

Demand-side management (DSM) efforts in the form of energy reduction can also aid power restoration efforts. As with DER contributions, the annual DSM expenditures that support outage restoration should be accounted for in the annual outage cost function. Be sure the annual costs reflect changes in energy payments to customers over the lifetime of the project.
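Folding these ancillary items into the annual cost is straightforward; the sketch below escalates everything from year-1 values at an assumed 2% inflation rate (the breakdown into maintenance, DER, and DSM line items mirrors the categories above, while the rate itself is an assumption).

```python
def annual_total_cost(year, outage_cost, maintenance, der_support,
                      dsm_support, inflation=0.02):
    """Total cost in a given project year: outage cost plus ancillary
    expenditures (device maintenance, DER support, DSM payments), all
    escalated from year-1 values at an assumed inflation rate."""
    escalation = (1 + inflation) ** (year - 1)
    return (outage_cost + maintenance + der_support + dsm_support) * escalation

# Illustration: year-1 dollars, then the same items three years later
print(round(annual_total_cost(1, 100_000, 8_000, 3_000, 2_000), 2))
print(round(annual_total_cost(4, 100_000, 8_000, 3_000, 2_000), 2))
```

Each escalated annual total is then discounted back to year zero at the ROR, exactly as with the reliability costs, before being added to the lifetime cost function.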

Configuration Considerations

Feeders whose mainline consists of significant amounts of overhead (OH) and underground (UG) construction should be analyzed in sections. First, outages on the underground system may require longer restoration times depending on the conduit system. Second, automated devices designed for overhead systems can be 30% cheaper than those designed for underground systems.

If the mainline alternates between OH and UG construction multiple times, consider installing only overhead automated devices due to the cost difference.

If the mainline consists of a long OH section followed by a long UG section, or vice versa, the lifetime cost of each should be determined separately and then combined into a single cost function.
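Determining each section's lifetime cost separately and then combining them can be sketched as below. The per-section customer counts, failure rates, durations, and the device prices (OH assumed roughly 30% below UG, per the cost note above) are all illustrative assumptions.

```python
def section_lifetime_cost(n, customers, failure_rate, duration_min,
                          cmo_cost, device_cost, life_years=20, ror=0.07):
    """Lifetime cost of one mainline section (OH or UG) with n devices."""
    annual = failure_rate * (customers / (n + 1)) * duration_min * cmo_cost
    npv = sum(annual / (1 + ror) ** y for y in range(1, life_years + 1))
    return npv + device_cost * n

# OH and UG sections evaluated separately, then summed into one function.
# UG outages assumed longer and UG devices pricier (illustrative numbers).
def combined_cost(n_oh, n_ug):
    oh = section_lifetime_cost(n_oh, 3000, 0.3, 90, 1.10, device_cost=35_000)
    ug = section_lifetime_cost(n_ug, 3000, 0.2, 150, 1.10, device_cost=50_000)
    return oh + ug

print(round(combined_cost(1, 1)))
```

Because the two sections' costs simply add, the OH and UG device counts can even be optimized independently when no device serves both construction types.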


Most utility companies employ talented power engineers with the skill to develop customized algorithms for identifying the optimal number of automated devices on a feeder. Most approaches described in journals are complex and cumbersome, neglect lifetime costs, and rely on unusual data. It appears many authors were more concerned with developing publication-worthy mathematical techniques for solving combinatorial optimization problems than with providing a tool engineers can employ with ease. The fundamentals presented in this article provide guidelines for developing a custom approach that relies upon routinely calculated data and is solvable using Excel or MATLAB.


Chris Pardington is a utility engineer with over 30 years of experience with companies such as Xcel Energy and Schweitzer Engineering Laboratories. After graduating from New Mexico State University, he focused his career on designing and maintaining distribution systems, transmission systems, and substations. During this period he managed utility departments focusing on reliability, power quality, substation maintenance, and meters.
