A two-part series of articles describing the design fundamentals that utilities can employ to identify the optimal number of automated devices by utilizing existing analysis techniques.
PART 1: THE BASICS
The traditional approach for restoring power during an outage event consists of (1) identifying the location of the fault, (2) isolating the faulted equipment from the rest of the system, and (3) operating manual switches located throughout the system to restore power to as many customers as possible. Traditionally, these steps are accomplished by dispatching field personnel to investigate, identify, and restore power as quickly as possible.
Installing automated switches can reduce the overall restoration time from over an hour to less than 10 seconds for some customers. A typical distribution circuit can have more than twenty switches, most of which could be replaced with automated switches. With the cost of a single automated switch ranging between $50k and $100k, the question many utilities are trying to answer is “What is the optimal number and location of automated switches that should be installed to maximize reliability and minimize costs?”
The general approach for identifying the optimal number of devices presented in this article is based upon fundamental operations research theory, historical system planning approaches, engineering economics, and system information that utility companies routinely record. The approach is deliberately general, allowing utilities the flexibility to implement it within their existing design and operational “standards”; the specific details of individual tasks are left for each utility to determine based upon its existing practices. This article is meant to provide the philosophies necessary to develop a custom approach.
Establishing a cost function
Unlike the optimization approaches published in journals such as IEEE Transactions, our approach does not require knowledge of exotic programming methods, heuristic optimization techniques, or complex power flow calculations. Instead, it identifies general procedures for establishing a cost function representative of the lifetime costs of installing automated devices. Solving the cost function identifies the number of customers that should be located between each automated device, which indirectly identifies the number of devices to install on a circuit.
The cost function, which serves as the optimization objective function, is developed from lifetime reliability costs and installation costs expressed as a function of the number of automated devices. By utilizing software such as Excel or MATLAB, utilities can evaluate all possible solutions and identify the global minimum of lifetime costs. I am always amazed when I read a journal paper on optimal automated device placement in which the author seems more concerned with producing a complex scientific achievement than a practical tool that engineers can apply with basic knowledge.
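To make the “evaluate all possible solutions” idea concrete, here is a minimal brute-force sketch in Python (a spreadsheet works equally well). The function name and every numeric input are hypothetical placeholders, not utility data; Part Two develops the actual cost terms.

```python
# Brute-force search over candidate device counts: evaluate a lifetime
# cost function for every n and keep the global minimum. All values
# below are illustrative placeholders.

def lifetime_cost(n_devices: int,
                  install_cost: float = 75_000.0,             # $ per device (assumed)
                  annual_reliability_cost: float = 250_000.0, # $/yr with no devices (assumed)
                  years: int = 20) -> float:
    """Installation cost plus lifetime reliability cost for n devices."""
    reliability = annual_reliability_cost / (n_devices + 1) * years
    return n_devices * install_cost + reliability

# Enumerate every candidate and take the global minimum.
best_n = min(range(21), key=lifetime_cost)
print(best_n)
```

Because the search space is small (a feeder rarely holds more than a few dozen switch locations), exhaustive enumeration is trivial and no heuristic optimizer is needed.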
This article is split into two parts, each addressing one of the major analysis areas associated with identifying the optimal number of automated devices. The remainder of Part One describes the reliability impact of installing “one more” automated device, addressing how automated devices affect the costs associated with each outage, including customer impact and duration.
Part Two, which will be published next month, describes procedures for comparing levelized lifetime reliability costs to the initial installation costs. This section also accounts for the impact of utilizing demand-side management and customer-owned renewable energy devices to assist in outage restoration. To illustrate the benefits of identifying the optimal number of devices, this section illustrates an example of utilizing multiple restoration and failure attributes.
Impact of installing “one more” device
Installing the first automated device on a circuit produces the largest impact on reliability. Unfortunately, due to diminishing returns, the incremental reliability improvement decreases with each added device until eventually the cost of adding “one more” device is greater than the reliability cost reduction itself. Adding devices beyond this “marginal device number” results in unnecessary installation and maintenance costs. The main goal of an optimal distribution automation (DA) algorithm should be to identify the marginal device numbers on the mainline and laterals.
Consider the two feeder circuits illustrated in Figure 1. The addition of n automated devices, equally spaced with respect to the number of customers served, reduces the number of customers affected by a permanent outage to 1/(n + 1) of the original quantity. Therefore, the first device reduces the number of end-users affected by an outage to 1/2 of the original quantity, and installing a second device reduces the affected customers to 1/3.
As each device is added, the reliability improvement becomes smaller and smaller, as illustrated in Figure 2A. With the number of impacted customers quantified as a function of device number, the next step is to determine the monetary impact of a single outage.
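The 1/(n + 1) relationship can be sketched in a few lines of Python (used here purely for illustration; the function name is my own):

```python
# Fraction of a feeder's customers affected by a permanent outage when
# n automated devices are equally spaced in terms of customers served.

def affected_fraction(n_devices: int) -> float:
    """Fraction of customers affected by a fault, per the 1/(n + 1) rule."""
    return 1.0 / (n_devices + 1)

# Diminishing returns: each added device yields a smaller improvement.
for n in range(1, 5):
    improvement = affected_fraction(n - 1) - affected_fraction(n)
    print(f"{n} device(s): {affected_fraction(n):.3f} affected, "
          f"marginal improvement {improvement:.3f}")
```

The first device cuts the affected fraction from 1 to 1/2; the fourth only moves it from 1/4 to 1/5.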
Cost of a single outage
The monetary impact of an outage is a function of the number of customers experiencing the outage and the duration of said outage, commonly referred to as “customer minutes out” (CMO) or “customer minutes interrupted” (CMI). Utility companies routinely calculate the average cost of a single CMO or CMI to measure the economic risk associated with system upgrades or repairs directed at improving reliability.
Although the US government provides a calculator for estimating the impact of a CMO, many utilities develop their own custom models for determining the cost of a single CMO, which reflect system attributes such as:
- Energy not served
- Customers interrupted
- Customer type (residential, commercial, critical)
- Costs associated with restoration and repair based upon system characteristics
- Reliability indices impact from perspective of regulatory agencies
Most traditional optimization approaches for identifying optimal DA directly employ the “energy not served” due to an outage. However, “energy not served” is an attribute that most utilities do not routinely calculate, so applying such an approach may be difficult. The cost of CMOs typically captures the effects of energy not served; therefore, applying this routinely calculated monetary value simplifies identifying the optimal number of devices. The cost of a single CMO typically ranges between $0.75 and $1.20, depending upon the utility and system conditions.
The utility impact of a single outage is a product of the quantity of CMOs and the cost of a CMO. Assuming a cost per CMO of $1.10, Figure 2B illustrates the per minute outage cost for the sample feeder as a function of device number. As illustrated, diminishing returns associated with installing an additional device continue to impact reliability improvements.
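A short Python sketch of the per-minute outage cost calculation follows. The feeder size and function name are hypothetical; the $1.10 cost per CMO matches the example above.

```python
# Per-minute cost of a permanent outage as a function of device count:
# (customers affected) x (cost per customer-minute out).

CUSTOMERS = 1_200       # assumed number of customers on the feeder (hypothetical)
COST_PER_CMO = 1.10     # $ per customer-minute out (article's example value)

def per_minute_outage_cost(n_devices: int) -> float:
    """Outage cost per minute with n equally spaced automated devices."""
    customers_affected = CUSTOMERS / (n_devices + 1)
    return customers_affected * COST_PER_CMO

for n in range(4):
    print(f"{n} device(s): ${per_minute_outage_cost(n):,.2f} per minute")
```

With no devices, all 1,200 assumed customers are interrupted and every minute of outage costs $1,320; each added device scales that figure down by the 1/(n + 1) factor.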
The per minute outage cost is multiplied by the outage duration to identify the total cost of an outage.
Most utility companies record the average duration of system outages so that reliability indices can be calculated. SAIDI (System Average Interruption Duration Index), along with several other reliability indices (SAIFI, CAIDI, and EENS), is typically reported to the governing agencies overseeing the utility. Most existing optimization algorithms utilize SAIDI values to represent system outage duration. However, applying this index for duration can lead to inaccurate estimations depending upon existing restoration practices.
SAIDI normalizes the outage duration experienced by “some” customers to the total number of customers on the system or circuit. A more accurate representation would be CAIDI (Customer Average Interruption Duration Index), which measures the average duration of outages experienced by those customers actually affected. However, utilizing a company’s average SAIDI or CAIDI values does not account for the reduction in restoration time due to automated devices.
Outage duration is dependent upon the time required to dispatch field crews, time required for crews to identify the fault location, and the time it takes crews to actually restore power once the problem is identified. The installation of automated devices reduces the fault identification time. Therefore, the average duration experienced by customers is equivalent to the original CAIDI value minus fault identification efforts, as illustrated in Figure 3.
Customers who are automatically restored, even if they experience a 30-second outage before restoration, typically do not contribute to system CAIDI or SAIDI values. When identifying the optimal DA configuration for a specific feeder, a utility company should utilize the feeder’s average CAIDI value based upon sustained outages.
The remaining time, referred to as average outage duration, is multiplied by the per minute outage cost to determine the total monetary impact of the outage. To calculate the annual cost of outages on the circuit, this product is multiplied by the annual failure rate. The annual cost of outages for the life of the project must be compared to the installation cost incurred at the start of the project.
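Pulling the steps above together, a minimal sketch of the annual outage-cost calculation might look like the following. Every numeric input is a hypothetical placeholder a utility would replace with its own recorded data.

```python
# Annual reliability cost of a feeder as a function of device count:
# (per-minute outage cost) x (average outage duration) x (failure rate).
# All inputs below are illustrative placeholders, not utility data.

CUSTOMERS = 1_200          # customers on the feeder (assumed)
COST_PER_CMO = 1.10        # $ per customer-minute out (assumed)
CAIDI = 90.0               # avg sustained-outage duration, minutes (assumed)
FAULT_ID_TIME = 35.0       # minutes saved by automated fault location (assumed)
FAILURES_PER_YEAR = 2.5    # permanent faults per year on the feeder (assumed)

def annual_outage_cost(n_devices: int) -> float:
    """Expected annual outage cost with n equally spaced automated devices."""
    customers_affected = CUSTOMERS / (n_devices + 1)
    per_minute_cost = customers_affected * COST_PER_CMO
    duration = CAIDI - FAULT_ID_TIME   # CAIDI minus fault identification time
    return per_minute_cost * duration * FAILURES_PER_YEAR
```

This annual figure is what Part Two levelizes over the project life and weighs against the up-front installation cost.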
Part Two of the article describes methods to compare lifetime reliability costs to the expenditures required to build and support the system.
Distribution Automation is an educational track at DISTRIBUTECH International, set to take place February 9-11, 2021 in San Diego, California.