By Steve Eckles, El Paso Electric Co.
With increasing concerns about energy efficiency, distribution loss reduction is again assuming a role of prominence in the utility industry. For most North American utilities, the 1 percent or 2 percent of additional peak capacity that could be derived through loss reduction would provide significant savings.

Besides the financial implications, low-loss design also improves reliability, lessens power quality concerns, and better accommodates customer load growth. And there's an environmental twist: lowering distribution losses reduces the amount of pollutants and greenhouse gases emitted by hydrocarbon-based power plants.
Utilities can reduce distribution losses by concentrating on the design and operation of the distribution system. However, some of these loss-reduction measures require financial commitment, forcing energy-conscious utilities to balance loss savings with capital investment.
Distribution Loss Modeling and Reduction Strategies
Distribution technical losses, i.e., losses resulting from operational inefficiencies, are generally divided into two types: load and no-load. For simplicity of modeling, both load and no-load real power losses in a distribution system can be modeled as resistors RL and RNL in a shunt circuit, as shown in Figure 1.

Load losses, represented by RL, vary quadratically with system load. They are also referred to as resistive, conductor, copper, or I²R losses. Constant no-load losses, represented by RNL, are driven by the presence or application of AC voltage. The amount of load has no appreciable effect on no-load losses. The vast majority of these losses are from distribution transformer excitation. Underground cable dielectric losses are also a small component of no-load losses.
In a well-run, highly capitalized distribution system, losses are approximately 3 percent to 5 percent. In neglected systems with poor planning criteria and design standards, losses may increase to 5 percent to 7 percent or higher. Peak losses occur during peak load, when generation pricing is at its highest and distribution equipment thermal capacity is often stressed.
Main Causes of Distribution Losses
Physicists may study phenomena such as dielectric and induction losses, but the most consequential distribution losses are due to resistive copper losses and transformer excitation.
Conductor power losses result from electron flow (current) through resistance in primary, secondary, and transformer conductors regardless of conductor material (e.g., copper, aluminum, aluminum alloy). Feeder voltage, conductor length and size, power factor, load factor, loading, and phase current balance all determine copper losses.
Conductor current contributes more to technical power losses than conductor resistance does, as shown by the resistive power loss equation Ploss = I² × R, where Ploss is power loss in watts, I is current measured in amperes, and R is resistance measured in ohms.
Conductor heating is proportional to the square of the current; therefore, conductor power losses will double if the current increases by only 41 percent as represented in Figure 2, next page.
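The square-law relationship can be verified with a short Python sketch. The current and resistance values are illustrative assumptions, not data from the article:

```python
def copper_loss(current_a: float, resistance_ohm: float) -> float:
    """Resistive power loss in watts: Ploss = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# A 41 percent current increase doubles the loss, since 1.41^2 ≈ 2:
base = copper_loss(100.0, 0.5)       # 5,000 W
increased = copper_loss(141.4, 0.5)  # ≈ 10,000 W, roughly double
```

Because current is squared, modest current reductions pay off disproportionately in loss savings, which is the theme of the sections that follow.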

Benefits of Higher Primary Operating Voltage
Minimizing copper power loss means minimizing circuit resistance and current. As is evident from the current-squared term in the resistive power loss equation (Ploss = I²R), reducing conductor current yields dramatic savings. This equation reveals that two half-loaded feeders each have one-fourth the copper losses of one fully loaded feeder, given that all the feeders are the same length and use the same conductor size. This suggests that copper losses could be halved by building double-circuit distribution primary; however, capital costs would drastically increase.
Therefore, the recommended design practice to economically decrease conductor losses through current reduction is to increase primary voltage. Apparent power (kVA) in a conductor is proportional to voltage and current (kVA = kV x I); doubling primary operating voltage will cut the conductor current in half for the same feeder power flow. Hence, by the resistive power loss equation, the resulting copper loss is 25 percent that of the original voltage using the same feeder conductor and length (see Figure 3). Increased costs for higher voltage class insulation are relatively small compared to total feeder costs. Additionally, the rated feeder power capacity also doubles when doubling the voltage. It is beyond the scope of this article to delve into voltage drop and power flow equations and calculations; however, simple power flow programs show that doubling operating voltage will not only reduce power loss but will also enable more customers to be served on longer feeders with less power loss and less percentage voltage drop. This results in lower overall capital and operating costs.
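The voltage-doubling argument can be checked numerically. A minimal sketch, using the simplified relationship kVA = kV × I from above; the example voltages are assumed, not from the article:

```python
def loss_ratio_after_voltage_change(kv_old: float, kv_new: float) -> float:
    """Copper-loss ratio for the same power flow on the same conductor
    when operating voltage changes: current scales as 1/V, loss as I^2."""
    return (kv_old / kv_new) ** 2

# Doubling an assumed 12.47-kV feeder to 24.94 kV:
ratio = loss_ratio_after_voltage_change(12.47, 24.94)  # ≈ 0.25
```

The same function also shows why partial voltage upgrades help: any increase in kv_new relative to kv_old reduces the loss ratio quadratically.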

The second variable in the resistive power equation, conductor resistance, is inversely proportional to its cross-sectional area. Hence, larger diameter conductors produce less loss than smaller conductors of the same material for the same current. Voltage drop will also be less with larger diameter conductors; however, construction costs are higher. Larger overhead conductor often brings with it the added expense of stronger poles, crossarms, or shorter span lengths (more poles per mile). The additional cost makes it hard to justify larger distribution conductor on the basis of lowering power losses. More capacity and less voltage drop are better drivers for larger conductor.
Legacy 4,160-Volt Systems
Many utilities have pockets of 4,160-volt systems in older parts of town surrounded by higher voltage feeders. At this lower voltage, more conductor current flows for the same power delivered, resulting in higher I²R losses. However, converting old 4,160-volt feeders to higher voltage is capital-intensive and often not economically justifiable unless the line is already in poor condition and needs major improvements. If parts of the 4,160-volt primary are in relatively good condition, installing multiple step-down power transformers at the periphery of the 4,160-volt area will reduce copper losses by injecting load current at more points (i.e., reducing overall conductor current and the distance the current travels to serve the load).
Reactive Power Compensation
Both customer inductive loads and current through inductive utility conductors require that reactive power be supplied to the distribution system. This reactive power current lags the real power current by 90 degrees (4.2 ms in 60-Hz systems, 5 ms in 50-Hz systems). A feeder carrying 113 amps at a lagging power factor of 85 percent has about 60 amps of inductive current added vectorially at 90 degrees to roughly 96 amps of real power current. Both real power and inductive current are supplied from utility generation through transmission lines, substation transformers, and finally distribution feeders unless other sources of reactive power are added. Fortunately, shunt (phase-to-ground) distribution capacitors economically supply inductive current. Adding feeder capacitors to supply the 60 amps of inductive current reduces total feeder current to its 96-amp real power component. This 15 percent feeder current reduction translates into approximately 28 percent lower I²R losses. Additionally, reducing total current frees up system capacity and reduces feeder voltage drop, resulting in a "flatter" feeder voltage profile.
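The vector arithmetic behind this example can be sketched in Python. The 113-amp total current and 0.85 power factor follow the example above; full compensation of the reactive component is assumed:

```python
import math

def compensation_savings(total_a: float, pf: float):
    """Decompose feeder current into real and reactive components and
    estimate the I^2R loss reduction from fully compensating the
    reactive component with shunt capacitors."""
    real_a = total_a * pf                            # in-phase component
    reactive_a = total_a * math.sin(math.acos(pf))   # 90-degree component
    loss_savings = 1.0 - pf ** 2                     # loss ratio after = pf^2
    return real_a, reactive_a, loss_savings

real_a, reactive_a, savings = compensation_savings(113.0, 0.85)
# real ≈ 96 A, reactive ≈ 60 A, loss savings ≈ 28 percent
```

Note that the loss savings from full compensation depend only on the power factor (1 − PF²), not on the absolute current level.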
A number of utilities have adopted the "two-thirds rule," or some variant, for distribution capacitor placement. It calls for installing a quantity of capacitive volt-amperes reactive (VARs) equal to two-thirds of the total feeder peak inductive VARs at a distance of two-thirds of the overall feeder length from the substation. The rule works best for a feeder of constant load that is uniformly distributed along the feeder's length. These two conditions are more theoretical than realistic and are mostly found in textbooks and technical papers.
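The rule itself is simple arithmetic; a sketch follows. The feeder figures in the example are hypothetical:

```python
def two_thirds_rule(peak_feeder_kvar: float, feeder_length_mi: float):
    """Classic two-thirds rule for a single fixed bank on a uniformly
    loaded feeder: returns (bank size in kVAR, distance from substation)."""
    bank_kvar = peak_feeder_kvar * 2.0 / 3.0
    location_mi = feeder_length_mi * 2.0 / 3.0
    return bank_kvar, location_mi

# A hypothetical 6-mile feeder with 1,800 kVAR of peak inductive load:
bank, miles_out = two_thirds_rule(1800.0, 6.0)  # 1,200 kVAR, 4 miles out
```

As the text notes, real feeders rarely satisfy the rule's uniform-load assumption, so this should be treated as a starting point rather than a final design.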
Feeder reactive power varies with load throughout the day and throughout the year. If reactive power compensation were supplied only with fixed capacitor banks, it would likely result in overcompensation (too much capacitive current) during light feeder loading and undercompensation (not enough capacitive current) during peak load. Overcompensation increases total current, leading to increased I²R loss, and can cause overvoltage (steady-state capacitive current raises voltage across the conductor inductance) during light feeder loading.
By using a combination of fixed (left) and switched (right) capacitor banks, reactive power compensation will better track load and minimize losses. 
Placing fixed capacitors according to the two-thirds rule (or a similar variation) can be performed relatively quickly with minimal engineering time and will likely produce immediate power and energy savings. However, it does not achieve full reactive power compensation (i.e., unity power factor, where real power current equals total current), which produces the least feeder I²R losses at peak. By using a combination of fixed and switched capacitor banks, reactive power compensation will better track the load and minimize losses. This has proven to be economical and well worth the extra effort.
To save the most energy annually through capacitive reactive power compensation, the amount of fixed capacitance (VARs) should approximately equal the feeder's reactive power requirements at minimum annual load. Switched capacitor banks should then be added to the fixed capacitor banks on the feeder until the total peak feeder reactive power requirements are met. Modeling distribution feeders in a power flow computer program will yield the most economical fixed and switched capacitor placement. Without the luxury of power flow modeling, capacitor bank placement can be determined by placing capacitors near concentrated or lumped feeder loads using an 85 percent load power factor approximation. It is worth noting that capacitor bank current can flow both downstream and upstream of the bank itself, ideally half going each way. Hence, capacitors should rarely be placed close to the feeder breaker unless a fair amount of load is concentrated there. Likewise, it is best not to place capacitors at the end of distribution feeders unless load or voltage needs dictate it. Low-voltage concerns during contingency backfeeding may also call for capacitors near the feeder breaker.
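The fixed-versus-switched sizing guideline above can be sketched as follows. The 300-kVAR standard bank size and the load figures are assumptions for illustration, not values from the article:

```python
import math

def size_capacitor_banks(min_load_kvar: float, peak_load_kvar: float,
                         bank_kvar: float = 300.0):
    """Size fixed banks to cover the feeder's minimum annual reactive
    load, then add switched banks to reach the peak requirement.
    bank_kvar is an assumed standard bank size."""
    fixed_banks = round(min_load_kvar / bank_kvar)
    remaining_kvar = peak_load_kvar - fixed_banks * bank_kvar
    switched_banks = max(0, math.ceil(remaining_kvar / bank_kvar))
    return fixed_banks, switched_banks

# Hypothetical feeder: 600 kVAR at minimum load, 1,800 kVAR at peak.
fixed, switched = size_capacitor_banks(600.0, 1800.0)  # 2 fixed, 4 switched
```

A power flow study, as the text recommends, would refine both the counts and the bank locations; this sketch only captures the sizing rule of thumb.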
Capacitors may be switched according to a variety of factors, so selecting a capacitor switching method warrants a more detailed discussion. Switching determination methods (i.e., trip or close decisions) include VAR requirements, voltage, ambient temperature, and time-of-day. Switching on reactive power requirements with voltage override is best for reducing power losses while maintaining proper operating voltage. The simplest way to switch by VAR requirements is installing a local switch controller that uses a single-phase voltage transformer and a single-phase current transformer (CT) to determine the power factor and current magnitude immediately downstream of the capacitor bank and closes it on adjustable settings. The same local capacitor controller can house voltage override controls to close the capacitor bank for low voltage and trip it for high voltage.
Unfortunately, it can be difficult and time-consuming to evaluate whether local capacitor bank switch controllers are working properly after several years of service. A graph of a feeder's VARs (reactive power) over time should show capacitors switching on and off as needed. If VAR information is available from SCADA at a local control center, it may prove more economical to employ a more widespread approach of switching feeder capacitors remotely based on feeder VAR data. A limited number of software and hardware companies (such as Cannon Technologies and RCCS) offer systems that initiate capacitor switching commands based on feeder SCADA.
For simplicity, signals can be sent one-way via utility radio, local paging, or cellular control channel (such as Telemetric). Software monitors feeder VAR response after a switching command is sent to confirm capacitor operation and logs suspected switching failures. Capacitor control receivers may be equipped with voltage override. Software allows adjustments in feeder VAR requirements and also permits capacitor banks to be manually switched to respond to abnormal feeder voltage or transmission system needs. Operating one distribution capacitor may be imperceptible on the transmission system, but a global command to switch all distribution capacitors on or off should be evident. For wide-scale implementation, SCADA VAR control of capacitor switching is recommended because it responds to reactive power compensation needs while retaining the flexibility of global or individual manual override. Using such a system, one utility reports replacing all fixed capacitors with switched ones so that blown fuses are automatically flagged as the software systematically cycles capacitors during early morning hours to test them.
Transformer Sizing and Selection
Typically, distribution transformers use copper conductor windings to induce a magnetic field into a grain-oriented silicon steel core to step feeder voltage down for customer use. Therefore, transformers have both load loss and no-load core loss. As in other conductors, transformer copper losses vary with load based on the resistive power loss equation (Ploss = I²R). For some utilities, economic transformer loading means loading distribution transformers to capacity, or slightly above capacity for a short time, in an effort to minimize capital costs and still maintain long transformer life. However, since peak generation is usually the most expensive, total cost of ownership (TCO) studies should take into account the cost of peak transformer losses. Increasing distribution transformer capacity during peak by one size will often result in lower total peak power dissipation, more so if the transformer is overloaded.
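The sizing trade-off can be illustrated with the standard total-loss model: constant core loss plus copper loss scaling with the square of per-unit loading. The nameplate loss figures below are invented for illustration and are not actual transformer data:

```python
def transformer_loss_w(load_kva: float, rated_kva: float,
                       no_load_w: float, load_loss_w_rated: float) -> float:
    """Total transformer loss in watts: constant core (no-load) loss plus
    copper loss, which scales with the square of per-unit loading."""
    per_unit_load = load_kva / rated_kva
    return no_load_w + (per_unit_load ** 2) * load_loss_w_rated

# Hypothetical 55-kVA peak load served by two candidate sizes
# (all watt figures are assumed nameplate data, not real units):
small = transformer_loss_w(55.0, 50.0, no_load_w=120.0, load_loss_w_rated=650.0)
large = transformer_loss_w(55.0, 75.0, no_load_w=160.0, load_loss_w_rated=800.0)
# small ≈ 906 W at peak; large ≈ 590 W at peak
```

The larger unit loses less at peak but carries a higher constant core loss all year, which is exactly the annual-energy caveat raised in the conclusion.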
Transformer no-load excitation loss, also known as core or iron loss, occurs from a changing magnetic field in the transformer core whenever it is energized. Core loss varies slightly with voltage but is essentially considered constant. Fixed iron loss depends on transformer core design and the molecular structure of the steel laminations. Improved manufacturing of steel cores and the introduction of amorphous metals (such as metallic glass) have reduced core losses. Through faster material cooling, mass-produced metallic glass (MetGlas) ribbons were developed by Allied Signal (now Honeywell) in the 1990s that reduced core loss by 60 percent compared to conventional grain-oriented silicon steel cores. Copper losses can be slightly higher in metallic-glass-core transformers, but overall loss at rated capacity is less.
Utilities must determine if reduced energy losses more than offset the price premium for more efficient transformers. In general, early transformer replacement programs are not economically warranted.
Feeder Phase Current and Load Balancing
Once a distribution system has been built, some of the easiest loss savings come from balancing current among the phases of three-phase circuits. Feeder phase balancing also tends to balance voltage drop among phases, giving three-phase customers less voltage unbalance. Balanced phase-current magnitudes at the substation don't guarantee the load is balanced throughout the feeder's length. Feeder phase unbalance may vary during the day and with different seasons. Feeders are usually considered "balanced" when phase current magnitudes are within 10 percent of each other (based on the average current among phases). That is: [(highest phase current − lowest phase current) / average phase current] < 0.1.
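The balance test quoted above translates directly to code; the phase currents in the examples are hypothetical:

```python
def is_balanced(ia: float, ib: float, ic: float, limit: float = 0.10) -> bool:
    """Feeder balance test from the text: (highest phase current minus
    lowest phase current) divided by the average must be under 10 percent."""
    average = (ia + ib + ic) / 3.0
    spread = max(ia, ib, ic) - min(ia, ib, ic)
    return spread / average < limit

print(is_balanced(300.0, 310.0, 305.0))  # True:  10 A spread / 305 A avg ≈ 3%
print(is_balanced(250.0, 320.0, 300.0))  # False: 70 A spread / 290 A avg ≈ 24%
```

In practice this check would be run against measurements at several points along the feeder and at several times of day, since substation readings alone can mask downstream unbalance.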
Similarly, balancing load among distribution feeders will also lower losses assuming similar conductor resistance. This may require installing additional switches between feeders to allow for appropriate load transfer.
Load Factor Effect on Losses
Typical customer power consumption varies throughout the day and over the seasons. Residential customers generally draw their highest power demand in the evening hours, when they arrive home from work and school and activate the heating or cooling system, turn on lights, and prepare dinner. Conversely, commercial customer load profiles generally peak in the early afternoon. Because current level (hence, load) is the primary driver of distribution power losses, keeping power consumption more level throughout the day will lower peak power loss and overall energy losses. Ideally, peaks should be "shaved" to fill in troughs. A common measurement of load variation is "load factor." It ranges between zero and one and is defined as the ratio of average load in a specified time period to peak load during that time period. For example, suppose the peak feeder power supplied over a 30-day month (720 hours) is 10 MW. If the feeder supplied a total energy of 5,000 MWh, the load factor for that month is 0.69 (5,000 MWh ÷ (10 MW × 720 hours)).
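The load factor calculation from this example, as a small Python sketch:

```python
def load_factor(energy_mwh: float, peak_mw: float, hours: float) -> float:
    """Load factor = average demand over the period / peak demand."""
    average_mw = energy_mwh / hours
    return average_mw / peak_mw

# The monthly example above: 5,000 MWh over 720 hours with a 10-MW peak.
lf = load_factor(5000.0, 10.0, 720.0)  # ≈ 0.69
```

A load factor of 1.0 would mean perfectly flat demand; the closer a feeder gets to it, the lower the peak losses for the same energy delivered.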
Lower power and energy losses are achieved by raising the load factor, which evens out feeder demand variation throughout the day. Efforts to increase the load factor by offering customers "time-of-use" rates have met with limited success. That is, companies use pricing power to influence consumers to shift electric-intensive activities (such as electric water and space heating, air conditioning, irrigating, and pool filter pumping) to off-peak times. With financial incentives, some electric customers are also allowing utilities to interrupt large electric loads remotely through radio frequency or power-line carrier during periods of peak use.
Utilities can try to design in higher load factors by running the same feeders through residential and commercial areas.
Conclusion
With increased concern for generation and transmission efficiencies, utilities will find an increasingly compelling business case to adopt distribution loss reduction strategies. The core of these strategies involves reducing both circuit current and resistance. Distribution feeder design philosophy may be adapted to reduce feeder current by using higher operating voltage and a larger number of shorter feeders. In addition, modifying standards to specify larger conductor sizes will reduce resistance. Current may also be reduced in existing feeders by adding fixed and switched shunt capacitor banks.
Increasing new transformer sizes may reduce peak power loss but may result in higher annual energy loss due to increases in no-load losses. Some utilities may benefit by paying more up front for higher-efficiency transformers and recapturing those costs through loss savings over the next 20-plus years. Balancing current on existing feeder phases and redistributing load among feeders will also improve economic operation of existing systems.
Distribution loss reduction takes some engineering analysis, but it will pay off for years to come.
Steve Eckles has been a distribution engineer for 14 years at El Paso Electric Company and is a licensed PE in New Mexico and Texas.