Intelligent Electric Meters: Gateways for Advanced Utility Applications

By Gerritt Lee, P.E., Hawaiian Electric Company

Electric utilities today are harvesting the operational benefits of automated meter reading (AMR) systems to improve the bottom line in an ever-demanding market. In addition to the positive economic benefits derived from a mechanized read process, AMR has evolved from standalone functionality into a gateway for intelligent utility applications. With the capability to capture detailed energy consumption profiles and real-time event notifications from the meter along with two-way command and control, AMR technology has evolved into an advanced metering infrastructure, or AMI. Demand-side management, outage notification, distribution planning and critical-peak pricing are a few of the services collectively made possible by AMI.

AMR/AMI systems today offer vast amounts of information in a truly automated fashion. Instead of simply giving up a single read when polled for it, automated meters in contemporary fixed wireless networks can automatically broadcast consumption readings at 15-minute or hourly intervals. Event-notification circuitry and sensors within the meter can detect power outages and meter tampering and alert the system to such events. Two-way command and control allows a system operator to remotely disconnect service to delinquent customers or control appliances attached to a home area network.

The technology has evolved beyond simple meter reading into a suite of new utility applications with operational benefits enjoyed by those involved with generation and distribution resource planning, demand-side management and system outage response. Collectively, these applications form the basis of an advanced metering infrastructure. With energy consumption readings being reported by the meter every hour or 15 minutes, a load profile history normally reserved for commercial customers can now be easily assembled for residential customers as well. Such granular data can easily be aggregated across entire residential neighborhoods to provide resource planners with precise information concerning present growth trends to aid in the allocation of power-generation resources and distribution facilities. This is a significant improvement over older heuristic models, whose rough estimates may be outdated.

Power demands during peak hours or seasonally hot weather can put a critical strain on the system, causing demand to exceed supply and resulting in rolling blackouts or brownouts. The difficulty with building new power plants to meet the growing need stems from stringent environmental and government regulations with lengthy approval timelines, as well as community opposition to such construction. Because of these challenges, utilities must resort to a process called demand-side management, whereby large customers are asked to voluntarily reduce their demand during critical peak hours in exchange for certain conservation program incentives. A meter with two-way command and control functionality allows the utility to “shed” load from residential customers as well by temporarily disconnecting non-essential equipment until the critical peak event has safely passed. This task can be executed with the meter serving as a gateway to a network to which the equipment is attached, or by attaching an endpoint control device to the equipment itself. Similarly, critical peak pricing signals can be sent to air conditioner thermostats and other devices on a home-area network that are programmed to automatically respond by reducing their demand during critical periods.

A utility’s power outage and restoration group can also realize operational benefits. Very often, customers themselves form the basis of a company’s power outage response system. This is a crude process because of the inherent latency between the time when an outage occurs and when it is reported, with further analysis required to pinpoint its location. With real-time outage notification from the meter, outage reporting is both automatic and instantaneous, allowing the trouble dispatcher to quickly and precisely home in on the affected area(s) and dispatch field crews.

Wireless communication technology has gained popularity with AMR/AMI systems because it offers untethered flexibility and generally is more cost-effective than its wire line counterparts, such as power line carrier and broadband over power line (BPL). A wireless system relies on an RF backbone for meter data collection and represents a significant investment. With a host of competing RF technologies to choose from, how does one comparison-shop and evaluate before embarking on a full-fledged RF survey and pilot deployment? The remainder of this article sets out to help the planner or engineer navigate the vagaries of the wireless RF landscape to get a sense of infrastructure cost, scalability, and RF performance and interference issues.

RF Infrastructure Characteristics

Infrastructure cost is proportional to the size of the communications “footprint,” or the number of meters in communication range of a single tower or radio device collecting AMR data. Obviously, a large footprint is desired because the total cost per meter for initial installation and maintenance is lowered. A meter licensed to operate with two watts of RF power has twice the free-space communication range (and four times the coverage area) of one with only 500 milliwatts (half a watt), typical of unlicensed devices. The choice of system carrier frequency also plays a key role because RF signals operating at higher frequencies suffer more attenuation loss per unit distance than those at lower frequencies (e.g., 5 GHz vs. 900 MHz).
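These scaling relationships can be checked with a short calculation. The sketch below uses the idealized free-space (Friis) path-loss model, which ignores obstructions and ground effects, so the numbers are best-case rather than field predictions:

```python
import math

def free_space_range_ratio(p1_watts: float, p2_watts: float) -> float:
    """Ratio of free-space ranges for two transmit powers.

    Under Friis, received power falls off as 1/d^2, so for the same
    receiver sensitivity, range scales with the square root of
    transmit power (and coverage area scales linearly with power).
    """
    return math.sqrt(p1_watts / p2_watts)

def path_loss_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# 2 W vs. 0.5 W: 4x the power buys 2x the range (4x the area).
print(free_space_range_ratio(2.0, 0.5))  # -> 2.0

# Higher carriers lose more per unit distance: 5 GHz vs. 900 MHz at 1 km.
print(path_loss_db(5e9, 1000) - path_loss_db(900e6, 1000))  # ~14.9 dB more
```

Real urban propagation decays faster than the free-space exponent of 2, which only widens the gap between high- and low-power systems.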

In the U.S., the Federal Communications Commission (FCC) regulates both the effective power a radio device can emit and the frequency allocation of the spectrum. Unlicensed spectrum is made available in the public domain, but permissible power output is low to mitigate interference. Licensed spectrum with higher power allowances can be “primary use,” which carries exclusive rights, or “secondary use,” which leases bandwidth from the primary operator. FCC spectrum allocation charts map spectrum bandwidth to the various permitted uses, such as the ISM band for the industrial, scientific and medical services that pertain to AMR.

Wireless systems can be classified by their use of spectrum as either narrowband or broadband. Narrowband systems have a high power density concentrated within a small frequency range, whereas broadband distributes its power over a much wider swath of the spectrum. Broadband systems using unlicensed spectrum, such as 800/900 MHz spread spectrum or 802.11 OFDM Wi-Fi, have the advantage of avoiding the high cost of purchasing an FCC license to operate. Disadvantages include a more hardware-intensive network architecture to compensate for the meter’s lower power and greater scalability challenges to accommodate future growth. Siting issues for new antennas or radio sites (real estate acquisition or lease, securing easements, and obtaining FCC, government agency and community approvals) become a greater burden for unlicensed or low-powered systems that have more network elements to contend with. Unlicensed spectrum operators must all contend with the increased “noise floor” they each contribute to and above which their signals must rise to be distinguished from noise.

Licensed spectrum is typically narrowband because usable spectrum is a limited physical commodity. Benefits include a higher power output, greater communication range and fewer network elements to provision. More importantly, the exclusive-use provision and enforcement offered by the FCC keeps interference and the noise floor to minimum levels. This increases the effective communication footprint without a corresponding increase in power to compensate for a noisy channel.

RF System Throughput Performance

Fortunately, wireless AMR/AMI systems do not require blazing speeds. But what factors affect speed performance? Classic Shannon-Hartley information theory models a communications system as an encoded message source, a receiver and a noisy channel through which the message is transmitted. The maximum rate of error-free transmission, also known as channel capacity (bits per second), is a function of both the channel (or system) bandwidth and the power of the message relative to the noise (S/N ratio). By this principle, impaired channel performance can be improved by throwing more bandwidth or more power into the equation, within permissible FCC limits. Although a fast data rate is usually desirable, it can introduce digital data errors in some cases, as we shall see.
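The Shannon-Hartley relationship is easy to evaluate directly. The figures below are illustrative assumptions (a 25 kHz narrowband channel, a 20 dB signal-to-noise ratio), not measurements from any particular AMR system:

```python
import math

def channel_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N).

    This is the theoretical upper bound on error-free bit rate for a
    channel of the given bandwidth and linear signal-to-noise ratio.
    """
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 25 kHz channel at 20 dB SNR (S/N = 100):
print(channel_capacity_bps(25e3, 100))   # ~166,000 bit/s upper bound

# Doubling the bandwidth doubles capacity outright...
print(channel_capacity_bps(50e3, 100))   # ~333,000 bit/s
# ...while doubling the power helps only logarithmically.
print(channel_capacity_bps(25e3, 200))   # ~191,000 bit/s
```

The asymmetry in the last two lines is why bandwidth, not power, is the scarcer commodity when chasing data rate.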

The Multipath Phenomenon: Demystifying the Urban Landscape

Urban landscapes present the greatest challenge to reliable digital wireless communications because the mechanism for signal propagation between an RF source and receiver is complex. Anyone who has witnessed the common occurrence of static on a car radio suddenly disappearing when moving only a short distance has experienced the multipath-fading phenomenon.

The mechanisms that create multipath fading are collectively known as shadowing, reflection, diffraction and scattering. Shadowing occurs when a propagating RF wave passes through a medium (e.g., rain) that absorbs some of its energy, so that its amplitude is attenuated upon egress. Reflection takes place when a propagating RF wave impinges upon an object that is large compared to its wavelength (e.g., the surface of the Earth, buildings, walls, etc.). This causes the wave to be redirected in a new direction relative to the angle of contact with the object. Diffraction arises when the radio path between transmitter and receiver is obstructed by surfaces with sharp irregular edges, causing the wave to bend around the obstacle. Scattering occurs when the propagating wave strikes objects smaller than its wavelength (e.g., foliage, street signs, lamp posts), and the wave is scattered in many different directions. Incidentally, pine needles cause severe scattering at 800 MHz because their dimensions are similar to the wavelength.

Reflection, diffraction and scattering explain how communication is possible even without a direct line-of-sight (LOS) path between source and receiver. These mechanisms generate secondary “wavelets” when the energy contained in the primary wave front strikes and “bounces off” buildings and objects. Each wavelet takes a different oblique path to the receiver, much as a pinball snakes its way through an obstacle course. The wavelets also have dissimilar arrival times at the receiver, and when they recombine and superimpose, constructive or destructive interference results from the interaction of their respective amplitudes and phase angles. This explains the peaks and nulls that occur over short distances, such as the radio static described earlier.

Intersymbol Interference

Digital RF signals are especially susceptible to intersymbol interference, or ISI for short. ISI occurs when the multipath phenomenon causes several strong “echoes” of the message to arrive in time succession, so that a previously transmitted symbol runs into the current one, causing symbol bits to “crash” into each other out of turn and smear the message. By definition, the total time interval during which echoes with significant energy arrive is called the maximum delay spread time. More often, the standard deviation, or Trms value, is used to characterize the delay profile. For digital signals with a high bit rate, no serious ISI is likely to occur if the symbol duration is, say, 10 times longer than the Trms delay spread.

The concept of delay spread can be illustrated by imagining two lanes of cars driving side by side down the highway. Cars in one lane represent the symbols of the message while those in the other represent its time-delayed echo. When both sets of cars are exactly aligned side-by-side, the delay spread is zero. As the cars in the second lane start pulling ahead, a delay spread starts to occur and is indicative of a fading path. As the delay spread becomes increasingly larger, the echo will eventually eclipse the message and cause intersymbol interference. The typical delay spread in an urban environment is 3 microseconds. Suburban environments and open spaces typically experience delay spreads of 0.5 microseconds and 0.2 microseconds respectively.

The concept of coherence bandwidth is intimately related to delay spread and defined as the range of frequencies over which the attenuation is relatively constant. Effectively, this puts an upper limit on how fast you can transmit data (channel capacity) before the onset of ISI becomes a problem. Recall from Shannon-Hartley the mutual relationship between system bandwidth and data rate. Citing the previous example again, as the data rate climbs, more symbols must be stuffed into the message per unit time to match the increase. This is represented by placing more cars on the road with smaller gaps between them. But since delay spread time has not changed (fixed latency), the timing of the echo will eventually coincide with the appearance of the next message symbol and cause ISI to occur. Since data rate is directly proportional to system bandwidth, capping system bandwidth to be less than the coherence bandwidth can mitigate ISI.

[Coherence bandwidth: Bc = 1/(2π · Trms)]

Given typical values for delay spread, the coherence bandwidths for urban, suburban and open spaces are Bc urban = 53 kHz, Bc suburban = 318 kHz and Bc open = 796 kHz, respectively. The coherence bandwidth limitation applies to narrowband systems more acutely than broadband because their messages are confined to a smaller spectral footprint. In the case of IS-95 CDMA, the system bandwidth is 1.25 MHz. But because fading does not occur simultaneously over the entire system bandwidth, there is sufficient power in the unaffected (non-fade) portions to compensate.

Multipath and ISI Mitigation Design Techniques

Digital RF systems incorporate design techniques to mitigate the adverse effects of multipath interference. Conventional system designs include spatial diversity, time diversity and frequency diversity. Spatial diversity involves placing two or more receivers at least a quarter wavelength apart. This leverages the probability that both receivers will not simultaneously be in a fading path. This spatial difference compensates for a weak signal received at one location that can be discarded in favor a stronger one at another a quarter wavelength or more separated.

At a rudimentary level, time diversity can be implemented by simply repeating a message over the course of several hours to resolve temporal events that may have interfered with it, such as a large van blocking a meter. A more efficient approach to time diversity is a technique called bit interleaving. With bit interleaving, individual messages queued for transmission are dissected into individual bits, which are interleaved with bits from all other messages to form contiguous blocks. This is topped off with forward-error correction, such as convolutional coding with Viterbi decoding, to increase immunity to data errors. Effectively, this ensures that if a data block is somehow damaged or lost, only a small portion of each individual message is destroyed, and that portion can be recovered with the error-correction scheme.
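The interleaving idea can be sketched with a simple block interleaver. This is a generic textbook construction, not the specific scheme used by any AMR vendor: bits are written into a matrix row by row and read out column by column, so a burst error on the channel lands on bits that sit far apart in the original stream, where error correction can fix them one at a time:

```python
def interleave(bits, rows):
    """Block interleaver: write bits row-wise into a matrix with `rows`
    rows, read them out column-wise. Adjacent channel bits then come
    from widely separated positions in the original message."""
    cols = len(bits) // rows
    assert len(bits) == rows * cols, "pad the message to a full block"
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows):
    """Inverse of interleave(): rebuild the original bit order."""
    cols = len(bits) // rows
    columns = [bits[c * rows:(c + 1) * rows] for c in range(cols)]
    return [columns[c][r] for r in range(rows) for c in range(cols)]

msg = list(range(12))            # stand-ins for message bits
tx = interleave(msg, rows=3)
print(tx)    # -> [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]

# A 3-bit channel burst hitting tx[4:7] damages message positions
# 5, 9 and 2 -- scattered singles that FEC can repair individually.
assert deinterleave(tx, rows=3) == msg
```

The deeper the interleaver (more rows), the longer the burst it can disperse, at the cost of added buffering delay.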

Frequency diversity uses multiple carrier frequencies to transmit the message, since multipath fading is frequency selective. OFDM, or orthogonal frequency-division multiplexing, systems offer very high data rates. Instead of utilizing a single carrier, however, OFDM makes use of hundreds of orthogonal, uncorrelated subcarriers, each occupying a narrow bandwidth. Although OFDM systems such as 802.11 Wi-Fi have a large aggregate bandwidth, the coherence bandwidth can be evaluated separately for each orthogonal (uncorrelated) subcarrier, which falls within the limits of the typical delay spread constraints.

For narrowband systems, the potential for ISI can be avoided by ensuring that the system bandwidth does not exceed coherence bandwidth as stated earlier. For broadband systems that employ direct sequence spread spectrum (DSSS) with a pseudo-random number (PRN) “chip” code sequence, such as IS-95 CDMA, rake receivers are commonly used to mitigate ISI. Rake receivers are made up of individual subreceivers, each tuned to lock onto a different multipath signal element to resolve the delay time spreads for recombination into a coherent whole. Rake receivers get their name because the subreceiver functions resemble a rake’s fingers.
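A toy sketch of the rake idea follows; the delays and gains are made-up illustrative values, and a real rake receiver works on chip-rate correlations rather than on clean samples like these. Each “finger” realigns one known multipath echo so the combiner can add the echoes’ energy coherently instead of letting them smear the message:

```python
def rake_combine(received, delays, gains):
    """Toy rake combiner: each 'finger' shifts the received waveform
    by one known path delay (in samples) and weights it by that path's
    gain; summing the fingers realigns the echoes so their energy adds
    coherently at the receiver output."""
    n = len(received)
    out = [0.0] * n
    for d, g in zip(delays, gains):
        for i in range(n - d):
            out[i] += g * received[i + d]
    return out

# One transmitted pulse arriving over three paths (illustrative values):
delays, gains = [0, 2, 3], [1.0, 0.6, 0.3]
received = [0.0] * 8
for d, g in zip(delays, gains):
    received[d] += g                   # the pulse and its two echoes

combined = rake_combine(received, delays, gains)

# Energy gathered at the pulse position: 1.0^2 + 0.6^2 + 0.3^2 = 1.45,
# versus 1.0 for a receiver locked onto the strongest path alone.
print(round(combined[0], 2))           # -> 1.45
```

Locking a finger to each path requires estimating the delays first, which is exactly what the PRN chip-code correlation in a DSSS system provides.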

To summarize with an illustrative example, the AMR system offered by a leading meter manufacturer is a narrowband system operating in the 900 MHz primary licensed band. Its network access topology is contention-based, somewhat similar to CSMA/CD Ethernet, with overlapping tower coverage that provides spatial diversity for fading signals. Time diversity is achieved by convolutional bit interleaving of the message, along with a “depth of history” feature that repeats each meter-read message over the course of several hours to resolve temporary signal interference. The inbound and outbound channels for two-way communications each have a bandwidth of 25 kHz, which provides ample coherence bandwidth margin in urban environments and even more in suburban and open spaces.


AMI is a prime example of the merger of electric power and communications technologies and the synergies between them. Global warming and the industrial world’s dependence on fossil fuels have become the controversial and geopolitical issues of the day. At a high level, the service suite afforded by an effective AMR/AMI platform and strategy provides key intelligence needed today to support energy conservation measures. It does so by providing system planners and operators with the precise information needed to efficiently match energy consumption demands with peak generation capacity and allocate those resources accordingly.

Gerritt Lee is a meter engineer with Hawaiian Electric Company and served as project engineer for the company’s technical field trial of a fixed wireless AMR/AMI network. Prior to this, Gerritt had 23 years telecom and engineering experience with Verizon/GTE.

POWERGRID International, Volume 12, Issue 10