
1995 Utility Studies Identify Automation Technology Trends

By Alison Fowler, Utility Automation Research

Utility Automation (UA) Research, formerly CSR Market Data Services, recently completed a series of 1995 utility studies. These studies focus on supervisory control and data acquisition (SCADA) and automated mapping (AM)/facilities management (FM)/geographic information systems (GIS) technologies for electric, gas and water utilities. Throughout 1995, market analysts interviewed utility managers at more than 3,120 U.S. and Canadian utilities to identify specific system purchasing plans. According to the utility managers interviewed, the projects identified in each study would be awarded within the 30 months following the publication of each study's report.

SCADA

A comparison of the electric, gas and water/wastewater SCADA studies shows some interesting trends. Compared to 1994, the 1995 electric SCADA study showed a 14.4-percent decrease in the number of planned SCADA/energy management system (EMS) projects and a 35-percent decrease in the dollar value of those projects. Similar results were seen in the gas utility/transmission pipeline SCADA study, where a significant drop of 39 percent in dollar value was noted, with only a slight decrease in the number of projects.

These large decreases in both the number of planned projects and dollar value did not carry over into the water/wastewater studies. This study identified nearly the same number of projects as the 1994 study but reflected a 22-percent increase in planned project dollars.

Figure 1 illustrates the comparison of SCADA projects by dollar value. According to the 1995 studies, 54 percent of all SCADA project dollars were being spent by electric utilities, followed by 29 percent by water/wastewater utilities and 17 percent by gas utilities/pipeline companies.

Approximately 45 percent of the total number of projects identified in 1995 were electric SCADA/EMS, again followed by water/wastewater with 37 percent and gas/pipeline with 18 percent.

When reviewing the planned systems, there seems to be increasing usage of PC-based operating platforms, as illustrated in Figure 2. The majority of planned PC-based operating platforms were identified in the water/wastewater studies: of all the projects identified in those studies, approximately 74 percent plan to use a PC-based operating platform. Gas and electric utilities plan to use PC-based systems considerably less. More electric utilities plan to use workstation-based operating platforms compared to the other utilities. Gas utilities plan to use mainframe-based systems in nearly 10 percent of the projects identified, while electric and water utilities planning to use mainframes did not even register on the graph. UA Research has followed the decline of mainframe-based computers and has monitored the narrowing gap between PC-based and UNIX-based workstation environments.

Across all three utility types, radio is the most popular method of communication, either by itself or in combination with other communication methods. In the electric study, radio communication comprised 40 percent of the total planned systems, with fiber-optic communication following at 11 percent. Water/wastewater utilities plan to use radio communication in 38 percent of their projects and gas/pipeline utilities in 18 percent of theirs.

AM/FM/GIS

A comparison of SCADA and AM/FM/GIS projects by utility type is illustrated in Figure 3. This graph shows that, according to UA Research, electric utilities plan to spend significantly more on SCADA and AM/FM/GIS systems than water/wastewater and gas utilities. The electric AM/FM/GIS study identifies 272 system and add-on projects valued at $92,251,000. Water/wastewater utilities and gas/pipeline utilities plan to spend approximately $60 million on AM/FM/GIS systems and add-on projects. These AM/FM/GIS system project comparisons are shown in Figure 4. As mentioned earlier, electric utilities will spend the most on AM/FM/GIS projects, accounting for 43 percent of the total dollars across all the 1995 studies. Gas/pipeline utilities make up the smallest segment with 19 percent.

The 1995 studies show a decline in the number of conversion projects being contracted out to conversion/mapping vendors. According to the research gathered in 1995, more utilities are handling the map conversion in-house. Of those utilities contracting out conversion, 46 percent were electric utilities, 36 percent were water/wastewater utilities and 18 percent were gas/pipeline utilities. Digitization is the leading method of conversion and in many cases is used in combination with other conversion methods.

AM/FM/GIS and SCADA Integration

The information gathered finds that approximately 80 percent of electric utility projects plan to integrate both SCADA and AM/FM/GIS, followed by water/wastewater utilities with 12 percent and gas/pipeline utilities with 8 percent. According to UA Research's data, there will continue to be a general trend toward greater integration and data sharing among departments to increase the integrity of the facilities and operations network and to reduce the cost of collecting and maintaining redundant data, all while improving customer service. Utilities need these automated tools to become more competitive in their ever-changing operating environment.

Assuring Continuous Operation at CONVEX Control Center

By Mike Meagher, SL Corp.

The key to understanding the operations of the Connecticut Valley Electric Exchange (CONVEX) is the word “exchange.” An operations center of Northeast Utilities, United Illuminating and several smaller utilities, CONVEX operates a control center that links the systems of several operating companies into a single efficient grid, serving more than two million customers in Connecticut and western Massachusetts. One control room governs the movement of power at 69,000 V and above and the generation of nearly 10,100 MW of electricity, including four nuclear power plants. CONVEX can never close. Operators at the facility continually monitor the status of every device in this vast power grid–from individual circuit breakers to telemetry data from more than 200 substations–more than 120,000 supervisory control and data acquisition (SCADA) and state-estimated data points in all. It is the CONVEX mission to use this complex system to assure a continuous flow of high-quality, efficient power for customers and the safety of the maintenance workers in the field.

An energy management system (EMS) based on VAX computers monitors and controls the substations, transmission lines and power plants under the charge of CONVEX. However, the EMS had two major drawbacks–lack of high quality graphics and real-time mapboard capability. First, status information was reported to operators in the form of overwhelming displays of character-based graphics and tabular data. The operator had to wade through many displays in order to conduct any analysis and initiate action.

Operators also reviewed a 50-foot static pegboard, which covered an entire wall in the control room, carrying summary status information. The wall was also an important safety device, holding the safety tags which showed where parts of the system had been temporarily shut down by maintenance workers. The static mapboard was cumbersome, it was difficult to update in a timely manner, and it had the potential for error.

Recently, the static pegboard was replaced by a 10-foot-high by 52-foot-wide video wall employing SL Corp.'s graphic modeling system software to display a dynamic, interactive, near real-time view of the system.

Hardware and Software Protects Workers, Enhances Customer Service

The video wall presents a dynamic schematic of the CONVEX transmission system. The objects in the schematic are linked to variables in the real-time EMS database so the graphical objects change with the values in the database in near real-time. This allows an operator to determine the status of a device or an entire area at a glance. For example, graphic modeling system objects that represent lines and transformers include limit information. In that way, if limits are exceeded, the object changes color to inform operators of the problem and its severity (such as turning red as a warning). Objects representing circuit breakers have the ability to open or close to match the state of the field device.
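
The general pattern of binding a display object to a live database value and recoloring it when limits are exceeded can be sketched in a few lines. The sketch below is illustrative only; the class, field and sample values are hypothetical and do not represent the SL Corp. graphic modeling system API.

```python
# Illustrative sketch only: hypothetical names, not the SL Corp. graphic
# modeling system API. Shows an object re-evaluating its color against
# alarm limits on each update cycle.
from dataclasses import dataclass

@dataclass
class LimitBoundObject:
    """A display object bound to one real-time EMS variable."""
    name: str              # e.g., a transmission line or transformer
    warning_limit: float
    emergency_limit: float
    color: str = "green"

    def update(self, value: float) -> None:
        """Recolor the object based on the latest value from the database."""
        if value >= self.emergency_limit:
            self.color = "red"       # severe violation
        elif value >= self.warning_limit:
            self.color = "yellow"    # approaching the limit
        else:
            self.color = "green"     # normal

# Example scan cycle; in practice the values would come from the EMS database.
line = LimitBoundObject("Line 310", warning_limit=90.0, emergency_limit=100.0)
for loading_pct in (72.0, 94.5, 103.2):
    line.update(loading_pct)
    print(f"{line.name}: {loading_pct}% -> {line.color}")
```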

Operators also open windows on the video display with detailed data on a single substation, or a window with several stations in a subsystem.

The display also employs the path function of the graphic modeling system software to animate other components in the system. When a line is overloaded, an icon begins traveling from one end of the line to the other until an operator resolves the situation. Safety is always a key concern, and the graphic modeling system software objects include special tagging palettes used to mark devices when people are working in the field. Information on the video display is easy to understand and reliable, and it is presented as soon as an event occurs.

The display is an inventive combination of modified X technology, a UNIX workstation, a new video projection system and the graphic modeling system software. The workstation is linked to the EMS database via a fiber-optic network. The video display system utilizes four high-resolution, high-luminosity projectors, driven by video cards in the workstation's SBus. An extended X-Server combines the output of the four video cards into a single 10-by-52-foot X-Window. In this way, the graphic modeling system software, which is also X-based, provides a dynamic display of more than 100,000 graphical objects, updated every four seconds. The operators interact with the EMS and the video image via their VAXstations in front of the video wall.

It is the high performance nature of the graphic modeling system that allows for such a large amount of graphical data to be displayed in a single X-Window. In addition, multiple instances of the same application can be run off of the same workstation. This allows multiple users access to the same workstation with excellent performance benchmarks.

Operators can now easily share information and assist each other with troubleshooting. An operator on one end of the screen (dynamic mapboard) can open a detail window, ask a colleague at the other end of the room to review the data, and then use a mouse to drag the window to the opposite end of the display. The mapboard application can also be displayed across the network onto an inexpensive PC-based X-Server.

On each operator's VAXstation there is a small portal window. When the operator moves the cursor into the portal window, the mouse and keyboard input are directed to the workstation driving the projection screen. In addition to the graphic modeling system-based application, the VAX EMS displays are also integrated with the video display. Now the comparison of numeric data from the EMS to graphical state data is straightforward, requiring only mouse movements, resulting in better analysis by operators and smoother operations.

Graphical Drawing Editor Used to Build Realistic, Animated Display

Each of the 100,000 objects on the display was created using the graphic modeling system software's graphical drawing editor application. The graphical drawing editor was used to create each object, and also to specify its dynamic properties and link it to one of the 120,000 variables in the EMS database. All of the substation models were initially generated in software, automatically providing all needed attributes and database binding information.

The editor proved to be an easy-to-use, intuitive tool. A team consisting of an engineer, a CAD designer and an assistant built the entire display in less than four months. The short development cycle was possible due to the object-oriented approach of the graphical modeling system software. Once an object is created, it can be instanced as many times as needed, and each new instance retains all of the properties of the original. Also, complex portions of the display, such as substations, were generated by grouping collections of devices. Each device retains its individual dynamic properties, and properties can also be assigned to the group object. Finally, objects can be tested using simulated data without leaving the drawing tool, further cutting development time without sacrificing quality.

As soon as the video display was installed, CONVEX began plans to extend the application and capitalize on the flexibility of the graphic modeling system software. The first is the use of Motif widgets that can be seamlessly integrated with graphic modeling system generated displays. Beyond that, data visualization tools (graphs, charts and other trending information) are now being developed to provide the operator with a better way to analyze power flows over time. In addition, the graphic modeling system application will be fitted with artificial intelligence software that will suggest a course of action to the operator. The graphic-modeling-system-generated objects will display these suggestions in an animated format.

Author Bio

Mike Meagher is a software engineer for SL Corp., working in various engineering, training and support capacities. Prior experience includes time at NASA's Ames Research Center, concentrating on graphics applications for computational fluid dynamics.

Dispatch operators view a dynamic schematic of the CONVEX transmission system.

Open Systems: The Vendor's Perspective

By Allen R. Skopp, Bailey Network Management

Open system architecture has had a profound effect on all facets of utility automation. Now, for the first time, a mechanism exists that provides for the orderly migration of existing systems rather than completely replacing them every 10 to 15 years as has historically been the case. For most users, especially those electric utilities with complex and expensive supervisory control and data acquisition (SCADA)/energy management system (EMS) installations, the cost of purchasing new equipment has been dwarfed by the inherent labor and system downtime costs associated with a large-scale system replacement taking years to complete. By contrast, open systems allow users to make their own choice of hardware and software in order to serve the real-time information needs of the company rather than being forced into ever larger and more costly replacement programs.

Benefits vs. Challenges

The benefits of open systems are well understood by most users. But, what about the many challenges that open systems present to the supplier community? The ability to operate on a variety of hardware platforms, interoperability, adherence to accepted standards, the ability to incorporate a wide range of third-party software, and upward performance mobility–though clearly advantageous to the user–present significant challenges to the system vendor.

What It Is(n't)

So what is an open system? Some people think that a system is open if it is merely UNIX based. Although UNIX is certainly a software environment that is conducive to open system architecture, “openness” is not implicitly guaranteed. Likewise, a system can be POSIX compliant and/or offer open interoperability and still not be an open system. Therefore, let us begin with a working definition for what open really means from a functional standpoint.

Open architecture system–A system that can be migrated, offers open interoperability, adheres to applicable standards, consists of a configuration of hardware and software elements available from multiple sources and that can be integrated, upgraded or replaced without any undue legal or technical constraints.

Fortunately (for the industry) several of the key automation system suppliers now offer systems that generally meet this definition–at least to the extent that is important to the end users. But "open" is much more than just a definition. Indeed, it implies a comprehensive business philosophy that must permeate all levels of a company and dictates an entirely different way of doing business. Moreover, if management doesn't understand this broader meaning–and support it wholeheartedly–a truly open solution will not likely be realized.

An “Open” Commitment

Both philosophically and practically, this is no small commitment. What it means is that suppliers must be willing to remove all of the traditional barriers to true openness. They must be willing to share application program interfaces (API) with other software suppliers, allow other suppliers to put "not invented here" applications on their platform, provide source code, eliminate proprietary protocols and avoid proprietary hardware, and perhaps use another supplier's–possibly a competitor's–software package if it is truly a better fit for the intended application.

Obviously, a fully open posture puts the system supplier at some risk but, like any other business risk, the benefit to the customer must be a primary consideration. In fact, most suppliers don't really have a choice. Companies will survive and grow only when their goods and services are provided in a way that directly benefits their intended customers. In order to maintain a viable operation, some suppliers will have to accept the fact that significant restructuring may be necessary in order to preserve that objective. Companies that insist on operating in today's environment the same way they did 25 years ago will likely find themselves badly out of step with their customers' cost/benefit analyses.

In order to more fully appreciate the impact on suppliers it is helpful to revisit circumstances during the late 1960s when computer-based SCADA master stations (now we call them host computers) first began to appear. Remember, in those days most suppliers were “vertically integrated” with some companies even building their own computers. Programming was in assembly language and almost all vendors made their own remote terminal units (RTU). Consider the variety of expertise and labor intensity required to support all of those activities!

Now, jump ahead to the mid-1990s, to modern open systems with increased performance and greatly enhanced operator-tunable functionality utilizing standard operating systems, high-level programming languages, advanced software tools, standard hardware and RTUs available from multiple sources. Besides the fact that the need for personnel has been reduced, the average selling price of systems (i.e., with equivalent performance characteristics) is drastically lower now than it was then and delivery times are shorter. It is no wonder then that suppliers must restructure in order to remain competitive.

Open Support

To some, open also means that the system can operate on several computer platforms. Yet even when this portability dimension is not a major development task, the underlying challenge for the system supplier is one of support since the supplier will need specially trained people for each type of computer platform that is placed in service. There are also other considerations: proposal preparation, marketing, training, sales development, volume commitments and documentation. These are all issues that must be dealt with and that come with a price tag for the vendor–in some cases, a big one. And, although the vendor may still have a favored computer platform, the ability to support other platforms must also be preserved.

Proprietary Hardware

Proprietary hardware is another taboo for open systems since only the vendor can provide spare parts and effective service. By contrast, open system guidelines dictate that any hardware used should be available from multiple vendors. Again, the benefits to the user are significant. Open system vendors must be willing to provide standard products that can easily be maintained by other organizations, not only by the system supplier's own staff. Systems that offer a hardware configuration with all elements available from multiple sources offer a distinct advantage to the user. The user is not only free to choose the type of maintenance support activity that is most cost effective and compatible with specific needs (e.g., the system vendor, third-party service provider or in-house staff), but it is also possible that the same equipment/components may be used in other parts of their facilities as well.

Open Solutions

Open systems also allow third-party solutions to be integrated into the system, not only as an application, but also as a fundamental part of the supplier's native system offering. This may be a quick way for vendors to acquire needed functionality with minimum risk. But, although there are some clear advantages to this approach–and it may in fact be the best solution–it also presents still more challenges to the vendor. A supplier's decision about whether to make or buy the user interface provides an insightful example.

In the early 1980s when full graphics became a requirement for EMSs, most vendors chose to design and build their own graphical user interface (GUI) since there were few viable off-the-shelf alternatives. But, over the course of several years some commercially available GUIs were finally developed. The feature content of these standard offerings was very robust and allowed faster development time, albeit at a price. However, many companies found that the cost of the modifications necessary to meet the needs of SCADA/EMS systems was quite significant.

Substantial additional costs were also incurred when the GUI supplier introduced new releases or revisions of the product that contained new features that the system vendor wanted to include. Moreover, some GUI suppliers did not have adequate documentation to facilitate easily making modifications. This meant that every release required a major software effort by the system supplier. And finally, GUI suppliers usually develop products on some native platform and then “port” the product to the other platforms, thus preventing the system supplier from offering consistent features until such time as all platforms offered are supported by the enhanced GUI product.

Third-party Software

Virtually all open systems contain several purchased software packages. The benefit to the user (and to the vendor) gained by using third-party software is one of the important strengths of an open system. But here too, there are underlying challenges for the system supplier.

The first of these deals with software configuration control. This problem–i.e., keeping track of the latest software revision–applies to both internal and third-party software, and its difficulty grows with the effort of incorporating each revision and the number of systems deployed. Then, at some point, the third-party software supplier will invariably discontinue support for older revisions, necessitating that the software be brought up to a currently supported release.

Software license management can also present problems for the open system supplier. The various third-party software licenses that exist in an open system are customarily passed on to the user. It is, however, the implicit obligation of the system supplier to provide some type of license tracking system to the user. A properly designed tracking system can prevent operational interruptions due to possible license expirations or even litigation stemming from inappropriate use of licensed software.
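
A license tracking facility of the kind mentioned can be as simple as comparing expiration dates against a warning horizon. The sketch below is a hypothetical illustration; the package names and dates are placeholders, not any supplier's actual license manager.

```python
# Minimal license-expiration tracking sketch; package names and dates are
# hypothetical, not any vendor's actual license manager.
from datetime import date, timedelta

licenses = {
    "relational DB runtime": date(1996, 9, 30),
    "GUI toolkit":           date(1996, 6, 15),
    "compiler":              date(1997, 1, 1),
}

def expiring_soon(licenses, today, warn_days=90):
    """Return packages whose licenses lapse within the warning window."""
    horizon = today + timedelta(days=warn_days)
    return [name for name, expires in licenses.items() if expires <= horizon]

print(expiring_soon(licenses, today=date(1996, 5, 1)))
# -> ['GUI toolkit'] : flag for renewal before operations are interrupted
```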

Another fundamental benefit of an open system is the ability to easily add third-party application programs. The user should have the freedom to integrate these applications using the third-party supplier, an independent contractor, the system vendor or internal staff as needs and preferences dictate. In order to do this, the system vendor must provide suitable APIs and make them available for unrestricted use on the system. This can–and often does–mean sharing information with current or potential competitors, and again, this is a risk that the system vendor must be prepared to take along with supporting the add-on application software in a way that allows the customer to realize the full potential of the system.

Standards

Standards are rudimentary to open systems, but strict adherence to standards inherently limits system performance and flexibility in practically all cases. Open system vendors are routinely faced with the challenge of designing and providing a system that not only meets the necessary performance criteria but that does so while adhering to a constantly changing set of hardware, software, communications and other relevant standards. “Where do these standards come from?” you might wonder. They are a result of people working on a multitude of technical society committees, councils and working groups. The challenge to the vendor is to commit the time, money and resources needed to support these standardization efforts.

Buyer-Seller Partnerships

In order for a partnership between a buyer and a seller to be of real value, the seller must provide a system that is truly open. This means that the system must be designed in such a way that anyone–whether the seller, another vendor or the user–can do the necessary system support/migration. This type of partnership continues because the vendor provides exemplary goods and/or services, not because the vendor has a technical (or legal) "lock" on the user. If the latter is true, it probably means that the vendor didn't have an open system in the first place. A partnership must be based on mutual trust, and the vendor must always have customer benefit as the foremost objective.

Use of PCs

The proliferation of personal computers (PCs) has put a great deal of pressure on system suppliers in several ways. The widespread use of PCs has generally heightened performance expectations, especially by the operating staff, for improved user interface methods, data access, data presentation and so forth. The PC-literate user rightfully believes that a system that costs several million dollars should at least be able to do what a few-thousand-dollar PC can do. Although this is not as much an open system issue as it is a '90s issue, it presents yet another challenge that system suppliers must face.

A more directly related PC challenge is brought about by the proliferation of data interfaces to the real-time data in the system. The system must be able to serve hundreds, or perhaps even thousands, of users without reduced performance. Despite the ease with which a PC can be connected to almost any open system, the overhead burden must still be taken into account and properly serviced. Many users fail to appreciate the rising burden that such additions place on their system until a crisis results from the aggregated drain on system resources.

Open Access

Open systems usually allow corporate access to the real-time data collected by the system. While this greatly enhances the value of the system for its users, it creates situations that were not typically encountered in older systems. For example, as the number of users on the system increases, system problems become general knowledge very quickly, sometimes causing a potential panic or crisis reaction when even minor problems occur. Widespread access to the system may also present a security problem. Again, responsibility for providing a design that protects the integrity of the system often falls on the shoulders of the system supplier.

System Migration

When making changes to a system, the most expedient way (at least from the vendor's perspective) would be to ignore the past and just design a new system. However, this obviously would be detrimental to existing customers and would certainly violate the spirit of open concepts. Today's utilities cannot afford, nor will they tolerate, so-called "fork lift" replacements of their now considerable investments in automation equipment. Instead, systems must have the capability to be upgraded gradually so that the additional investment is incremental and the system is essentially "always new."

This capability to migrate the system presents a challenge to the system vendor in that the user must be free to choose which portion(s) of the system to upgrade and when. But, since it is inevitable that each user will pick and choose differently, it is quite likely that each system in the field will eventually, if not initially, become unique. Yet despite the differences, each individual system element, irrespective of vintage, must be capable of being upgraded.

Conclusion

System vendors must recognize that their primary challenge is to understand how open systems have changed the way business is being done. System suppliers must adopt an open attitude if they are going to provide truly open system solutions that allow multi-vendor support, easy integration of third-party hardware and software and upward mobility.

Author Bio

Allen Skopp is vice president of sales and marketing for Bailey Network Management, Sugar Land, Texas.

North Jersey Develops Computerized Reservoir Management Program

By Donald Distante, Lawler Matusky & Skelly Engineers, William Goble, North Jersey District Water Supply Commission, and Pen Tao, United Water New Jersey

The North Jersey District Water Supply Commission (NJDWSC) operates two reservoirs, as well as two pump stations, that are used to ensure sufficient supplies of water to customers in northern New Jersey. The Wanaque and Monksville reservoirs have storage capacities of 29.6 and 7 billion gallons, respectively. The Ramapo pump station has four 37.5-MGD pumps that can deliver up to 150 MGD, and the Two Bridges pump station uses up to five 50-MGD pumps that can deliver up to 250 MGD. An inter-watershed connection exists to supply water from these systems in the Passaic watershed to the Oradell reservoir, which is located in the Hackensack River watershed. United Water New Jersey (UWNJ) operates four reservoirs in the Hackensack River watershed and uses this interconnection to supplement its supplies.

Both NJDWSC and UWNJ saw the need to develop a computerized management tool to help minimize the costs of operating the two pump stations, project future supplies during drought conditions, and provide a means to evaluate safe yield given the various operating scenarios for pumping. They envisioned a user-friendly computer program that could provide graphic results that could be presented to regulatory agencies, the public and in-house staff to show and discuss projected supplies, anticipated pumping costs and the effects of various pumping alternatives on cost and supply.

To meet this need, NJDWSC and UWNJ retained Lawler, Matusky & Skelly Engineers (LMS) to develop the required model. LMS developed the model using the database and programming software Paradox for Windows and named it the Wanaque South Management Program, or WSMP. The following are summaries of WSMP's primary features.

Data Management: The program provides a simple means to access, update and evaluate historical data. Data incorporated into the WSMP includes the daily reconstituted flows (i.e., natural flows) from 1919 to 1993, monthly rainfall data, reservoir stage-volume curves, reservoir rule curves (i.e., monthly reservoir storage objectives below which pumping may occur), system water use demands and electrical energy costs. Daily reconstituted flows from 1919 to 1993 form the basis for performing reservoir and pump station simulations. Daily data were used because minimum reservoir release requirements and minimum passing flow requirements at the two pump stations are specified by the New Jersey Department of Environmental Protection (NJDEP) as daily flows. A statistical distribution that fits the distribution of the observed data is used so that drought recurrences beyond those represented by the data can be used to simulate future supplies. An extreme value distribution commonly known as the Gumbel distribution was used.
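
For reference, the Gumbel (extreme value Type I) distribution mentioned above has the cumulative form shown below. The recurrence-interval relations given are the standard textbook ones; the article does not state how WSMP parameterizes its fit, so treat this as background rather than the program's exact formulation.

```latex
% Gumbel (extreme value Type I) CDF with location \mu and scale \beta:
F(x) = \exp\!\left[-\exp\!\left(-\frac{x-\mu}{\beta}\right)\right]
% Standard recurrence-interval relations (not necessarily WSMP's exact form):
% exceedance of a high flow x,
T_{\mathrm{exceed}}(x) = \frac{1}{1 - F(x)};
% a low flow (drought) at or below x,
T_{\mathrm{drought}}(x) = \frac{1}{F(x)}.
```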

Drought Prediction: By entering current monthly rainfall and runoff data, the user can ascertain the drought recurrence interval up to the present period by considering single or cumulative periods (e.g., November or June through November). A rainfall/runoff forecast module is also included that uses historical trends to estimate the next month's rainfall and runoff.

Reservoir Storage Simulation: Reservoir supplies can be simulated for up to 12 months ahead of the current month. The user chooses daily flows based on either an anticipated recurrence interval (e.g., 1/100 years) or a historical period (e.g., 1963) as direct inputs to the reservoirs and as flows at the two pump stations. System drafts, initial storage, sewage treatment plant contributions and pumping alternatives are chosen and the simulation model executed.
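
A daily reservoir storage simulation of this kind reduces to a mass balance: storage changes each day by inflow plus pumped water, minus system draft and required release, bounded by empty and full. The sketch below is a minimal illustration under assumed figures, not the WSMP code; only the 29,600-MG Wanaque capacity and the 150-MGD Ramapo pumping limit come from the article.

```python
# Minimal daily reservoir mass-balance sketch (not the WSMP code).
# All names and example figures other than the Wanaque capacity and
# Ramapo pumping limit are assumed placeholders.

def simulate_storage(initial_mg, inflows_mgd, draft_mgd, release_mgd,
                     pumped_mgd, capacity_mg):
    """Return daily storage (million gallons) over the simulation horizon."""
    storage = initial_mg
    history = []
    for day, inflow in enumerate(inflows_mgd):
        storage += inflow + pumped_mgd[day] - draft_mgd - release_mgd
        storage = min(max(storage, 0.0), capacity_mg)  # spill / empty bounds
        history.append(storage)
    return history

# Example: 30 days of a hypothetical drought inflow with steady pumping.
days = 30
storage = simulate_storage(
    initial_mg=25_000,          # assumed starting storage, MG
    inflows_mgd=[40.0] * days,  # assumed drought inflow, MGD
    draft_mgd=170.0,            # assumed system demand, MGD
    release_mgd=10.0,           # assumed minimum downstream release, MGD
    pumped_mgd=[150.0] * days,  # Ramapo station at its 150-MGD capacity
    capacity_mg=29_600,         # Wanaque capacity from the article
)
print(f"Storage after {days} days: {storage[-1]:,.0f} MG")
```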

Pumping Costs: Two levels of electrical costs are included: a rough estimate based on historical costs per million gallons pumped for each pump station and a refined costing module that incorporates the hourly changes in electrical charges (e.g., off-peak, intermediate peak and on-peak), as well as the pump rating curves, which are a function of the reservoir elevation and the interaction between pumps from the two pump stations, which share a common connection.
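
The refined costing module described above layers hourly time-of-use energy prices onto the pump schedule. The sketch below shows only the shape of such a calculation; the rates and pump rating are entirely hypothetical, since the article gives neither.

```python
# Hypothetical time-of-use pumping cost sketch; rates and pump data are assumed.
TOU_RATE_PER_KWH = {"off_peak": 0.04, "intermediate": 0.07, "on_peak": 0.11}

def daily_pumping_cost(hours_by_period, pump_kw):
    """Energy cost of running one pump for the given hours in each rate period."""
    return sum(hours * pump_kw * TOU_RATE_PER_KWH[period]
               for period, hours in hours_by_period.items())

# Run a single (assumed) 1,500-kW pump mostly during off-peak hours.
cost = daily_pumping_cost({"off_peak": 10, "intermediate": 4, "on_peak": 0},
                          pump_kw=1_500)
print(f"Estimated daily energy cost: ${cost:,.0f}")
```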

Case Study

As of the end of September 1995, the Northeast was in a severe drought and NJDEP had declared a drought emergency. NJDWSC and UWNJ used WSMP to simulate possible future supplies. Runoff data from August 1995 back through January 1995 indicated that the area could be headed for the worst drought on record. The runoff was the lowest on record.

Based on the assessment it was considered reasonable to simulate future supplies using a 12-month, 1/100-year drought running from Sept. 1 through Aug. 31. Using all available pumping and drafts equal to the average draft over the last five years, a simulation of possible supplies was run. This simulation took about one minute to set up and about 30 seconds to run on a 100-MHz Pentium PC. The simulation showed water supplies under the 1/100-year assumption remaining above the drought warning line. Planning on the part of NJDWSC and the availability of pumping to replenish supplies resulted in the Wanaque Reservoir being almost full at the beginning of June. In July and August, pumping is not allowed due to water quality and ecological considerations, but the September starting reservoir volume was a result of this planning. As a result, even with the projection of a serious drought, NJDWSC projected that it could maintain the reservoir supply above the drought warning line, assuming that a 1/100-year drought occurred for the subsequent 12-month period.

Recently, there has been substantial rainfall in northern New Jersey and the drought emergency has been lifted. The ability of WSMP to simulate storage under these new conditions resulted in a cost-effective decision to reduce pumping.

In the months of September and October 1995, rainfall and runoff increased substantially (12.8 inches of rain fell at the Wanaque reservoir gauge during the period). WSMP was used to evaluate how much pumping would be necessary to ensure that the reservoirs are full in spring 1996 (as a general rule, NJDWSC tries to have full reservoirs going into the summer and early fall dry seasons) assuming that future flows represent a 1/10-year recurrence. Several pumping alternatives were tested by simulating reservoir storage from Nov. 1, 1995 to Oct. 31, 1996. The following table summarizes some of the key results of these model runs:

As indicated, alternative 3, Only Ramapo Pumps Available, meets the storage objective and would save approximately $206,000 compared to alternative 1, Pumping From Both Stations, which also meets the objective. These estimated costs are based on average costs per MG pumped and are used for general planning. WSMP also includes more refined projected pumping cost calculations which are intended for one-month-ahead cost planning. This example demonstrates how WSMP is used to minimize the cost of pumping and still meet storage objectives. This process of using WSMP to evaluate future pumping needs is continually updated to reflect the previous month's rainfall and runoff patterns as well as actual reservoir storage. If actual rainfall is lower than anticipated and storage falls below the reservoir rule curves, then a re-run of WSMP would indicate the increased pumpage required to meet storage objectives.

LMS has made several real-time presentations of WSMP by projecting the computer screen images to a larger projection screen. The program has been very well received and one of its key assets is that it can immediately involve the audience. For example, the audience can ask such “what-if” type questions as, “What happens if only three pumps are operated at the Two Bridges Pump Station or if the system drafts are increased?” With a couple clicks of the mouse, the answers are displayed to the audience. This provides a powerful way to present results and to make persuasive cases.

The development of WSMP has provided LMS with a general framework that can readily be applied to other watersheds. A very important element in the development of WSMP was the input received from NJDWSC and UWNJ, who have the knowledge essential to successfully operate their systems. Their practical input was incorporated into the WSMP code, and this type of input is considered crucial for application to any watershed.

The Semantics of Not Putting the Ferrari in the Back of the Truck

By Lee Margaret Ayers, Oil Systems Inc.

As of late, there has been much talk about moving supervisory control and data acquisition (SCADA) and automatic meter reading (AMR) data out into the corporation. Utilities are experimenting with many different options to do this. Some options include writing a subset of the data to a relational database, routing SCADA data directly through the WAN, or creating an application/database to temporarily store and utilize the data for a specific application. These solutions are often limited in functionality and are cumbersome to use. In the long haul, home-grown systems tend to perpetuate the nightmare of islands of automation within the corporation. What utilities and their research organizations do not realize is that there is already a robust technology that complies with standards. This technology will also exceed the results they wish to achieve a hundredfold, for less cost and in less time. This technology is known as a "Process Monitor," or what I have come to call a "Real-Time Data Historian (RTDH)."

So what is a RTDH? The historian is basically a flight recorder for real-time or temporal data. It archives data, as well as serves out real-time data to the corporation. Data can be accessed from some of these systems in a variety of ways. An application programmer's interface (API) allows mission-critical applications to be developed using real-time data; compliance with ODBC allows the archive to respond to structured query language (SQL) queries; and client applications that are OLE compliant can be very powerful tools for the corporation. Ad hoc trending and calculations within desktop applications, such as Excel, can serve as complete applications for engineers designing new analyses.
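
The ODBC/SQL access path can be illustrated generically. In the sketch below, the data source name, table and column names are hypothetical placeholders rather than any historian vendor's actual schema; the pyodbc package is used simply as a common Python ODBC client.

```python
# Generic ODBC query against a real-time historian. The DSN, table and
# column names are hypothetical placeholders, not any vendor's schema.
import pyodbc

conn = pyodbc.connect("DSN=historian")    # assumed ODBC data source name
cursor = conn.cursor()
cursor.execute(
    "SELECT timestamp, value "
    "FROM point_archive "                  # hypothetical archive table
    "WHERE tag = ? AND timestamp >= ? "
    "ORDER BY timestamp",
    ("FEEDER_12_LOAD_MW", "1996-01-01 00:00:00"),
)
for timestamp, value in cursor.fetchall():
    print(timestamp, value)
conn.close()
```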

There are many business reasons why utilities invest resources to integrate their SCADA/AMR data into the corporation; the primary reason is the market. As corporations become more conscious of costs and competition, they will begin to use the data related to their history of operation more strategically–to monitor performance and minimize maintenance; to monitor usage and develop time-of-use pricing; to simultaneously monitor the usage and evaluate the market, and then decide whether or not to shed load for resale. Other industries have been optimizing the business-engineering process for years. Millions of dollars have been saved in the pulp and paper, oil and gas, refining and petrochemical industries because the history of operation could be analyzed, the margins of operations understood and the processes optimized.

SCADA and AMR systems are not alone in the menagerie of time-based databases that exist or are being created daily. Other types of systems on-line may include, but are not limited to: smart relays, remote terminal units, programmable logic controllers, substation equipment, distribution automation, large industrial meters, etc. I have coined these "temporal" databases. Unlike corporate asset and business data, which are generally transaction-based, temporal data is a continuous stream of information related to the plant or asset; it is simply a measurement and time-stamp. Information at the asset and business levels changes on a daily to yearly basis. Information at the temporal level changes on a sub-second to sub-minute basis. While there is a convenience in having all data in a single vessel, thereby reducing redundancy and the overhead of managing another database, what has been overlooked is the nature of temporal data and how it fits into systems designed for transactions. I liken it to the difference between a Ferrari and an 18-wheeler.

If you were in a race from New York to San Francisco and were given a choice of a fire-red Ferrari or a smoke-black, fully stocked 18-wheel tractor trailer, which would you choose? The Ferrari reaches speeds in excess of 170 mph. The 18-wheeler will certainly go 100 mph on the "straights." If you pick the 18-wheeler, you must also deliver and pick up many postal packages along the way. The Ferrari seems like the obvious choice in a race. Would it make sense to pick both vehicles and put the Ferrari in the back of the 18-wheeler? It would if we were not planning to drive the Ferrari, and we were in a postal race–that way the Ferrari would be there if we ever wanted to drive it. Loading and unloading it would be a bit cumbersome, but we could do it.

In the example above, the Ferrari represents a temporal database (SCADA possibly) and the 18-wheeler represents a relational database (RDB). The decision about which vehicle to choose seems so apparent, yet in the utility world, systems are being designed with a Ferrari in the back of the 18-wheeler. This is because those fine moving vans are a corporate standard. For transaction-based management of the asset, RDBs work perfectly well. But for time-based analysis of operational information, such as producing a trend for a piece of equipment over a two-year period, access and retrieval of data is much slower in a RDB. In addition to speed and performance issues, there are storage issues related to the RDB. Because the quantity of information from temporal systems is so great, some of the information must be dropped in order to store it in the RDB. This would be like arbitrarily throwing away some of the packages that need to be delivered–not knowing which packages contained a check and which simply contained a greeting. If we were monitoring a piece of equipment at a two-second scan rate and storing one piece of information into the RDB every 15 minutes, this would be equivalent to throwing out 449 pieces of mail for every one piece that was delivered. The post office would not do that. Why do we do that as utilities? Because we have yet to understand the value of the data we spend so much time and money to collect.

An RTDH is a cross-breed of the Ferrari and the 18-wheeler. A Ferrari engine is in the front and a minivan is on the back. This high-powered vehicle has special equipment to shrink the packages to one-ninth of their original size–so more packages can be stored. There is also another special device that allows the packages to be scanned for importance, leaving all the possible checks and expensive packages to be delivered. Out of 450 pieces of mail, it is likely that there will be anywhere from five to 15 valuable pieces of mail, for an average of 10 pieces. A machine scans each package for value. The resulting 10 pieces of valuable mail will take roughly the same space in the Ferrari/minivan as one piece of mail in the Ferrari/18-wheeler scenario–only that one piece of mail in the Ferrari/18-wheeler is not likely to contain a check. In RTDH terms, data are reduced and compressed for efficient storage, access and retrieval. The difference in access speeds between a RDB and a RTDH is several orders of magnitude. The data are stored in a way that the value of the data is maintained. This will be important for the cost-conscious, competitive utility.
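
The "scan each package for value" step corresponds to what process historians commonly call exception or deadband filtering: a new sample is archived only when it moves more than a configured band away from the last value kept. The sketch below illustrates the general idea only; it is not any particular product's algorithm (commercial historians typically add more sophisticated swinging-door compression on top of this).

```python
# Generic exception (deadband) filter sketch; not any vendor's actual algorithm.
def exception_filter(samples, deadband):
    """Keep only samples that move more than `deadband` from the last kept value."""
    kept = []
    last = None
    for timestamp, value in samples:
        if last is None or abs(value - last) > deadband:
            kept.append((timestamp, value))
            last = value
    return kept

# A 2-second scan over 15 minutes of a mostly flat signal: 450 raw samples,
# but only the few meaningful excursions are stored.
raw = [(t, 100.0 + (0.5 if 200 <= t < 220 else 0.0)) for t in range(0, 900, 2)]
stored = exception_filter(raw, deadband=0.25)
print(f"{len(raw)} raw samples -> {len(stored)} stored")   # 450 -> 3
```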

If 20 engineers were asked to compute the load for 24 hours, or the peak demand for a day in the future, there would likely be 20 different answers. Imagine all the current projects within a single corporation that have some aspect of temporal data associated with them. What data will be used as a common input to a myriad of models? If different data sets are used, what is the value of the result of one model compared to another?

To be a competitive entity, the corporation will have to find cost-effective business models and applications where the margins of operation are consistent and well understood. As utilities move more toward a competitive arena, knowing the exact margins of operation will be critical for many business applications, including the buying and selling of power. The RTDH will be a key component of most models.

So what is meant by understanding the "margins of operations?" Much like the earlier process control systems, current methods of load management resemble a crude management of setpoint values. A setpoint is a desired value that has been determined to produce an optimal outcome–usually cost savings or revenue production. Advanced control in the process industry represents many optimization loops where the setpoint values for each process are determined through trial and analysis. While generation is easily considered a process, transmission and especially distribution have been overlooked in this regard.

Let's look at an example. Load management and control have largely occurred with historical loading information from the substations. However, the load distribution is generally unknown. In some locations, previously negotiated contracts allow the utility to interrupt the service to customer locations when a peak load is anticipated. When the need arises, a customer's power is cycled off. The load information used to forecast the usage is typically crude, and the customers that are cycled off are cycled off whether their portion of the network is overloaded or not. As crude as it may seem, this represents the first step in optimizing the process of load reduction. The desired load represents the initial setpoint value in load control systems–maintaining the load at that setpoint value has financial benefits. After that, there are many more levels of optimization which can occur through advanced control–again like process-oriented systems.
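
In control terms, the interruptible-load scheme just described is a setpoint comparison: measured or forecast load is compared against a target, and interruptible customer blocks are cycled off until the excess is covered. The sketch below is a minimal, hypothetical illustration; the block names and megawatt figures are assumptions, not data from the article.

```python
# Minimal setpoint-based load-shed sketch; all figures are hypothetical.
def select_loads_to_cycle(system_load_mw, setpoint_mw, interruptible_blocks):
    """Pick interruptible customer blocks (largest first) to cover the excess."""
    excess = system_load_mw - setpoint_mw
    selected = []
    for name, block_mw in sorted(interruptible_blocks.items(),
                                 key=lambda kv: kv[1], reverse=True):
        if excess <= 0:
            break
        selected.append(name)
        excess -= block_mw
    return selected

blocks = {"Industrial A": 12.0, "Industrial B": 8.0, "Commercial C": 5.0}
print(select_loads_to_cycle(system_load_mw=1_020, setpoint_mw=1_000,
                            interruptible_blocks=blocks))
# -> ['Industrial A', 'Industrial B']
```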

To move to a more competitive stance of operation, or active load control, would require managing loads beyond emergencies. Response to emergencies, such as shaving the peak load to prevent it from exceeding capacity, produces a cost savings, but additional savings and opportunities are available. What if the algorithm for shaving the load could be improved by one-half percent or more? Half a percent over time would be beneficial if more power were available to be sold. What other algorithms could be added to this model to reach a one-percent or greater improvement? The following components of load management have their own setpoint values associated with a rate of return:

Equipment optimization for a reduction in equipment failure;

Equipment optimization for load losses;

More efficient peak shaving practices;

More accurate forecasting models for existing and future construction;

More accurate forecasting models for power purchasing;

Real-time, time series forecasting models for load reduction;

Minimize revenues lost by cycling off customers in areas of the network that need load reduction;

Load shedding for sale on the secondary market; and

Guarantee a fixed level of service for a specified rate.

Most people who are making decisions about the design and usage of temporal data will often consider their application alone. An upgrade to the system may be to have three months of data on-line instead of one. A company I know originally had six months of data on-line instead of one. Through trial and error it realized that seven years of data on-line better met its needs. The tendency in the industry is to design "better than what we already are"–a "click" to replace a function key, vector graphics instead of character graphics, three months of data on-line rather than one. It is not that these kinds of upgrades are not valuable, because they are. It is that it is much different designing for the organization you want to become rather than designing for the organization you already are. Most systems implemented today are outdated shortly after implementation.

Current technological advancements in data collection and dispersal to the desktop will allow engineers to completely reshape the way current analysis is performed. Managing load (and other applications) will no longer be an engineering function, but a rigorous business decision. Maintaining the network will evolve into the management of the network. The RTDH will play a key role in getting data to the desktop and as input for business-engineering applications. With all this said, remember what is most important–"the check's in the mail."

Author Bio

Lee Margaret Ayers is a T&D systems architect for Oil Systems Inc., acting as a consultant and educator to the utility industry on real-time historian technology.