By Prithpal Khajuria and Mike Smith, Contributors
In the USA, there are approximately 70,000 substations (as per the US DOE). SCADA (Supervisory Control and Data Acquisition) systems that are common today have been around – at least, in a basic format – since the 1960s.
Ruggedized microprocessor-based RTUs (Remote Terminal Units) entered the market in the 1970s, primarily tied into switches and breakers. The 1980s saw the ruggedized microprocessor applied to these and other substation equipment such as protective relays, meters, and other devices within the substation fence. The 1990s saw the introduction of IEDs (Intelligent Electronic Devices), which became the de facto name for any equipment in the substation with a microprocessor and a communications port; these were typically interfaced directly to an RTU.
More recent years have seen several other developments that have created what is today an opportunity to leverage technology to bring substation control and monitoring into the 21st century. First among these is the ever-decreasing price and size of computing power. This alone creates opportunities to push more computing power and intelligence to the edge of the grid, in this case into the substation. Second is the development and widespread installation of the smart grid infrastructure. This robust two-way communications infrastructure paired with the availability of relatively low-cost computing power is driving new innovations across the grid and inside of substations.
At the same time, new dynamics have been introduced into the grid that are driving a need for more intelligence across it. The introduction of DERs (Distributed Energy Resources), EVs (Electric Vehicles), storage, and deeper predictive analytics capabilities all combine to create an opportunity for utility leaders to drive computing and intelligence to the edge of the grid. Think for a moment about the grid implications of this new operating paradigm: demand and generation imbalances, dynamic power grid overloading, increasing grid imbalance-driven outages, grid equipment failure, and power quality issues, to name a few.
Enter the virtualized world
In the computing world, virtualization is becoming increasingly common. Examples include virtual versions of a device or resource, such as a server, storage device, network, or even an operating system, where a framework divides the resource into one or more execution environments. We are now seeing how virtualization can be applied to the grid.
Substation Control & Monitoring System (SCMS) applications and functions that would be well served in a virtualized environment include:
- Automatic Voltage Control
- Tap Position Monitoring
- Load & Bus Transfer
- Load Curtailment
- Capacitor Control Algorithm
- Substation Maintenance Mode
- Sequence of Event Recorder
- Fault Detection & Event Recorder
- RTU and HMI functions
- Firewall, Anomaly Detection and Security
- Analytics & Asset Monitoring
- Business Intelligence and Autonomous Control
- Database/Data Historian functions
- Protocol Translation @ Edge
- Thermal Scanning of Assets
- PMU (Phasor Measurement Unit) applications
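To make one of these functions concrete, a sequence-of-events recorder, one of the listed SCMS applications, can be sketched as a small program running in a virtual machine or container. This is a minimal illustration only; the point names (`BKR-101`, `RLY-7`) and event values are hypothetical, and a production recorder would add time-sync (e.g., from a GPS clock) and persistent storage.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Event:
    timestamp: datetime
    point: str   # hypothetical identifier for a breaker, relay, etc.
    value: str   # e.g. "TRIP", "OPEN", "CLOSED"

@dataclass
class SOERecorder:
    """Minimal sequence-of-events recorder: timestamps state changes as they arrive."""
    events: list = field(default_factory=list)

    def record(self, point: str, value: str) -> None:
        # Stamp each state change with UTC time on arrival
        self.events.append(Event(datetime.now(timezone.utc), point, value))

    def log(self) -> list:
        # Report events in time order (stable sort preserves arrival order on ties)
        return sorted(self.events, key=lambda e: e.timestamp)

recorder = SOERecorder()
recorder.record("BKR-101", "TRIP")
recorder.record("RLY-7", "PICKUP")
for e in recorder.log():
    print(e.timestamp.isoformat(), e.point, e.value)
```

Because the recorder is just software, it can share a rugged server with the other listed functions instead of occupying its own dedicated box.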
Figure 1 below is a high-level view of how this virtualization is architected in a substation.
Figure 1. High-level Virtualized Substation Architecture
Substations today typically have multiple compute devices, each running a specific application, which drives significant cost to secure and manage those devices. Rugged servers certified to the IEC 61850-3/IEEE 1613 standards, typically deployed as a stack of three, can instead be used to virtualize these substation applications.
The benefits of this virtualized approach include aggregating the data from the various devices in the substation and normalizing it into a single data model that can be accessed by multiple applications. This approach also empowers utilities to extract additional information from the data using analytics for applications around power quality, asset management, and more, in addition to enabling faster, less expensive security and operations.
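The normalization step described above can be sketched in a few lines. The field names, point addresses, and scaling below are illustrative assumptions, not a real DNP3 or Modbus driver API; the idea is simply that protocol-specific readings are mapped into one common data model that any application can query.

```python
# Hypothetical raw readings as they might arrive from two different IED protocols.
dnp3_reading = {"index": 12, "val": 121350}                      # millivolts (assumed)
modbus_reading = {"register": 40021, "raw": 4813, "scale": 0.01} # amps x 100 (assumed)

def normalize_dnp3_voltage(r: dict) -> dict:
    # Map a protocol-specific record into the common data model
    return {"point": f"dnp3.{r['index']}", "quantity": "voltage",
            "value": r["val"] / 1000.0, "unit": "V"}

def normalize_modbus_current(r: dict) -> dict:
    return {"point": f"modbus.{r['register']}", "quantity": "current",
            "value": r["raw"] * r["scale"], "unit": "A"}

# The single, normalized data model shared by all applications
data_model = [normalize_dnp3_voltage(dnp3_reading),
              normalize_modbus_current(modbus_reading)]

# Any application can now query by quantity rather than by protocol:
voltages = [p for p in data_model if p["quantity"] == "voltage"]
print(voltages)  # one voltage point at 121.35 V
```

The design choice is that downstream applications depend only on the common model, so adding a new IED protocol means adding one translator, not touching every application.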
Not just a cool science project
The virtualized substation is not merely the cool science project of the year; it is a solution that is deployable today. The technology exists, the business functions are readily identified, and the benefits are many, including:
- Ease of updating on a standardized hardware platform: Imagine the ease of updating applications remotely
- Lower capital and O&M expenses: the entire equipment-planning paradigm changes and becomes less arduous
- Increased reliability: less equipment to fail, plus the use of standardized hardware
- Redundancy: easier to achieve in a virtualized environment
- Security: application of international security standards on standardized hardware
The virtualized substation is being piloted at two large US investor-owned utilities. While it is still early in the game for the virtualized substation concept, this is part of a logical evolution of how substations are managed as technology advances and needs change.
Figure 2. Substation Virtualization Evolution
As noted in Figure 2 above, substation virtualization will be evolving in the coming months and years; key trends enabling this evolution include:
- Aggregation of data from various IEDs into a single database, making it available to various applications via a common data bus
- Deploying analytics, typically at the edge, to extract more valuable information and better insights from the data
- Deploying deep learning technology to optimize the various control algorithms
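As a simple illustration of edge analytics of the kind listed above, the sketch below flags voltage samples that deviate sharply from the recent norm, a rough stand-in for detecting a sag or power quality event. The threshold and sample values are hypothetical; a real deployment would use utility-specific limits and far richer models.

```python
import statistics

def detect_anomalies(samples, threshold=3.0):
    """Return indices of samples more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # constant signal: nothing to flag
    return [i for i, v in enumerate(samples) if abs(v - mean) / stdev > threshold]

# Steady nominal voltage with one sag event (illustrative values)
voltages = [120.1, 119.9, 120.0, 120.2, 104.5, 120.0, 119.8]
print(detect_anomalies(voltages, threshold=2.0))  # -> [4], the sag
```

Running such checks on the substation's rugged servers means an event can be flagged locally, in near real time, rather than waiting for data to reach a back-office system.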
As managing the grid becomes more complex, having a secure, scalable platform that provides intelligence at the edge and across the enterprise will become increasingly critical. Virtualization is a big step towards enabling this more reliable, secure operating environment.
About the Authors
Prithpal Khajuria is global business lead, artificial intelligence, industrial automation and autonomous controls with Intel. Mike Smith is a principal industry consultant at SAS Institute.