OpenFMB Advances Interoperability Gains
Deep in the heart of Texas, there’s a microgrid powering a library on a military base. Connected to it, you’ll find several pieces of equipment that were never designed to talk or play nice together. And yet, they do, because the gear is linked via an open field message bus called OpenFMB™. It’s an architecture for connecting devices to facilitate real-time data exchange and interoperability. The technology was conceived at Duke Energy and developed through industry-staffed working groups assembled by the Smart Grid Interoperability Panel (SGIP), which recently merged with the Smart Electric Power Alliance (SEPA). The National Renewable Energy Laboratory (NREL) had a hand in development of the framework, too.
Following is a look at OpenFMB origins and how this solution is enabling grid-edge intelligence and control.
Initially the brainchild of engineers at Duke Energy, OpenFMB was expanded and developed by SEPA’s (formerly SGIP’s) OpenFMB Technical Working Group. The first OpenFMB reference implementation, which utilized the working group’s microgrid use cases as implemented at a Duke Energy test center, was demonstrated at the 2016 DistribuTECH Conference by Duke Energy’s Coalition of the Willing II (COW-II) vendor partners.
In March 2016, OpenFMB was ratified as a model business practice by the North American Energy Standards Board (NAESB). By November 2016, the OpenFMB Technical Working Group had launched an OpenFMB Collaboration Site, www.openfmb.io, and it is available to any industry player wanting to leverage OpenFMB open-sourced software and implementation tools for grid modernization efforts.
“What OpenFMB does is provide a framework with a set of common protocols, including DDS, MQTT and AMQP. That allows us to take different proprietary communication protocols and convert them into a selected common protocol, which for our implementation is DDS,” said James Boston, manager of market intelligence and grid modernization for CPS Energy. “As long as the software you’re running uses pub/sub (publish-and-subscribe) message structures based on CIM semantics, you can pull data from any of the sources.”
Boston explains that this allows utilities to do publish-and-subscribe data applications, where each device has a certain set of data and other devices can subscribe to obtain only the information they need. “That allows you to pick and choose the data you want, which lowers the amount of bandwidth you need,” he said.
OpenFMB also allows utilities to control devices that use different communications protocols at the grid edge.
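The publish-and-subscribe pattern Boston describes can be sketched with a minimal in-process message bus. This is only an illustration of the pattern itself; the topic names and payload fields below are hypothetical, not the actual OpenFMB/CIM message schema, and a real deployment would use a protocol such as DDS, MQTT, or AMQP rather than in-memory callbacks.

```python
from collections import defaultdict

class MessageBus:
    """Toy pub/sub bus illustrating the pattern, not OpenFMB itself."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A device registers interest only in the topics it needs --
        # this selectivity is what lowers bandwidth in a real system.
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # Publishers push data; only matching subscribers receive it.
        for callback in self._subscribers[topic]:
            callback(payload)

# Usage: a storage controller subscribes only to solar output readings
# and never sees unrelated load data published on the same bus.
bus = MessageBus()
readings = []
bus.subscribe("solar/output_kw", readings.append)
bus.publish("solar/output_kw", {"kw": 42.5})
bus.publish("load/demand_kw", {"kw": 18.0})  # no subscriber: ignored
print(readings)  # [{'kw': 42.5}]
```

The key design point is that publishers and subscribers never reference each other directly, which is what lets equipment from different vendors interoperate once their native protocols are converted to the common one.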
The Goal: INTEGRATE
While OpenFMB was a Duke Energy brainchild and developed through industry participants, Stuart Laval, technology development director at Duke Energy, said much of the work to get the framework operating in real life began with a 2014 U.S. Department of Energy (DOE) funding opportunity. His utility pursued DOE support along with the OMNETRIC Group, which is a joint venture between Siemens AG and Accenture.
According to Laval, the NREL lab had disparate distributed energy resources (DER) on site and sought a way to stitch those assets together easily.
“They asked for an open-source, interoperable platform” to support integration efforts, Laval recalls. INTEGRATE was the name of the project, and it was implemented with the National Renewable Energy Laboratory (NREL).
“INTEGRATE stands for Integrated Network Testbed for Energy Grid Research and Technology Experimentation,” said Andrew Hudgins, project leader at NREL’s Energy Systems Integration Facility (ESIF). “The overall objective for INTEGRATE is to enable clean energy technologies to increase the hosting capacity of the grid by providing grid services in a holistic manner using an open source or open-standard, interoperable platform,” he said. Hudgins adds that OpenFMB offers a “more plug-and-play approach” to connecting DER such as solar, wind, and battery and energy storage to the grid.
To validate the OpenFMB framework, OMNETRIC Group used it with the Siemens Microgrid Management System (MGMS). That, plus a 500-kW microgrid that used simulated wind and solar power, was set up at the ESIF. “We have a 1 MW grid simulator, PV and wind simulators, and we were able to utilize the solar profiles from the NREL campus for this project,” Hudgins said. Through its Research Electrical Distribution Bus (RedB), the ESIF can connect real hardware and equipment to a grid, PV and real-time digital simulators, replicating real-world grid conditions and the various devices connected to it.
“We wanted to show how OpenFMB can enable a microgrid management system to control decisions in real time,” Hudgins said. “What we learned in the ESIF helped inform Duke Energy and CPS Energy about how to structure and manage their microgrids.”
All and nothing
Duke Energy’s microgrid, which is located at the utility’s Mount Holly Research and Development Center, tested the ability of OpenFMB to make grid-edge control truly viable.
For the INTEGRATE project demonstration, the Duke Energy team didn’t export energy onto the grid, and they targeted net-zero consumption when the PV was producing during the day. Laval explains why: “OpenFMB enabled us to implement and integrate our microgrid so quickly, we outpaced our interconnection application process,” he said, adding that “it can take anywhere from 16 to 18 months to get approval to export energy.”
Unlike many microgrids, Duke Energy’s microgrid has no generation source with spinning mass to provide inertia, so if power is lost, it’s lost instantly. That meant this facility had to coordinate solar, storage plus a load bank to ensure no power crept back onto the grid when the solar generation was running full throttle during the test project.
Given these conditions, the Duke Energy team knew they didn’t have time to wait for data signals to travel to a centralized computer and then wait again for control signals to come back to that load bank or storage system. “Our system has layered intelligence doing distributed applications,” Laval explains. That is, Duke Energy delegated optimization routines to the field devices. They only used the microgrid management system to control the point of common coupling with the grid.
Duke Energy demonstrated that it could switch the battery energy storage system from current-source mode while grid-connected to voltage-source mode during a microgrid islanding event within a few cycles, enabling a seamless transition without the need to black start. The team also successfully managed its net-zero optimization routines with device-specific, grid-edge software.
Now, Duke Energy is working with SEPA to expand the original OpenFMB use cases. “We’re adding a new one called ‘DER circuit segment management,’” Laval said.
Putting that use case into action, the Mount Holly microgrid will soon help the utility regulate voltage and improve power quality on the local circuit.
CPS Energy’s microgrid is already feeding into the grid. “We have a 20-kW solar PV installation and a 75-kW battery energy storage system,” explained Boston. The microgrid can feed back into the CPS distribution system after power gets stepped up from 480 volts to 13,000 volts with a 500-kVA transformer.
What is unique about the CPS Energy microgrid is that it also has a meteorological station for weather observation as part of a solar- and load-forecasting research project being conducted by scholars at the University of Texas-San Antonio.
“They have a sky imaging camera out there. They found an inexpensive way to take pictures of the sky and apply some algorithms on that and begin to project what would be the effects of cloud cover on the solar generation,” Boston said. The forecasts cover three intervals: 15 minutes ahead, hour ahead and 24 hours ahead.
Boston said predictive capabilities are vitally important with a microgrid powered by renewables, an energy source that isn’t dispatchable. “If the sun is out, we’re producing energy at a high rate,” he explained. “When cloud cover comes in, we can have sporadic output. The better we get at forecasting the need, the better prepared we’ll be to support an irregular flow of energy.”
CPS Energy also coordinated its grid-edge devices using the microgrid management system with help from OpenFMB. “We were able to demonstrate with this project that not only were we reading data from different pieces of equipment using different communications protocols, we were also able to set control points from the microgrid management system to the battery,” Boston said.
At the same time, CPS Energy had grid-edge control active, as well. “We had apps built and put on devices that weren’t in our enterprise system but they were running on field computers. Instead of bringing back information and making a decision in one central location, we can distribute that through our system, which increases speed, resiliency and lowers the need for large amounts of bandwidth,” Boston notes.
Control here, there, everywhere
While grid-edge intelligence and control is one way to support DER integration, the OMNETRIC Group team believes centralized coordination can at times be crucial. A hierarchical approach is therefore what they focused on delivering with the system tested at the Duke Energy and CPS Energy microgrids.
“The big idea is that we should aggregate distributed energy resources,” said James Waight, senior manager of grid operations at OMNETRIC Group. He includes non-critical loads among the DER, along with generation, storage and control technology associated with each microgrid.
“Each microgrid has its own controller, but these microgrids exist in a larger framework,” Waight explained.
He said a utility could use controllers at several levels of the grid that report in a hierarchical fashion to speed up and simplify communications through the system.
Why the hierarchy?
“A large utility may have millions of customers,” Waight said, which means there could be tens of thousands of microgrids on its system. “A distribution-level controller would do things like monitoring and outage management. Then it would report to a higher-level controller on the transmission system, and that in turn might report to a market like PJM or MISO. Each level of the hierarchy sees a simplification of what’s below. It’s a more manageable way to handle complexity.”
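The hierarchy Waight describes can be sketched in a few lines: each microgrid controller summarizes its local generation and load, and a distribution-level controller aggregates those summaries before reporting upward. All class names and fields here are hypothetical, chosen only to illustrate how each level of the hierarchy sees a simplification of what’s below it.

```python
from dataclasses import dataclass

@dataclass
class MicrogridController:
    """Lowest tier: one controller per microgrid (hypothetical model)."""
    name: str
    generation_kw: float
    load_kw: float

    def summary(self):
        # Upstream levels see only the net position, not every device.
        return {"name": self.name,
                "net_kw": self.generation_kw - self.load_kw}

class DistributionController:
    """Middle tier: aggregates many microgrids on one feeder."""

    def __init__(self, microgrids):
        self.microgrids = microgrids

    def report(self):
        # The transmission level (and ultimately a market such as PJM
        # or MISO) sees one aggregate figure instead of device detail.
        summaries = [m.summary() for m in self.microgrids]
        return {"net_kw": sum(s["net_kw"] for s in summaries),
                "microgrids": len(summaries)}

feeder = DistributionController([
    MicrogridController("mg-1", generation_kw=500.0, load_kw=450.0),
    MicrogridController("mg-2", generation_kw=75.0, load_kw=100.0),
])
print(feeder.report())  # {'net_kw': 25.0, 'microgrids': 2}
```

Each tier exposes only a summary interface to the tier above, which is the “simplification of what’s below” that makes tens of thousands of microgrids manageable.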
The microgrids differ from one another, but the microgrid management systems are still able to handle each site’s quirks, such as the no-feed-in constraint Duke Energy had to accommodate.
In part, that’s due to the versatility of OpenFMB. “Depending on your facility, you might have different assets and requirements,” Duke Energy’s Laval said. “The INTEGRATE project proved the framework was transferable because it was easily leveraged at three different sites.”
CPS Energy’s James Boston adds: “From a utility perspective, we’re in an age where our equipment and technology is moving at a faster pace than ever before. How can we be sure the technology we’re putting in today will be able to communicate and work along with the technology we put in tomorrow? We strongly believe that interoperability is one way to bridge that gap, and OpenFMB delivers it.”
OpenFMB is an interoperability catalyst, and it delivered flexibility for Duke, CPS, NREL and OMNETRIC Group in these implementations. SEPA and the OpenFMB Technical Working Group would like to see more utilities participate and bring their use cases for the team to solve. They would also like to see more vendors and solution providers explore possibilities to deliver interoperability using the framework.
Aaron Smallwood is senior director of technical services for the Smart Electric Power Alliance (SEPA). Smallwood has been in information technology for 20 years and in the utility industry for the last 15 years. As director of IT Operations at the Electric Reliability Council of Texas (ERCOT), he was responsible for the multi-data center IT operations of ERCOT’s real-time grid and market systems, deregulated retail market systems, enterprise data warehouse, systems integration and market settlement systems. Prior to ERCOT, Aaron was responsible for managing the relationship between IT and utility business units at Aquila Inc.