
Why machine learning could improve resiliency, save millions

By John Spizzirri, Contributor

Power grids across the United States are facing greater technical challenges as new technologies and needs place higher demands on an already overburdened system. 

We no longer power just the standard household fare of lights, refrigerator, television and air conditioner. Those “clouds” on which you store your life are computer banks that must be cooled 24/7; the use of electric vehicles is increasing, as will the electricity demand to charge them; and more unpredictable weather systems stress utilities, while the growing use of renewable sources introduces additional uncertainties into the day-to-day operation of the grid.

It all raises the question: how do utility companies keep up with rapid advances in technology and uncertain fluctuations in demand and renewable supply without sacrificing reliability?

There are solutions, and the U.S. Department of Energy’s (DOE) Argonne National Laboratory is beginning to explore them through a particular avenue called machine learning, a suite of artificial intelligence techniques already driving the rapid resolution of issues in a number of research fields.

Machine learning, as the name suggests, uses sample data and progressively learns from it to improve predictions about new data. Its goal is to find an optimal combination of variables that meets the needs of a given problem.

Data, Data and More Data

At the heart of these machine learning techniques is data, and lots of it. The power industry already holds large amounts of data that can inform optimization models. Some are as simple as historical records of local energy use and weather patterns. Other data are derived from an ever-growing ensemble of equipment sensors or from simulations of possible usage or hazard scenarios.

“With electricity, supply and demand have to be in perfect balance all the time,” said Guenter Conzelmann, of Argonne’s Energy Systems division. 

“So the more information you have, the more you can narrow down usage behaviors or determine how weather patterns will affect consumption. This needs to be done within hours or even minutes so operators can reliably schedule power plants to meet demand.”
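The idea Conzelmann describes can be illustrated with a minimal sketch: fit a simple model to historical load and weather observations, then use it to forecast demand at a forecast temperature. Everything below is hypothetical, illustrative data — not actual utility data or Argonne's method.

```python
# Minimal sketch: forecasting hourly demand from temperature using
# ordinary least squares. All numbers are illustrative, not real data.

def fit_linear(xs, ys):
    """Fit y = a + b*x by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical summer afternoons: temperature (F) vs. system load (MW).
temps = [70, 75, 80, 85, 90, 95]
loads = [400, 430, 460, 490, 520, 550]

a, b = fit_linear(temps, loads)
forecast = a + b * 88  # predicted load if tomorrow's forecast is 88 F
```

A real operator model would use many more features (hour of day, humidity, day type) and far richer data, but the learn-from-history structure is the same.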

Already recognized for its work in energy systems, Argonne brings to the power sector a depth of knowledge in machine learning gained from ongoing work in disparate areas of research, from cosmology and medicine to urban and materials sciences.

In particular, there have been fruitful results from the use of machine learning throughout the lab’s multifaceted engineering programs, especially in the optimization of internal combustion engines.

For example, Argonne researchers were asked to help redesign a diesel engine to run on low-octane gasoline, so they had to figure out how to modify the various design parameters for high efficiency and low emissions.

The team ran data derived from computational fluid dynamics simulations through a machine-learning algorithm and developed a trained model that could predict the performance of the engine for a given set of design parameters much faster than the simulations could, cutting turnaround time from two or three months to several days.
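The surrogate-model pattern described above can be sketched in a few lines: run a handful of expensive simulations, fit a cheap approximation to their results, then sweep thousands of candidate designs against the approximation. The "simulation," the design parameter, and all numbers below are hypothetical stand-ins, not Argonne's actual engine model.

```python
# Minimal sketch of a surrogate model: fit a cheap approximation to
# expensive simulation results, then evaluate candidate designs fast.

def expensive_simulation(compression_ratio):
    # Stand-in for a CFD run: a made-up efficiency curve that
    # peaks near a compression ratio of 14.
    return -(compression_ratio - 14.0) ** 2 + 45.0

# Run a handful of "costly" simulations to build training data.
train_x = [10.0, 12.0, 14.0, 16.0, 18.0]
train_y = [expensive_simulation(x) for x in train_x]

def surrogate(x):
    """Predict by interpolating linearly between known simulation points."""
    for x0, x1, y0, y1 in zip(train_x, train_x[1:], train_y, train_y[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("design outside training range")

# Sweep thousands of candidate designs using the cheap surrogate.
candidates = [10.0 + i * 0.002 for i in range(4001)]
best = max(candidates, key=surrogate)
```

In practice the surrogate would be a trained neural network or Gaussian process over many parameters, but the economics are identical: pay for a few simulations up front, then search the design space at near-zero cost.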

“We’ve proven that we can use machine learning for design optimization of engines, where there are multiple objectives that you’re trying to satisfy given a large number of complicated parameters,” said John Harvey, of Argonne’s Technology Commercialization and Partnerships division. 

“So now we’re trying to expand that, to plug into an incredibly complex problem like energy infrastructure.”

Tackling the Grid

Computational scientists like Feng Qiu are meeting that challenge. He is among the first wave of researchers at Argonne to use machine learning specifically to resolve planning and operations issues inherent in energy systems. 

Qiu and colleague Álinson Santos Xavier are working to help systems operators speed up optimization of the security-constrained unit commitment (SCUC), which determines generation resource commitments, considering many possible factors that can quickly and negatively affect the systems. SCUC is also often used to clear electricity markets and determine electricity prices in the day-ahead markets.

SCUC is solved multiple times each day by system operators. For large systems where SCUC is hard to solve, operators can get only what Qiu and Xavier call an “O.K.” answer within a time limit — about 20 minutes — which does not serve the electricity producers and consumers to the best possible advantage. 

Because the day-to-day SCUCs are very similar — only demands and a few other factors change — solutions and outcomes exhibit certain predictable features. Input data and solutions accumulated every day can be fed into machine-learning models that learn patterns and solution strategies. This, in turn, creates a tool that provides operators with a faster path to resolution for transmission line issues and day-ahead market clearing problems.
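One simple form of this idea is a warm start: look up the historical day most similar to today and reuse its commitment schedule as the optimizer's starting point. The sketch below uses a nearest-neighbor lookup over hypothetical demand profiles and on/off commitments; it illustrates the pattern-reuse concept, not Qiu and Xavier's actual algorithm.

```python
# Minimal sketch of a warm start for unit commitment: reuse the
# schedule from the most similar historical day as a starting guess.
# All demand profiles and commitments below are hypothetical.

def distance(a, b):
    """Squared Euclidean distance between two demand profiles (MW)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Historical (demand profile -> generator on/off commitment) pairs.
history = [
    ([500, 620, 700], [1, 1, 0]),  # mild weekday
    ([650, 800, 900], [1, 1, 1]),  # hot weekday
    ([400, 450, 480], [1, 0, 0]),  # weekend
]

def warm_start(demand_today):
    """Nearest-neighbor prediction of today's unit commitment."""
    _, commitment = min(history, key=lambda h: distance(h[0], demand_today))
    return commitment

guess = warm_start([640, 790, 880])  # today resembles the hot weekday
```

A full SCUC solver would still verify and refine this guess against all security constraints; the learned prediction simply shortens the search rather than replacing it.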

“Right now, we have tested this on a large system close to realistic scale, and we see a resolution that is 10 times faster,” said Qiu.

“So, if we can improve the ‘O.K.’ answer with a better answer for system operators, the savings for the whole electricity market in the U.S. could be hundreds of millions of dollars each year.”

Kibaek Kim, an assistant computational mathematician, is focused on power systems under uncertain or extreme conditions, which can range from natural hazards, like hurricanes, to terrorist activities.

During such events, operators must react to system interruptions immediately and with a high degree of certainty in their response. To help them obtain optimal solutions in real time, Kim’s team uses a combination of optimization, machine learning and computer simulation.

While input for the machine learning models can come from historical data on infrastructure damage and its impact on distribution, this information is often unavailable or incomplete. Simulations not only capture the nuances of dynamics specific to a power system, but can also be designed to run millions of possible scenarios for specific types of events, better informing the machine learning algorithms.

For instance, a scenario may represent a perturbed system in which a transmission line is cut, to observe how the system reacts. One line failure can cause cascading failures, which are extremely difficult to detect and prevent; machine learning can potentially help provide more meaningful information to system operators.
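The cut-a-line scenario described above is the classic "N-1" contingency screen: remove each transmission line in turn and check whether the rest of the network still hangs together. The sketch below does this with a simple connectivity check on a made-up four-substation network; real studies would run power-flow simulations like those Kim's team feeds to their models.

```python
# Minimal sketch of N-1 contingency screening on a hypothetical
# network: drop each line in turn and test connectivity with BFS.
from collections import deque

# Toy network: nodes are substations, edges are transmission lines.
lines = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]
nodes = {"A", "B", "C", "D"}

def is_connected(edges):
    """True if every substation is reachable from every other."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen = {start}
    queue = deque([start])
    while queue:
        for nbr in adj[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == nodes

# Screen every single-line outage; lines whose loss splits the
# network are the critical ones operators must watch.
critical = [line for line in lines
            if not is_connected([l for l in lines if l != line])]
```

Here only the line to substation D is critical, because D has no alternate path. Cascades are harder: losing one line overloads others, which then trip in turn — which is why millions of simulated scenarios are needed to train models that spot them early.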

Like Qiu and Xavier, Kim also suggests using machine learning to improve electricity market efficiency. The electricity market is cleared every day by independent system operators, who must solve complex optimization problems efficiently. The efficiency and fidelity of the optimization model and its solutions are key to market efficiency, as well as to reliability. Machine learning techniques can support the market clearing models, for example, by providing better predictions for renewables and loads.

“A good machine-learning algorithm should be able to guide the operator on how to quickly adjust the dispatch controls, instead of having to recompute everything from scratch,” suggested Kim. 

“In the machine-learning framework, we try to make the model as realistic as possible and run as many scenarios as possible to let the machine learn about the state of that particular system.” 

Ultimately, the results are meant to address the system at both the planning and operations levels, allowing utilities to design infrastructure that is more resilient and robust to uncertain or hazardous events.

While both research teams are currently focused on more practical and immediate-impact problems within the power industry, the possibilities for machine learning in this field are endless. 

“We certainly see more opportunities for involvement at Argonne,” said Conzelmann. 

“Obviously, this will depend on more data, and that has increased as the grid is increasingly monitored by more and more sensors, with the addition of smart meters in our homes as well as advanced sensors that may collect data 30-60 times per second.”

Equally important and complex factors already in play include increased reliance on smart grid technology, the development of large-capacity energy storage and the critical goal of securing grid reliability and resilience. But among the bigger picture concerns, and perhaps the most data intensive, is the integration and control of distributed energy resources into existing power grids.

With so many moving parts, both natural and technological, most researchers agree that the U.S. electricity grid is a complicated system to optimize and one that requires reaction on a time-scale of minutes versus days. Understanding the interplay of all these parts requires more data than is currently available to make machine-learning predictions feasible.

It would require the integration of more sensors on components like solar panels and wind turbines, input from more advanced weather radars and sensors, and the ability to effectively analyze all of this data to maximize the use of distributed energy resources.

“So all of this data can be used by high-performance computers and machine-learning algorithms to improve the forecast for the next hour or the next 24 hours, and we can create a model that runs almost instantaneously,” added Conzelmann. 

“So it’s a perfect application for machine learning, to help improve forecasts for distributed energy generation.”

With its nascent success in machine learning across so many domains, Argonne continues to prove that machine learning, whatever the industry, can rapidly and efficiently evaluate and optimize complex systems and technologies. Even the electric grid.


Author: A Chicago-based writer for more than 25 years, John Spizzirri has authored numerous science and tech articles for scientific and academic institutions, nonprofit organizations, and a wide range of general and trade publications.


At DistribuTECH 2019, February 4-6 in New Orleans, representatives from Con Edison will discuss how they are using machine learning to help predict transformer failure.