Though the phrase was already cliché in 2020, utilities are quickly realizing that “abnormal” is, in fact, the “new normal.” For an industry regulated and engineered around predictable demand and operations, operational seasonality has taken on a whole new meaning. Electricity demand has already defied forecasts this spring, which many attribute to pandemic circumstances: shelter-in-place rules, closed office buildings, and increased residential energy use.

This summer looks to be even more complex given so many variables: some businesses reopening while others shut down again amid the virus’ second wave (or continuing first wave), fluctuating air-conditioning usage, hurricane season, and more. If utilities oversupply, they lose money; if they underprepare, they risk blackouts. The Greek philosopher Heraclitus said it best: “change is the only constant in life.” It is a philosophy that utilities, independent power producers, and grid operators can certainly subscribe to.

Operators are not always experts at identifying novel patterns, but many are experts at equipping themselves with data collection tools that can help. The trouble is that they tend to overcomplicate and obfuscate these data streams with myriad databases, file standards, protocols, and hierarchies, inadvertently designing powerful data collection systems for single-use applications. That is the old way, and unfortunately, it is still a widely used process.

The new way, and perhaps the industry’s best means to navigate complex change, lies in connecting and contextualizing the information already available so that the most relevant patterns can be discovered with the highest accuracy and the lowest marginal cost. Rather than building system-to-system integrations and mapping data by hand, the new way connects the data for you automatically. Let’s dive into a few examples of how improved data processes can help electricity generators and operators during unpredictable times.
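As a concrete (if deliberately simplified) illustration of what “automatically connecting the data” can mean, the Python sketch below fuzzy-matches a hypothetical SCADA tag list against a hypothetical asset registry, replacing a hand-maintained mapping table. All names, thresholds, and data are invented for the example; production contextualization engines draw on far richer matching signals, but the principle is the same.

```python
# A minimal sketch of automated contextualization: linking records from two
# silos (a SCADA tag list and an asset registry) by fuzzy name matching
# instead of hand-built, system-to-system mappings. All names and data
# below are hypothetical.
from difflib import SequenceMatcher

scada_tags = ["XFMR-STN4-A1.TEMP", "FEEDER-12B.LOAD_MW", "XFMR-STN4-B2.TEMP"]
asset_registry = {
    "Transformer Station4 A1": {"rating_mva": 50, "install_year": 1998},
    "Transformer Station4 B2": {"rating_mva": 50, "install_year": 2003},
    "Feeder 12B": {"rating_mw": 12, "install_year": 2005},
}

def normalize(name: str) -> str:
    """Reduce naming-convention noise before comparison."""
    return (name.lower().replace("xfmr", "transformer")
                .replace("-", " ").replace(".", " ").replace("_", " "))

def best_match(tag: str, assets: dict) -> tuple[str, float]:
    """Return the asset whose normalized name is most similar to the tag."""
    scored = [(a, SequenceMatcher(None, normalize(tag), normalize(a)).ratio())
              for a in assets]
    return max(scored, key=lambda pair: pair[1])

for tag in scada_tags:
    asset, score = best_match(tag, asset_registry)
    if score > 0.5:  # confidence threshold; low-score pairs go to human review
        print(f"{tag} -> {asset} (similarity {score:.2f})")
```

The design choice worth noting is the confidence threshold: automated linking does not have to be perfect to beat manual mapping, as long as uncertain matches are routed to a person rather than silently accepted.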

Better Data Access and Distribution Improve Decision-Making in Fast-Moving Scenarios

The most valuable commodities in a crisis are time and information. During hurricane and wildfire season, for instance, the flow of information between decision-makers and operations teams is critical for safety, resiliency, and the customer experience. But when data is hard to access, time-consuming to analyze, and lacking relevant context, poor or incomplete decision-making can lead to costly, dangerous outcomes.

While some of these problems are addressed today with individual point solutions, those solutions paint over the fundamental data management opportunity that must be addressed for sustainable, long-term answers that can adapt to perpetually evolving seasonality. Limited operational visibility can be solved with data contextualization and a single-pane-of-glass mentality: one that equips decision-makers with machine-learning-based insights surfacing relevant information from more data sources than before, and empowers field workers with self-service digital applications that deliver the right data at the right time with the right context. In short, data management tools built around contextualization yield faster, better-informed decisions in a crisis. Don’t wait for a crisis to address the problem.
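As one hedged, minimal example of “surfacing relevant information,” the sketch below ranks several hypothetical data feeds by how far their latest reading deviates from recent history, the kind of cross-source triage a single-pane-of-glass view might present first. Feed names and values are invented for illustration.

```python
# A minimal, hypothetical sketch of surfacing "what matters now" from several
# data feeds at once: flag any stream whose latest reading deviates sharply
# from its recent history, so a decision-maker sees one ranked list instead
# of querying each system separately. Feed names and values are invented.
from statistics import mean, stdev

feeds = {
    "substation_4_load_mw":  [42.1, 41.8, 43.0, 42.5, 58.9],  # last value is current
    "feeder_12b_voltage_kv": [13.8, 13.7, 13.8, 13.8, 13.7],
    "plant_2_output_mw":     [310.0, 305.0, 312.0, 308.0, 210.0],
}

def anomaly_score(history: list[float]) -> float:
    """Z-score of the latest reading against the preceding window."""
    *window, latest = history
    spread = stdev(window) or 1e-9  # guard against a perfectly flat window
    return abs(latest - mean(window)) / spread

ranked = sorted(feeds.items(), key=lambda kv: anomaly_score(kv[1]), reverse=True)
for name, history in ranked:
    print(f"{name}: score {anomaly_score(history):.1f}")
```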

Improved Predictive Models Powered by Better Data Help Minimize Blackouts and Supply Disruption

At both the grid and asset levels, predictability is key to managing risk appropriately and providing consistent supply. Predictive models do not necessarily need more data types, but they do depend on the right data for best performance. In this sense, risk management on the grid is quickly becoming less about improving the analysis of existing time series and event logs, and much more about fusing disparate data sources, such as weather patterns, vegetation growth analysis, transmission data, time series, infrastructure attributes, and work order history, into more comprehensive risk profiles and predictive models. If these data sources remain scattered across silos, data management teams, vendor solutions, open sources, and the like, building the optimal data model that identifies the meaningful relationships among them becomes prohibitively expensive, if not infeasible. Operators must look to tools and methods that automate the contextualization of data, unlocking it from the silos.
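To make the fusion idea concrete, here is a minimal sketch that joins hypothetical asset, weather, vegetation, and work-order tables into a single feature table and fits a simple outage-risk model. Every column name and value, and the model choice itself, are illustrative assumptions, not a prescribed schema or method.

```python
# A hedged sketch of data fusion for risk modeling: joining hypothetical
# weather, vegetation, and maintenance-history tables onto grid assets to
# build one feature table for an outage-risk model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

assets = pd.DataFrame({
    "asset_id": ["T1", "T2", "T3", "T4"],
    "age_years": [22, 8, 31, 15],
    "zone": ["north", "north", "south", "south"],
})
weather = pd.DataFrame({
    "zone": ["north", "south"],
    "wind_gust_kph": [85, 40],
})
vegetation = pd.DataFrame({
    "asset_id": ["T1", "T2", "T3", "T4"],
    "encroachment_m": [1.2, 4.0, 0.5, 3.1],  # clearance to nearest growth
})
work_orders = pd.DataFrame({
    "asset_id": ["T1", "T3"],
    "repairs_last_5y": [3, 1],
})

# The "fusion" step: one contextualized feature table instead of four silos.
features = (assets.merge(weather, on="zone")
                  .merge(vegetation, on="asset_id")
                  .merge(work_orders, on="asset_id", how="left")
                  .fillna({"repairs_last_5y": 0}))

labels = [1, 0, 1, 0]  # historical outage outcomes (hypothetical)
X = features[["age_years", "wind_gust_kph", "encroachment_m", "repairs_last_5y"]]
model = GradientBoostingClassifier().fit(X, labels)
features["outage_risk"] = model.predict_proba(X)[:, 1]
print(features[["asset_id", "outage_risk"]])
```

The point of the sketch is the merge step, not the model: once the sources share keys and context, swapping in a better estimator is trivial; without that step, no estimator can see the relationships at all.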

Optimizing Grid Balancing and Planning with External Data Analysis

While data is only half of this ongoing operational challenge (intra-organizational data-sharing policy and process being the other key element), the integration and analysis of new, non-traditional data sets offer the means to maximize existing infrastructure and lower costs. How? Externally, utilities must identify newly emerging societal patterns and adapt supply and distribution accordingly.

Think weather data, better characterization of the impact of EVs, or a deeper understanding of long-term consumption trends driven by work-from-home activity. Internally, utilities can combine this external data with their own smart meter data and dynamic line ratings to truly optimize grid operations and, as a result, improve the customer experience. It is conceptually simple, but significantly complex from a data architecture perspective; it can nevertheless be achieved with advanced data management systems and partners that provide a smarter, more contextualized approach and a sustainable long-term solution.
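As a final hedged sketch, the example below compares a toy weather-adjusted dynamic line rating against forecast feeder load to compute headroom, the kind of calculation that combining external weather data with internal ratings makes possible. The adjustment formula and all numbers are invented for illustration and are not an engineering standard.

```python
# A simplified, hypothetical sketch of the grid-optimization step: compare a
# weather-adjusted dynamic line rating against forecast feeder load to find
# headroom (or overload risk) before committing extra supply.

def dynamic_rating_mw(static_rating_mw: float, ambient_c: float,
                      wind_mps: float) -> float:
    """Toy adjustment: cooler air and more wind let a line carry more."""
    thermal_derate = 0.01 * max(ambient_c - 25.0, 0.0)  # derate above 25 C
    wind_uplift = min(0.05 * wind_mps, 0.15)            # cap the wind benefit
    return static_rating_mw * (1.0 - thermal_derate + wind_uplift)

feeders = [
    # (name, static rating MW, forecast peak MW, ambient C, wind m/s)
    ("feeder_12b", 12.0, 11.2, 33.0, 1.0),
    ("feeder_07a", 20.0, 14.5, 28.0, 4.0),
]

for name, static, forecast, ambient, wind in feeders:
    rating = dynamic_rating_mw(static, ambient, wind)
    headroom = rating - forecast
    status = "OK" if headroom >= 0 else "AT RISK"
    print(f"{name}: dynamic rating {rating:.1f} MW, "
          f"headroom {headroom:+.1f} MW ({status})")
```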

Next Steps

Dynamism in grid operations is here to stay, adding a new chapter and a new sense of urgency to enterprise digital transformation efforts. There is also no longer any doubt that utilities that invest in digital gain clear competitive advantages from their data, as noted in the sections above. With disruption far from over, the next opportunity in true data operationalization requires processing and contextualizing cross-silo data in a way that better mimics the human understanding of patterns and relationships, at significantly lower human (and marginal) cost. This is what will enable organizations to truly harvest the value promised by machine learning and advanced analytics.