IT strategy and the electric grid: How the New York ISO is changing its approach to technology

The New York Independent System Operator (NYISO) manages the flow of electricity for the electric grid in New York State and oversees the wholesale electric markets. At our 550-person organization, more than a third of our staff are involved in the creation, modification and continued support of our IT. Technology plays a major part in our operation, and always will.

The energy delivery business has seen tremendous change in the past few years, as we transition to a new era of energy sources. Today, our energy comes from solar and wind, energy storage and on-site generation, as well as more “traditional” sources of power such as gas-fired generators, hydro, nuclear and other large generators. Managing today’s electric grid means accounting for all of these numerous, dissimilar and far-flung sources. And, in this time of tremendous and rapid change, we must also be prepared for new methods of injecting energy onto the grid.

We recently recognized that we must be better positioned to meet the NYISO’s future business needs. To become more agile, more flexible and more efficient, we had to change the way we approach our IT.

What kind of changes? In the past year, we’ve identified four objectives in our IT strategy that, together, will improve our ability to maintain reliability and integrate emerging technologies:

  1. Modernize delivery capability and application architecture
  2. Increase automation
  3. Grow our cloud-based technology
  4. Advance cybersecurity risk management

Modernizing Delivery Capability

The NYISO is tasked with a vital role in our community: making sure the lights stay on in New York State. As a conservative organization, we don’t make changes lightly and insist on using proven technology to ensure reliability.

In the past, our typical approach was to deliver new IT projects in about three major releases a year. Under that development model, a major project began when the end-users gave their technical requirements to our developers. The developers then went off for months to build the program. When they were done, they handed it off to our quality assurance (QA) group for testing. Only after that did the end-users finally get their hands on it. At that point, they often found things that weren’t exactly what they had in mind, and the developers came back for a final round of changes.

The process could be time-consuming, but it served our needs well for the 20-year life of the organization, enabling us to deliver the high levels of quality that the business required.

More recently, however, we recognized what some other industries have already embraced: that more efficient processes provide greater value, especially at a time when the energy grid is in transition. Thus, we are moving from traditional waterfall development methodologies to applications based on a micro-services architecture, built using Agile development methodologies.

Under this model, our IT staff work with their customers (the end-users) from the start of the development process, involving them in all aspects of programming and testing so the final product matches what the business actually needs.

This new philosophy recognizes that defining a comprehensive list of requirements up front is a difficult task to get right. When we include end-users in the process, making sure they meet regularly with developers, we can help keep the project on track while reducing the chance for unhappy surprises later on.

Under this model, we run through software “sprints” of about two weeks. At the start of each sprint, the end-users sit down with the developers to agree on what will be built during that period; at the end of the sprint, they get together to review the result.
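To make the micro-services side of this concrete: each application is broken into small, independently deployable services that each do one job. The sketch below, in Python, shows what one such single-purpose service can look like. The outage-lookup endpoint, its data and its port are hypothetical illustrations, not a description of the NYISO’s actual systems.

```python
# A minimal sketch of a single-purpose "micro-service": one small program
# that answers one question over HTTP. The outage data, endpoint path and
# port are hypothetical illustrations, not the NYISO's actual systems.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory data; a real service would own its own datastore.
OUTAGES = {"LINE-1001": {"status": "scheduled", "start": "2024-06-01"}}

class OutageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected request path: /outages/<equipment-id>
        equipment_id = self.path.rsplit("/", 1)[-1]
        record = OUTAGES.get(equipment_id)
        body = json.dumps(record if record else {"error": "not found"}).encode()
        self.send_response(200 if record else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each micro-service runs, deploys and scales independently of the others.
    HTTPServer(("localhost", 8080), OutageHandler).serve_forever()
```

Because each service stands alone, it can be rebuilt, tested and released within a single sprint without touching the rest of the system.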

We first introduced this concept about two years ago while making changes to the programs that run our Installed Capacity market. Joe Nieminski, an engineer in market operations, was one of the first end-users at the NYISO to work with developers in this way. He estimates the change has reduced the number of bugs from hundreds to around 20 per project.

“It felt good. We gained a little more power in the process,” he recalled. “It brought the team together a lot more.”

Rikki Brown, a scheduling engineer who uses programs to keep track of generator and transmission outages, agreed.

“You’re not waiting until the end to run into a bunch of issues and having to deviate course,” he said. “It prevents a lot of rework. I know what data I’m looking for. Bringing us into the process earlier has uncovered issues before developers code something that isn’t right.”

This approach doesn’t work for every piece of software. For instance, when working with outside vendors, where we want a fixed, up-front cost based on exacting specifications, it would not do to change our requirements every few weeks.

As we continue down this path, we expect to make changes to some of our legacy applications that will make our older software as nimble as newer technologies. While this work will be time-consuming in the short-term, it will serve us well over the long-term.

Of course, we have to do all of this while continuing our role as manager of the electric grid and the wholesale markets that we oversee. It’s a bit like reworking the plane engine while it’s flying. But our talented staff is certainly up to the task.

Increased Automation

While this is taking place, we will also be making increased use of automated tools to speed up the software development process. In the old days, our QA team was completely separate from our software team. Their work began after the developers were done. They were even under different managers.  

Today, the two teams work together, and we often develop automated testing algorithms while we are developing the application code. This allows constant testing of the software while it’s being constructed. When we’re done, the automated testing leaves us better equipped to quickly retest the software when we make future changes or add new functionality.
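As a simple illustration of writing the automated test alongside the code it checks, consider the sketch below in Python. The settlement calculation and its rules are hypothetical stand-ins, not actual NYISO market logic; the point is that the test exists from day one and reruns on every build.

```python
# A sketch of writing the automated test alongside the code it checks.
# The settlement_charge() calculation and its rules are hypothetical
# stand-ins, not actual NYISO market logic.
import unittest

def settlement_charge(mwh: float, price_per_mwh: float) -> float:
    """Return the charge for delivered energy; negative inputs are rejected."""
    if mwh < 0 or price_per_mwh < 0:
        raise ValueError("quantities must be non-negative")
    return round(mwh * price_per_mwh, 2)

class SettlementChargeTest(unittest.TestCase):
    """Runs automatically on every build, and again after every future change."""

    def test_typical_interval(self):
        self.assertEqual(settlement_charge(100.0, 35.25), 3525.0)

    def test_rejects_negative_quantity(self):
        with self.assertRaises(ValueError):
            settlement_charge(-5.0, 35.25)

if __name__ == "__main__":
    unittest.main()
```

When new functionality is added later, the same suite retests the old behavior automatically, which is what makes future changes faster and safer.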

Cloud-Based Processing

In the past, we relied upon the cloud only for applications such as payroll, performance management and other back-office functions. Today, we use cloud computing for instant access to massive processing capacity that would cost millions of dollars and take many months to install if we were to host the hardware on site.

The first team to make use of cloud computing was Resource Planning, which began using it last spring (about two years after we first began looking into the idea and did a variety of testing). Planning serves a vital role in the continued reliability of the grid. Using computer models, we continually look ahead days, months and years to determine how the grid might be impacted by the loss or gain of generation or transmission equipment, as well as other factors that might affect the grid.

Team members say the old method of running their simulations, using our on-site high-performance computing network, was limited. Staff had to always check with their colleagues to make sure no one had any large programs running at the same time, which could bring processing down to a crawl. An employee with a project they thought was more important could pre-empt someone else’s planned work, leading to resentment. “It was contentious at times, when we had limited resources,” recalled Michael Welch, a senior planning engineer.

“Now,” he said, “we theoretically have access to infinite resources. We have a wider pipe for everything to fit through. That allows for some of the resource-intensive work to be done more quickly.”

The team now estimates that its work runs about 20 percent faster with access to cloud-based processing, with the added benefit of not having to wait to schedule jobs. In addition, with cloud computing, you pay only for what you use. “We have more resources available at less cost,” Welch said.
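The pattern the planning team describes, fanning many independent scenarios out to as many workers as are available, can be sketched in a few lines of Python. The scenario list and the run_scenario() placeholder below are hypothetical; a cloud batch service plays the same role as this local process pool, only at far greater scale.

```python
# A minimal sketch of fanning independent planning scenarios out to many
# workers at once. run_scenario() is a hypothetical placeholder for a
# power-flow or reliability simulation; a cloud batch service plays the
# same role as this local process pool, only at far greater scale.
from concurrent.futures import ProcessPoolExecutor

def run_scenario(scenario_id: int) -> tuple[int, float]:
    # Placeholder work standing in for one simulation case.
    metric = sum(i * i for i in range(100_000)) % 997
    return scenario_id, float(metric)

if __name__ == "__main__":
    scenarios = range(50)
    # More workers means shorter wall-clock time; no one waits for a free slot.
    with ProcessPoolExecutor() as pool:
        for scenario_id, metric in pool.map(run_scenario, scenarios):
            print(f"scenario {scenario_id}: metric={metric}")
```

Because the scenarios don’t depend on one another, adding capacity shortens the run roughly in proportion, which is why “a wider pipe” matters more than any single faster machine.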

We have since expanded cloud access to a number of departments in planning and operations. However, we don’t foresee a time when we won’t have our own processing hardware. Keeping critical software on servers on-site is necessary to meet stringent regulatory requirements.

Cybersecurity

Security is always foremost in our minds, especially because of the critical nature of the services we provide. Threat actors are innovative, adaptive, and will move quickly to exploit any weakness. To manage these risks, we have adopted a comprehensive program addressing both physical and cybersecurity threats. For instance, we now have a 24-hour cybersecurity team in a dedicated office that is capable of responding to threats in real time. Our software developers also take security into consideration when working on new programs, consulting with our cyber experts from the start to integrate best security practices into the code.
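One example of the kind of practice developers can build in from the start is the use of parameterized queries, which keep untrusted input out of SQL statements. The sketch below, using a hypothetical outage table in SQLite, illustrates the general technique rather than any actual NYISO code.

```python
# One widely used secure-coding practice: parameterized queries, which keep
# untrusted input out of the SQL text and so prevent SQL injection. The
# outage table below is a hypothetical example, not actual NYISO code.
import sqlite3

def find_outages(conn: sqlite3.Connection, equipment_id: str):
    # The "?" placeholder binds the value safely instead of pasting it
    # into the query string.
    cursor = conn.execute(
        "SELECT equipment_id, status FROM outages WHERE equipment_id = ?",
        (equipment_id,),
    )
    return cursor.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE outages (equipment_id TEXT, status TEXT)")
    conn.execute("INSERT INTO outages VALUES ('LINE-1001', 'scheduled')")
    print(find_outages(conn, "LINE-1001"))  # -> [('LINE-1001', 'scheduled')]
```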

Much of this is drawn from the North American Electric Reliability Corporation Critical Infrastructure Protection standards, known as NERC-CIP, as well as other industry practices. We are actively engaged in enhancing cyber- and physical-security practices to address evolving risks, and we continually collaborate with other ISOs and industry partners to ensure we follow best practices.

We also work with a variety of state and local government agencies on security initiatives, and in the past we have led cybersecurity exercises to test incident response plans, enhance teamwork and identify opportunities for improvement. We also take part in GridEx, a biennial, sector-wide grid security exercise conducted by NERC.

It’s a changing world out there, and the energy transmission business can’t afford to be caught unprepared. New York State relies on our ability to keep the lights on without fail. To do that, the NYISO must be ready to support innovation and disruptive technology advancements. The work we have discussed here will do just that, giving us the flexibility to embrace whatever changes hit our industry, including those we have not yet foreseen.