Considerations for big data and system protection

System protection has its share of complications. There are relays, which are the most numerous and varied pieces of equipment in a substation. There are relay settings, which vary depending on the numerous and varied schemes throughout the protection system. Then there are the various hardware and software products used, and the skilled people who perform work in this highly specialized field.

When it comes to pulling meaningful information from the vast amounts of data generated during typical system protection workflow processes, tremendous complexities surface. Yet even short of absolute standardization, there is a way to effectively manage relay data and make decisions from what its analysis reveals.

At a high level, in a typical relay workflow, engineers design and coordinate protection schemes and, through that process, develop settings to be applied to the relay devices. The relays used might be single-function electromechanical or solid-state devices with just a few settings, or multi-function microprocessor devices with hundreds or thousands of settings. The settings are applied to the devices by relay technicians in the field, who perform tests on the devices and who are, generally speaking, also responsible for ensuring the settings on the devices are correct.

In looking a little closer at this general workflow example, several moving parts become apparent. Each makes data collection more complex and can erode any gains made toward standardization and efficient workflow processes.

How exactly do the engineers design schemes, and are they uniform from one engineer to another? Protection engineers share a strong knowledge of electric power systems and protection schemes, but when it gets right down to it, they are individuals, each with their own philosophies and opinions. Schemes throughout a utility's service territory can vary significantly from one to the next when they are coordinated in different ways by protection engineers with different experience and training.

When does this matter? Ask any technician who has ever performed dynamic testing on a complex scheme. Ideally, the technician would have a standardized test procedure as a basis for testing schemes. But unless there is uniformity from one scheme to the next, in terms of how logic is applied in asserting the relays' various protective functions and, in some cases, how the relays communicate, testing schemes from a single procedure is effectively impossible. For this reason, scheme testing varies considerably, just as the schemes themselves do, as different approaches are applied.

Delving deeper, what about testing the relays themselves? With so many variations among relays, there is obviously no single test procedure and certainly no single approach. Just as engineers' individuality reveals itself in a company's protection schemes, relay technicians' widely differing methods and philosophies come to light when looking through test results.

Whether because different test sets are used, or because of differing corporate divisions or training methodologies, it is common for companies that are not standardized to struggle with pulling information out of the test records they generate. And regardless of whether the records are paper-based or electronic, verifying that they are complete and accurate remains a manual process.

Next, how exactly are the relays identified, and what information is kept for each? In many companies, there is a mix of old and new equipment, with old and new inventory management methodologies. It is still commonplace to see manual, paper-based records and computerized electronic records side by side in modern protection system programs.

Looking at the relays themselves, identification data can vary significantly. IEEE assigns numeric designations to protective device functions, for example, which could in some ways be used to identify relays. But what if the relay is a microprocessor-based multi-function device programmed at one site for breaker failure protection and at another site for time overcurrent?
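To make that point concrete, here is a minimal sketch of the identification problem, assuming a hypothetical record layout: the field names, asset identifiers and Python structure are illustrative, not drawn from any particular product. The function numbers follow the IEEE C37.2 convention, where 50BF denotes breaker failure and 51 denotes AC time overcurrent.

```python
# Hypothetical sketch: identical multi-function relay hardware takes on
# different IEEE C37.2 function numbers depending on the settings applied.
# Field names and identifiers are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class RelayRecord:
    device_id: str   # company asset identifier (made-up format)
    model: str       # manufacturer model designation
    functions: List[str] = field(default_factory=list)  # IEEE C37.2 numbers


# Same hardware, different protective roles once settings are applied:
site_a = RelayRecord("SUB-A-001", "multi-function relay", ["50BF"])  # breaker failure
site_b = RelayRecord("SUB-B-014", "multi-function relay", ["51"])    # time overcurrent

# Indexing the fleet by model or function number alone conflates devices
# whose settings, test plans and maintenance requirements differ entirely.
for relay in (site_a, site_b):
    print(relay.device_id, relay.model, relay.functions)
```

In other words, the record of what a relay is cannot be separated from the record of how it is set.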

This may be a simplistic example, but it illustrates a basic point: to leverage information from relay system data, whatever that data is (settings, logic, test plans or test results), all aspects involved must be identified and considered. The problem is that each manufacturer's software is meant to work only with its own equipment and devices, meaning a typical relay data management system looks more like a "system of systems" and less like an integrated, single source of record.

Companies invest heavily in software, equipment and training, yet the reality is that many are still unable to leverage all of the functionality available in the systems they have. Beyond that, there is still no all-in-one protection system software solution, used by engineers and technicians alike, that is capable of modeling system relays, coordinating protection schemes, controlling relay test sets, testing logic, managing work orders and producing holistic asset management reports.

From a data collection perspective, it would be great if such a software product existed to perform all these functions and support a wide range of users; everything would then be standardized, and making sense of the data would be relatively easy. But this is an unrealistic construct given the inherent complexities already mentioned. To reiterate, the work performed within system protection is highly specialized, and no all-in-one solution is practical, or in some cases even desirable, for a myriad of reasons. Yet companies still need data management to make business decisions.

Whether in terms of system protection or any other area of the business, companies need to look for ways to standardize workflow processes and leverage the data that gets generated by integrating effectively among the systems they have in place.

Companies that struggle with this can begin by asking themselves, "Of all the things we do within system protection, from designing protection schemes to testing and maintaining relays, from managing settings to managing results, what are our areas of concern?" From there, they can look for the fault points where those areas of concern reside, because that is probably where the data problems are.

Whatever the problem may be, by looking inward and paying attention to the details of each step in their processes, companies can ask further questions, for example, "What do we really want to know about what we're doing, and where is the source of record for that data?"

Controls should not be overlooked as companies analyze their processes within system protection, because if there are no controls, there will be no standardization either. Standardization is the key element of any efficient data management system.

For example, as users log in, their authentication should be governed by system security privileges granted based on their role in the organization. If that role allows them to create new device records, the information they enter should be validated against data quality standards, such as spelling and completeness, set by system administrators.
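As one way to picture these controls, the sketch below pairs a role-based permission check with a simple data quality validation on new device records. The role names, permissions and rules are assumptions made for illustration; in practice, both would be defined by system administrators.

```python
# Minimal sketch of role-based privileges plus data quality checks on new
# device records. Roles, permissions and rules are illustrative assumptions.
ROLE_PERMISSIONS = {
    "protection_engineer": {"create_device", "edit_settings"},
    "relay_technician": {"record_test_results", "view_settings"},
    "viewer": set(),
}

REQUIRED_FIELDS = {"device_id", "substation", "model"}  # completeness standard


def can(role: str, action: str) -> bool:
    """Return True if the user's role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())


def validate_device_record(record: dict) -> list:
    """Return a list of data quality problems found in a new device record."""
    problems = sorted(f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys())
    for key, value in record.items():
        if isinstance(value, str) and value != value.strip():
            problems.append(f"untrimmed text in field: {key}")  # basic tidiness rule
    return problems


# A technician may record test results but cannot create device records:
assert can("relay_technician", "record_test_results")
assert not can("relay_technician", "create_device")

# An engineer's new record is checked against the quality standard:
if can("protection_engineer", "create_device"):
    issues = validate_device_record({"device_id": "SUB-A-001 ", "model": "relay"})
    print(issues)  # flags the missing substation field and the trailing space
```

The design point is that the permission check and the quality check happen at the moment of entry, which is what makes downstream standardization possible.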

By asking the right questions and implementing proper controls, companies that previously struggled with certain areas of their system protection data management gain ground toward defining the problems they actually have, not just the symptoms. Armed with this information, they are in a position to improve, and even to assist the resources, internal or external, that can help them.
