Don’t Derail Your ADMS Implementation With Bad Data

by Ross Shaich, Utility Integration Solutions LLC


An advanced distribution management system (ADMS), which includes outage management and distribution management functions, is key to automating utility processes and integrating systems. These systems are mission critical, and they and the systems with which they interface give results only as good as the data that goes in.

While perfection is unattainable, a system can be useful even at around 60 percent data quality, but the aim should be much higher. In addition, the need to review data never ends.

An ADMS is the center point of data flowing between multiple systems (Figure 1), including geographic information systems (GIS), customer information systems (CIS), SCADA, interactive voice response (IVR) and advanced metering infrastructure (AMI). The GIS data, which is this article’s focus, is particularly critical because it supplies the network model data on which the ADMS depends for its mission-critical functions. Inaccuracies in the ADMS can propagate to the other integrated systems, multiplying the impact of poor data quality.

Some small amount of inaccuracy is unavoidable, but it must be minimal. Ramifications of bad data can range from wrong outage predictions to overloads to crew safety hazards. Dispatchers who see significant errors in the system will likely block the system from going live until the data improves. If the system is already live, they will feel crew safety is too important to risk on an untrusted system and will rely instead on what they trust, like paper maps.

An ADMS often has data requirements stricter than those of the source GIS because the data is used differently. If inaccurate data doesn’t impact GIS usage, issues likely will remain until another system reveals them. For example, certain topology and energization factors that weren’t important in the GIS are important in the ADMS. If extra, missing or incompletely defined connections or mismatched phasing exist in the ADMS, de-energization or looping appears in feeders and customers may not be properly restored. These kinds of problems can go unnoticed when isolated in a GIS.

Following is a short list of problems that can be caused by bad or missing data in an ADMS:

- Incorrect or lack of energization
- Incorrect reports and indices
- Outages not predicting correctly
- SCADA devices with wrong measurements or lack of control
- Sluggish performance
- Missing or incorrect device details
- Overloaded circuits
- Incorrect switching plans
- Proliferation of incorrect information out from the ADMS
- Crew member injuries

Review and Correct Early

Best practice is to correct data issues as early as possible because considerable effort and duration are needed. Some issues are like an onion: removing one layer reveals more. When data issues are resolved early, the rewards are reaped in later project phases, especially during testing. Data model fixes made late in the project often have time-consuming ramifications, like extensive rework to already created test cases or training materials. When significant data model fixes are implemented during testing, testing progress may pause while large numbers of test cases are changed. Data changes can even invalidate test results, requiring re-execution of previously passed tests.

Another reason to review and fix the data model early is to maintain the users’ positive opinion of the system.

Fixes should be made before functional testing or training occurs; otherwise, a negative opinion can form early and be hard to change. Users with negative feelings about the new system can undermine others’ confidence in it and impact motivation, as well as morale.

Complete functional configuration is not necessary to begin data reviews. Following are some things that should be checked early:

- Object naming conventions
- Device attribute mapping
- Each object class builds successfully
- Customer-to-device mapping
- Object symbolic representation, rotation and sizing
- Text and object scaling (proportionally good)
- Coded model processing/build rules work as expected
- Incremental model changes apply successfully
- Connectivity
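A few of these early checks lend themselves to simple scripts. The sketch below is illustrative only: the record fields (`name`, `cls`, `node`, `feeds`) and the naming convention are hypothetical stand-ins for whatever the GIS extract actually contains. It flags naming-convention violations and devices whose downstream node never appears in the model:

```python
import re

# Hypothetical extract records; a real check would read the GIS export.
devices = [
    {"name": "FDR-1001-SW-01", "cls": "Switch", "node": "N1", "feeds": "N2"},
    {"name": "FDR-1001-TX-02", "cls": "Transformer", "node": "N2", "feeds": "N3"},
    {"name": "badname", "cls": "Switch", "node": "N4", "feeds": "N9"},
]

# Assumed naming convention: feeder prefix, four digits, class code, sequence.
NAME_RULE = re.compile(r"^FDR-\d{4}-[A-Z]{2}-\d{2}$")

def check_names(records):
    """Return names that do not match the assumed naming convention."""
    return [r["name"] for r in records if not NAME_RULE.match(r["name"])]

def check_connectivity(records):
    """Return devices whose downstream node is not defined by any device."""
    known = {r["node"] for r in records}
    return [r["name"] for r in records if r["feeds"] not in known]

print(check_names(devices))         # → ['badname']
print(check_connectivity(devices))  # → ['FDR-1001-TX-02', 'badname']
```

Scripted checks like these catch the systemic, easy-to-spot defects in bulk, leaving the business users’ limited review time for the subtler issues only they can judge.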

Team members must take time to review data and identify necessary changes. In addition to their impact on data quality, these reviews build buy-in. Technical team members can do some reviewing, but the business users know requirements a non-user can’t. Because users might also have responsibilities to the live environment, their time should be used wisely. Good strategies for including users are to provide team member backups, dedicate someone to the project, or maintain a pool of users who can share the responsibilities.

If a dispatcher must prioritize between his or her normal dispatching tasks vs. data reviews, the higher priority always goes to dispatching. If dispatcher participation isn’t enabled, data reviews will progress slowly.

Verify Data Flow Steps

Each step in the model-build process needs verification through an extract-transform-load process. Issues can exist with the source data, the GIS data extraction, the formatting and processing of the data for building into the ADMS, and the actual build. Figure 2 represents a model-build process flow. Some ADMS flows may combine some of these steps into one.
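One simple way to verify these steps, sketched below as an assumption rather than any vendor’s actual tooling, is to compare object counts at each stage of the extract-transform-load flow; a large drop between stages points to where records are being lost. The stage names and counts here are hypothetical:

```python
# Illustrative only: verify that record counts survive each model-build stage.
# A real check would query the source GIS, the staging files and the ADMS.

def verify_pipeline(stage_counts, tolerance=0):
    """Flag stage transitions that lose more records than the tolerance allows.

    stage_counts: ordered list of (stage_name, record_count) pairs.
    Returns a list of (from_stage, to_stage, records_lost) for failures.
    """
    losses = []
    for (prev_name, prev_n), (name, n) in zip(stage_counts, stage_counts[1:]):
        lost = prev_n - n
        if lost > tolerance:
            losses.append((prev_name, name, lost))
    return losses

counts = [("gis_extract", 120_000), ("transform", 119_990), ("adms_build", 118_500)]
print(verify_pipeline(counts, tolerance=50))
# → [('transform', 'adms_build', 1490)]
```

A small tolerance accommodates records that are legitimately filtered out, while anything larger is flagged to the stage where the loss occurred.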

Perform Data Reviews Across Multiple Stages

Data reviews done in multiple stages enable users to notice more subtle defects because the systemic, easy-to-spot issues were resolved early. If too many simple issues remain in the later project phases, it is hard to focus on finding the subtle ones, and there will be too much to fix at crunch time. The biggest risks here are:

1. Insufficient time to retest the fixes

2. Uncertain stability due to fixing complex issues too close to the go live date

3. Damaged user confidence

4. Fixable issues reaching production with workarounds

Start With Small Representative Datasets

Small but representative datasets enable multiple quick iterations through the “identify, fix, rebuild and test” cycle. Some defects can be found only with a large model, but it’s important to fix what can be fixed in the small model first. As issues are fixed, the model gets rebuilt to verify that the fix 1) provided the desired effect, 2) did not break something else, and 3) did not expose additional issues. The larger the model, the longer each full cycle takes due to longer rebuild and review durations. For this reason, it is best to start with small but representative datasets, which some call Data Set 0 (DS0) and Data Set 1 (DS1), a methodology developed by Configured Energy Systems in the early 1990s. DS0 is a model containing at least one of every class of object. DS1 is a small, representative model of four to six substation feeders (including the substations themselves, if substations are modeled) used for testing. It is important to remember that resolving a systemic problem for one object of a particular class resolves it for all objects of that class.
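The DS0 requirement that at least one object of every class be present is easy to verify mechanically. The sketch below, using hypothetical class names and a made-up record shape, reports any classes in the full extract that a candidate DS0 fails to cover:

```python
# Illustrative sketch: confirm a candidate DS0 covers every object class
# present in the full extract. Class names here are hypothetical examples.

def missing_classes(full_model, ds0):
    """Return object classes in the full model that DS0 does not cover."""
    return sorted({obj["cls"] for obj in full_model} - {obj["cls"] for obj in ds0})

full_model = [{"cls": c} for c in
              ["Switch", "Fuse", "Transformer", "Recloser", "Capacitor"]]
ds0 = [{"cls": c} for c in ["Switch", "Fuse", "Transformer"]]

print(missing_classes(full_model, ds0))  # → ['Capacitor', 'Recloser']
```

Running a coverage check like this each time the full extract changes guards against new object classes quietly entering the model without ever being exercised in DS0.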

The company’s composition must be considered when selecting datasets. As utilities merge territories, they might have multiple GIS systems. It is important to get data from each of those GIS systems and operating companies. Different GIS systems might have different vendors, extract differently, have different object classes and have different levels of data quality.

Data Quality in Later Stages

Once the system is configured, testing reveals whether data fields in tools, such as the display of current outages, are populating correctly. This review best begins with the small DS1 model. Unlike the earlier DS0 object review, it is necessary to wait until the system is reasonably configured to see how data appears in the as-configured tools. Defects can stem from data requirements that were not identified, data that isn’t mapped as needed, the chosen data not fulfilling the need as hoped, or a failure in the build process making data inaccessible.

After the DS0 review and some review in DS1, it’s time to build the full model in addition to DS1 and test it. A full model reveals issues that a small model cannot: performance, data quantity, one-offs and de-energization or connection problems now are noticeable. The full model with full system configuration and full integration is mandatory for a valid performance test. Because DS0 and DS1 iterations can be done more quickly than full-model iterations, DS1 iterations should come first, but don’t wait too long to work with the full model. Some connectivity or performance issues that would go undetected in DS1 will show up in a full model. Just as with DS0 and DS1, it takes multiple iterations of lengthy full-model builds before testing is complete.

Example of Large Model Problem

At one utility, background objects were being duplicated multiple times. Since background objects do not affect connectivity, the duplication wasn’t impactful enough to be noticed until the full model was built. Once the utility began using the full model, the quantity of background objects consumed so much system memory that the maps could not be displayed. Prior to the full model, the viewer tool loaded slower than expected, but the data still displayed. With the full model, failure-to-load messages appeared. Ordinarily, the amount of data loading would be well within limits, but because the limit was hit immediately upon loading a single search result, a definite data quantity problem was revealed. It took several weeks to fix because more than one fix attempt was required, and much of that time was spent waiting for builds to finish. This was only one of several data issues found in the full model.

Coding Fixes and Workarounds

Temporary workarounds for bad data, while not ideal, are used to get by until a permanent fix is made. Sometimes there is insufficient time to make the permanent fix before the go-live date. Other times, the fix is complex, touches many aspects and adds too much risk right before go-live. There is the risk that even when the problem is believed solved, implementing the fix creates a new showstopper. Fixing one layer of problems can reveal additional problems in the next layer. If continual attention was not given to the model from early on, the consequence is increased pressure to resolve all problems immediately, a compacted schedule, and the need to bear the risk of unresolved issues on the go-live date. Fixing too much, too quickly, too radically can impact stability, user confidence, performance or go-live dates.

When deciding whether to fix or work around, ask:

- Is the workaround reasonable?
- Would fixing require altering a go-live date?
- Is the sponsor’s go-live deadline flexible?
- What is the level of risk involved in the fix (including testing time available)?
- Can the permanent fix be made in a timely manner?

Example of Connectivity Issues

Source data issues at one utility caused its ADMS model to have many connectivity problems near circuit breakers. While best practice would have been to cleanse the data in the source system, in this case a difficult and complex update process made that impractical if the utility was to make its planned go-live date. Instead, a workaround was implemented: jumpers were placed to bypass the problem areas. While the workaround still took time to implement and delayed the go-live, it was completed in a more acceptable time frame and the delay to production was minimized. Once the system went live, the utility planned the permanent fix and implemented it later on the live system.


Perfect data is unattainable, but the higher the data quality, the greater the system’s value. It is important to aim high, but as Voltaire might say, “Don’t let the perfect be the enemy of the good.” If you wait for perfect data, you will never go live. Go live when the data is sufficient to create business value, but don’t stop working to improve. Data quality improvement and maintenance is a never-ending process. Begin data reviews early, make wise use of business users’ limited time, improve data where necessary and justified, and choose the appropriate data model type and size. Do this, and you significantly increase the likelihood of a successful ADMS implementation.

Ross Shaich has more than 18 years’ experience in ADMS implementation and support of large-scale enterprise projects. He has served as subject matter expert, test lead, functional lead, test designer and project manager. He holds a master’s degree in project management and is a certified PMP. Reach him at

