Big Data Jolts Utilities – Why Harnessing Big Data Will Help Companies Surge

by Werner Hopf, Dolphin

Big data is everywhere, and utilities are creating more of it every minute. Delivering power is all about efficiency, which is why the energy industry is among those that have fully embraced the idea that big data can unlock insights into how energy is distributed and consumed.

Applying analytics to big data is important, but in the rush to parse and dissect mountains of information, many organizations fail to address the big problem created by big data: There is simply too much of it.

Too much data is a drain on many important resources and necessitates a data volume management strategy to ensure high system performance.

Data Volume Management

Businesses without a data volume management plan typically see three problems:

1. The strain that existing processes put on the information technology (IT) budget. Data grows quickly; it is created by every aspect of the operation, all the time. As volumes mount, processing speeds grind to a halt, and users grow frustrated with how long it takes to search for data and run standard reports.

2. The operational complexity that comes with innovative, data-intensive initiatives such as smart grid programs, which companies must learn to manage.

3. The time it takes for IT administrators to complete backups and restore instances. As data grows, systems get bigger and require more maintenance. Energy is a 24/7 business, and every minute a system is locked for maintenance directly affects day-to-day operations.

These issues become more problematic as organizations look to migrate to powerful in-memory systems such as SAP HANA (high-performance analytic appliance). In-memory systems can process far more data, far faster, and quickly return new analytics. But without a data volume management strategy, the cost of an in-memory system can grow so large that it becomes difficult for businesses to see a return on investment in a reasonable time. Energy providers and utilities can, however, use a practical road map to evaluate their data strategies, whether they plan to continue on legacy systems or invest in platforms such as SAP HANA.

Determining Data’s Value

When implementing a data strategy, remember that not all data is created equal: some yields high business value, and some yields little. It's important that the cost of storing data be commensurate with the value it provides to the operation.

Mapping data to short-term and long-term business goals and legal requirements will clarify which data sets carry high value and which carry low value. Such an assessment also will determine where data originates, how it grows, how frequently it is used and the interrelationships it has with other types of information. The result is a blueprint that distinguishes the most important data from static, less valuable information.
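
As a minimal sketch of what such a blueprint could look like in code, the following Python fragment classifies hypothetical data sets by access frequency, age and retention requirements. Every attribute name, threshold and example record here is an illustrative assumption, not part of any SAP product or a real utility's rules.

```python
# Illustrative data-value blueprint: attribute names, thresholds and
# example records are hypothetical assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    accesses_per_month: int   # how often users touch this data
    age_months: int           # time since the records were created
    legal_hold: bool          # must be retained for compliance

def business_value(ds: DataSet) -> str:
    """Classify a data set as high or low business value."""
    if ds.accesses_per_month >= 100 and ds.age_months < 24:
        return "high"   # current, frequently used operational data
    if ds.legal_hold:
        return "low"    # kept mainly to satisfy retention rules
    return "high" if ds.accesses_per_month >= 10 else "low"

for ds in [DataSet("meter_reads_2015", 500, 3, False),
           DataSet("billing_docs_2008", 2, 84, True)]:
    print(ds.name, "->", business_value(ds))
```

In practice, the classification rules fall out of the assessment itself and will differ for every organization.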

Storage Architecture Matters

Once the data blueprint is defined, a data storage architecture can move data from online to nearline and then to archive storage as it matures and decreases in value for the business. Data deemed to have lower value, such as data that is infrequently accessed or kept only for compliance purposes, does not need to be stored in the online system. This data can be relocated to a lower-cost storage tier using archiving. Users can easily search for and retrieve data from the archive through the enterprise resource planning (ERP) system when it is needed. Eventually, when data reaches its end of life and has met its legal and compliance purposes, it can be purged.

More valuable, static data can be moved to nearline storage. Nearline storage allows for high-speed access to information while keeping the total cost of ownership in line with budget projections. Only the most current, valuable data needs to remain in the live ERP system.
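
Under the same illustrative assumptions as the blueprint sketch above, the tiering policy just described might reduce to a routing rule like the following; the age cutoffs and retention period are hypothetical, and the online/nearline/archive/purge progression is the only part taken from the text.

```python
# Hypothetical tiering policy for the online / nearline / archive model
# described above; age and value cutoffs are illustrative assumptions.
def storage_tier(value: str, age_months: int, retention_months: int) -> str:
    """Route a data set to a storage tier as it ages and loses value."""
    if age_months > retention_months:
        return "purge"      # end of life: legal retention satisfied
    if value == "high" and age_months < 24:
        return "online"     # current, valuable data stays in the live ERP
    if value == "high":
        return "nearline"   # static but valuable: fast reads, lower cost
    return "archive"        # low-value, compliance-only data

print(storage_tier("high", 6, 120))    # -> online
print(storage_tier("high", 36, 120))   # -> nearline
print(storage_tier("low", 36, 120))    # -> archive
print(storage_tier("low", 130, 120))   # -> purge
```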

Case Study: Benefits of Archiving

This multitiered data storage model was recently used by one of the largest electric utilities in the U.S. to improve the return on its technology investment as it migrated to next-generation SAP ECC and BW systems on HANA. The utility serves nearly 14 million people across 50,000 square miles of territory.

The utility was intrigued by the promise of SAP HANA to offer faster reporting, faster data loading times, and reduced maintenance and development costs compared with its existing SAP systems. The goal was to gain analytical capabilities it could not implement on its existing platform; however, the company was concerned about keeping the system's total cost of ownership in check. Specifically, the cost of HANA's high-performance, pay-as-you-grow in-memory storage meant the company had to develop a plan to control its database growth. The utility partnered with Dolphin to architect a solution that included data archiving and PBS Software's nearline storage.

The result was a 50 percent reduction in database growth and the ability to seamlessly execute queries that retrieve information from both HANA and nearline storage as appropriate. The calculated time to return on investment dropped from more than 15 years to two and a half years.
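
The arithmetic behind that improvement is a simple payback calculation. The utility's actual costs are not public, so every figure in this sketch is a hypothetical stand-in, chosen only to show how controlling storage spend shortens the time to return on investment.

```python
# Simple payback arithmetic with hypothetical figures; the utility's real
# costs are not public, so every number below is an illustrative assumption.
def payback_years(upfront_cost: float, net_annual_benefit: float) -> float:
    """Years until cumulative net benefit covers the upfront investment."""
    if net_annual_benefit <= 0:
        return float("inf")   # costs outrun benefits: no payback
    return upfront_cost / net_annual_benefit

UPFRONT = 10_000_000.0  # hypothetical migration cost

# Without archiving: pay-as-you-grow storage consumes most of the benefit.
print(round(payback_years(UPFRONT, 2_000_000 - 1_350_000), 1))  # 15.4 years

# With archiving: database growth halved, storage spend largely avoided.
print(round(payback_years(UPFRONT, 4_200_000 - 200_000), 1))    # 2.5 years
```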

Continual Benchmarking

Once a data volume management strategy is in place, continual benchmarking ensures that data growth is managed, the system performs as expected, and IT costs remain low. Too frequently, organizations in and out of the utility industry implement data volume management strategies but fail to implement a process to monitor and manage data growth. The initial success of mapping, filtering and sorting the existing information, and of watching processing speeds jump as terabytes are freed up, creates a sense of accomplishment; however, strategies must be regularly evaluated, updated and improved as business tactics change and new analytics are introduced.

Benchmarks for data volume management should support the needs of a wide range of users, from the IT manager to the chief financial officer and CEO. High-level executives must understand the impact of big data on the business now and in the future so they can formulate effective business plans.

The IT team needs metrics that help it keep data volume growth under control, adjust budgets and take corrective action before problems arise. This requirement for continuous monitoring underscores the need for a real-time information life cycle management dashboard that benchmarks data volume management initiatives against internal goals for processing speeds and reporting times, and against the performance of other businesses. With this information, companies can understand the effects of big data and make informed business cases for continued investment in new processes and technology.
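
As a rough illustration of what one such benchmark check might look like, the sketch below flags months in which database growth exceeds an internal target. The metric names, threshold and sample readings are all assumptions; a real dashboard would pull these figures from system monitoring.

```python
# Illustrative data-growth benchmark check; metric names, threshold and
# sample readings are hypothetical assumptions, not a real dashboard API.
def check_growth(samples_gb: list, max_monthly_growth_pct: float) -> list:
    """Flag months where database growth exceeds the internal benchmark."""
    alerts = []
    for prev, curr in zip(samples_gb, samples_gb[1:]):
        growth_pct = (curr - prev) / prev * 100
        if growth_pct > max_monthly_growth_pct:
            alerts.append(f"{curr:.0f} GB: {growth_pct:.1f}% growth "
                          f"exceeds {max_monthly_growth_pct}% benchmark")
    return alerts

# Monthly database sizes in GB (hypothetical readings).
readings = [4000, 4080, 4160, 4600, 4690]
for alert in check_growth(readings, max_monthly_growth_pct=5.0):
    print(alert)   # only the 4160 -> 4600 jump (10.6%) is flagged
```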

Make Archiving a Priority

More data is being created today than at any other time, and the cost of storing this information can outpace IT budgets and cause performance bottlenecks. Advancements in enterprise computing technology, with SAP HANA leading the pack, represent a leap in how fast data can be processed and analyzed, but this new technology does not eliminate the need to properly manage data; it only increases it. All companies, even those that are not ready to migrate to a next-generation platform, can benefit from a data volume management plan that makes peak performance and lower IT costs priorities.

Werner Hopf, Ph.D., is CEO of Dolphin Enterprise Solutions Corp., www.dolphin-corp.com. He has more than 20 years of experience in the information technology industry, including 15 focused on SAP technologies. Hopf is a well-known expert in data volume management and data archiving strategies and solutions.
