When it comes to data center technology, gaining visibility into power consumption is a critical part of the business case for any provider, given that power can be the biggest expense.
Providing power, delivering cooling and maintaining the right temperature for rack equipment to run optimally are all parts of the equation, and misjudging capacity planning for energy usage results in wasted money, wasted energy and lost opportunities for operational optimization. In the worst-case scenario, inaccurate measurement can result in downtime that costs organizations millions in hard dollars as well as brand equity.
Accuracy is often treated as a box to check: it enables usage-based billing and minimizes revenue leakage. But the importance of measurement quality grows dramatically in the aggregate. In a 400,000-square-foot data center, even a small measurement error across all of the equipment housed there translates into real money and a significant impact on the operator's bottom line.
But there’s another reason to care about ensuring the quality of the metrics and tracking consumption so closely: With the right data collected from the right points within the facility at a granular level, it’s possible to gain an end-to-end view of what’s happening in the facility. That makes for better efficiency assessments and energy management in general, but it also becomes possible to identify specific problem areas and boost optimization.
Knowing exact amperage helps prevent tripped breakers; tracking voltage lets operators spot variations in power before they cause trouble; monitoring power factor can prevent inflated power bills; and tracking wattage provides visibility into the heat being generated. Operators can also track kilowatt-hours to attribute energy consumption to individual end users and groups.
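How these metrics relate can be sketched in a few lines. This is an illustrative example only, not taken from any specific metering product; the voltage, current and power factor figures are assumptions chosen for the sake of the arithmetic.

```python
# Hypothetical sketch of the basic electrical relationships described above.
# All function names and numbers are illustrative assumptions.

def real_power_watts(volts: float, amps: float, power_factor: float) -> float:
    """Real power for a single-phase circuit: W = V * A * PF.

    Real power is what generates heat in the rack; apparent power (V * A)
    is what the utility infrastructure must be sized for, which is why a
    poor power factor can inflate power bills.
    """
    return volts * amps * power_factor

def energy_kwh(watts: float, hours: float) -> float:
    """Energy over time, the quantity used for per-tenant billing."""
    return watts * hours / 1000.0

# Example: a rack circuit at 208 V drawing 24 A with a 0.9 power factor.
w = real_power_watts(208, 24, 0.9)   # 4492.8 W of real power
apparent_va = 208 * 24               # 4992 VA of apparent power
day_kwh = energy_kwh(w, 24)          # roughly 107.8 kWh in one day
```

The gap between the 4,992 VA apparent figure and the 4,492.8 W real figure is exactly the power-factor penalty the article mentions.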
For instance, when it comes to wattage, accurate measurement allows users to identify when an anomalous amount of power is going to a rack. If usage is off compared to the average, then it’s likely that there’s something there that needs remediation. Such visibility allows for root cause analysis of issues as well, minimizing maintenance and troubleshooting costs for operators.
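The kind of comparison described above can be expressed as a minimal sketch. The rack names, readings and 25 percent tolerance are assumptions for illustration; a production system would tune the threshold to its own fleet.

```python
# Minimal sketch of flagging anomalous rack wattage: a rack is flagged
# when its reading deviates from the fleet average by more than a chosen
# fraction. All identifiers and thresholds here are illustrative.

def flag_anomalous_racks(readings: dict[str, float],
                         tolerance: float = 0.25) -> list[str]:
    """Return rack IDs whose wattage deviates from the mean by > tolerance."""
    avg = sum(readings.values()) / len(readings)
    return [rack for rack, watts in readings.items()
            if abs(watts - avg) / avg > tolerance]

readings = {"rack-01": 4100, "rack-02": 4300,
            "rack-03": 7200, "rack-04": 4000}
print(flag_anomalous_racks(readings))  # ['rack-03']
```

Here rack-03 draws roughly 47 percent more than the fleet average, so it is the one surfaced for root cause analysis.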
When it comes to capacity planning, an end-to-end view also enables data center operators and their customers to understand the impact that new applications have on power consumption once they get to scale. In turn, operators and customers can use that information to load-balance applications across racks, and ultimately achieve economies of scale.
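One simple way to act on that information, sketched below under assumed rack capacities and measured draws, is to place new load on whichever rack has the most remaining power headroom. The names and figures are hypothetical.

```python
# Illustrative capacity-planning sketch: choose a rack for new load by
# remaining power headroom (capacity minus measured draw). All capacities
# and readings are assumed values, not from a real facility.

def best_rack(capacity_w: dict[str, float],
              measured_w: dict[str, float]) -> str:
    """Return the rack ID with the largest remaining headroom in watts."""
    return max(capacity_w, key=lambda r: capacity_w[r] - measured_w[r])

capacity = {"rack-01": 8000, "rack-02": 8000, "rack-03": 8000}
measured = {"rack-01": 6500, "rack-02": 4200, "rack-03": 7100}
print(best_rack(capacity, measured))  # rack-02, with 3800 W of headroom
```

Accurate measurement is what makes this decision trustworthy: if the measured draws are off, the "best" rack may in fact be the one closest to tripping a breaker.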
When deploying a system, one must also take into account the best way to gather and integrate the data coming from the metering hardware. It is important to assess a solution's ease of integration as well as its scalability, and to ensure it is future-proofed so that additional costs and delays are not incurred down the road.
A system flexible enough to be integrated correctly the first time, and adaptable enough to work with new software as it arrives, is an important consideration. Support for multiple communication protocols, onboard configuration, logging and alarming is essential to meeting these requirements, and these features need to map to the business requirements of optimization, speed of information and speed of deployment.
The bottom line? Analytics and reporting can only be as good as the information these systems have to work with. The industry has a mantra to optimize the data center and maximize uptime and energy efficiency, but all too often “good enough” has been the standard for gathering accurate and detailed data.
But with an accurate measurement and monitoring system for amperage, voltage, power factor and wattage that is easy to deploy and integrate, operators can make the most of their analytics opportunities to build a smarter, more efficient data center that minimizes power waste and boosts customer satisfaction.