How can data center cooling optimization projects be justified?
Legacy data centers are typically not very energy-efficient after years of changes, add-ons, and expansions. In many cases, the approach is to “just keep it running” until a new facility can be built. That approach results in unnecessary operating expense that can add up to hundreds of thousands of dollars over the years.
Improving the energy efficiency of cooling in a legacy data center will result in:
- Significant direct energy savings, typically a 20 to 35% reduction in energy cost, with payback within 3 years;
- Deferral of capital spend on new cooling units (estimated installed cost of $2,300 per kW of cooling);
- Deferral of site expansion (estimated cost of $1,000 per square foot);
- Reduction in greenhouse gases due to reduced energy use;
- A reduced carbon footprint that can offset carbon taxes.

HOW IS ENERGY CONSUMED IN A DATA CENTER?
This chart shows a typical breakdown of energy demand. Energy demand values vary widely from one data center to another, depending on the type and age of the data center, how well it has been maintained, the age of the cooling equipment, and what efforts have been made to improve operational efficiency. The IT equipment consumes between 50 and 55% of the total. Power-related equipment (PDUs, UPS, and switchgear) accounts for 9%, and lighting comes in at 1%. One of the biggest demands for energy is cooling, at 38%, and in a legacy data center this is often much higher.
Cooling can account for more energy use than the IT equipment itself. When that happens, the power usage effectiveness metric (PUE), the ratio of total facility energy to IT equipment energy, climbs into the 2.0+ range, which isn’t good! Today’s data centers aim to have an average PUE as close to 1.0 as possible. There is a lot that can be done to not only reduce the energy demand, but also avoid a large capital spend on new cooling.
In the example below, the data center is a 500 kW site. The IT power draw is 275 kW, cooling 200 kW, and the ancillary power 25 kW, which includes lights, power distribution losses, security systems, etc. This example has a PUE of 500/275, or about 1.8, meaning there is room for improvement.
By optimizing cooling, an energy reduction of 30% could be realized, cutting the cooling draw from 200 kW to 140 kW. That brings the PUE down to about 1.6, and the 60 kW saved, running around the clock, works out to a demand saving of nearly $75,000 per year at typical commercial electricity rates.
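As a quick sanity check, here is that arithmetic in a short Python sketch. The electricity rate is our assumption, chosen only to reproduce the roughly $75,000 figure:

```python
# Sanity check of the 500 kW example. The electricity rate is an
# assumption used only to reproduce the ~$75,000/year savings figure.
HOURS_PER_YEAR = 8760
RATE_PER_KWH = 0.14  # assumed blended electricity rate, $/kWh

it_kw, cooling_kw, ancillary_kw = 275.0, 200.0, 25.0

def pue(it: float, cooling: float, ancillary: float) -> float:
    """PUE = total facility power / IT power."""
    return (it + cooling + ancillary) / it

pue_before = pue(it_kw, cooling_kw, ancillary_kw)        # ~1.82
cooling_optimized = cooling_kw * (1 - 0.30)              # 30% cooling reduction
pue_after = pue(it_kw, cooling_optimized, ancillary_kw)  # ~1.60

saved_kw = cooling_kw - cooling_optimized                  # 60 kW
annual_savings = saved_kw * HOURS_PER_YEAR * RATE_PER_KWH  # ~$73,600
print(f"PUE: {pue_before:.2f} -> {pue_after:.2f}, saves ~${annual_savings:,.0f}/year")
```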
HOW DO WE ACHIEVE 30% ENERGY SAVINGS?
The first step is to determine how the site is currently operating, with a focus on cooling, and identify where there are issues and how the site can be optimized. During this diagnostic process, measurements are taken and data collected to understand the state of the data center operation. This requires reviewing air flow management, equipment layout, IT load distribution, cooling layout, distribution, and operation, as well as identifying deficiencies in operation that could affect cooling. Energy metering is also applied to the cooling systems to establish baseline values.
To create a cooling profile, a number of data points are established; a short sketch of how such a profile might be captured follows this list. Some of these include:
- Cooling capacity relative to IT load
- Air flow measurements to determine:
  - Air flow provided relative to what is required for adequate cooling
  - Air flow reaching the IT equipment to optimally cool the IT load
- Cooling system return air and supply air temperature differentials
- Inlet temperature profile for IT equipment
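As an illustration only, the data points above could be gathered into a simple record per cooling zone. The field names here are ours, not an industry standard:

```python
from dataclasses import dataclass, field

@dataclass
class CoolingProfile:
    """Illustrative container for the diagnostic data points above."""
    it_load_kw: float                # measured IT load
    cooling_capacity_kw: float       # cooling capacity relative to that load
    supplied_airflow_cfm: float      # air flow provided by the cooling units
    required_airflow_cfm: float      # air flow required for adequate cooling
    airflow_at_it_cfm: float         # air flow actually reaching IT intakes
    return_air_temp_c: float         # cooling system return air temperature
    supply_air_temp_c: float         # cooling system supply air temperature
    it_inlet_temps_c: list[float] = field(default_factory=list)  # inlet profile

    @property
    def delta_t_c(self) -> float:
        """Return/supply air temperature differential."""
        return self.return_air_temp_c - self.supply_air_temp_c
```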
ASSESSING COOLING OPERATION
Two metrics that SCTi has developed, which offer detailed insight into the cooling operation, are Cooling Efficiency and Cooling Effectiveness.
Cooling efficiency measures how much power is required to generate 1 kW of cooling capacity. The lower this number, the more efficiently the cooling systems are operating.
Cooling effectiveness highlights how much cooling capacity is used to cool 1 kW of IT load. If more than 1 kW of cooling is required per 1 kW of IT load, that is a clear indication of the need for improvement.
These two metrics are established using energy metering at the baseline stage, and similar metering at the end of the optimization project.
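As a minimal sketch, assuming the metered values are available, the two ratios could be computed as below. SCTi’s exact formulas are not spelled out here, so this is our reading of the definitions above, with hypothetical readings:

```python
def cooling_efficiency(cooling_power_kw: float, cooling_capacity_kw: float) -> float:
    """Electrical power consumed per kW of cooling produced; lower is better."""
    return cooling_power_kw / cooling_capacity_kw

def cooling_effectiveness(cooling_capacity_used_kw: float, it_load_kw: float) -> float:
    """Cooling capacity used per kW of IT load; values above 1.0 flag waste."""
    return cooling_capacity_used_kw / it_load_kw

# Hypothetical baseline readings from the metering stage:
print(cooling_efficiency(200.0, 420.0))     # ~0.48 kW of power per kW of cooling
print(cooling_effectiveness(420.0, 275.0))  # ~1.53 kW of cooling per kW of IT
```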
Based on the results of the diagnostic audit, recommendations are made for improvements, including how significant the energy reductions will be, what the changes will cost, how they will impact PUE, and what the expected payback is.
ENERGY CONSERVATION MEASURES
Once the operating conditions of the site are known, various energy conservation measures (ECMs) can be applied to rectify the issues. As noted in previous blogs and webinars, SCTi takes a holistic approach to improving data center cooling. In this approach, we have categorized the ECMs into 8 groups, including Air Flow Management, Cooling Technology Upgrades, Network Sequencing, etc.
Details on how we apply these ECMs are outlined in our webinar “5 ECMs to Optimize Your Data Center Cooling”.
ADDITIONAL BENEFITS OF APPLYING ENERGY CONSERVATION MEASURES
The direct benefit of implementing the ECMs is a significant reduction in energy costs. But there are many added benefits, including:
CAPEX DEFERRAL FOR NEW COOLING:
By optimizing air flow and the operation of the cooling systems, overall cooling capacity can be increased by 25% or more, meaning more IT load can be accommodated. The improved air flow allows return air temperatures to be higher, so the cooling units work less and run more energy-efficiently. Increasing a 70 kW cooling unit’s capacity by 25% gives it 87.5 kW of cooling capacity, an additional 17.5 kW that can be used for more IT load.
For a single cooling unit that doesn’t seem like much, but in the data center example above it would not be unusual to have 6 cooling units (70 kW each) operating in that space. With the 25% capacity increase per unit, improving air flow yields 105 kW of additional cooling capacity.
A new 70 kW cooling unit costs about $85,000, and installation roughly another $80,000, for a total of $165,000, which works out to about $2,300 per kW of added cooling. Without optimization, buying new units to supply that 105 kW would mean over $240,000 of capital spend that, in reality, is not needed.
On top of that, rather than reduced energy costs there would be higher energy costs from running the new cooling unit.
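Reproducing that arithmetic as a quick check, using the unit count and costs given in the example:

```python
# Quick check of the CAPEX-deferral arithmetic above.
UNIT_CAPACITY_KW = 70.0
UPLIFT = 0.25            # 25% capacity gain from air flow optimization
UNITS = 6
UNIT_COST = 85_000       # purchase price of a new 70 kW unit
INSTALL_COST = 80_000    # installation cost

extra_per_unit_kw = UNIT_CAPACITY_KW * UPLIFT                # 17.5 kW
extra_total_kw = extra_per_unit_kw * UNITS                   # 105 kW
cost_per_kw = (UNIT_COST + INSTALL_COST) / UNIT_CAPACITY_KW  # ~$2,357/kW
avoided_capex = extra_total_kw * cost_per_kw                 # ~$247,500

print(f"Added capacity: {extra_total_kw:.0f} kW")
print(f"Avoided capital spend: ~${avoided_capex:,.0f}")
```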
IMPACT ON IT EQUIPMENT:
Now, take this a step further. Poor air flow management may cause IT equipment to operate at too high a temperature. The typical response is to lower the supply air temperature, which means the cooling units operate with higher energy consumption, or to add another cooling unit, which is costly as shown above.
In a site with optimized air flow and cooling, the inlet temperatures for the IT equipment are much more stable and consistent throughout the site. This means supply air temperatures can be raised, resulting in significant energy savings from the cooling units. It also means the equipment density in racks can be increased, requiring fewer racks and less rack space, possibly to the point of avoiding a site expansion.
RELEASE STRANDED IT CAPACITY:
In the example, the 105 kW of additional cooling capacity releases stranded IT capacity of over 100 kW, an increase of 36% over the 275 kW baseline. If existing power and space are available, the added IT load can be achieved by increasing the density of existing racks or by adding more racks, an option that didn’t exist before optimization.
Had the air flow and cooling not been optimized, it might erroneously be concluded that no more IT equipment could be added to the site without incurring the cost of a new cooling unit or an expansion of the site footprint.
The cost and disruption of expanding an existing site are enormous. Estimating the space for a single rack, including common areas, aisles, etc., works out to roughly 42 sq ft per rack, or a total of about 1,000 sq ft for 25 new racks. Cost estimates for a data center building range from about $700 to over $1,200 per sq ft. Using a proxy of $1,000 per sq ft, the expansion would cost $1,000,000 for the base building alone, not including power infrastructure, racks, generator, additional cooling units, etc.
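And the expansion estimate as the same kind of quick check (the text rounds 1,050 sq ft down to 1,000):

```python
# Quick check of the site-expansion estimate above.
SQFT_PER_RACK = 42
NEW_RACKS = 25
COST_PER_SQFT = 1_000   # proxy within the quoted $700-$1,200/sq ft range

expansion_sqft = SQFT_PER_RACK * NEW_RACKS      # 1,050 sq ft (~1,000 rounded)
base_building = expansion_sqft * COST_PER_SQFT  # ~$1,050,000 base building

# Excludes power infrastructure, racks, generator, and additional cooling.
print(f"{expansion_sqft} sq ft -> ~${base_building:,.0f} base building cost")
```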
AND WHAT WOULD BE THE PAYBACK FOR THIS CAPITAL OUTLAY?
Optimizing an existing data center is straightforward and can be accomplished with no disruption to the operation. The benefits of energy savings, improved thermal conditions, and the ability to add more IT load cost-effectively are realized immediately, not 18 or 24 months down the road. Payback is typically within 3 years, rather than 15 to 20 years for a new build.
COOLING OPTIMIZATION IS GOOD BUSINESS
From a strategic and economic perspective, data center cooling optimization makes a lot of sense: large capital spend on new cooling equipment and site expansion can be deferred or even eliminated; reducing the energy consumption of cooling means more power is available to support the revenue-generating portion of your business, the IT equipment; and the cost of bringing added power infrastructure into your data center can be deferred.
Assessing the opportunity to achieve these benefits in your data center is a straightforward process. If you’re interested in learning more about energy savings achieved through cooling optimization and the added benefits, take a look at our case studies.