
Do you really need another cooling system?


Cooling is one of the least understood and most overlooked aspects of data center operation. There are a lot of misconceptions about cooling – a few are based on some logic, others are downright ridiculous.  

One example is the belief that adding more cooling capacity to the data center will keep the IT equipment cooler and reduce the risk of downtime – which we all know is the worst-case scenario for a data center.  

Let’s look at some facts. 

Fact 1: Cooling systems operate at their best, and most energy-efficiently, when they are working hard.

What does that mean? Cooling systems are designed to remove heat. They operate best when there is a high delta temperature (∆T) between the return air coming back from the IT equipment and the supply air being pushed out into the room. We have seen many cases of a ∆T of less than 3°C, which means the cooling unit is doing very little cooling and operating very inefficiently. Effectively, it is operating as a large fan.

There are a few reasons this occurs, including:

  1. having too much cooling in the data center
  2. poor placement of the cooling units, leaving them unable to pull much heat from the room
  3. a very lightly loaded data center, which in turn generates very little heat 

Ideally, ∆T should be in the range of 7°C to 10°C. Anything less than this means the cooling unit is not operating at an optimum level.
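
As a minimal sketch, the check is simple: measure the return and supply air temperature at each unit and compare the difference against those thresholds. The function and readings below are illustrative, not taken from any particular site.

    # Rough sketch: classify a cooling unit by its delta-T (return minus supply air).
    # Thresholds follow the ranges described above; the readings are made-up examples.

    def delta_t_status(return_temp_c: float, supply_temp_c: float) -> str:
        delta_t = return_temp_c - supply_temp_c
        if delta_t < 3:
            return f"dT = {delta_t:.1f} C: barely cooling, effectively a large fan"
        if delta_t < 7:
            return f"dT = {delta_t:.1f} C: below the ideal 7-10 C range"
        return f"dT = {delta_t:.1f} C: operating in the ideal range"

    print(delta_t_status(24.0, 22.0))  # dT = 2.0 C: barely cooling, effectively a large fan
    print(delta_t_status(32.0, 23.0))  # dT = 9.0 C: operating in the ideal range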

Fact 2: Cooling system location determines the level of cooling performed.

A common misconception is that all cooling units pull the same amount of heat from the data center, regardless of their location. Exhaust air from servers is hot and rises toward the ceiling. Consequently, the cooling units closest to the exhaust air will pull the most air and heat from the room. This means you can have some cooling units operating at 100% capacity while others operate at 25% capacity or less.

This inconsistency in operation can result in hot spots. The units operating at a low level of cooling may not be providing sufficient cool air to some servers – often resulting in the erroneous conclusion that more cooling is needed. 

Fact 3: Airflow should be closely matched to the requirements of the IT equipment.

Another common misconception is that more airflow is better.

In a raised floor data center, that conclusion does not hold. The supply plenum under the raised floor is filled with cool air pushed out by the cooling units at high velocity (which means low static pressure). The level of airflow through the perforated tiles is determined by several factors, but the pressure differential between the supply plenum and the data center space is a key contributing factor.

If there is a high volume of air moving at high velocity in the supply plenum, the static pressure in the plenum drops. In many cases, this results in a negative flow of air through the perforated tiles, meaning warm air from the room is pulled down into the supply plenum. When this happens, effectively no cooling is provided. Excess air in the supply plenum can also create vortices, which produce dead spots where no air passes upwards through the perforated tiles.

The volume of airflow being provided by the cooling units should be aligned with what is required to cool the IT equipment.

A simple calculation that provides a good estimate of the air volume required is to multiply the IT load in kW by 150 cubic feet per minute (CFM). For example, a data center with a heat load of 200 kW would require at least 30,000 CFM from the cooling units. If the volume of air supplied is substantially above that level, there is too much cooling and airflow. As a result, proper cooling is not being provided for the IT equipment and a lot of money is being wasted.
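
As a minimal sketch, that rule of thumb reduces to a one-line calculation; the 150 CFM per kW figure is simply the estimate quoted above.

    # Rule-of-thumb airflow estimate: roughly 150 CFM per kW of IT heat load.

    CFM_PER_KW = 150

    def required_airflow_cfm(it_load_kw: float) -> float:
        """Estimate the total airflow (CFM) the cooling units should supply."""
        return it_load_kw * CFM_PER_KW

    print(required_airflow_cfm(200))  # 30000.0 CFM for a 200 kW heat load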

Fact 4: Running cooling units in “fan-only” mode can cause high inlet temperatures to IT equipment.

When hot spots are encountered, some data center operators conclude that more airflow is required. If there are cooling units in standby mode, they will turn these units on in “fan-only” mode to generate additional airflow to the supply plenum.

In “fan-only” mode, the cooling unit is pulling the warm return air from the room and pushing it into the supply plenum without cooling the air.  The net result of this is that the warm air being pushed into the supply plenum raises the temperature of the air in the plenum and the inlet air temperature of the IT equipment. Because the inlet air temperature is warmer, the exhaust air from the server is warmer. As a result, the cooling units that are performing cooling have to work harder to bring the return air temperature down to the setpoint level. 

The negative impact on energy consumption is twofold.

First, the fans on the cooling units operating in “fan-only” mode run needlessly, consuming energy as they push warm air back into the room. Depending on the size of the cooling unit, this operation can result in thousands of kWh of needless energy consumption. Second, the units that are performing cooling have to work harder to cool the warmer return air. This will account for thousands of wasted kWh per year.
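
To put a rough number on the first part, a back-of-the-envelope estimate looks like the sketch below; the 5 kW fan power is an assumed example figure, not a measured value.

    # Back-of-the-envelope estimate of energy wasted by a unit left in
    # "fan-only" mode around the clock. The fan power is an assumed example value.

    fan_power_kw = 5.0      # assumed fan power of one cooling unit
    hours_per_year = 8760   # continuous operation

    wasted_kwh = fan_power_kw * hours_per_year
    print(f"{wasted_kwh:,.0f} kWh per year")  # 43,800 kWh per year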

Running cooling units in “fan-only” mode is not effective.

Good airflow management practices would resolve the issues and actually result in a reduction of kWh usage per year rather than an increase.

Fact 5: Hot spots in the data center do not mean you need more cooling.

Hot spots in the data center are usually caused by poor airflow management, which can include having too much airflow. Other common causes are a lack of blanking panels in empty rack spaces, poor placement of perforated tiles, spaces between the racks, and the placement of IT loads relative to the cooling units.

The primary goal of following proper airflow management practices is to separate the cool supply air from the warm return air as much as possible. 

Filling all empty spaces in the rack inlets with blanking panels is an extremely simple and inexpensive first step. Without blanking panels, air can recirculate from the server exhausts to the front inlets. The causes of this recirculation include poor placement of perforated tiles, poor cable management, and doors on the back of the rack preventing the warm air from escaping. With blanking panels in place, the warm air cannot easily circulate to the front of the rack and cause high inlet temperatures.

Cooling units in a data center can only provide a certain amount of air. By placing perforated tiles in the wrong places or using high airflow tiles, the supply air can be diverted from the cold aisle to other spaces. 

A few examples of this include:

  1. Placing perforated tiles in the hot aisle to make it more comfortable for the short time period technicians are in that space. 
  2. Having a continuous line of perforated tiles in the cold aisle, when in fact the racks with equipment are very limited or the heat load in each rack is low.
  3. Using high airflow perforated tiles extensively to provide better cooling.

High volume perforated tiles are not intended to be used everywhere in the data center. They should be reserved for high-density racks, i.e. 6 kW or more. Even then, grate tiles are not a good option in most cases, as the velocity of the air exiting the tile is so high that the cool air bypasses the server inlets on its way to the ceiling.

Moving and rearranging racks can result in spaces between racks. These spaces provide an easy path for warm exhaust air to recirculate to the front of the rack, creating the perception of a hot spot. These spaces need to be closed off as much as possible.

Logic implies that the highest heat load racks should be placed closest to the cooling unit. Although this is beneficial to get the warm exhaust air back to the cooling unit effectively, the exact opposite is the case for the supply air – which is the most crucial component. As mentioned before, the supply air exits the cooling unit at a very high velocity.

Remember…

Bernoulli’s principle states that an increase in the speed of a fluid (in this case, the air) occurs simultaneously with a decrease in static pressure. Therefore, the perforated tiles nearest the cooling unit supply will have the lowest volume of air passing through them, again often leading to the incorrect perception that there are hot spots. If high heat density racks are placed directly in front of the cooling unit, they should be at least 2.5 meters (8 feet) away. At that distance, the velocity of the air begins to slow down and the static pressure rises, enabling more airflow through the perforated tiles.
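
As a rough illustration of the principle, the dynamic pressure of the airstream is ½ρv², so it grows with the square of velocity and eats into the static pressure available to push air up through nearby tiles. The velocities and air density below are assumed example values, not measurements.

    # Illustrative sketch of Bernoulli's principle in the supply plenum:
    # dynamic pressure q = 0.5 * rho * v^2, so faster-moving air near the
    # cooling unit leaves less static pressure to drive air through the tiles.
    # The density and velocities are assumed example values.

    AIR_DENSITY = 1.2  # kg/m^3, typical for cool supply air

    def dynamic_pressure_pa(velocity_m_s: float) -> float:
        return 0.5 * AIR_DENSITY * velocity_m_s ** 2

    for label, v in [("near the unit", 10.0), ("2.5 m downstream", 4.0)]:
        print(f"{label}: {dynamic_pressure_pa(v):.0f} Pa of dynamic pressure")
    # near the unit: 60 Pa, 2.5 m downstream: 10 Pa -> more static pressure
    # is left downstream to push air up through the perforated tiles.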

What does all of this tell us?

Adding a new cooling unit to a data center is a costly and disruptive undertaking. In the vast majority of cases, it will not solve the problem and will, in fact, exacerbate existing issues.

The first step should always be to follow good airflow management practices. In our more than 10 years of experience with data centers of all types, taking steps to optimize airflow has resolved the perceived problems and released a substantial amount of cooling capacity, allowing customers to add even more IT load with their existing cooling systems. Learn more about airflow management best practices in our white paper Airflow for Dummies.

If you are looking to improve cooling efficiency and effectiveness in your data center, fill out our High-level Assessment Form to begin the process. 
