Thermal optimization starts with improving the delivery of supply air to where it is needed.
One common example of a misconception about data center cooling is the belief that adding more cooling capacity will keep the IT equipment cooler and reduce the risk of downtime – which we all know is the worst-case scenario for a data center.
LET’S LOOK AT SOME FACTS.
Fact 1: Cooling capacity does not guarantee airflow delivery
What does that mean? Cooling requirements for a data center are typically determined by the projected heat load from the IT equipment and other heat-generating equipment in the white space. Airflow, on the other hand, is simply assumed to match the needs of the IT equipment: if there is adequate cooling capacity, there will be enough airflow. True enough, however where this begins to unravel is in the delivery of that airflow to where it is needed. The ability of the cooling systems to meet the needs of the space depends on how well the airflow reaches the IT equipment and passes through it to extract heat from the components. If certain areas of the room have much higher density rack heat loads and there is not adequate airflow in those areas, hot spots will arise – leading to the misconception that more cooling is needed. In actual fact, better airflow delivery will resolve the problem and save thousands of dollars in capex for a new and unnecessary cooling system.
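As a rough illustration of the difference between capacity and delivery, the airflow a rack actually needs can be estimated from its heat load and the temperature rise of the air passing through it. The short sketch below is illustrative only; the rack sizes and the 11 K temperature rise are assumptions, not measurements from any particular site.

# Back-of-envelope estimate of the airflow needed to carry away a rack's heat.
# Assumes air with a volumetric heat capacity of roughly 1.2 kJ per cubic metre per kelvin.

def required_airflow_m3s(heat_load_kw, delta_t_k=11.0, vol_heat_capacity=1.2):
    # Airflow (m^3/s) so that air leaves the rack delta_t_k warmer than it entered.
    return heat_load_kw / (vol_heat_capacity * delta_t_k)

for rack_kw in (5, 10, 20):  # hypothetical rack heat loads in kW
    q = required_airflow_m3s(rack_kw)
    print(f"{rack_kw} kW rack needs roughly {q:.2f} m^3/s ({q * 2119:.0f} CFM)")

A 20 kW rack needs roughly four times the airflow of a 5 kW rack, so if the tiles in front of it deliver only an average share of the supply air it will run hot regardless of how much total cooling capacity the room has.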
Fact 2: Placement of cooling and IT equipment is crucial to good cooling operation
Another common misconception is that all cooling units will extract the same amount of heat from the data center regardless of their location and proximity to the IT load. Cooling system operation responds to the return air temperature, which is compared against the unit's set point. Cooling units close to the highest heat load racks will pull in warmer return air and typically operate at a higher percentage of their cooling capacity. At the same time, cooling units further away may receive lower temperature return air and will therefore operate at a lower percentage of capacity. The result is that some cooling units operate at 100% capacity while others operate at 25% or less. We have seen rooms with multiple cooling units in which fewer than half the units were providing any cooling, leaving the high-density areas with hot spots.
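A simple way to see this imbalance is to estimate how much heat each unit is actually removing from its airflow and its return-to-supply temperature difference. The numbers below are hypothetical and are only meant to illustrate the effect.

# Heat removed by a cooling unit is roughly its airflow times the temperature drop
# it achieves (return temperature minus supply temperature). Values are made up.

VOL_HEAT_CAPACITY = 1.2  # kJ per m^3 per K, approximate for air

units = [
    # name, airflow (m^3/s), return temp (C), supply temp (C), rated capacity (kW)
    ("CRAC-1, next to the high-density row", 6.0, 32.0, 18.0, 110),
    ("CRAC-2, in the far corner",            6.0, 21.0, 18.0, 110),
]

for name, flow, t_return, t_supply, rated_kw in units:
    heat_removed_kw = flow * VOL_HEAT_CAPACITY * (t_return - t_supply)
    print(f"{name}: {heat_removed_kw:.0f} kW removed "
          f"({100 * heat_removed_kw / rated_kw:.0f}% of rated capacity)")

In this example the unit next to the hot racks removes close to its full rating, while the unit in the far corner sees return air barely above its supply temperature and contributes almost nothing, even though both units are running.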
Fact 3: Airflow should be closely matched to the requirements of the IT equipment
In the case of a raised floor data center, supply air is channeled under the floor and enters the white space through perforated tiles. The level of airflow through the perforated tiles is determined by several factors, including the pressure differential between the supply plenum and the white space, the location of the tiles and their percentage of open area.
If air is moving through the supply plenum at high volume and high velocity, the static pressure in the plenum drops and can, in many cases, result in a negative flow of air through the perforated tiles – meaning warm air from the room is drawn down into the supply plenum. This is common when perforated tiles are located too close to the cooling system. When this happens, no cooling is provided to the nearby racks.
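One way to picture this is to treat each perforated tile as a simple orifice: the airflow through it scales with the square root of the pressure difference between the plenum and the room, and it reverses when that difference goes negative. The relationship and the pressure values below are simplified assumptions for illustration only.

import math

def tile_airflow_m3s(delta_p_pa, open_area_m2=0.09, discharge_coeff=0.65, air_density=1.2):
    # Approximate flow through a perforated tile modelled as an orifice
    # (0.09 m^2 is roughly a 600 mm tile with 25% open area).
    # Positive delta_p pushes cool air up into the room; negative delta_p
    # pulls warm room air down into the plenum.
    sign = 1 if delta_p_pa >= 0 else -1
    return sign * discharge_coeff * open_area_m2 * math.sqrt(2 * abs(delta_p_pa) / air_density)

for dp in (12, 5, 0, -3):  # hypothetical plenum-to-room pressure differences in pascals
    q = tile_airflow_m3s(dp)
    print(f"delta-p {dp:+} Pa -> {q:+.2f} m^3/s through the tile")

At +12 Pa the tile delivers useful cool air, at 0 Pa it delivers nothing, and at -3 Pa, which is typical directly in front of a cooling unit, it draws warm room air downward instead.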
The placement of cooling units can also result in air pathways crossing in the supply plenum, causing vortices that create dead spots – again resulting in no air passing upward through the perforated tiles.
Introducing additional cooling and airflow into a space can push the volume of supply air to the point where airflow through the perforated tiles actually decreases, causing even more cooling issues.
Fact 4: Thermal issues in the data center do not mean you need more cooling
Thermal issues are very seldom caused by insufficient cooling unless a cooling system has failed. Rather, they are usually caused by poor airflow delivery and poor airflow management. A lack of blanking panels in empty rack spaces, poor placement of perforated tiles, gaps between racks and the placement of IT loads relative to the cooling systems can all cause thermal issues.
Good airflow management practice requires separating the cool supply air from the warm return air as much as possible. Filling all empty spaces in the racks with blanking panels is an extremely simple and inexpensive first step. Without blanking panels, air can recirculate from the exhaust of the servers to the front inlets. With blanking panels in place, warm air cannot easily recirculate to the front of the rack and raise inlet temperatures. Blanking panels also reduce air bypass which, if not controlled, reduces cooling system operating efficiency.
Logic suggests that the highest heat load racks should be placed closest to the cooling unit. Although this helps get the warm exhaust air back to the cooling unit, the exact opposite is true for the supply air – which is the more crucial component. Supply air exits the cooling unit at very high velocity, and Bernoulli's principle states that an increase in the speed of a fluid (in this case, the air) occurs simultaneously with a decrease in static pressure. As a result, the perforated tiles nearest the cooling unit will have the lowest volume of air passing through them.
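To put rough numbers on this, along the under-floor air path the sum of static pressure and dynamic pressure (half the air density times velocity squared) stays roughly constant, so wherever the plenum velocity is high the static pressure available to push air up through the tiles is low. The velocities and the 15 Pa figure below are assumptions chosen only to illustrate the principle.

# Bernoulli along the under-floor air path: static pressure + 0.5 * rho * v^2 ~ constant.

AIR_DENSITY = 1.2     # kg/m^3
STATIC_FAR_PA = 15.0  # assumed static pressure where the plenum air is nearly still

def static_pressure_pa(velocity_ms):
    # Static pressure remaining where the plenum air moves at the given velocity.
    return STATIC_FAR_PA - 0.5 * AIR_DENSITY * velocity_ms ** 2

for label, v in (("far from the cooling unit", 1.0), ("directly in front of the cooling unit", 6.0)):
    print(f"{label}: {v} m/s -> about {static_pressure_pa(v):.1f} Pa of static pressure")

At 6 m/s the dynamic pressure (around 22 Pa) exceeds the assumed static pressure, leaving a negative value under the nearest tiles: exactly the condition for weak or reversed airflow close to the unit.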
Adding a new cooling unit to a data center is a costly and disruptive alternative which, in the vast majority of cases, will not solve the hot spot problem and will, in actual fact, exacerbate existing issues.
The first step should always be to follow good airflow management practices. In our more than 10 years of experience with data centers of all types, taking these steps to achieve thermal optimization has resolved cooling problems and released a substantial amount of cooling capacity. In doing so, customers have been able to add even more IT load with their existing cooling systems.
Learn more about airflow management best practices in our white paper Airflow for Dummies.