Our most recent webinar focused on the application of 5 energy conservation measures (ECMs) to optimize data center cooling. In the colocation case study, we discussed the application of these ECMs to the data center and how each one contributed to optimizing cooling. To guide our interactive discussion, we asked our audience to list their top data center cooling questions, and today we’re sharing those answers with you.
WHY OPTIMIZE AIRFLOW?
In a data center, air carries away the heat generated by the electronic components in the IT equipment; this is what cools the equipment. All IT equipment has operating specifications for temperature and humidity to ensure proper operation. The intent of “optimizing airflow” is to ensure adequate airflow is delivered to maintain consistent thermal conditions, so that the IT equipment operates within the appropriate manufacturers’ specifications.
A common misconception is that providing lots of cooling and airflow makes the IT equipment less likely to fail due to a thermal event that pushes conditions outside the manufacturer’s specifications. In actual fact, providing too much airflow by operating more cooling systems than required to meet the cooling needs of the IT equipment often results in less airflow being delivered to the IT equipment. As well, the energy cost of providing the excess cooling is unnecessarily high.
On the other hand, inadequate airflow can cause equipment failure due to overheating. Supplying airflow at too low a temperature requires cooling systems to work excessively, resulting in high energy bills and more wear and tear on the cooling equipment.
Optimizing airflow means following good airflow management practices, knowing what the IT heat load is, and knowing how much airflow should be delivered. Doing so improves both IT equipment operation and the efficiency of the cooling system. There are a number of benefits to optimizing airflow:
- Cooling systems will operate more energy efficiently
- There is less risk of downtime due to thermal events
- The cooling capacity of the data center can be maximized, enabling additional IT load to be added without incurring the cost of additional, very expensive cooling units.
HOW DO YOU REDUCE OR REMOVE HOT SPOTS?
Hot spots occur for a number of reasons. However, it is very seldom, if ever, due to insufficient cooling capacity. Ironically, most data centers encounter hot spots due to too much cooling and poor airflow management practices.
Hot spots are defined as air temperatures in the cold aisle close to or above the ASHRAE recommended maximum inlet temperature of 27°C. IT equipment receiving inlet air at or above this temperature is prone to failure or shutdown due to internal electronic components overheating. Poor airflow management practices can allow the hot exhaust air to recirculate to the inlet side of the rack, diluting the cooler supply air and raising the inlet air temperature too high to provide adequate cooling.
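The threshold check above can be sketched in a few lines of Python. This is a minimal illustration, assuming rack inlet temperatures are already being collected from sensors; the 27°C limit is the ASHRAE recommended maximum mentioned above, while the 2°C warning margin is an assumed value for illustration.

```python
# Flag racks whose cold-aisle inlet temperature approaches or exceeds
# the ASHRAE recommended maximum of 27 degrees C.
ASHRAE_MAX_INLET_C = 27.0
WARNING_MARGIN_C = 2.0  # assumed margin for "at risk" readings

def classify_inlet(temp_c: float) -> str:
    """Classify a single rack inlet temperature reading."""
    if temp_c >= ASHRAE_MAX_INLET_C:
        return "hot spot"
    if temp_c >= ASHRAE_MAX_INLET_C - WARNING_MARGIN_C:
        return "at risk"
    return "ok"

# Hypothetical sensor readings (rack name -> inlet temperature in C):
readings = {"rack-01": 22.5, "rack-02": 26.0, "rack-03": 28.3}
for rack, temp in readings.items():
    print(rack, classify_inlet(temp))
```

A real deployment would pull these readings from monitoring hardware, but the classification logic stays this simple.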
The primary intent of good airflow management is to separate the cool supply air from the warm exhaust air as much as possible. To achieve this, the following practices should be followed:
- Fill all empty spaces in the racks with blanking panels
- Fill or close off spaces between racks so exhaust air cannot recirculate to the front of the rack
- Use perforated tiles in front of racks that are properly sized relative to the rack heat load
- Place IT equipment properly in racks, with no equipment blowing hot exhaust air into the cold aisle
- Avoid excess use of high-flow-rate perforated tiles
- Do not put perforated tiles in the hot aisle
- Ensure cooling capacity is adequate, but not excessive, for the data center heat load
- Avoid placing obstructions in the supply plenum
HOW DO YOU DETERMINE HOW MUCH COOLING YOU REALLY NEED IN THE DATA CENTER?
The amount of cooling capacity required in a data center is dependent on the amount of heat being generated. Calculating cooling capacity is not an exact science. Other conditions, such as placement of cooling units and IT equipment racks, depth of supply plenum, ceiling height and rack heat distribution, all influence how much cooling capacity is required.
Cooling systems circulate a supply of cool air adequate to remove the heat generated by the IT and other equipment. The largest source of heat is the IT equipment, accounting for 90 to 95% of the total heat load. Other equipment consuming electricity, such as lighting, power distribution systems, and fans on cooling systems, also generates heat. The heat from these sources is low, typically less than 10% of the overall heat generated, but should be taken into account. (Learn more about how this works in “Debunking Data Center Cooling Myths: You Can Cool Better with Less Equipment”.)
The easiest way to determine the IT heat load is to read the values on the data center power equipment supplying electricity to all the IT equipment. This can be done at the UPS level or at the power distribution units (PDUs). For each kW of electricity consumed by the IT equipment, one kW of heat is generated and will require cooling. Knowing the total kW of heat being generated, the required cooling capacity can be calculated. To allow for the potential failure of a cooling system, N+1 cooling capacity should be added.
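The arithmetic above can be sketched as a short Python function. This is a hedged example, not a sizing tool: the 7% non-IT overhead and 30 kW per-unit capacity are assumed placeholder values (the article puts non-IT heat under 10%; actual unit capacity comes from the spec sheet).

```python
import math

def required_cooling_kw(pdu_readings_kw, overhead_fraction=0.07,
                        unit_capacity_kw=30.0):
    """Return (total_heat_kw, cooling_units_needed) including N+1 redundancy.

    pdu_readings_kw   -- measured IT load per PDU, in kW
    overhead_fraction -- assumed non-IT heat share (article: under 10%)
    unit_capacity_kw  -- nominal capacity of one cooling unit (hypothetical)
    """
    it_load = sum(pdu_readings_kw)                 # 1 kW power = 1 kW heat
    total_heat = it_load * (1 + overhead_fraction)  # add lighting, PDUs, fans
    n = math.ceil(total_heat / unit_capacity_kw)    # units to meet the load
    return total_heat, n + 1                        # N+1 for unit failure

# Hypothetical PDU readings in kW:
heat, units = required_cooling_kw([42.0, 38.5, 45.2])
print(f"Heat load: {heat:.1f} kW, cooling units (N+1): {units}")
```

The same logic works at the UPS level; the key point is that electrical kW in equals thermal kW out.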
All cooling equipment has spec sheets that list the cooling capacity, typically in kW and BTU/hr (British Thermal Units per hour). As a guide for estimating required cooling capacity, a one-to-one ratio of IT heat load (kW) to cooling capacity (kW) can be used. This simple calculation gives an estimate of how well aligned the cooling capacity is with the IT requirements.
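Spec sheets often quote BTU/hr rather than kW, so a quick alignment check needs the standard conversion of 1 kW ≈ 3412 BTU/hr. A minimal sketch, using hypothetical spec-sheet and load figures:

```python
# Compare installed cooling capacity against IT heat load using the
# one-to-one kW ratio described above. 1 kW = 3412 BTU/hr (standard).
BTU_HR_PER_KW = 3412

def btu_hr_to_kw(btu_hr: float) -> float:
    """Convert a spec-sheet BTU/hr rating to kW of cooling."""
    return btu_hr / BTU_HR_PER_KW

def capacity_ratio(installed_cooling_kw: float, it_load_kw: float) -> float:
    """Ratio of installed cooling to IT load; ~1.0 means well aligned."""
    return installed_cooling_kw / it_load_kw

# Hypothetical example: four units rated 102,360 BTU/hr (30 kW) each,
# against a measured 95 kW IT load.
unit_kw = btu_hr_to_kw(102_360)
ratio = capacity_ratio(4 * unit_kw, it_load_kw=95.0)
print(f"Installed cooling is {ratio:.2f}x the IT load")
```

A ratio well above 1.0 (before counting the N+1 unit) suggests overcapacity, which, as discussed above, can itself hurt airflow and efficiency.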
To determine more accurately how much cooling is needed, a much more thorough analysis of cooling system operation, IT load and site conditions is required. This is an integral part of the SCTi Audit process, which then enables us to accurately define the level of available cooling capacity and airflow in a site.
ADDITIONAL COOLING CAPACITY FACTORS YOU MUST CONSIDER
Calculating the required cooling capacity is also influenced by a number of other conditions.
Airflow is a very important consideration in ensuring the cooling requirements of the IT equipment can be met. Cooling capacity only defines the amount of mechanical cooling available. If airflow is not well managed, the cooling units will not provide the level of cooling expected. Poor airflow management has a number of negative impacts. First, the supply air may not be delivered to the IT equipment as expected. Second, without good separation of the cool supply air and warm exhaust air, the return air temperature is lowered, causing the cooling unit to operate less efficiently. Third, the mixing of the cool supply air and warm exhaust air will lead data center operators to think more cooling capacity is required.
In a raised floor data center, too much airflow results in a high-pressure differential between the supply plenum and the data center space, which will reduce the airflow through the perforated tiles. Air exits the cooling system at high velocity, resulting in poor airflow, or even negative airflow (air drawn back into the plenum), through perforated tiles within a few feet of the cooling unit.
The placement of cooling units is an important factor. Air takes the path of least resistance. A cooling unit close to the IT equipment will end up working much harder than a cooling unit far away from the heat source. In these cases, it can appear as though additional cooling capacity is required when, in actual fact, overall cooling utilization is 50% or less.
Too much cooling results in a low return air temperature at the cooling unit, or what is referred to as a low delta temperature (ΔT) between the return air temperature and the supply air temperature. This makes cooling units very energy inefficient, reducing effective cooling capacity by 25% or more, so they do not produce the level of cooling capacity specified.
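A ΔT check is easy to automate. The sketch below flags units with a low return/supply ΔT; note the 10°C "healthy" threshold is an assumed illustrative figure, not one from the article, and the unit names and readings are hypothetical.

```python
# Flag cooling units whose return/supply delta-T is low, which (per the
# discussion above) indicates over-cooling and lost effective capacity.
LOW_DELTA_T_C = 10.0  # assumed threshold for illustration

def delta_t(return_c: float, supply_c: float) -> float:
    """Delta-T: return air temperature minus supply air temperature."""
    return return_c - supply_c

# Hypothetical readings: unit name -> (return air C, supply air C)
units = {
    "CRAC-1": (28.0, 15.0),
    "CRAC-2": (21.0, 15.0),
}
for name, (ret, sup) in units.items():
    dt = delta_t(ret, sup)
    status = "low dT, likely over-cooling" if dt < LOW_DELTA_T_C else "ok"
    print(f"{name}: dT = {dt:.1f} C ({status})")
```

Trending this value per unit over time makes it obvious when recirculation or excess cooling is eroding the ΔT.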
External factors, such as ambient temperature, can have some influence on heat levels in a data center; however, in the Canadian market, this is not a major issue. It has also been shown, in cases where this could be a factor, that the size of the data center matters: smaller data centers with less total heat load can be impacted more by ambient conditions than larger data centers with high heat loads.
Personnel in a data center were a consideration years ago, when there might have been offices with staff dedicated to the data center. Even then, humidity was the major concern. Permanent offices are a thing of the past, and in most cases, personnel working in the data center are few in number and are only present long enough to perform the necessary tasks.
HOW DO YOU MAKE COOLING SYSTEMS WORK TOGETHER?