The vast majority of company data centers have gone through some level of server virtualization, with the objectives of speeding up server deployment, reducing the number of servers and the physical space they require, and cutting energy consumption. These changes typically have a positive impact on the IT side of the operation. The question you should be asking is, “how does virtualization affect my data center cooling?”
The short answer is that virtualization impacts data center cooling in a number of ways – both positive and negative. Although the general conclusion is that virtualization reduces overall heat load, and therefore should require less cooling, other factors come into play and, if not understood, may result in the conclusion that more cooling capacity is required.
Replacing Single Application Servers
Virtualization entails migrating applications from lower-powered servers, each of which generally hosts a single application, to higher-powered servers that accommodate multiple applications. This process can migrate 10, 20, or 30 applications from individual servers onto one higher-performance device. By eliminating a large number of lower-performance servers, considerable space can be freed up in the data center, and the related heat load is reduced as well.
Virtualization requires more powerful server hardware, which means more powerful processors, more memory, higher IO, and faster bandwidth, which translates to higher power draw and heat generation.
In previous blogs, we have highlighted that servers drawing 1 kW of power will generate 1 kW of heat that must be rejected and cooled (Learn more in “Debunking Data Center Cooling Myths”). Similarly, the higher-powered servers used for virtualization will draw more power and generate more heat. The ratio of applications consolidated onto the new servers can range from 10:1 to upwards of 30:1. In many cases, this means 10 single-application servers can be replaced with one server. This does not mean the newer server will consume 10 kW of power, but it will have a higher power draw. For example, suppose each lower-powered server being replaced had an average power draw of 350 watts (0.35 kW). Replacing 20 of these with one higher-powered server removes 20 × 350 W = 7,000 watts, or 7 kW, of heat from the data center. If the new server has a power draw of 3 kW, the overall reduction in heat load is 4 kW. In an intensive virtualization program replacing many single-application servers, the overall heat load can be significantly reduced.
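The arithmetic in the example above can be sketched in a few lines. The figures below (20 servers at 0.35 kW, one replacement server at 3 kW) are the illustrative numbers from the text, not measurements:

```python
# Heat-load reduction from server consolidation (illustrative figures from the text).
old_server_kw = 0.35     # average draw of each single-application server
old_server_count = 20    # servers being retired
new_server_kw = 3.0      # draw of the one higher-powered replacement

heat_removed_kw = old_server_kw * old_server_count   # 7.0 kW of heat eliminated
net_reduction_kw = heat_removed_kw - new_server_kw   # 4.0 kW net reduction

print(f"Heat removed: {heat_removed_kw:.1f} kW, net reduction: {net_reduction_kw:.1f} kW")
```

The same calculation scales to any consolidation ratio: the net benefit is the retired servers' combined draw minus the replacement's draw.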
Virtualization also means the number of elements to be managed increases 10 to 20 times, and every server hosting virtualized applications also runs a virtual switch, which must handle 20 to 40 times the number of managed network elements. All of this additional management and switching overhead adds heat load that must be cooled.
Changes to the heat distribution dynamics
Even though the net result of virtualization can be an overall reduction in heat load, virtualization can dramatically change the heat distribution dynamics of the data center.
Let’s take an example of a data center with 500 servers.
- Of the total, 300 are older servers drawing 500 watts (0.5 kW) each, generating a total of 150 kW of heat.
- The remaining 200 are a mix of blade and other higher-performance servers, each drawing 2 kW, for a total power draw of 400 kW.
By virtualizing applications, the 300 older servers are removed, so the 150 kW of heat generated by these units is eliminated. If the virtualization ratio is 20:1 (i.e. 20 old applications being migrated from single application servers to 1 new server), the total migration of 300 applications would require 15 servers generating 30 kW of heat. The net of the virtualization exercise would result in a heat load reduction of 120 kW.
- This assumes virtualization takes place on existing high-performance servers. If new servers are required to accommodate the virtualized applications, then the reduction in heat load will be less.
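The 500-server scenario above can be worked through as a short sketch. The counts, per-server draws, and 20:1 consolidation ratio are the assumed figures from the text:

```python
# Net heat-load change for the 500-server scenario described above.
old_servers = 300        # older single-application servers to be retired
old_kw_each = 0.5        # draw per older server
ratio = 20               # applications consolidated per host server
host_kw_each = 2.0       # draw per higher-performance host server

heat_removed = old_servers * old_kw_each     # 150 kW eliminated
hosts_needed = old_servers // ratio          # 15 host servers for 300 applications
heat_added = hosts_needed * host_kw_each     # 30 kW generated by the hosts
net_reduction = heat_removed - heat_added    # 120 kW

print(f"{hosts_needed} hosts needed; net heat load reduction: {net_reduction:.0f} kW")
```

If the hosting capacity already exists in the data center rather than being purchased new, the accounting shifts accordingly, as the text notes.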
What you see isn’t always what you get!
A side note is important here. Quite often, in discussions of moving to new servers for virtualization, reference is made to the increased power draw of higher-powered servers. This perception comes from the nameplate rating, which may show, for example, a 3 kW power input. The nameplate rating is quite misleading: server manufacturers typically use a common power supply across many different server models, because mass-producing one component is more cost-effective. In reality, the nameplate rating should be de-rated by at least 50% to estimate the actual power draw and related heat load.
This means that if 100 new servers with a nameplate rating of 2 kW each were added and assumed to run at maximum power draw, an additional 200 kW of heat would be expected. In actual fact, normal operation would generate closer to 100 kW.
Confusion between nameplate rating and actual power draw often leads companies to add substantially more cooling capacity than is needed. For example, the group purchasing the servers takes the nameplate at face value and requests an additional 200 kW of cooling and power capacity. Engineering reviews the request and typically adds a buffer or safety factor of, say, 25%, bringing the total to 250 kW. Then facilities management gets involved and adds a further 25% buffer, bringing the new total to over 310 kW. Suddenly, everyone concludes that more cooling capacity is needed. Multiply this one example by a number of such cases, and it can add up to very large and unnecessary capital expenditure when, in reality, the new heat load added would amount to only 100 kW. The actual cooling requirement was much lower, and the existing cooling capacity was likely sufficient.
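The compounding effect described above is easy to quantify. The 50% de-rate and the two 25% buffers are the assumptions stated in the text:

```python
# How compounded safety margins inflate a cooling request
# (nameplate de-rate and buffer percentages as stated in the text).
nameplate_kw = 2.0
servers = 100

requested = nameplate_kw * servers      # 200 kW, nameplate taken at face value
after_engineering = requested * 1.25    # 250 kW after engineering's 25% buffer
after_facilities = after_engineering * 1.25   # 312.5 kW after facilities' 25% buffer

actual_load = requested * 0.5           # ~100 kW real heat load after 50% de-rating

print(f"Requested capacity: {after_facilities:.1f} kW vs actual load: {actual_load:.0f} kW")
```

The requested capacity ends up more than three times the real heat load, which is how stacked safety factors quietly turn into stranded capital.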
Data center heat density problems when virtualizing servers
Virtualization generally requires the implementation of new servers, which, in many instances, are housed together in the data center. The racks in this area will often be loaded with 5 or more new servers, resulting in a heat load of 10 kW or more per rack. This dramatically changes the balance of the heat distribution profile in that area. Where the old servers were located, the per-rack heat load was likely closer to 3 or 4 kW. The increased heat density of the newly loaded racks means more cooling and airflow will be required in that area.
If 10 racks are each equipped with higher-powered servers at 10 kW per rack, the total heat load concentrated in one small area could reach 100 kW. There is now a significant imbalance in the heat profile across the data center, and the new loads can create hotspots requiring more airflow and cooling in that area. Consequently, even though the total heat load in the data center has decreased, the new dynamics will require a rebalancing of airflow.
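A rough hotspot check follows directly from the per-rack figures above. The 10 kW new-rack load and roughly 3.5 kW legacy-rack load are illustrative values from the text:

```python
# Heat concentration in one zone before and after virtualization
# (per-rack figures are the illustrative values from the text).
racks_in_zone = 10
kw_per_rack_new = 10.0    # 5 or more new servers at ~2 kW each
kw_per_rack_old = 3.5     # typical legacy rack load (3-4 kW)

zone_load_new = racks_in_zone * kw_per_rack_new   # 100 kW in one small area
zone_load_old = racks_in_zone * kw_per_rack_old   # ~35 kW for the same footprint before

print(f"Zone heat load rose from ~{zone_load_old:.0f} kW to {zone_load_new:.0f} kW")
```

Even though the room's total heat load has fallen, the cooling and airflow serving this one zone must now handle roughly three times its previous load.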
Server workload shifts operating pattern and affects cooling operations
Virtualization can shift the workload on the servers dramatically, without any physical server changes. This dynamic operating pattern can affect cooling operations primarily because of the concentrated nature of the equipment. The thermal profile of the room will change alongside the dynamic software changes to the physical servers. Hot spots will emerge as the workloads change, and the cooling system airflow must change accordingly to maintain optimum inlet temperatures. In many cases, the reaction to this higher demand for cooling is to add more cooling capacity – which is ironic, as the total heat load in the data center has actually decreased.
The appropriate action is to balance the airflow to meet the new cooling demands. Good airflow management is essential to virtualized sites. Cooling needs to be provided when and where it is needed, ideally by a solution that can sense changes in the environment and react appropriately. Know what the cooling capacity of your data center is, and ensure capacity planning is done to avoid the capital cost of new cooling systems.
Good airflow management practices are key to avoiding the unnecessary addition of cooling capacity and the large, needless capital expense of new cooling units. Reconfiguring perforated tiles and improving the integrity of the raised floor (to eliminate air bypass) will go a long way toward avoiding the issues noted above. Further, in some cases, a form of air containment can be used to separate the cold and hot air and remove air recirculation from the mix.
To learn more about good airflow management, read our blog post “Airflow for Dummies – Breaking Down Best Practices.”