What is High Density Rack Cooling?
Server rack density is steadily increasing with advances in technology and computing power requirements. At the same time, existing data centers are not expanding physically at the same rate as their computing power. As a result, the load density per square foot is rising in virtually all data centers, and this trend will continue for the foreseeable future. Many data centers now operate in the range of 20-40 watts per square foot, with some already approaching 100 watts per square foot.
The increased heat load and heat density take the form of more powerful servers housed in the same rack configuration, and in the same physical space, as before. Many data centers are land-locked within a building or building space, and data center operators and owners cannot relocate every time server density increases. The emerging problem is how to manage the increased heat density within the constraints of an existing facility. These sites require an effective high density rack cooling system in order to function while allowing for growth. Added to this requirement is the growing worldwide demand to reduce energy consumption.
By far the greatest potential for energy savings lies not in the data processing equipment but in the cooling equipment that serves it. Consider that the electrical power required to cool a data center by traditional means is approximately equal to the power consumed by the computing equipment itself. The optimum high density rack cooling system should therefore address both effective server rack cooling and optimum energy efficiency.
Data Center Requirements
The original ASHRAE standard for data centers called for 68-75F and 40-50% relative humidity. The ASHRAE Thermal Guidelines for Class 1 data centers were updated in 2008 to a recommended temperature range of 64.4-80.6F and an allowable relative humidity of 20-80%, with a dew point not exceeding 59F. The driving factors in the past were more temperature-sensitive equipment, moisture issues with paper printers (now all but obsolete), and static electricity discharge, which can now be safely controlled by a variety of means. The wider acceptable envelope for data center temperature and humidity allows for greater energy savings in the cooling systems, provided they are designed correctly and always with effective cooling as the first priority.
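The envelope above can be expressed as a simple bounds check. The sketch below is illustrative only, using the figures quoted in this article (the function name and interface are our own, not part of any ASHRAE tooling):

```python
def in_class1_envelope(temp_f: float, rh_pct: float, dew_point_f: float) -> bool:
    """Check a room reading against the 2008 ASHRAE Class 1 figures
    quoted above: 64.4-80.6F dry bulb, 20-80% relative humidity,
    and a dew point not exceeding 59F."""
    return (64.4 <= temp_f <= 80.6
            and 20.0 <= rh_pct <= 80.0
            and dew_point_f <= 59.0)

print(in_class1_envelope(72, 45, 50))  # True: well inside the envelope
print(in_class1_envelope(85, 45, 50))  # False: too warm
```

A reading of 75F at 45% RH, acceptable under the original standard, remains acceptable here; the 2008 update simply widens the band at both ends.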
The PUE (Power Usage Effectiveness) metric, developed by The Green Grid, measures the total facility power in kW against the power delivered to the computing equipment. The current national average PUE is above 2.0, while in a perfect world, with no energy used for cooling, it would be 1.0. Currently the most energy-efficient systems in operation, run by industry giants such as Google and Yahoo, achieve a PUE close to 1.2, which amounts to roughly 40% less total facility power than the national average for the same computing load. These power savings are not possible with any form of traditional data center cooling system. Traditional CRAC (Computer Room Air Conditioner) systems are the most common in-place cooling systems, generally (but not always) using a raised floor and underfloor air distribution to condition the data center space. These legacy systems usually cannot deliver the required cooling for higher density applications and, without additional equipment, are unsuitable for high density rack cooling. Insufficient airflow through the server racks causes hot spots that threaten reliable server operation.
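The PUE arithmetic above can be sketched with round numbers. The figures below are hypothetical illustrations of the ratios quoted in the text, not measured data:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    the power delivered to the computing equipment."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 2000 kW in total to support 1000 kW of IT load
# sits at the national-average PUE of 2.0:
legacy = pue(2000, 1000)

# At a PUE of 1.2, the same 1000 kW IT load needs only:
efficient_total = 1000 * 1.2  # 1200 kW total

# Reduction in total facility power:
savings = (2000 - efficient_total) / 2000
print(legacy, efficient_total, savings)  # 2.0 1200.0 0.4
```

This is why a PUE of 1.2 corresponds to roughly a 40% cut in total facility power relative to the 2.0 average: almost all of the difference is cooling overhead.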