The thinking was that computers needed to be kept cold to avoid overheating, but as computer hardware has improved, the risk of overheating has diminished. The cold server room is an old-school design that not only consumes a significant amount of energy maintaining frigid temperatures, but also increases the initial construction cost through the purchase and installation of a dedicated cooling unit. There is currently no consensus on the optimal set point, but 77°F - 85°F seems to be recognized as a safe range, depending on the size of the space and the equipment within.
Fortune 500 companies such as Hewlett-Packard, Google, Microsoft, and Intel have been researching the energy savings from raising the set points in their data centers while monitoring computer failure rates. Some takeaways from their studies include:
· Intel conducted a 10-month test to evaluate the impact of using only outside air to cool a high-density data center in New Mexico, where the temperature ranged from 64°F to as high as 92°F. It found no consistent increase in failure rates due to the greater variation in temperature and humidity and concluded, “This suggests that existing assumptions about the need to closely regulate these factors bear further scrutiny.”
· Intel’s Global Green Building Manager Taimur Burki, speaking at Greenbuild last month, noted that Intel’s new standard for data centers is 90°F on the cool intake side and up to 135°F on the racks’ exhaust side!
· Mark Monroe of Sun Microsystems has described how “Data center managers can save 4 percent in energy costs for every degree of upward change in the set point.” Keeping a data center’s temperature ambitiously high, however, may leave less time to recover from a cooling failure, so long-term equipment life should be a cost consideration.
· Although the industry is generally in agreement to raise temperatures, actual set points vary. Dean Nelson, also at Sun, says, “There’s diminishing returns when you raise the temperature beyond a certain amount, because now your cooling systems are working harder to keep up with the increased fan speed on the server. You’ve got to find that sweet spot. But it’s not where we are today. It’s not 65°. I think it’s probably around 80° to 85°.”
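As a rough illustration of Monroe's 4-percent rule, here is a minimal sketch that estimates the cooling bill after a set point change. Note one assumption: the savings are treated as compounding per degree, which the rule of thumb itself does not specify.

```python
# Sketch of the "4 percent per degree" rule of thumb cited by Mark Monroe.
# Assumption: savings compound per degree raised (the source does not say
# whether the rule is simple or compounding).

def cooling_cost(baseline_cost, old_setpoint_f, new_setpoint_f,
                 savings_per_degree=0.04):
    """Estimated annual cooling cost after raising the set point."""
    degrees_raised = new_setpoint_f - old_setpoint_f
    return baseline_cost * (1 - savings_per_degree) ** degrees_raised

# Raising from 65°F to Nelson's suggested 80°F on a $100,000/yr cooling bill:
print(round(cooling_cost(100_000, 65, 80)))  # → 54209, roughly 46% savings
```

Even under the simpler non-compounding reading of the rule, a 15-degree increase would cut the bill by well over a third, which is why the "sweet spot" debate is worth having.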
The website Data Center Knowledge has additional articles and information.
For smaller “server closets” or telecom rooms, an exhaust fan tied to a temperature sensor is an energy-efficient and cost-effective alternative to designing a dedicated cooling system. Based on a recent energy audit, UC Santa Cruz developed a new campus standard for exhaust fans in telecommunication rooms that includes continuous ventilation sufficient to limit temperature rise to 10°F for a 1 kW load from communications equipment. Yet, since there is still no uniform design, the standard also reflects this uncertainty by requiring a dedicated HVAC system with a thermostat set to maintain temperatures in the range of 68°F to 78°F.
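The UC Santa Cruz sizing criterion can be checked with the standard sensible-heat airflow relation, Q = 1.08 × CFM × ΔT (Q in BTU/hr, ΔT in °F, at sea-level air density). A minimal sketch:

```python
# Sketch: size an exhaust fan to hold temperature rise to a target limit.
# Uses the standard sensible-heat relation Q = 1.08 * CFM * dT (BTU/hr, °F)
# and the conversion 1 W = 3.412 BTU/hr. Sea-level air density assumed.

def required_cfm(load_watts, max_rise_f):
    """Airflow (CFM) needed to carry away load_watts within max_rise_f rise."""
    btu_per_hr = load_watts * 3.412
    return btu_per_hr / (1.08 * max_rise_f)

# The telecom-room case above: 1 kW of equipment, 10°F allowable rise
print(round(required_cfm(1000, 10)))  # → 316 CFM
```

So the campus standard implies a continuous fan on the order of a few hundred CFM per kilowatt of equipment, which is squarely in small inline-fan territory rather than dedicated HVAC.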
In a December 2008 article for the website ITBusinessEdge, Drew Robb details temperature’s effect on equipment reliability at the time: “For every 18° F above 70 degrees, electronics reliability is reduced by 50 percent. Therefore, it is best to set the AC to run at around that level or just a little higher – no more than 77 degrees.” Today we should be designing our telecom and server rooms to operate at higher temperatures and working toward standards that omit a dedicated cooling system.
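Robb's rule of thumb can be sketched numerically. One assumption is made explicit here: the halving per 18°F is modeled as continuous exponential decay rather than discrete steps, which the quote does not specify.

```python
# Sketch of Robb's 2008 rule: reliability halves for every 18°F above 70°F.
# Assumption: the penalty accrues continuously (exponential in temperature),
# not in discrete 18-degree steps.

def relative_reliability(temp_f):
    """Reliability relative to operation at 70°F."""
    return 0.5 ** ((temp_f - 70) / 18)

print(round(relative_reliability(77), 2))  # Robb's 77°F cap: ~0.76
print(round(relative_reliability(88), 2))  # one full half-step: 0.5
```

By this 2008 model, Nelson's 80°F - 85°F "sweet spot" would cost a third to a half of baseline reliability, which is exactly the assumption the Intel outside-air test calls into question.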