Energy-Efficient Data Centers

Data centers are major energy consumers. Their electricity consumption has been growing by double-digit percentages every year to power the massive deployment of ICTs. There are several ways to optimize energy efficiency in data centers (FEMP 2011).

First, the IT systems themselves can be optimized. Their consumption traditionally accounts for about 50% of the total electricity bill, the rest being consumed by cooling and secured power. Servers are an intuitive first target for energy optimization: they run at partial load most of the time, and variable-speed fans or power-management features can help reduce the power they consume. Distributing the overall computing load across multiple servers (and processors) can also yield energy savings; beneficial hardware and software technologies here include multi-core processors and server virtualization. Storage devices can likewise be optimized by carefully distinguishing the data that needs to remain online from the data that can be stored offline. Network equipment has also improved considerably from an energy standpoint, and well-designed network architectures further reduce the energy footprint. Finally, new high-efficiency power supplies reach around 95% efficiency, against roughly 70% for outdated technologies.
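To make the power-supply figures concrete, the short sketch below compares the electricity drawn for the same IT load at the two efficiency levels cited above (95% vs. 70%); the 10 kW load is a hypothetical value chosen only for illustration.

```python
# Illustrative comparison of power-supply losses, using the efficiency
# figures cited above (95% modern vs. 70% outdated). The 10 kW IT load
# is a hypothetical value chosen only for this example.

def input_power(it_load_kw: float, psu_efficiency: float) -> float:
    """Electrical power drawn from the grid to deliver a given IT load."""
    return it_load_kw / psu_efficiency

it_load_kw = 10.0  # hypothetical server load

for label, eff in [("outdated PSU (70%)", 0.70), ("high-efficiency PSU (95%)", 0.95)]:
    drawn = input_power(it_load_kw, eff)
    loss = drawn - it_load_kw
    print(f"{label}: draws {drawn:.2f} kW, wastes {loss:.2f} kW as heat")

# Roughly 4.3 kW vs. 0.5 kW of losses -- and every wasted kilowatt
# must in turn be removed by the cooling system.
```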

Air management and cooling is another area for improvement. Well-designed airflow through cold and hot aisles is key to limiting the heating of the various components and systems, and thus the cooling required; even proper cable management helps improve airflow. Cooling itself must be well designed to ensure energy-efficient data center operation. A vast array of Computer Room Air Conditioning (CRAC) systems is available on the market; high-efficiency equipment that uses variable frequency drives to adjust energy consumption to the actual cooling need is favored, and central air handlers are more efficient than modular systems (FEMP 2011). Equipment can also be cooled with direct liquid cooling: the liquid captures the heat generated by the equipment and carries it outside the server room, instead of letting it disperse into the room's air (which would then require additional cooling). Free cooling can also be used, with air-side or water-side economizers that essentially exploit the temperature gradient between the server room and the outside.
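As a rough illustration of how an air-side economizer decision might work, the sketch below switches between free, mixed, and mechanical cooling based on outside conditions. The temperature and humidity thresholds are hypothetical; in practice they would come from the site's own design guidelines.

```python
# Sketch of an air-side economizer decision rule: use outside air for
# cooling when temperature and humidity allow, fall back to CRAC units
# otherwise. All setpoints below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Conditions:
    outside_temp_c: float
    outside_rh_pct: float      # relative humidity of outside air
    supply_setpoint_c: float   # target cold-aisle supply temperature

def cooling_mode(c: Conditions) -> str:
    # Full free cooling: outside air is cold and dry enough on its own.
    if c.outside_temp_c <= c.supply_setpoint_c - 2 and c.outside_rh_pct <= 80:
        return "free cooling (economizer only)"
    # Partial free cooling: outside air helps, CRAC units trim the rest.
    if c.outside_temp_c <= c.supply_setpoint_c + 5 and c.outside_rh_pct <= 80:
        return "mixed (economizer + CRAC)"
    # Otherwise rely entirely on mechanical cooling.
    return "mechanical cooling (CRAC only)"

print(cooling_mode(Conditions(outside_temp_c=12, outside_rh_pct=60, supply_setpoint_c=24)))
# -> free cooling (economizer only)
```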

The power supply also plays a critical role in achieving energy efficiency. Load factor is a key element when it comes to redundant systems: smaller uninterruptible power supply (UPS) units are preferred to larger ones because they run at a higher relative load factor and are therefore more efficient. Consolidating redundancies (one power supply per server rack instead of one per server) also helps distribute power to the loads more effectively. Finally, the DC power required by many components can be distributed in an organized manner to avoid multiple AC/DC conversions throughout the system, and the corresponding losses.
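A small worked example may help show why load factor matters for redundant UPS systems. The efficiency curve, IT load, and unit sizes below are hypothetical, chosen only to illustrate the typical pattern that lightly loaded UPS units operate less efficiently.

```python
# Worked example of why right-sized UPS units improve load factor.
# The efficiency-vs-load curve is a made-up illustration of the usual
# shape (efficiency drops sharply at low load); the 80 kW IT load and
# unit sizes are likewise illustrative.

def ups_efficiency(load_factor: float) -> float:
    """Hypothetical curve: poor at low load, approaching ~95% near full load."""
    return 0.95 * load_factor / (load_factor + 0.05)

def describe(n_units: int, unit_kw: float, it_load_kw: float) -> None:
    # With redundant units, the load is shared across all installed capacity.
    load_factor = it_load_kw / (n_units * unit_kw)
    eff = ups_efficiency(load_factor)
    print(f"{n_units} x {unit_kw:.0f} kW units: load factor {load_factor:.0%}, "
          f"efficiency ~{eff:.0%}")

it_load_kw = 80.0
describe(n_units=2, unit_kw=200.0, it_load_kw=it_load_kw)  # two oversized units
describe(n_units=5, unit_kw=25.0,  it_load_kw=it_load_kw)  # several right-sized units
```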

The heat generated by servers can also be reused to maximize a data center's energy efficiency. Cogeneration can contribute to a better efficiency ratio, and waste heat can also serve to keep standby generators warm or to run absorption chillers as a complement to electrically driven cooling systems.

Finally, the consolidation of data processing remains the best way to reduce energy use and its associated costs. Cloud-based solutions and colocation data centers are more efficient than small systems owned and operated by individual businesses.

 