Hybrid Processes to Improve Energy Efficiency of Data Centers
A measure used to gauge the energy efficiency of data centers is the Power Usage Effectiveness (PUE). PUE is the ratio of the total energy used by the facility to the energy used to run the IT equipment. It is therefore greater than or equal to 1, and it is desirable to bring it as close to 1 as possible: a smaller PUE indicates that less energy is spent on operations other than running the processors. The Uptime Institute, an industry organization, publishes average PUE figures for the industry. In 2009, it reported an industry average of 2.5 [29]. Encouragingly, this number has been dropping quickly: in 2011, the Uptime Institute reported an industry PUE of 1.8 [30]. A 2014 Uptime Institute study examined the PUE of cloud data centers, drawing on public disclosures from Google and Facebook plus internal AWS data, all of which showed PUEs under 1.2 [31]. These numbers are very good, considering that in 2008 the Uptime Institute stated that the typical data center had an average PUE of about 2.5 but could reduce it to about 1.6 by employing best practices [32].
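To make the definition concrete, the PUE figures above can be reproduced with a short calculation. The energy values in the example are illustrative, not drawn from the cited studies:

```python
def pue(total_facility_energy_kwh, it_equipment_energy_kwh):
    """Power Usage Effectiveness: total energy drawn by the facility
    divided by the energy consumed by the IT equipment alone."""
    if it_equipment_energy_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_energy_kwh / it_equipment_energy_kwh

# A facility drawing 1,800 kWh while its IT equipment consumes 1,000 kWh:
print(pue(1800, 1000))  # 1.8, the 2011 industry average [30]
```

The remaining 800 kWh in this example is the overhead for cooling, power distribution, lighting, and other support systems, which is exactly what the measures discussed below aim to reduce.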
Some simple measures help improve the PUE of a data center: decommissioning or repurposing servers that are no longer in use, powering down servers when idle, replacing inefficient servers, and virtualizing or consolidating servers. Technology helps as well, through intelligent power management, energy-monitoring software, and efficient cooling systems. Reference [4] describes a data center that runs its air conditioning only 33 hours per year even though the data center operates continuously; this is achieved with an intelligent, hybrid cooling system.
The information technology industry has taken some drastic measures to combat the problem of energy consumption. Because cooling is a major contributor to the overall PUE, Microsoft introduced an underwater data center [33,34]. Of course, it is not clear that warming the world's rivers and oceans is good practice. But the demand for significantly more computation will inevitably be with us, and it is important to keep the energy efficiency of the Internet and the cloud high. Lowering the PUE of a data center as close to 1 as possible is good, but it is actually necessary to go a step further and ensure that the computation itself, such as a search, consumes as little energy as possible.
Reference [35] lists 12 methods to reduce energy consumption in data centers. These methods can be grouped into changes in the information technology infrastructure, airflow management, and air-conditioning management. In terms of information technology, virtualizing servers, decommissioning inactive servers, consolidating lightly used servers, removing redundant data, and investing in more energy-efficient technologies can provide substantial improvements. One airflow improvement is a “hot aisle/cold aisle” layout, in which the backs of servers face each other so that hot and cold air do not mix; containing or enclosing the servers reduces mixing further. Simple measures such as structured cabling, which avoids restricting airflow, also help. Finally, adjusting temperature and humidity, using air conditioning with variable-speed fan drives, bringing in outside air for cooling, and using the evaporative cooling capacity of a cooling tower to produce chilled water can yield significant savings.
There have been a number of technological advances to improve data center energy efficiency. Reference [36] proposes a resource management system that consolidates virtual machines according to current resource utilization, the virtual network topologies established between virtual machines, and the thermal state of computing nodes. Virtualization is an important tool for data center energy efficiency; to that end, Reference [37] surveys existing virtualization techniques. Reference [38] introduces an optimization that schedules tasks according to their thermal potential, with the goal of keeping temperature low; it reports gains in temperature and cost reduction compared to other techniques. Traffic engineering is employed in Reference [39] to assign virtual machines; based on experimental results, the paper reports energy savings of 50%. Reference [40] analyzes how increased ambient temperature affects each component in a data center and concludes that there is an optimum operating temperature that depends on each data center's individual characteristics. In terms of shutting down inactive servers, Reference [41] introduces a technique that predicts the number of virtual machine requests, together with their CPU and memory demands, accurately estimates the number of physical machines (PMs) that will be needed, and reduces energy consumption of cloud data centers by putting unneeded PMs to sleep. The study shows the technique achieves substantial savings in energy consumption [42].
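The capacity-estimation idea behind the prediction-based consolidation of Reference [41] can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the CPU and memory figures are hypothetical, and it uses simple first-fit placement of the predicted VM demands to estimate how many PMs must stay awake:

```python
def machines_needed(vm_demands, pm_cpu, pm_mem):
    """Estimate how many physical machines (PMs) must remain on to host
    the predicted VM requests; the surplus PMs can be put to sleep.
    Places demands largest-first into the first PM with enough room."""
    pms = []  # remaining (cpu, mem) capacity of each PM kept awake
    for cpu, mem in sorted(vm_demands, reverse=True):
        for i, (c, m) in enumerate(pms):
            if cpu <= c and mem <= m:
                pms[i] = (c - cpu, m - mem)  # place VM on an open PM
                break
        else:
            pms.append((pm_cpu - cpu, pm_mem - mem))  # wake a new PM
    return len(pms)

# Six predicted VMs as (cpu cores, GB RAM), on PMs with 16 cores / 64 GB:
demand = [(8, 32), (8, 16), (4, 16), (4, 32), (2, 8), (2, 8)]
print(machines_needed(demand, 16, 64))  # → 2
```

With an accurate forecast of the demand list, all PMs beyond the estimated count can be put to sleep ahead of time, which is where the energy savings reported in [41] come from.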
Ebrahimi et al. [43] reviewed data center cooling technologies, operating conditions, and the corresponding waste heat recovery opportunities. The study indicated that data centers are a major source of waste energy, driven by the increasing demand for cloud-based connectivity and performance. In fact, recent figures show that data centers are responsible for more than 2% of total U.S. electricity usage, and almost half of this power is used for cooling the electronics, creating a significant stream of waste heat. The difficulty in recovering and reusing this stream is that the heat is of low quality. The study identifies and discusses the most promising methods and technologies for recovering data center low-grade waste heat in an effective and economically reasonable way. A number of currently available and developmental low-grade waste heat recovery techniques, including district/plant/water heating, absorption cooling, direct power generation (piezoelectric and thermoelectric (TE)), indirect power generation (steam and organic Rankine cycle), biomass colocation, and desalination/clean water production, are reviewed along with their operational requirements in order to assess the suitability and effectiveness of each technology for data center applications. Based on a comparison between data centers' operational thermodynamic conditions and the requirements of the discussed techniques, hybrid processes combining absorption cooling and the organic Rankine cycle are found to be among the most promising technologies for data center waste heat reuse.
Ayanoglu [14] and a report by the Lawrence Berkeley National Laboratory [3] reach an encouraging conclusion: improved energy efficiency is almost canceling out growing capacity. In 2014, data centers in the United States consumed 70 billion kWh. Had energy efficiency remained at 2010 levels, data center energy consumption today would be 160 billion kWh; remarkably, the estimate for 2020 is only 73 billion kWh [3]. However, although short-term predictions appear good, there are still concerns for the longer term, a decade or more into the future [42].
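The scale of these savings follows directly from the figures in Reference [3]:

```python
actual_2014 = 70        # billion kWh used by U.S. data centers in 2014 [3]
counterfactual = 160    # billion kWh had efficiency stayed at 2010 levels
projected_2020 = 73     # billion kWh projected for 2020 [3]

# Energy avoided purely through efficiency improvements:
print(counterfactual - actual_2014)  # → 90 (billion kWh)

# Projected consumption growth from 2014 to 2020, despite rising demand:
print(f"{(projected_2020 - actual_2014) / actual_2014:.1%}")  # → 4.3%
```

In other words, efficiency gains avoided roughly 90 billion kWh per year, holding projected six-year growth to only a few percent even as demand for computation continued to rise.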