Data centers are notorious for their enormous energy needs. According to the US Department of Commerce, the power densities of these facilities, measured in kilowatts (kW) per square foot (ft²) of building space, can be nearly 40 times higher than the power densities of commercial office buildings, and they vary widely from one data center to the next. Because large, concentrated energy consumption is expensive and burdens the electrical grid, data centers are excellent targets for efficiency improvements. Server and HVAC systems are the primary energy consumers in a data center, so they should be the first end uses you evaluate in your load-reduction effort (figure 1). Measures for these systems can offer simple payback periods of a few years or less.

Average energy-use data

Figure 1: Energy consumption by end use

In 2012, the US Energy Information Administration (EIA) conducted its latest edition of the Commercial Building Energy Consumption Survey (CBECS), which included energy end-use surveys for 800 buildings that contain data centers or server farms. The EIA estimates that these buildings represent nearly 97,000 facilities across the US, with only about 3% of buildings containing large data centers of more than 10,000 ft². The vast majority of these facilities are small or embedded data centers. There has been a general trend toward consolidating computing power into larger facilities. It’s in these facilities that we typically see computing make up a larger fraction of total facility energy usage. Although CBECS data are weighted toward smaller facilities, they’re the best energy end-use data publicly available.
A pie chart showing electricity end uses for data centers: cooling, 20%; computing, 19%; ventilation, 17%; lighting, 15%; refrigeration, 6%; and other, 23%. A second pie chart showing natural gas end uses for data centers: heating, 60%; cooking, 19%; water heating, 17%; and other uses, 4%.

Finding the best efficiency solution for your data center depends on its size, function, existing condition, and age. For example, measures that work well in a server closet won’t perform sufficiently for large server farms, sometimes referred to as hyperscale data centers. And solutions that work well for large data centers often aren’t feasible for smaller facilities because of their cost or complexity. To choose the best energy solution, you need to understand how large the data center is and how sophisticated its operations are.

Target the IT power load with efficiency upgrades first because you can realize savings at little or no cost. Later on, you can reduce cooling loads to increase those savings. All of the power used by IT equipment eventually turns to heat, which the cooling system must remove. If your IT equipment uses less energy, you’ll save cooling-system energy. Although facilities vary, a typical data center that reduces its computer load power requirements by 1.0 kW offsets approximately 0.6 kW of air-conditioning power (figure 2).
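
To see how IT-side reductions cascade upstream, here’s a minimal calculation that applies the roughly 0.6 kW cooling offset cited above; the ratio is illustrative and varies by facility.

```python
def total_power_savings(it_savings_kw, cooling_offset_ratio=0.6):
    """Estimate total facility power savings from an IT load reduction.

    cooling_offset_ratio is the approximate kW of air-conditioning power
    avoided per kW of IT load removed (about 0.6 for a typical facility,
    per the figure below); actual values vary by site.
    """
    cooling_savings_kw = it_savings_kw * cooling_offset_ratio
    return it_savings_kw + cooling_savings_kw

# Example: removing 1.0 kW of server load avoids roughly 1.6 kW at the meter.
print(total_power_savings(1.0))  # 1.6
```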

Figure 2: Savings in IT energy use lead to significant upstream savings

Although the main purpose of a data center is to operate IT equipment, in this example, only 47% of the total power entering the data center directly powers the servers and chips. Within each server, an even smaller fraction of energy provides valuable computing services.
Illustration (copyright E Source; data from APC) showing an example of how all the electricity used by a data center is divided among energy into the data center, energy into servers, and energy into chips. Energy into data center: cooling (41%), lighting/auxiliaries (2%), uninterruptible power supply (6%), switchgear/generator (1%), power distribution unit (3%). Energy into servers: fans (3%), power supply (17%). Energy into chips: chips (27%).

Quick fixes

Although long-term, capital-intensive solutions tend to have the highest overall potential savings, you can make several low-cost changes to decrease your bottom-line expenses. For maximum effect, focus first on quantifying, managing, and reducing computing and cooling loads. Most of these recommendations apply to small data centers.

Computing load

Optimize power management software Using software tools for power management, you can adjust energy consumption in response to processor activity level. These tools let the server run at the minimum power necessary to perform its computational tasks. You can schedule adjustments and power some servers down (or even off) when workloads decrease, such as at night or on weekends. Or, if your servers run continuously, you can adjust microprocessor power draw to match the computational demand on the server.
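
As a rough illustration of scheduled power management, the sketch below drops Linux servers to a powersave CPU governor outside business hours and restores performance mode during the day. The schedule, governor names, and the assumption that your servers expose the standard cpufreq sysfs interface are placeholders to adapt; commercial power management suites handle this (and much more) for you.

```python
import glob
from datetime import datetime

BUSINESS_HOURS = range(7, 19)  # 07:00-18:59; adjust to your workload pattern

def set_governor(governor: str) -> None:
    """Write the desired governor to every CPU's cpufreq policy (requires root)."""
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

def apply_schedule(now: datetime | None = None) -> str:
    """Pick a governor based on time of day and day of week, then apply it."""
    now = now or datetime.now()
    workday = now.weekday() < 5 and now.hour in BUSINESS_HOURS
    governor = "performance" if workday else "powersave"
    set_governor(governor)
    return governor

if __name__ == "__main__":
    # Run from cron (e.g., hourly) so servers throttle back at night and on weekends.
    print(f"Applied governor: {apply_schedule()}")
```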

Similar to how a networked PC power management application works, power management tools for servers identify equipment that’s drawing more power than its level of activity requires. Once you identify servers that aren’t operating efficiently, you can run alternative server-management scenarios to see whether redistributing the workload among servers has any effect. In larger facilities, you can manage workloads dynamically in clusters of servers to ease power management and reduce equipment downtime.
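
One way such a tool can flag consolidation candidates is to compare each server’s reported power draw against its average utilization and surface the outliers. In the sketch below, the inventory, thresholds, and field layout are hypothetical; in practice the readings would come from your power management or DCIM software.

```python
# Hypothetical inventory: (server name, average CPU utilization %, average power draw in watts).
inventory = [
    ("app-01", 55, 310),
    ("app-02", 4, 280),    # nearly idle but still drawing most of its peak power
    ("db-01", 70, 420),
    ("legacy-07", 2, 350), # candidate "comatose" server
]

UTILIZATION_FLOOR = 10  # percent; below this the server is doing little useful work
POWER_FLOOR = 150       # watts; above this an idle server is worth investigating

def flag_underutilized(servers):
    """Return servers that burn significant power at very low utilization."""
    return [name for name, util, watts in servers
            if util < UTILIZATION_FLOOR and watts > POWER_FLOOR]

print(flag_underutilized(inventory))  # ['app-02', 'legacy-07']
```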

Go virtual Virtualizing your servers can reduce energy and capital costs. It can also facilitate a transition to more-robust backup systems. Virtualizing means dividing a physical server into multiple virtual environments, allowing more than one virtual server to operate on the same piece of hardware. Consolidating dedicated servers into fewer virtualized units decreases the number of required systems, which can reduce energy consumption. Despite the initial investment costs, the payback period is typically between one and three years.
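
As a back-of-the-envelope way to size a consolidation project, the sketch below estimates the annual energy and cost savings from replacing lightly loaded physical servers with fewer virtualization hosts. Every number in it (server counts, wattages, and the electricity rate) is an illustrative assumption, not measured data.

```python
HOURS_PER_YEAR = 8760

def consolidation_savings(n_physical, watts_physical,
                          n_hosts, watts_host, rate_per_kwh=0.10):
    """Estimate annual kWh and dollar savings from server consolidation."""
    before_kwh = n_physical * watts_physical * HOURS_PER_YEAR / 1000
    after_kwh = n_hosts * watts_host * HOURS_PER_YEAR / 1000
    saved_kwh = before_kwh - after_kwh
    return saved_kwh, saved_kwh * rate_per_kwh

# Example: 20 lightly loaded 300 W servers consolidated onto 4 hosts at 450 W each.
kwh, dollars = consolidation_savings(20, 300, 4, 450)
print(f"{kwh:,.0f} kWh/year, ${dollars:,.0f}/year before cooling savings")
```

Remember that each kilowatt-hour avoided here also avoids roughly 0.6 kWh of cooling energy, as discussed earlier, so the whole-facility savings are larger.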

Virtualization concentrates demand and increases power densities within a server room. Keep that in mind when designing or redesigning the cooling and airflow for a virtualized environment.

Eliminate old, unused, and underutilized IT equipment Older servers tend to run less efficiently than newer systems. Unused or underutilized servers use more energy per unit of computational workload than fully utilized equipment. Research from Stanford University’s Steyer-Taylor Center for Energy Policy and Finance indicates that as many as 30% of servers in data centers are comatose, meaning they consume electricity but don’t provide any useful information services. Take stock of your in-house servers and eliminate old or underutilized equipment. A blog post by global sustainability consultancy Anthesis, Report Finds That 30% of Servers Are “Comatose”, provides more-detailed information about the Stanford study.

Cooling load

Broaden temperature and humidity ranges Historically, data centers have operated within highly restricted temperature and humidity ranges. But studies—including those from Lawrence Berkeley National Laboratory (LBNL) and ASHRAE—show that data centers can tolerate a wider range of environmental conditions. You can reduce HVAC operating costs considerably by widening these environmental control settings, including increasing the setpoint temperature on the upper end. You can find recommendations for the thermal design of data centers in the ASHRAE TC9.9 white paper Data Center Networking Equipment—Issues and Best Practices (PDF). At the same time, it’s common for hyperscale data centers to operate at temperature and humidity setpoints well outside of TC9.9-recommended ranges, such as 90° Fahrenheit and 80% relative humidity.
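
If you log supply-air conditions, even a short script can flag readings that drift outside whatever envelope you adopt. In the sketch below, the temperature and humidity bounds are placeholders roughly in line with commonly cited recommended ranges; substitute the limits from the ASHRAE guidance that applies to your equipment class.

```python
# Placeholder environmental envelope; replace with the ASHRAE-recommended
# values for your equipment class and your own operating policy.
TEMP_RANGE_F = (64.4, 80.6)   # supply-air dry-bulb temperature, °F
RH_RANGE_PCT = (20.0, 60.0)   # relative humidity, %

def check_setpoint(temp_f: float, rh_pct: float) -> list[str]:
    """Return warnings for readings outside the chosen envelope."""
    warnings = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        warnings.append(f"temperature {temp_f}°F outside {TEMP_RANGE_F}")
    if not RH_RANGE_PCT[0] <= rh_pct <= RH_RANGE_PCT[1]:
        warnings.append(f"relative humidity {rh_pct}% outside {RH_RANGE_PCT}")
    return warnings

print(check_setpoint(75.0, 45.0))  # [] -> within the envelope
print(check_setpoint(90.0, 80.0))  # both readings flagged
```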

Hire a building commissioner According to the US Environmental Protection Agency, ongoing commissioning of a typical data center could improve HVAC efficiency by 20%. Commissioning is a process in which engineers observe a building and perform tune-ups as needed to ensure all systems are operating efficiently. When the commissioning process is applied to an existing building that hasn’t been previously commissioned, it’s called retrocommissioning. When applied to a building that’s been previously commissioned, it’s called ongoing commissioning, recommissioning, monitoring-based commissioning, continuous optimization, or persistent commissioning. Ongoing commissioning uses sensors and software in addition to personnel to provide a real-time account of systems within the building.

You can see savings by resetting HVAC controls to reduce waste, performing simple system maintenance, and identifying inefficient equipment to replace. For a summary of design and commissioning considerations for data centers, see the presentation Data Center Design and Commissioning (PDF) from the 2014 NEBB annual conference, Delivering Building Performance and Energy Efficiency.

Longer-term solutions

Benchmarking

Calculate performance metrics Power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE) are two common metrics for characterizing a data center’s energy consumption. The Green Grid, a global consortium of organizations collaborating to address data center issues, developed both metrics and describes the calculation methods in its white papers.

Install infrastructure management software These tools can simplify the benchmarking and commissioning process, and they can continuously monitor system performance via network sensors. Real-time benchmarking notifies you when systems fail and validates efficiency improvements. Additionally, the data the infrastructure management software collects will help you improve the effectiveness of other measures, such as airflow management.
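
The math behind both metrics is straightforward: PUE is total facility energy divided by IT equipment energy, and DCiE is the reciprocal expressed as a percentage. Here’s a minimal sketch of the calculation using hypothetical meter readings of the kind infrastructure management software can log.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Data center infrastructure efficiency: IT / total, as a percentage (1 / PUE)."""
    return 100.0 * it_equipment_kwh / total_facility_kwh

# Hypothetical monthly meter readings
total_kwh = 180_000
it_kwh = 100_000
print(f"PUE = {pue(total_kwh, it_kwh):.2f}")     # 1.80
print(f"DCiE = {dcie(total_kwh, it_kwh):.0f}%")  # 56%
```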

Create a facility energy model and complete an analysis There are many tools that perform energy-modeling analysis for different scales and systems. If you’re building a new data center facility, you’ll need to complete some engineering analysis. But for many facilities, that’s the first and last analysis. Modeling during facility construction and before any expansion or upgrade can ensure that your facility maintains its optimized energy consumption. For a review of available energy-modeling tools for data centers, refer to the IEEE paper Data Center Energy Consumption Modeling: A Survey (PDF).

Efficient IT equipment

Buy energy-efficient servers Many blade servers (thin, minimalist hardware) use low-power processors to save on energy and hardware requirements. Microsoft engineers report that low-power processors offer the greatest value for the cost, in terms of computing performance per dollar per watt. The Standard Performance Evaluation Corp. (SPEC) maintains a list of Published SPEC Benchmark Results of energy use across a range of server models.

Look to professional consortiums or trade organizations for recommendations and comparisons of server models according to energy use. For example, the Green Grid’s Library and Tools web page provides content and tools to help you benchmark, evaluate, and compare equipment and facility performance. For larger facilities, unbranded or white-box servers are a popular option. These servers let you tailor their performance to fit computing applications for your data center.

Spin fewer disks Replacing spinning disks with magnetic tape storage—a relatively old information technology—can cut storage energy consumption by nearly 99%. Similarly, implementing a massive array of idle disks (MAID) system can save up to 85% of storage power and cooling costs, according to manufacturers’ claims. Typically, data is stored on hard disk drives (HDDs) that must remain spinning (and constantly consuming energy) for users to retrieve information. MAID systems catalog information according to how often it’s retrieved and place seldom-accessed data on HDDs that are spun down or left idle until the user needs the data. One disadvantage to the MAID approach is a decrease in system speed because the HDDs must spin up again before the data is accessible.

The lower operating costs and reduced energy consumption of MAID systems come at the expense of higher latency and limited redundancy. If you use a MAID system for backup data storage, the data it contains may not be immediately available if another server fails. The HDD must first come up to speed before you can access the data. Tape storage has similar drawbacks, but it can save you energy and money on long-term storage.
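
Conceptually, a MAID controller implements a simple policy: map each data set’s access frequency to a drive state. The sketch below imitates that policy; the 30-day threshold and the sample catalog are hypothetical, and real systems implement this logic inside the storage controller rather than in application code.

```python
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(days=30)  # spin down data untouched for a month (illustrative)

def assign_tier(last_access: datetime, now: datetime | None = None) -> str:
    """Place seldom-accessed data on drives that can be spun down."""
    now = now or datetime.now()
    return "idle_spun_down" if now - last_access > IDLE_THRESHOLD else "active_spinning"

# Hypothetical catalog of data sets and their last access times.
catalog = {
    "q3_financials.csv": datetime(2020, 1, 5),
    "current_orders.db": datetime(2020, 6, 28),
}
now = datetime(2020, 7, 1)
for name, last_access in catalog.items():
    print(name, "->", assign_tier(last_access, now))
```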

Replace storage drives with more-efficient models Flash-based solid-state drives (SSDs) and HDDs serve different functions in a typical data center. While there’s been some speculation that SSDs would eventually outperform HDDs on an energy basis, so far it hasn’t proved to be the case. Modern HDDs are about twice as energy efficient as comparable SSDs. In general, it’s an energy-use best practice to use the most-efficient HDDs wherever possible and use the most-efficient SSDs only where needed.

Airflow management

Separate hot- and cold-air streams Most data centers have poor airflow management, which has two harmful effects. First, hot exhaust air mixes with the cold supply air and recirculates around and above the servers, warming as it rises and making the higher servers less reliable. Second, data center operators must set supply-air temperatures lower and airflows higher to compensate for the mixing, which wastes energy.

Setting up servers in alternating hot and cold aisles is one of the most effective ways to manage airflow (figure 3). This allows you to deliver cold air to the fronts of the servers, while waste heat concentrates and collects behind the racks. As part of this configuration, you can close off gaps in and between server racks to minimize the flow and mixing of air between hot and cold aisles. LBNL researchers found that, with hot-cold isolation, air-conditioner fans could maintain the proper temperature while operating at a lower speed, resulting in 75% energy savings for the fans alone.
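
The large fan savings follow from the fan affinity laws, under which fan power varies roughly with the cube of speed. A quick calculation, with illustrative numbers, shows why slowing fans even modestly pays off.

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Approximate fan power as the cube of the speed fraction (fan affinity laws)."""
    return speed_fraction ** 3

# If better airflow isolation lets the fans run at about 63% of full speed,
# fan power drops to roughly 25% of its original value, i.e. ~75% savings.
speed = 0.63
print(f"Power fraction: {fan_power_fraction(speed):.2f}")  # ~0.25
print(f"Savings: {1 - fan_power_fraction(speed):.0%}")     # ~75%
```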

Figure 3: How to set up a hot aisle–cold aisle configuration

The hot aisle–cold aisle concept confines hot and cold air to separate aisles. Limiting the mixing of hot and cold air means it takes less energy to cool the servers.
Illustration (copyright E Source using data from Lawrence Berkeley National Laboratory) showing a model of a hot and cold aisle configuration. It shows how the server racks are separated by alternating rows of cool and warm air.

Reduce bypass-airflow losses In data centers that have poor airflow-management strategies or significant air leakage, bypass airflow can occur when cold, conditioned air cycles back to the air conditioner in the computer room before it cools any equipment, resulting in wasted energy. An LBNL study found that up to 60% of the total cold-air supply can be lost through bypass airflow. The main causes of bypass-airflow losses are unsealed cable cutout openings and poorly placed perforated tiles in hot aisles. You can eliminate this type of problem by identifying bypasses during a study of the cooling system’s airflow patterns. The Schneider Electric white paper How to Fix Hot Spots in the Data Center (PDF) provides guidance and links to additional resources related to airflow analysis and troubleshooting for cooling systems.

Efficient cooling

Bring in more fresh air When the temperature and humidity outside are mild, economizers can save energy by bringing in cool outside air rather than using refrigeration or other cooling equipment to cool the building’s return air. Economizers have two benefits:

  • They have lower capital costs than many conventional systems.
  • They reduce energy consumption by making use of free cooling when ambient outside temperatures are sufficiently low.

In northern climates, this may be the case for most of the year. It’s important to put economizers on a recurring maintenance schedule to ensure that they remain operational. Remember to pay attention to the dampers. Stuck-open dampers can go unnoticed for a long time and increase HVAC energy consumption.
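
At its core, an air-side economizer controller compares outdoor conditions with what the cooling system would otherwise have to supply. The simplified sketch below captures that decision; the setpoint and humidity limit are placeholder values, and a real controller would also account for enthalpy, damper modulation, and fault detection (such as the stuck dampers mentioned above).

```python
SUPPLY_SETPOINT_F = 65.0  # illustrative supply-air setpoint, °F
MAX_OUTDOOR_RH = 60.0     # illustrative humidity limit for free cooling, %

def economizer_mode(outdoor_temp_f: float, outdoor_rh_pct: float) -> str:
    """Decide whether to use outside air (free cooling) or mechanical cooling."""
    if outdoor_temp_f <= SUPPLY_SETPOINT_F and outdoor_rh_pct <= MAX_OUTDOOR_RH:
        return "economizer: open outdoor-air dampers, compressors off"
    return "mechanical cooling: dampers at minimum, compressors on"

print(economizer_mode(52.0, 40.0))  # free cooling
print(economizer_mode(85.0, 55.0))  # mechanical cooling
```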

Use evaporative cooling In low-humidity areas, you can take advantage of evaporative cooling to cool your facility rather than using your refrigerant-based cooling system. In northern climates, the opportunity to use this free cooling with a tower-and-coil approach (also referred to as a water-side economizer) can exceed 75% of the total annual operating hours. In southern climates, free cooling may only be available during 20% of operating hours. You can also expand the number of hours where free cooling is a viable option by widening the allowable temperature and relative humidity ranges.

While the water-side economizer and cooling tower are operating, the free cooling this system provides can reduce the energy consumption of a chilled-water plant by up to 75%. The National Snow and Ice Data Center (NSIDC) case study NSIDC Green Data Center: Overview describes the successful use of direct-indirect evaporative cooling in a data center. Strategic use of evaporative cooling techniques can help minimize or, in some cases, eliminate the need for mechanical cooling.

Upgrade your chiller Many general best practices for chilled-water systems (including using centrifugal and screw electric chillers to optimize chilled-water temperatures) also apply to cooling systems for data centers. If your facility isn’t taking advantage of these techniques, consult an HVAC expert to identify cost-effective savings opportunities.

Humidification

Install ultrasonic humidifiers Ultrasonic humidifiers use less energy than other humidification technologies because they don’t boil the water or lose hot water when flushing the reservoir. Additionally, the cool mist they emit absorbs energy from the supply air and causes a secondary cooling effect. This is particularly effective in a data center application with concurrent humidification and cooling requirements. According to the ENERGY STAR fact sheet eBay Data Center Retrofits: The Costs and Benefits of Ultrasonic Humidification and Variable Speed Drives (PDF), when the operator retrofitted an eBay data center’s humidification system with ultrasonic humidifiers, the new system used 90% less energy than traditional humidification methods. However, ultrasonic humidification systems in data centers generally require pure water. You’ll have to factor in the cost and energy consumption of a high-quality water filtration system, such as one that uses reverse osmosis. If you don’t use a water filter, a thin layer of minerals can build up on server components and short out the electronics. For an example of a real incentivized project’s cost savings, see Ryan Hammond’s presentation SMUD Custom Incentive Program (PDF), from the Emerging Technologies Coordinating Council quarterly meeting.

Liquid cooling

Cool server cabinets directly Although you may not want to have water near your computer rooms, some cooling systems bring water to the base of the server cabinets or racks. This practice cools the cabinets directly, which is more efficient than cooling the entire room (figure 4). Many vendors offer direct-cooled server racks, and several industry observers have speculated that direct-cooling techniques will dominate the future of heat management in data centers. This is typically a new-construction measure.

Figure 4: Direct-cooled server racks increase cooling efficiency

Rather than cooling the servers indirectly by cooling the entire room, direct-cooled server racks circulate cool liquid below the server cabinet—a more efficient approach.
Drawing (copyright E Source; adapted from EYP Mission Critical Facilities) of a direct-cooled server rack.

Give the servers a bath Liquid-immersion server cooling is another direct-cooling approach. It’s usually a new-construction measure and is rarely used outside hyperscale environments, where it continues to be developed and applied because of its significant energy-savings potential.

One approach promoted by Green Revolution Cooling (GRC), a leader in immersion cooling for data centers, submerges high-performance blade servers in a vat of inexpensive nonconductive mineral oil that’s held at a specific temperature and slowly circulated around the servers. This technique saves energy for two reasons. First, the mineral oil’s heat capacity is more than 1,000 times greater than that of air, resulting in improved heat transfer. Second, the oil pulls heat directly off the electrical components instead of just removing heat from the air around the server. These factors improve cooling efficiency and allow the working fluid to operate at a warmer temperature than would otherwise be possible with air cooling. The 2014 Submersion Cooling Evaluation (PDF) from Pacific Gas and Electric Co.’s Emerging Technologies Program found that GRC’s system yielded more than 80% savings in energy consumption and peak demand for cooling.
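
For a rough sense of where a heat-capacity ratio like that comes from, the comparison below uses approximate room-temperature property values for air and mineral oil; the numbers are illustrative textbook-order estimates, not GRC data, and they compare heat storage per unit volume.

```python
# Approximate room-temperature properties (illustrative values).
air_density_kg_m3 = 1.2
air_cp_kj_kgk = 1.0      # ~1.0 kJ/(kg·K) for air
oil_density_kg_m3 = 850.0
oil_cp_kj_kgk = 1.9      # ~1.9 kJ/(kg·K) for mineral oil

air_volumetric = air_density_kg_m3 * air_cp_kj_kgk   # ~1.2 kJ/(m³·K)
oil_volumetric = oil_density_kg_m3 * oil_cp_kj_kgk   # ~1,600 kJ/(m³·K)

print(f"Oil stores ~{oil_volumetric / air_volumetric:,.0f}x more heat per unit volume than air")
```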

Let servers heat the building The National Renewable Energy Laboratory (NREL) uses a warm-water liquid-heat-recovery approach to simultaneously cool its Peregrine supercomputer and heat the building. Sealed dry-disconnect heat pipes circulate water past the cores, then capture and reuse the recovered waste heat as the primary heat source for the data center, offices, and laboratory space. NREL also uses the recovered waste heat to condition ventilation makeup air. NREL estimates that its liquid-based cooling technique saves $1 million annually in avoided energy costs.

Power supply

Replace inefficient UPS equipment Some uninterruptible power supply (UPS) equipment traditionally used in data centers has a low power factor and draws a large parasitic amperage even when sitting idle, leading to significant energy waste. The two main types of UPS technology are static (battery storage) and rotary (generators and flywheels). Rotary systems were common years ago before battery-based UPSs largely replaced them, but they’re making a comeback. With improved technology and controls, large rotary UPS systems can now outperform most static systems in both efficiency and reliability. However, rotary UPS systems are typically only an option for larger data centers.
