What is data center efficiency, and how can it be improved?

Efficiency is defined as the ratio of useful output from a machine (server, truck, catapult, or other) to the total energy input. The measure of efficiency we're all most familiar with is miles per gallon: how many miles does your car travel (output is distance traveled) on a unit of energy (input is gallons of fuel)?

In the world of data centers, the measure of efficiency most commonly used is "Power Usage Effectiveness," or PUE. It is the ratio of the total energy consumed by the facility (the IT load plus cooling, lighting, and other building loads) to the energy consumed by the IT load alone (servers, switches, routers, storage). The theoretical ideal PUE is 1.0, indicating that 100% of the energy goes to the IT load. A PUE of 2.0 represents an even split of 50% IT and 50% facility loads. In the real world, most data centers have a PUE between 1.25 and 1.7, with some extremely efficient facilities achieving 1.1 or less.
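The PUE calculation itself is simple division. Here is a minimal sketch (the function name and example figures are illustrative, not from any specific facility):

```python
def pue(total_facility_kwh: float, it_load_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT load energy.

    A value of 1.0 means every watt goes to IT equipment;
    2.0 means half the energy is overhead (cooling, lighting, etc.).
    """
    if it_load_kwh <= 0:
        raise ValueError("IT load energy must be positive")
    return total_facility_kwh / it_load_kwh

# Example: a facility drawing 1,500 kWh in total to power 1,000 kWh of IT load
print(pue(1500, 1000))  # 1.5
```

Note that the same ratio works whether you measure energy over a month (kWh) or instantaneous power (kW), as long as both numbers use the same unit.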

How can you increase data center efficiency?

The simple, though impractical, answer is to turn off the cooling systems and lights. The more practical answer is to reduce the energy required to remove the heat generated by the IT equipment. Fresh-air cooling, containment, and higher set points are common strategies for improving data center efficiency.

  1. Fresh-air cooling is a method of using filtered outside air in the data center when exterior temperatures are within the desired interior temperature range. If the outside air is cool enough (typically at night, or any non-summer season in most of the world's northern latitudes) the cooling systems use filtered outside air to supply the data center instead of creating cold air through air conditioning (which consumes more power). The car analogy is to use the ventilation system without the A/C; the air is still filtered, but uses far less energy to keep you cool inside.
  2. Containment is a strategy to gain efficiency through both separation and focus of the air inside a data center. By containing and separating the spaces where cool air flows in and hot air is exhausted out, the cooling system doesn't have to use as much energy to maintain target temperature ranges. Containment is likely the best bang-for-the-buck way to improve the cooling efficiency of any data center. As such, it will be the subject of a follow-up blog post with much more detail.
  3. Higher set points means running the data center warmer. It is the same as setting your home thermostat to a temperature that minimizes how hard your air conditioner has to work. IT managers fear heat in data centers, a fear that stems mostly from experiences of cooling-system failures leading to IT equipment failures. The reflexive response is to keep data centers chilled to meat-locker temperatures. The reality is that heat itself rarely causes failures: IT equipment is tested and certified to run in a wide range of environmental conditions, and Microsoft even demonstrated that servers can run with no environmental management at all. The more common cause of equipment failure is rapid temperature change; when a data center goes from meat-locker to oven in a matter of minutes, things break. Careful planning of loads within your racks, along with proper containment strategies, can allow higher temperature set points with minimal risk of server failure.

To remain competitive in an increasingly crowded marketplace, many of our data center colocation partners strive to continually improve their operational efficiency. Below are a couple of examples of providers that prioritize this aspect of operations.

Here is a video from ViaWest talking about some of the aspects of their innovative Generation 4 data center designs that facilitate "super high-density cooling" in several of their data centers across the US and in Canada.

Interxion sees energy efficiency as a core differentiator in the European colocation market. "Much of a data center’s energy use is driven by the mechanical cooling system, of which the chiller is a primary component – accounting for as much as 65% of energy consumption. Engineering teams must find efficient ways to reduce demands on the chiller to offset energy expenditures, like Interxion’s groundwater cooling system in Copenhagen or applying adiabatic cooling as a basic design element for the mechanical system," said Bob Landstrom, Director of Product Management for Interxion.

Centeris's SH1 data center south of Seattle relies entirely on outside-air intake and evaporative cooling, taking full advantage of Western Washington’s temperate climate and good air quality. SH1’s intake design and air handling units ensure proper airflow, with critical filtration in place before the air cycles into the data center. By avoiding the significant capital investment and high maintenance costs faced by traditional data centers, Centeris keeps facility power and maintenance costs lower, and that savings is passed on to their clients in the form of reduced MRCs (monthly recurring charges).

Here is the list of our current domestic and international providers. This list is consistently growing as we add more quality partners to serve our clients' business needs.