Designing Radically Efficient and Profitable Data Centers


Do you ever wonder what keeps our e-mail servers, search engines, and Web applications like Facebook and Flickr running?


Data centers around the world are responsible for storing and processing the petabytes of information that power modern computing.


But what's supporting data centers?


Vast amounts of power.


In 2000, data centers accounted for 0.8 percent of total US electrical consumption. By 2005, even as total electricity production grew 7 percent, data centers' share of consumption rose to 1.4 percent, the equivalent of seven medium-sized (750 MW) power plants. By 2010, according to the Energy Information Administration, that share is expected to reach 2.3 percent.


The IT industry has proven that the pace of technology development can be stunning. Moore's Law predicted that the number of transistors per square inch on integrated circuits would double every 24 months. Instead, it has doubled roughly every 18 months.


This explosion in power density does not come without costs, though: it is raising temperatures inside and around the chips.


Increased cooling needs, together with rising IT demand, are driving the growth in energy consumption. Meanwhile, climate concerns are mounting and electricity prices are rising fast. Companies that run data centers everywhere are feeling the crunch on their bottom line.


How to compete under these changing market conditions was the focus of a recent gathering of U.S. experts at the Next Generation Data Center Conference in San Francisco.


There, Rocky Mountain Institute (RMI) energy analysts Sam Newman and Bryan Palmentier showed how designing radically efficient data centers can keep the industry in the black for years to come.


From past and current client experience, RMI has found that the same computing services can be provided with 95 to 99 percent less power than standard practice. And these gains are achievable with off-the-shelf technology.


According to RMI's research, average data centers are hugely energy inefficient. For every 100 watts these data centers consume, only 2.5 watts result in useful computing (see graph below). The rest of the power is wasted on low server utilization and on inefficiencies in the server power supply, the fans and hardware that cool servers, the UPS (uninterruptible power supply), lighting, and central cooling.
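As a back-of-the-envelope check, the 100-watts-in, 2.5-watts-out figure can be reproduced by multiplying efficiencies down the power chain. The stage names and efficiencies below are hypothetical round numbers chosen to illustrate the cascade, not RMI's or the EPA's measured values:

```python
# Illustrative power cascade for a conventional data center.
# Stage efficiencies are assumed round numbers that happen to land
# near RMI's "100 W in, ~2.5 W of useful computing" figure.
stages = [
    ("central cooling & lighting", 0.50),  # fraction of site power reaching the IT room
    ("UPS & power distribution",   0.80),  # conversion and distribution losses
    ("server power supply",        0.70),  # AC/DC conversion losses
    ("server fans & hardware",     0.60),  # overhead inside the box
    ("server utilization",         0.15),  # fraction doing useful work
]

power = 100.0  # watts entering the facility
for name, eff in stages:
    power *= eff
    print(f"after {name:<26} {power:6.2f} W")
# The chain ends near 2.5 W of useful computing per 100 W consumed.
```

Because the losses multiply, a small inefficiency at each stage compounds into the roughly 40-to-1 overall ratio the article describes.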



Data centers can achieve radical power savings by increasing the productivity of the technologies closest to the end use, which avoids all the upstream inefficiencies as well. In the case of data centers this means focusing on IT loads -- turning off unused servers, purchasing more efficient models, or running multiple applications on one machine.


Once IT loads are addressed, upstream equipment can be downsized as well. For instance, smaller IT loads require smaller cooling systems.
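The compounding works in reverse, too: a watt cut at the server avoids every upstream loss incurred in delivering it. A minimal sketch, again using hypothetical stage efficiencies rather than measured ones:

```python
# Assumed (illustrative) efficiencies of the equipment upstream of the
# server: how many watts at the utility meter does one watt saved at
# the IT load actually avoid?
upstream = {
    "central cooling & lighting": 0.50,
    "UPS & power distribution":   0.80,
    "server power supply":        0.70,
}

delivered_fraction = 1.0
for eff in upstream.values():
    delivered_fraction *= eff

multiplier = 1.0 / delivered_fraction
print(f"1 W saved at the IT load avoids ~{multiplier:.2f} W at the meter")
```

With these assumed losses, trimming one watt of IT load avoids roughly 3.6 watts at the meter, which is why whole-systems design starts at the end use and works upstream.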


These are only a few examples of the possibilities whole-systems design presents. A host of organizations, from Lawrence Berkeley National Lab to the Uptime Institute to the Alliance to Save Energy, have compiled their own strategies to cut data center energy use.


Though most of us may never see these developments when logging into our e-mail or updating our profiles, many of our favorite online services depend on these kinds of breakthroughs.


And in an industry that prides itself on innovation, more creative solutions are no doubt at hand.


Image Credits: Andres Rodriguez (servers), EPA 2007 (energy use graph), Jeff Kandyba and RMI (energy flow chart).


Tags: Electronics
