There have been a few recent analyses showing that cloud computing has significant efficiency and cost advantages. The most recent one with which I am directly familiar was conducted by Jon Taylor’s team at WSP Environment & Energy for Salesforce.com, and it showed per-transaction emissions reductions averaging 95 percent for companies that shift to using the cloud.
I can think of four reasons why cloud computing is (with few exceptions) significantly more energy efficient than using in-house data centers:
1. Economies of scale. It’s cheaper for big cloud providers to make efficiency improvements because they can spread the costs over a larger server base and can afford to have more dedicated staff focused on efficiency improvements.
For example, there are usually significant fixed costs of implementing simple techniques to improve Power Usage Effectiveness (PUE), like the costs of doing an equipment inventory and assessment of data center airflow (same for implementing institutional changes like charging users per kW instead of per square foot of floor area). Whenever there are costs that are substantially fixed (i.e. only weakly related to the size of the facility), bigger operations have an advantage because they can spread the costs over more transactions, equipment, or floor area.
There’s also a substantial advantage to having “in house” expertise devoted to efficiency, instead of staff split between different jobs. Technology changes so rapidly that it’s hard for people not devoted to efficiency to keep up as well as those who are.
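To make the PUE point above concrete: Power Usage Effectiveness is defined as total facility energy divided by the energy delivered to the IT equipment, so a perfect facility scores 1.0. Here is a minimal sketch of the metric; the facility figures are hypothetical, invented purely for illustration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility drawing 2 GWh/year overall for 1 GWh/year of IT load...
before = pue(2_000_000, 1_000_000)   # PUE = 2.0
# ...and the same facility after airflow and cooling fixes cut the overhead:
after = pue(1_500_000, 1_000_000)    # PUE = 1.5

print(f"PUE before: {before:.2f}, after: {after:.2f}")
print(f"Non-IT overhead eliminated: {2_000_000 - 1_500_000:,} kWh/year")
```

The inventory and airflow assessments have roughly the same cost whether the denominator is 1 GWh or 100 GWh, which is exactly why scale favors the big operators here.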
2. Diversity and aggregation. More users, more diverse users, and users in more places mean computing loads are spread over the day, allowing for increased equipment utilization. Typical in-house data centers have server utilizations of 5-15 percent (sometimes much less), whereas cloud facilities run by major vendors are more in the 30-40 percent range.
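The aggregation effect is easy to see with a toy calculation. The sketch below uses hypothetical hourly demand profiles for three user groups whose peaks fall at different times of day; because each group alone must provision for its own peak, while a shared facility only provisions for the (smaller) aggregate peak, pooling raises average utilization.

```python
# Hypothetical hourly demand profiles (arbitrary load units) for three
# user groups whose peaks fall at different times of day.
us_office  = [8 if 9 <= h < 17 else 1 for h in range(24)]        # daytime peak
eu_office  = [8 if 1 <= h < 9 else 1 for h in range(24)]         # time-shifted peak
batch_jobs = [8 if h >= 20 or h < 4 else 1 for h in range(24)]   # overnight peak

def utilization(load):
    """Average demand divided by provisioned (peak) capacity."""
    return sum(load) / (max(load) * len(load))

# Each group running its own facility must size it for its own peak:
separate = [utilization(profile) for profile in (us_office, eu_office, batch_jobs)]
# A shared facility sizes for the aggregate peak, which is smaller than
# the sum of the individual peaks because the peaks don't coincide:
combined = [sum(hour) for hour in zip(us_office, eu_office, batch_jobs)]

print("separate:", [f"{u:.0%}" for u in separate])   # each facility ~42% utilized
print("combined:", f"{utilization(combined):.0%}")   # shared facility ~59% utilized
```

The exact percentages depend on the invented profiles, but the direction of the effect does not: more diverse users mean flatter aggregate load and better use of the installed hardware.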
3. Flexibility. Cloud installations use virtualization and other techniques to separate the software from the characteristics of physical servers (some call this “abstraction of physical from virtual layers”). This sounds like a great thing for software and total costs, but why is it an energy issue?
Using this technique means you can redesign servers to optimize them and drop certain energy-costly features. For example, if software can route around physical servers that die, you no longer need two power supplies in each server; the death of any one particular server doesn’t matter to the delivery of IT services.
In essence, this technique redefines the concept of reliability from one that is based on the reliability of a particular piece of hardware to one that is based on the reliability of the delivery of the IT services of interest, and this is a much more sensible approach.
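A small calculation shows why service-level reliability can beat hardware-level reliability. The sketch below compares a single hardened server to a pool of cheap ones using a standard binomial availability model; all of the specific numbers (99.9 percent, 98 percent, five servers, three needed) are hypothetical assumptions for illustration, and it assumes servers fail independently.

```python
from math import comb

def service_availability(p_up: float, n_servers: int, min_needed: int) -> float:
    """P(at least min_needed of n_servers are up), assuming independent failures."""
    return sum(
        comb(n_servers, k) * p_up**k * (1 - p_up)**(n_servers - k)
        for k in range(min_needed, n_servers + 1)
    )

# One hardened server with redundant power supplies: say 99.9% available.
single_box = 0.999
# Five commodity servers at a hypothetical 98% each, where any 3 can carry
# the load and software routes around the dead ones:
pooled = service_availability(0.98, n_servers=5, min_needed=3)

print(f"hardened single server: {single_box:.4%}")
print(f"pool of cheap servers:  {pooled:.4%}")   # exceeds 99.99% in this sketch
```

In this toy example the pool of less-reliable machines delivers the service more reliably than the gold-plated box, which is the logic behind dropping the second power supply.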
4. Ability to sidestep organizational issues instead of having to address them head-on (which is hard and slow). While most companies’ in-house IT operations face a disconnect between the IT departments driving server purchases and the facilities departments paying the electric bill, cloud providers have largely solved that problem. They generally have one data center budget and clear responsibilities assigned to one person with decision-making authority.
Economies of scale are also more powerful in the cloud scenario, because you’ve removed the impediments to taking action and can let those economies work their magic. Finally, it’s much easier and cheaper for people stuck with in-house organizations to buy cloud services with a credit card than to wait around for their internal IT organization to get its act together.
These four big energy advantages will over time translate into more and more pressure for companies to adopt cloud services, because the economic advantages (driven by the energy advantages) are so large. And it’s not just energy costs: it’s also the capital cost of all the supporting equipment, which in a standard in-house facility can run $25,000/kW and (together with the energy costs) add up to half or more of the total costs of the facility. (For details see Koomey, Jonathan G., Christian Belady, Michael Patterson, Anthony Santos, and Klaus-Dieter Lange. 2009. Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers. Oakland, CA: Analytics Press. August 17.)
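A back-of-the-envelope calculation shows how quickly those non-server costs add up. The $25,000/kW figure comes from the Koomey et al. (2009) report cited above; everything else below (facility size, amortization period, PUE, electricity price) is a hypothetical assumption chosen for illustration.

```python
IT_LOAD_KW = 100                 # hypothetical small in-house facility
INFRA_CAPEX_PER_KW = 25_000      # supporting equipment (power, cooling), $/kW (Koomey et al. 2009)
AMORTIZATION_YEARS = 10          # hypothetical straight-line amortization
PUE = 2.0                        # hypothetical facility overhead factor
ELECTRICITY_PRICE = 0.10         # hypothetical price, $/kWh
HOURS_PER_YEAR = 8760

# Annualized capital cost of the supporting infrastructure:
infra_per_year = IT_LOAD_KW * INFRA_CAPEX_PER_KW / AMORTIZATION_YEARS
# Annual electricity bill, with facility overhead captured by PUE:
energy_per_year = IT_LOAD_KW * PUE * HOURS_PER_YEAR * ELECTRICITY_PRICE

print(f"Amortized infrastructure: ${infra_per_year:,.0f}/year")
print(f"Electricity:              ${energy_per_year:,.0f}/year")
```

Under these assumptions a modest 100 kW facility carries hundreds of thousands of dollars per year in infrastructure and electricity costs before a single server is purchased, which is why these items can plausibly reach half or more of total facility costs.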
Of course, there are still issues to work out. For example, no one has really ironed out the complexities of liability for cloud outages. And there will always be organizations that want their own in-house facilities for security reasons (big financial institutions, for example). But even in that case, the benefits of a virtualized cloud infrastructure can be brought to the in-house facilities.
You won’t get the same diversity, but the other benefits of cloud will still be powerful. I’ve also heard of companies creating “private clouds” for use by other companies that pay in to use them on a “members-only” basis, thus dealing with the diversity and security issues. So things are evolving rapidly, but the economic benefits are so large that we’ll see a whole lot more cloud computing in coming years.
This post originally appeared on Jonathan Koomey’s blog.
Jonathan Koomey is a researcher, author, lecturer, and entrepreneur whose work spans climate solutions, critical thinking skills, and the energy and environmental effects of information technology. He has been a Consulting Professor at Stanford University since 2004, held visiting professorships at Yale (Fall 2009) and Stanford (2003–4 and Fall 2008), and worked as a researcher and scientist at Lawrence Berkeley National Laboratory (LBNL) for more than two decades.