In answering the prior question…
As you know, computers consume a ton of energy. If you don’t work in a datacenter, you may not realize what I’m talking about – but you know how your laptop warms your lap, or your PC heats up your den? Multiply that a few thousand times over, and you have the problem faced by most datacenters: power draw and heat dissipation. Map that challenge to every business on earth, and you have a global power crisis as the network is built out. (And talk to some web 2.0 startups and you’ll hear many say their second biggest operating expense, after salaries, is electricity – that’s why the big search companies are building datacenters where power’s cheap.)
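To make the scale concrete, here’s a quick back-of-envelope sketch. Every number in it is an illustrative assumption on my part – the server count, per-server wattage, cooling overhead, and electricity price are round figures for argument’s sake, not data from any particular datacenter:

```python
# Back-of-envelope: why "multiply your laptop by a few thousand" adds up.
# All figures below are illustrative assumptions, not measured data.

servers = 5_000            # "a few thousand" machines (assumed)
watts_per_server = 300     # assumed average draw per server, in watts
cooling_overhead = 1.0     # assume ~1 W of cooling per 1 W of IT load
price_per_kwh = 0.10       # assumed electricity price, USD per kWh

it_load_kw = servers * watts_per_server / 1_000
total_kw = it_load_kw * (1 + cooling_overhead)
annual_cost = total_kw * 24 * 365 * price_per_kwh

print(f"IT load: {it_load_kw:,.0f} kW")                    # 1,500 kW
print(f"With cooling: {total_kw:,.0f} kW")                 # 3,000 kW
print(f"Annual electricity bill: ${annual_cost:,.0f}")     # $2,628,000
```

Even with these modest assumptions, the bill lands in the millions per year – which is why electricity shows up right behind salaries on the expense line.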
Back to my story… the CIO in my prior posting informed her CEO that in order to support more analytics and trading activity – the computational heart of their business – they needed to build a bigger computing grid. That meant more space (which isn’t cheap in midtown Manhattan) and more power, to which he responded, “the CEO of the power company is a friend of mine – let me just give him a call.”
The CIO replied, “No, no – those are only a couple of the limiters. The more power we bring in, the more cooling we need; the more cooling we add, the more power we draw. But here’s what’s really holding us back: even with more budget for space, power and cooling, we need backup power in the event of an outage. A generator big enough for a load like ours is the size of a locomotive, the only place we could possibly put it is on the roof, and look, we DON’T HAVE ANY MORE ROOM!”
And now you know why we’ve been so focused on the physical size and energy efficiency of our new computing and storage platforms, in addition to their raw performance. (And if you’d like a sample to try for yourself, just click here, or read what others have experienced.) The lowest-end systems start at $795 (no, that’s not a typo).