The N1 Grid Service – A True Computing Utility


UPDATE: quick update to the post below. Go here for more details and to test drive some of the technologies.

I was with a CIO from a big investment bank about three months ago. He and his team are some of the more innovative folks on Wall Street, creating infrastructure when their demands exceed the IT industry’s ability to keep up (though not quite to the extent of the bank that still internally maintains its *own* Linux distro). They drive us hard.


Our discussion was wide-ranging – mostly a brainstorming session. He told me all about how much it was costing them to provision, manage, and operate infrastructure. They wanted to know what we were doing about “virtualization.” Now, one man’s virtualization is another man’s outsourcing contract – so we spent some time getting down to brass tacks. “What do you mean by virtualization?” I asked.


And it turns out what he meant was, he wanted to reduce the cost of running his infrastructure. He was spending a ton of money operating what was supposed to be a commodity. Virtualization, in his definition, was technology that would automate those processes.


Now as you probably know, the IT industry is obsessed with “virtualization.” It’s the buzzword of the day (sorry, couldn’t resist). But I wanted to try a new theory – “What would you think,” I asked, “if Sun built out a farm of 20,000 CPUs, all running Solaris, divisible into ‘Trusted’ containers? And I sold you the right to use (RTU) an industry-standard OS/CPU combination in increments of CPU-hours? Solaris/Opteron or Solaris/SPARC, you pick. We’ll run all your Red Hat apps in a Janus partition, and spare you the license cost.”


Good news – he paused. “You’d charge me by the CPU-hour? Sounds interesting. How much would you charge me?” Good question. My response: “No clue, but we could put the RTUs on eBay and see what happened.”


He smiled, and responded, “Probably doesn’t make any sense.” Why? Because he said we were unlikely to meet his transactional requirements over the internet – connecting systems across the globe still meant latency (the speed of light, if nothing else) had an impact on their business performance.


“Frankly,” I said, “I’m not interested in workloads with those sensitivities. Let’s keep this simple. We’ll provision CPUs by the hour running Solaris Containers. In Siberia, for all I care (to keep cooling/real estate costs low – no offense to our friends in Siberia). And we’ll tell our customers – if you have computational workloads that require tens or hundreds or thousands of CPUs, for defined periods of time (i.e., 5 hours, or 3 days, or 3 months) – discrete jobs like rendering a movie, doing a Monte Carlo or geophysical simulation, or modeling a protein – then we can run your loads on demand for less than anyone in the industry.” He went on to say, “Clever idea, doesn’t make a lot of sense.” Ah well, I thought…


Since then, I’ve met with a movie studio, rendering one of the new superhero movies, and they said “AWESOME, love to not have to own the farm that renders the monsters. We just throw it away at the end of the movie anyway.” I met with a company that designs jet aircraft, and they said “FABULOUS, love to not have to own the farm that runs the finite element wing simulations,” and I spent some time with a buddy of mine in academia, who said, “GREAT, love to let you deal with my compute farm – I’d like to get back to folding proteins.” Maybe the idea has legs.


And then the banking CIO called back, and said, “I thought about it, and it’s not a lot of our workload, but I bet we could take 5-10% of our workload and leverage your compute farm. When’s it up on eBay?”


Now you know why we’re announcing a new compute utility tomorrow morning – taking N1 to its next logical step, to a secure N1 Grid service, available on demand, to any customer with a credit card or a purchase order. In increments of an hour. A virtual compute ranch – virtual because you don’t employ the people, or operate the technology, that manages it. A grid that lets your network do the walking.


And what’s the price going to be? You may have to look on eBay. Or we’ll just throw out a number. Maybe both.


And here’s a view on why this is one of the most important announcements we’ll make this year: what Google, eBay and Salesforce.com are proving is the economics of using someone else’s uniformly standardized infrastructure to run your business. Sun’s business, historically, has been the opposite – we deliver infrastructure to customers who work with us to customize that infrastructure for unique workloads. What Salesforce.com and others prove is that for some workloads the reverse can be true – mapping the workload, like salesforce automation, to a single service provider with a common infrastructure yields savings from economies of scale that vastly outweigh any expense of changing workflows/workloads. The ASP (application service provider) model is, in fact, a great model.


One man’s virtualization is another man’s web service. The era of mapping workloads to network service infrastructure is officially underway.


Price per CPU-hour?


Stay tuned tomorrow.


And the best part about running Solaris 10? You, or your business, would be in a position to sell trusted containers, raw compute power, “back to the grid” when your exchange is closed, your employees have gone home, or your machines aren’t in use. Talk about turning the industry inside out…
