Given a choice, few consumers would pick a double decker bus over an Italian sports car.
But if you're in the business of moving people at maximum efficiency, buses are hard to beat. Their fuel efficiency per passenger-mile is more than 20x that of your average sports car. One way they achieve this, in the language of IT, is that buses parallelize transportation. They optimize for multi-passenger performance rather than single-passenger performance (largely the opposite of what most consumers do, parents in minivans notwithstanding).
That same focus on industrial efficiency (and divergence from consumer preference) has been washing over the datacenter for a few years. While consumers turn to Dolce & Gabbana phones, datacenters are increasingly pining for high-performance buses – infrastructure optimized for utilization, efficiency and overall performance, not just raw component speed.
That’s a focus we initiated years ago – when we dove into chip multi-threading, shipping the industry’s first octal (8) core microprocessor – each core with 4 threads of execution, making for a sub-$4,000 server capable of doing 32 threads of work simultaneously (with great mileage). (Click here to try one free.) We deliberately prioritized efficiency over single-thread performance – and it was a good bet. Intel, AMD, IBM and Sun are now all investing heavily in multi-core platforms – with Sun farthest ahead, having taped out the world’s only mainstream hexa-deca (16) core microprocessor. (Which I embarrassingly referred to as a “hexacore” chip on our last earnings call… I’ve since been practicing: hexadecacore. Say it with me, hexa-deca-core.)
Hardware without software isn’t worth much, and happily, Solaris knows how to “scale,” or take advantage of all those separate lanes – so applications can take immediate advantage of the innovation without any extra work. Customers can run one application on each thread on the chip, or one app over many threads, or even assign a different OS to each core on the chip – with every permutation in between (now fashionably referred to as “server virtualization“). The result? Customers buy fewer, bigger boxes, consume less space and power, and generally get way better mileage. Traditionally, if you wanted to guarantee application performance, you gave an application its own server. Server virtualization products like VMware and Solaris 10 let you condense lots of apps onto a single box, giving each the illusion it has its own box – while a policy engine automates how precious CPU and memory are allocated to guarantee the same performance. Again, radically reducing boxes, power, space, heat – and spend.
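For the hands-on crowd, consolidating apps onto one box with Solaris 10 looks roughly like the sketch below – a zone per app, with the fair-share scheduler arbitrating CPU by policy instead of by cable. (This is an illustrative sketch, not a recipe: the zone name “webzone”, the paths, and the share count are invented, and exact zonecfg properties vary by Solaris release.)

```shell
# Define a lightweight virtual OS instance with its own root
# ("webzone" is a made-up name for illustration)
zonecfg -z webzone <<'EOF'
create
set zonepath=/zones/webzone
set autoboot=true
EOF

# Install and boot it -- the app inside thinks it owns the box
zoneadm -z webzone install
zoneadm -z webzone boot

# Let the fair-share scheduler divide CPU between zones by policy,
# rather than giving each app its own physical server
dispadmin -d FSS
prctl -n zone.cpu-shares -v 20 -r -i zone webzone
```

The point of the last two lines is the policy engine: shares, not hardware, decide who gets CPU when the box is busy – and an idle zone’s cycles go to whoever needs them.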
We’re bringing that same virtualization to the storage world – having released a file system, ZFS, that knows how to “scale,” eliminating volume management and abstracting away the complexities of dealing with herds of disk drives. Even when those drives fail (after all, there are only two types of disk drives: those that have failed, and those that are about to do so). With some of our customers deploying thousands and even tens of thousands of disk drives, ZFS allows for big pools to be simply aggregated together, delivering reliable service from cheap parts – with incredible simplicity and data integrity.
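What “eliminating volume management” means in practice: a herd of drives becomes one pool with a single command, and filesystems are carved out administratively rather than physically. A rough sketch (the pool and dataset names here are invented; device names depend on your hardware):

```shell
# Aggregate cheap drives into one redundant pool -- no volume
# manager, no partitioning arithmetic
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Carve out filesystems from the pool on demand
zfs create tank/home
zfs set compression=on tank/home

# End-to-end checksums catch the drives that are "about to fail"
zpool scrub tank
zpool status tank
```

When a drive does fail, `zpool replace` swaps it out and the pool resilvers – reliable service from cheap parts.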
And that leaves only one segment of the datacenter untouched by a focus on parallelism and virtualization. And for the company that’s always said, “the network is the computer,” it’s been an obvious gap. What about the network itself?
As you may know, most networking devices are single-threaded – they parallelize work via physical ports. You want more network? Buy more ports. That yields all kinds of cabling messes, waste, management headaches and even weight problems (copper weighs a lot, and raised-floor datacenters are now hitting their weight limits – no, I’m not joking). The networking world has, for the most part, failed to keep pace with the brutally efficient parallelization of the computing world.
In a perfect world, you’d want computers, not people, to dynamically allocate scarce networking resources, like we now see with server resources – assigning, say, lots of bandwidth and guaranteed service levels to high value customers, and less of both to low value (or ‘best effort’) services. You’d want a simple policy engine, not a human being, making such decisions – responding to demand or business rules without physical intervention (eliminating cable swaps and port proliferation, for example). That is, you’d want to virtualize the network, too.
That’s why we just introduced Project Neptune – a silicon project that marries the parallelism of the microprocessor (for Intel, AMD and SPARC systems) with the parallelism of the underlying operating system (Solaris, Linux or Windows) and with parallelism in the network itself. In concert with some software magic (which goes by the name of the Crossbow project), it allows enterprises to collapse cabling, ports, cards and spending – by bringing parallelism to basic network infrastructure (for geeks: you can take multiple TCP streams and allocate them to different processor threads, spreading out load and freeing up CPUs and ports). Ports become a physical convenience, just like a server – what happens inside depends upon rules or policies set by the user or administrator to automate such decisions. Like I said, the network is the computer, and the computer’s virtualized, so why not the network?
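For the geeks, the core idea of allocating TCP streams to processor threads can be sketched as a hash over each flow’s 4-tuple: every packet of a given stream lands on the same thread (preserving per-flow ordering), while distinct streams fan out across all available threads. This is a conceptual sketch only – not Sun’s implementation, and all names and the thread count are invented for illustration:

```python
# Conceptual sketch -- NOT Sun's Neptune/Crossbow code.
# Spread TCP flows across worker threads by hashing the 4-tuple,
# so one stream stays on one thread while many streams spread out.
import hashlib

NUM_THREADS = 8  # e.g., one worker per hardware thread (illustrative)

def thread_for_flow(src_ip, src_port, dst_ip, dst_port):
    """Map a TCP 4-tuple to a worker-thread index, deterministically."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_THREADS

# Same stream -> same thread, every time (ordering preserved per flow)
a = thread_for_flow("10.0.0.1", 40000, "10.0.0.2", 80)
b = thread_for_flow("10.0.0.1", 40000, "10.0.0.2", 80)
print(a == b)  # True: the mapping is deterministic per flow
```

Real hardware does this with a hash unit in silicon rather than software, but the effect is the same: load spreads across threads without any one connection being reordered.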
Now who would find Project Neptune appealing? Anyone whose spending – whether on software licenses, administrative effort, NIC cards, cabling or hosting charges – is related to the number of ports in their datacenter. After all, the future of network computing looks a lot more like a Greyhound bus station than a Formula 1 race – as unglamorous as that might be, buses are a lot more efficient, and a lot better for the planet. Fewer, more powerful ports, like fewer, more powerful servers, are a good thing (for us, anyway :).
If you’d like to try a Neptune card for free, just click here. And before you ask, yes, that is a Project Blackbox mocked up like a bus, and no, it’s not a new form factor.
For far more technical details and insights (on Project Neptune and Crossbow), here are some great links and blogs:
And best of all, here’s a truly great podcast, brought to you by some of the engineers responsible for the innovations in Solaris and Neptune.
Today marks Sun’s 25th birthday – there will be lots of folks focused on the proper celebrations (throughout the year – stay tuned). Having been at Sun for only 10 years, I’m very much in awe of our history, and still feel like a newbie (the podcast, above, features four individuals who have each, even the youngest, been at Sun longer than I have).
To me, the best way of celebrating Sun’s history – is to celebrate the future we’re helping to create. Neptune’s a timely example. The network is the computer… indeed.
Happy Birthday, everyone!