A few years back, I remember sitting with a group of customers talking about wine, and virtualization (a natural pairing, if ever one existed). Wine, because we were at an event Sun was hosting in Napa Valley, the heart of California’s wine country – virtualization, because the attendees were data center professionals who’d come to talk about the future.
The customers in attendance all ran very high scale, high value data centers, and would deservedly respond to the accusation that they “hugged” their servers with “and what of it?” They were the individuals who kept some of the world’s most valuable systems running with exceptional reliability.
But they were all starting to see, and worry about, the same thing: running applications in “virtualized” grids of networked infrastructure (“cloud computing” wasn’t yet in vogue, or I’m sure someone would’ve used the term).
Now, virtualization is a simple concept with a fancy name (abbreviated to “v12n” by the cognoscenti – by that method, I am “j14z”). It’s simply slicing up physical computers into many smaller “virtual” computers, each of which can be outfitted with its own OS and application stack.
That is, not only does a virtualized computer take on the task of running multiple OS’s (running atop a hypervisor, described below), but the OS’s themselves might change over time, responding to load or schedule. The traditional view of “computer A runs OS/Application B” can now give way to a more responsive “these computers are available for high priority work,” without regard to operating system or architecture. A spike in on-line shopping might reallocate more “virtual” machines to transaction processing during peak shopping hours, shifting to a different OS/app stack when the frenzy dies down. Capacity moves from fixed to fungible.
Although desktop virtualization wasn’t the focus of these customers, most live in a world with multiple desktop OS’s, too. It’s not that they all (like me) run five different desktop OS’s – most don’t – it’s that they have multiple generations of Windows, or no longer have the source code to legacy applications, a condition that dictates keeping old OS’s (and hardware) around. Desktop virtualization enables users to run multiple OS’s side by side on a single desktop, and divorces software upgrades from hardware upgrades (an innovation keeping CIO’s and developers smiling).
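To make the desktop case concrete, here’s a minimal sketch of what that “slicing” looks like using the VirtualBox command line (the xVM VirtualBox piece described later in this post); the VM name, guest OS type and memory size below are purely illustrative:

    # create and register a new virtual machine for a legacy Windows guest
    VBoxManage createvm --name "legacy-winxp" --ostype WindowsXP --register
    # give the guest its own slice of the physical machine's memory (in MB)
    VBoxManage modifyvm "legacy-winxp" --memory 1024
    # boot the guest alongside the host's own OS
    VBoxManage startvm "legacy-winxp"

Each guest gets its own OS image and application stack, while the host (and any other guests) keep running untouched.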
Back to the datacenter: virtualization can enable extreme infrastructure consolidation – decoupling applications from hardware drives more efficient capacity planning and system purchases. And as exciting as that was to everyone, it cut both ways: if things went wrong, you could tank the quarter, blow those savings and end your career. So, why all the anxiety?
If I could sum it up, these customers worried that virtualization would dissolve the control they’d carefully built to manage extreme reliability. In essence, they could hug a virtualized mainframe or an E25K (hugging is the act of paying exquisite attention to an individual machine), but it’s far harder to hug a cloud. Nor can you ask a cloud why it’s slow, irritable, or flaky, questions more easily answered with a single, big machine.
As the wine soothed their anxieties, a few of them began to draw out their vision of an ideal cloud environment (our laptops were open to take notes). Summarized, here’s what they wanted:
First, extreme diagnosability. Datacenter veterans know that things rarely run as planned, so assuming from the outset that you’ll be hunting for problems, bottlenecks or optimization opportunities is a safer bet than assuming everything will go as expected. They all wanted absolute confidence in answering the question “what if something goes wrong?” – their jobs were on the line.
Second, they wanted extreme scalability – they all believed the move toward horizontally scaled grids (lots of little systems, ‘scaled out’) would give way (as it always does) to smaller numbers of bigger systems (‘scaled up’). We’re seeing that already, with the move toward multi-core CPU’s creating 16, 32, 64, even 128-way systems in a single box, lashed together with very high performance networking.
But scalability applies to management overhead, as well – having 16,000 virtualized computers is terrific (like 16,000 puppies), until you have to manage and maintain them. Often the biggest challenge (and expense) in a high scale datacenter isn’t the technology; it’s the breadth of point products and people managing the technology. So seamless management had to be our highest priority, with extreme scale (internet scale) in mind.
Third, they wanted a general purpose, hardware and OS independent approach. That is, they wanted a solution that ran on hardware from any vendor they chose: not just Sun’s servers and storage, but Dell’s, IBM’s and HP’s, too. And they wanted a solution that would support Microsoft Windows and Linux, not just Solaris. Ideally, one embraced and endorsed by Microsoft, Intel and AMD, not just Sun.
And finally, they wanted open source. After years of moving toward and relying upon open source software, they didn’t want to reintroduce proprietary software into the most foundational layer of their future datacenters. Some wanted the ability to “look at the code” to ensure security; others wanted the freedom to make modifications for unique workloads or requirements.
And with that feedback, the answer to the above seemed obvious to one attendee: “why can’t you guys just use Solaris?” They all ran Solaris in mission critical deployments, all appreciated its performance, loved its diagnosability (delivered via DTrace), and its capacity to scale to the largest systems on earth. It was the perfect answer until one of the customers asked, “do Windows customers want to run Solaris? I don’t think so.” The “Solaris” brand didn’t convey OS neutrality – and that neutrality was core to what we were thinking. But we knew the underlying inventory of OpenSolaris innovations would certainly give us a fabulous headstart.
That’s the rough backdrop to what drove our virtualization announcements last week – a desire to solve problems for developers and datacenter operators in multi-vendor environments. If you look at the core of our xVM offerings, you’ll see exactly how we responded to the requirements outlined above: we integrated DTrace for extreme diagnosability. We leveraged the scale inherent in our kernel innovations to virtualize the largest systems on earth. We built a clean, simple interface to manage clouds (called xVM Ops Center, click here for more details), addressing management and provisioning for the smallest to the largest datacenters. And everything’s available via open source (and free download), endorsed by our industry peers (watch these launch videos to see Microsoft and Intel endorse xVM – no, that’s not a typo, Microsoft endorsed xVM). We even leveraged ZFS to get a head start on storage virtualization (the next frontier).
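To ground two of those claims a bit (diagnosability via DTrace, storage via ZFS), here’s the flavor of what each looks like from a shell; the pool layout, dataset names and device names are illustrative, not anything specific to xVM:

    # DTrace: count system calls by process name, live, on a running host
    dtrace -n 'syscall:::entry { @[execname] = count(); }'

    # ZFS: pool a pair of disks, then snapshot and clone a "golden" guest image
    zpool create tank mirror c0t0d0 c0t1d0
    zfs create tank/guests
    zfs snapshot tank/guests@golden
    zfs clone tank/guests@golden tank/guests/vm01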
And why call it xVM? To make sure everyone knew we weren’t simply targeting Solaris – xVM virtualizes Microsoft Windows and Linux (Ubuntu, RHEL and other distros) alongside Solaris (8, 9 and 10). Customers can consolidate those operating systems, and similarly consolidate their hardware infrastructure – and use xVM Ops Center to manage and maintain the whole plant.
This week, we’re unveiling a full line of desktop to datacenter virtualization offerings, covering desktop virtualization (xVM VirtualBox), datacenter virtualization (xVM Server), high scale management (xVM Ops Center), and virtual desktop support (xVM VDI and Sun Ray). All endorsed and supported by the industry, and all in use by some of the most powerful customers on earth.
And to that end, I’d like to offer my thanks to the customers who were present at that event a few years ago, and offer my sincere congratulations to the teams involved in bringing xVM to market, across Sun and our partner community.
With all the celebration around xVM, perhaps our next customer event should be held in Champagne…