The Rise of the General Purpose System

NOTE: Update at bottom…


Silicon Valley’s hot again. How can I tell? My favorite barometer is a personal one – my commute down either of the two major highways joining San Francisco and San Jose is as bad as it was in the bubble.


On a less anecdotal note, I’ve also been spending a lot of time with newly funded startups (and the ballooning ranks of venture and private equity investors). Interest levels and market opportunity are up for the innovations that fuel the internet.


On the technology front, one of the most interesting trends I’ve seen is the near disappearance of custom hardware. I’m not seeing nearly so many ASICs or custom boards built by hardware startups hoping to become the next Sun or Cisco. I was talking to one such company just a couple of weeks ago, run by a guy known for having made big investments in custom hardware designs over the years. So I asked, “where’d all the ASICs go?”


His response? “The bar’s a lot higher today – general purpose products are so fast, we do pretty much everything in software.” As Solaris and the systems on which it runs get faster, they’re continuing to displace a breadth of specialized solutions in the market, from customized operating systems or private distros to ASICs and daughter boards.


And in an interesting way, this goes against what customers want.


For the most part, customers love special purpose systems. The NAS filers, load balancers, storage switches and firewalls, even custom search appliances, solve a specific problem, do so with great focus, and are like Novocaine on a technical problem. Have a pain? Numb it with an appliance.


There’s only one part they don’t love. Living with the economics.


Leaving high price tags aside, specialized products typically require specialized skills and customized management or versioning processes, and they tend to be difficult to integrate into increasingly uniform datacenter processes. (Southwest Airlines gets great economic advantage from flying only Boeing 737s – most CIOs crave a “737 datacenter,” built on one OS, with shared services for all – yet most will admit to having inherited a Noah’s Ark.)


On the other hand, suppliers (like Sun) love general purpose systems. By design, general purpose systems, like general purpose servers or operating systems, aren’t focused on one market. Instead, they focus on horizontal segments of the market (like web serving), and allow us to amortize R&D investments over a far broader opportunity, while allowing us and our customers to leverage the management, supply chains, administration, versioning and even the ISVs that build up around very high volume platforms.


Potentially more work for the customer on day one, but typically massive financial and administrative savings. As an example, tomorrow morning we’re introducing a new product, internally named “Thumper.” It’s a 4 core server with 24 terabytes of storage, housed within a very small (4 RU) box, leveraging the most advanced file system to hit the market in years, ZFS.


We’re still figuring out what to call the product, “open source storage” or “a data server,” but by running a general purpose OS on a general purpose server platform, packed to the gills with storage capacity, you can actually run databases, video pumps or business intelligence apps directly on the device itself, and get absolutely stunning performance. Without custom hardware (ZFS puts into software what was historically done with specialized hardware). All for around $2.50/gigabyte – with all software included.
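For the curious, that last point is concrete: on Solaris, building a redundant ZFS pool across a pile of raw disks takes a couple of commands, with no RAID controller in the path. A rough sketch (pool and device names here are illustrative examples, not Thumper’s actual layout):

```shell
# Build a RAID-Z pool across six raw drives -- striping, parity and
# end-to-end checksumming are all done in software by ZFS.
# (Device names below are hypothetical.)
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0

# Filesystems are nearly free to create; give each application its own.
zfs create tank/video
zfs create tank/db

# Sanity-check pool health and capacity.
zpool status tank
zpool list
```

The design point is that the redundancy lives in the filesystem itself, not in a dedicated hardware layer underneath it.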


How much new work does a customer need to do to run Thumper in their network? None. It’s just a Solaris system, managed, versioned and administered like all their other Solaris systems. How much work does an ISV need to do to take advantage of Thumper? None – like I said, it’s just Solaris, the same as what runs on our, HP’s, Dell’s and IBM’s servers.


And the best part for Sun? Thumper leverages the general purpose systems innovation at our core, leverages the open source operating system already used in datacenters across the world, and allows us to amortize a tighter R&D budget over a broader market, while driving cost down for customers and expanding the market for our ISVs, resellers and partners. Moving from specialized to generalized.


So if you’d like to know what Thumper looks like (and at 170 lbs, it is a thing of beauty, but a very heavy thing of beauty), it looks like this:




And yes, we will be including Thumper in our Try and Buy program. And I’d like to personally apologize to all those poor UPS, DHL and FedEx drivers…



_____________________________________


Update: video for this morning’s launch event, here. Worth watching it all (especially Fowler’s segment). My favorite part was Tim O’Reilly talking about the impact of Web 2.0 on application architectures and datacenter requirements. He’s accompanied by Scott Yara from Greenplum, one of the smartest startups I’ve seen in a while. Their interview is at 1:04:35 in the playback.


And btw, given the volume of comments related to “how do I replace a dead drive” in Thumper – the answer is you don’t, you let Solaris and ZFS simply remove it from use (while maintaining provable data integrity), and leave it for an annual maintenance call to clean out failed drives and drop in fresh ones (known in the business as “failing in place”). If you’re interested in understanding the magic behind ZFS, this is a great tutorial (delivered by one of the inventors of ZFS, the very eloquent Bill Moore).
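To make the fail-in-place model above a little more concrete, here’s roughly what it looks like from the administrator’s side (pool and device names are hypothetical):

```shell
# ZFS detects I/O or checksum errors and faults the drive on its own;
# with RAID-Z redundancy the pool keeps serving data, just DEGRADED.
zpool status tank

# Nothing else to do until the annual maintenance visit. Once a
# failed disk has been physically swapped, one command resilvers
# the replacement into the pool:
zpool replace tank c0t3d0

# Watch resilvering progress until the pool returns to ONLINE.
zpool status -v tank
```

No data is at risk in the interim: the pool stays online, and every block read is verified against its checksum.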

49 Comments

Filed under General

49 responses to “The Rise of the General Purpose System”

  1. General Purpose Systems can offer some advantages, but I think you need to time them accurately. We need to remember that customers do not “buy” products, they “hire” them to carry out specific tasks. You can design a wonderful product with state-of-the-art technology, glamorous design or whatsoever; but if the customer wasn’t trying to carry out that task in the first place, the product will flop.

  2. Anantha

    Excellent, I was at a customer briefing a couple of weeks back and saw more details on these products. They look very good, especially the X4600. The most important thing about the X4500 is how you position it and sell it. Good luck.

  3. Razvan Vilt

    Hi Jonathan,
    Thumper is an impressive system. Even if just for the concept. There is just one problem: no Sun Microsystems in Romania, only a few companies that bring some Sun servers, but you can’t customize the configuration.
    At the company I work for, we have a small operation by Sun standards (30 employees), but we do need a rather complex network of 5 servers (that is, if you can call 5 servers complex). One month ago we paid USD 12,000 for two Opteron-based ProLiant servers. I didn’t really mind that, because they are not that bad, but having Sun servers would have rocked, especially considering that in the following 18 months we expect to move to a new architecture based on Xen + Solaris + RHEL, from our current one based on RHEL + Windows 2003.
    Back to the initial problem. I know that entering a new country is not easy, but just opening a Demo+Shop+Service in Bucharest and Cluj would be a pretty nice first step. I promise to be the first client stepping in through the door.
    HP and IBM sell most of the servers over here, and HP is first in the desktop computer business (figure that out, as we only have HP desktops and servers), and I know a lot of network admins who favor Sun over other brands over here, much in the way that a lot of guys in the US favor Apple for the desktop.
    Let’s solve this problem in Romania, as it’s not really a small market anymore, and see where we get from there.
    Cheers,
    Razvan

  4. K. Barry A. Williams

    Jonathan,
    After reading John Fowler’s comment on how the name Thumper came about, and now your question of what to call the system, may I suggest “The Standard Open Source Storage System”. It is, I believe, what the industry would like to have: a standard, a system that can be used by anyone, that uses an open source OS (Solaris) that is a standard in itself, using standard off-the-shelf components, providing the customer with the lowest TCO (Total Cost of Ownership). The latter should also be included as being a standard. I do not know what the customer will pay to purchase this system, but if you are correct that the customer will basically have a “plug-and-play” system, the additional cost to implement it should be just the boot time and the cable connection to the data center. Can’t get much simpler than this!!
    Barry Williams
    Systems Group

  5. So essentially I can store the library of congress and the collected documents from the Smithsonian all in one server AND run Oracle and PostgreSQL in their own Zones AND use ZFS to manage it all without concern for abstracted disks.
    I like it.
    Dennis

  6. [Trackback] “Tomorrow morning we’re introducing a new product, internally named ‘Thumper.’ It’s a 4 core server with 24 terabytes of storage, housed within a very small (4 RU) box, leveraging the most advanced file system to hit the market in years, ZFS…

  7. Netapp Employee

    Ignorance seems to be showing – well, I can understand if someone has misled you into believing that all appliances have custom hardware designs. Netapp, for example, has _NO_ custom hardware. It’s a software game.
    The appliance philosophy is low administration in addition to being purpose-built. Your blender is a good example of low administration.
    Sun servers and Solaris systems have high administration overheads. Please have one of your folks send you the manual for your systems (hardware and software combined) – the size of which _far far_ exceeds that of Netapp’s manual.
    Jonathan, you are a great guy – I like you. Sometimes you get carried away with wrong info that people give you. We are here to help you. Go SUN !

  8. last comment didn’t show up? again: To add to this riff I would point to an SOA appliance vendor I met recently called Reactivity. VP of Marketing Joelle Gropper-Kaufman took me through the corporate strategy – taking XML routing workloads off of the app server and onto the appliance… when I asked where they got the hardware, it turned out to be Sun… it’s been a while since I talked to a third party OEMing Sun hardware. High performance general purpose, as you say Jonathan.

  9. In your post, you mentioned ZFS. It was rumored months ago that the next version of Mac OS X was going to switch from HFS+ to ZFS. I spoke to many Mac users about this, and they really disagreed, believing HFS+ is sufficient. Personally I didn’t see the logic of it, because Sun would not benefit very much, apart from saying their file system is used in OS X. Is this still a possibility?

  10. Yikes — is that the way I think it looks — access to the DDMs is on the top of the chassis? That’s going to be really annoying for the people who have to maintain it — and with that kind of density, I’ll bet the thing’s got cooling problems and will therefore require frequent DDM replacement. I have to admit it does look cool though.

  11. Gayatri Sinha

    ah ha, then it was definitely you I saw walking on Mission Street in SF this morning… a very good morning indeed for me.

  12. jwa

    Sweet jesus. How is this beast cooled?

  13. John H Silver

    Good article – except perhaps it should have been titled “Rebirth of the General Purpose System,” since this is exactly what mainframes have always been. Same everything, except resources are allocated/shared using a Systems Resource Manager so that specific applications get what they need when they need it.

  14. Hi, Jonathan,

    this entry I do like a lot, due to the following reasons:

    You highlight the need for standards
    You highlight the trend for general purpose systems
    You highlight the trend for software instead of special hardware

    These things together do force customers to think differently.

    Customers now face the problem of putting special purpose environments onto general purpose hardware, just like what we saw in the IT industry years ago, when we made the switch from programming in Assembler to programming in high level languages (up to “virtual” languages like Java).

    We also see that in bigger general purpose pieces in classical industries. No one today is replacing the lightbulb in a switch in a dashboard in a car any longer. The replacement is taking place at the switch level or even at bigger units.

    And with the trend induced by Moore’s law, systems get cheaper than the average consulting man-day rate (around 1,500 to 2,000 dollars a day).

    So, it’s cheaper to replace a complete box, instead of replacing faulted parts.

    This in turn creates demand for provisioning, and standards again in how to do software (or systems) provisioning.

    This in turn leads to my question:

    Can you elaborate a little bit more, on how you see Sun helping customers solving these new problems?

    Yours sincerely,

    Matthias

  15. Jonathan Abbey

    Nice look, it reminds me of my TRS-80 Model I, lo these many years past. Battleship grey, simple and neat.

  16. Wow, that’s quite a box. However, it seems as though there are some trade-offs for packing so many drives into 4U. To me, the most obvious is heat (and I’m sure Sun has put lots of thought into it) that may contribute to drive failure.. and when a drive fails, as they inevitably will, how will you replace the single drive without taking down the entire box? In a fully loaded rack, you’ll have to slide out the thumper or you’ll have to slide out the boxes on top of the thumper. Does this mean that the drives are not hot-swappable?
    Or should users just assume that since there is so much storage, drives could be considered as “dead sectors” and just disabled?😉


  18. MGSsancho

    omg that’s amazing. I have a question: can we umm buy it as a barebones and add the HDDs over time as our data requirements increase? cuz I did the math – $61K is a lot of money. but if it were sold as a barebones, and if we could just pop in a hdd and it joins the virtual drive while the system is on, that would be amazing

  19. Yeah, I second Victor Trac, how do you take care of the cooling and replacement? These are not exactly SCSI, and their replacement rate would probably be more.

  20. Igor

    Thumper is an excellent technology – but its pricing is as strange as it gets: charging an extra $37K ($70K–$33K) for upgrading 48 SATA drives from 250GB to 500GB??? Only the laziest (or really stupid) customers would pay that much for a <$10K upgrade.
    In almost every situation I can think of, 2x 12TB boxes ($33K each) are better than one 24TB box ($70K) – and with Sun’s prices it seems CHEAPER as well.

  21. Prince

    Hi Jonathan, Good to see SUN marching ahead with ‘beefier’ systems. Some folks have even started telling ‘do we need such powerful x64 systems today?’

  22. Mikael Gueck

    The current product names (X4200, X4500, X4600, Sun Java System Application Server, Sun Java System Web Server, Sun Java System Portal Server, …) are still a bit too exciting and distinguish the products too much, perhaps you could take MD5 sums of the names and use them in marketing. Names such as Glassfish, Thumper or Niagara, on the other hand, could cause some emotions and excitement in readers and open up Sun to possible heart attack lawsuits.

  23. With regards to the concerns about the top-loaded HDDs, according to Tim Bray:

    “If one of the disks fails, the little LED beside it lights up. The software handles it (see below) and things go on running; the intent is that you service it about once a year, swapping out the failed drives, which are easy to find. Bringing down maintenance costs is a big deal with a lot of our customers.”

    I guess the theory is you can afford to bring a system down (either this box or the one in the rack above) once a year?
    Maybe you could have the server on slidable tray, and have power and network cables long enough so you can slide the server out and replace the drive?
    I’m sure people will come up with good solutions to that problem…

  24. [Trackback] This is one large set of disks to have in only 4u of space. And to top it off the thing has 4 cores. I love commodity hardware and sun has been rolling out some nice commodity hardware these days. The price for some of the equipment has started to catc…

  25. Kieran

    I agree:
    Storage Server: http://www.newegg.com/Product/Product.asp?Item=N82E16811152069
    Storage Server Motherboard:
    http://www.newegg.com/Product/Product.asp?Item=N82E16813182078
    CPU: http://www.newegg.com/Product/Product.asp?Item=N82E16819103588
    Storage Controller: http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
    Disk Drives: http://www.newegg.com/Product/Product.asp?Item=N82E16822136010
    ~= 2K for 8X 320GB drives
    Download OpenSolaris with build 43+ including the iSCSI target. Run RaidZ2 across 8 disks with iSCSI exported.
    Sprinkle in some community documentation on how to configure and set up ZFS and OpenSolaris and we have the beginning of real do-it-yourself white box storage arrays.
    ~$1K per terabyte of HA enterprise storage

  26. The drives are designed to be hot-swappable. You take a drive out of commission with standard Solaris hotplug administration commands, then you lift the failed drive out (the LEDs guide you to the one you just readied for removal), replace it with a new drive, then another command to online the drive, and you’re off again. The whole process takes 1 minute (including opening and closing the cover).

  27. The problem would be easily solved if the storage unit they were housed in placed the device on its side and not on its belly. They could be placed back-to-back with the disks readily available. Of course this would require custom cabinets, but hey, that’s the price you pay for this kind of density.

  28. Claire Connelly

    @Victor
    Dell rackmount servers slide out of the racks on extensible rails; presumably these systems will do the same. Still a pain to be sliding machines in and out, and loose cables could be a problem, but doable.

  29. the x4500 is da killer. One such box can hold 40,000 VHS quality movies. As we know, the profit engine of the internet is driven by online video. There are zillions of web sites that do video streaming. One x4500 is enough for one such site.
    I suggest SUN bundle some video streaming software to make the x4500 a turnkey solution…
    I hope those DHL/UPS/FedEx delivery folks won’t file tort claims for back injuries….As a precaution, SUN should put big warning labels on them.

  30. [Trackback] It’s not that often that I get overly excited about a server, but you’ve got to hand it to the guys at Sun – this is an impressive box! It’s a 4 core server with 24 TB (yes, that’s TERABYTES) that fits in a 4U chassis and weigh…

  31. Hot swapping the disks works fine; just make sure your stabilizer bar at the bottom of the rack is extended.
    I’ve been using prototypes of Thumper for quite some time now; I am totally enamored with it. My favorite way of thinking of the storage density: take four racks of Thumpers, and you have about a petabyte of data. (Plus 160 Opteron cores to go through all that data!)

  32. Michiel

    What about cooling? The drives seem to be packed pretty densely, so how does air flow from the front fans to the drives in the back?
    And also, is there no plan to sell barebones so I could start small and expand the system as my business grows?

  33. It seems Sun is closely cooperating with Google. When Jonathan says: “how do I replace a dead drive” in Thumper – the answer is you don’t, you let Solaris and ZFS simply remove it from use. He is just following the Google principle: Let Software take care of HW failures. Way to go. Good work! It’s great to see Sun responding to the real market demand and not cooking up new dishes that no one seems to care about.
    IMO, Sun stands a very good chance of a rebound by following these strategies and continued R&D spending (and not listening to Wall Street 🙂).
    Abhishek

  34. just checking

    can someone post a link to the place which tells more about why the name ‘Thumper’ was selected?

  35. [Trackback] Sun may be getting their game on if the X4500 is any evidence. Packing 24 TB of storage and 2 dual-core 64bit Opterons into a 4U box, it is the first system anywhere to leverage the very cool ZFS.
    ZFS: Google File System For The Rest Of Us
    Google&#8217…

  36. Tod

    This seems to be missing the mark, IMHO. For a Sun-focused Solaris shop that doesn’t really care about the cost, it would be fine, or if rack mount space is the most important factor, given its high density.
    Other companies, such as Silicon Mechanics, offer 24-drive, front-loading RAID servers in a 5U form factor using all standard off the shelf components (Xeon motherboard, 3Ware controllers) for close to half the price. You can run any standard x86 OS you want (we use Linux) and it comes with a 3-year warranty instead of 2-year.
    Sun and other big tech companies still too often think they can build their business on charging big premiums for marginal value.

  37. Richard S

    This is probably the Xyratex 4835.

    http://www.xyratex.com/products/storage-systems/storage-4835.asp

    N+1, hot-swappable power and cooling
    Hot-swappable I/O modules and drives
    Dual I/O paths to the individual drive level support HA configurations

  38. Kevin Fu

    Hi Jonathan,
    The application is the key to the success of a general purpose system. I see Thumper as a solid foundation for a lot of applications. To name one that is not currently available on it: Virtual Tape Library.
    You’ve got the computing power, you’ve got the “cheap” disks; what you need is an application on Solaris to simulate tape libraries and tape drives and move data to the real tape library through Fibre Channel PCI cards.
    HDS (Hitachi) just released their “virtual tape library” product, which is an AMD server plus some storage, running Linux and VTL applications. I see it as a perfect example of a “customized” system that can be replaced by Sun’s Thumper. Though the magic will be the “killer” application!

  39. Oren Tirosh

    Some people (including people in the storage business who should really know better) automatically assume that since ZFS ends in “FS”, it’s just another filesystem. From the point of view of disk arrays and volume management, filesystems are usually considered irrelevant.

    Boy, are they in for a surprise. ZFS is going to eat their lunch. ZFS is relevant even if you don’t use the POSIX filesystem semantics at all and just carve it up into zvols to be exported via iSCSI to Windows servers, for example. Advanced features like snapshots and clones all work just fine at the zvol level.
    This does suggest that the name ZFS may not be doing a very good job of representing its true capabilities.

  40. Richard

    This type of packaging is good… but it is not new… the first example was done some years ago by Silicon Gear Mercury series…in San Jose.
    ZFS is great… Are there any RAID-Z performance figures? How many simultaneous disk failures can it tolerate?
    Thanks

  41. Doug Smith

    We are getting closer and closer to the HAL-9000. Remember the scene in the movie, “2001” as the crystal holographic blades slowly rose from the core, as Dave unlocked the retainer clips?
    Keep the project name “Thumper”.

  42. HC

    I don’t know if you have, but you oughta take a look at this:
    http://www.theregister.co.uk/2006/07/13/fish4_goes_down/

  43. S. R.

    So, the x4500 is a wonderful system. I have been waiting for something *exactly* like this to run with the Solaris 10 6/06 ZFS release. However, there is a problem with it. I have been paying about $1200/TB for my storage. Now, I am not opposed to paying a premium for what I consider to be superior systems from Sun. However, I am not willing to pay 3 or 4 times market price for such a commodity item as a hard drive. Let’s see here. The difference in price between the 12 TB and 24 TB models is around $37,000. Retail price on a 250 GB drive is $80. Retail price on a 500 GB drive is $250. A difference of $170. Multiply that by 48 and you get $8,160. That seems to indicate to me that the markup on the drives is about 450%. So, take the 12 TB model and subtract what Sun is selling the drives for with the 450% markup, and you are left with about $17,000. Mr. Schwartz, I would LOVE to buy 2 or 3 of these systems from you at $17,000 (an equivalent system without the backplane goes for ~$5-10k) but there is NO WAY I am paying a 450% markup on 48 drives which are as much a commodity in this world as network cards and Ethernet cable.
    Sun is playing a game I fear it is going to lose. I buy Sun products because they are superior in support and engineering. However, I do not buy storage from Sun for the simple reason that they charge exorbitant prices for a commodity product. My Seagate drive bought up the road is the same Seagate drive that Sun puts in their system for 4 times the price. I would gladly buy several x4500s in a bare configuration, but instead Sun will not make a penny from me on this wonderful system because of screwy pricing. Too bad such a wonderful system for Solaris 10 6/06 & ZFS will go completely unused around here. It’s a shame too. Our research project could really use the storage.
    Jonathan, I really hope you are reading the comments to your blog, as it seems I am not the only one that is of this opinion.

  44. Bharath R

    Speaking of (general purpose) systems, the fish4.co.uk fiasco mentioned above is giving Sun enough negative publicity to last a decade. Described by Sun as an “enterprise powerhouse can satisfy even your toughest mission-critical service level agreements”, it didn’t quite last that long for fish4. I’d love to see Sun start the amazon-like user rating that was mentioned in an earlier blog entry. I suspect the ratings won’t be all that flattering in the light of such customer experiences.

  45. CB

    Bit unfair. I know I wouldn’t want my spare parts turning up in the back of some guy’s car. That’s what couriers/expert delivery people are for.

  46. AL

    What if more than 6 out of the 48 drives fail over the span of one year? Shouldn’t there be a way to replace the drives without any downtime at all? Even the low-end A1000s are hot-swappable, and I have worked on ones which kept running for more than a year…

  47. From the eco-responsibility perspective, Sun consistently whacks the ball out of the park. It’s not just that the equipment is great, but it’s low energy, it’s built to EPEAT standards, and it consistently simplifies the overlying architecture of the solution. Thumper seems like the next great thing. Wow.

  48. Petilon

    How long before Microsoft clones ZFS? Microsoft has rights to all of Sun’s patents. (This was part of the settlement of Sun’s antitrust lawsuit.) Basically this means Sun is Microsoft’s research department. Hopefully Sun will make a few bucks off of ZFS before Microsoft clones it.

  49. Sanford Yoder

    It’s a DIABLO: Datacenter In A Box of Low Overhead
