A Simple Comparison

For those who have been leaving comments, I first want to say I appreciate your time and energy. I read them all (both gruff and gracious).


But let me try to clarify a simple point.


I was talking to the CIO of a large financial institution last week. He told me he was in the midst of building out two new datacenters, spending $250,000,000 (yes, a quarter of a billion) on one, and more than that on the other. He was beyond frustrated (as I’m sure was his CFO).


I asked him how long it was going to take; he said nearly three years. Years.


And then Dave Douglas reminded me that two to three years is longer than it took for YouTube to incorporate, build out their infrastructure, scale their business to serve the entire planet – and get sold.


So if time to market matters – in responding to market opportunity or a natural disaster – surely something’s got to change. It’s not as though datacenters are at any risk of disappearing – like I said, there has to be a place to put network infrastructure – but no one could deny it’s time for a tectonic shift in how we think about the problem.

20 Comments

Filed under General

20 responses to “A Simple Comparison”

  1. Hi Jonathan,
    I found out about your blog about a month ago, and getting insight into the way your mind operates, the technology you are looking at and the challenges you experience is fantastic.
    The business challenges you and your colleagues face are great to learn about, as few of us will ever experience them.
    Thank you for investing the time into your blog posts. I’m from Australia and there is not a single CEO I know of with a blog, and I am not sure there is even a company in the ASX 200 that posts.
    Keep pushing for the change in the SEC ruling about posting earnings reports on your blog.

  2. I think it’s hard for most people to comprehend how much infrastructure is needed, how much it costs, and why it is needed. Further, I think it’s hard for most people to comprehend a quarter of a billion dollars. To that end, this still sounds like a bunch of obscure references.
    When more leaders of organizations (very small, up to very large) understand this, then the people pushing the ideas will get more traction and will in turn get more results. As has often been the case with “modernization” and “growth” – there is a lot of push back from those who don’t understand. I don’t think that technology is the biggest hurdle – it’s all the communication around it, and all the education that’s needed to breed more acceptance and encouragement of R&D and other tech [r]evolutions.

  3. Speed

    To put some perspective on that datacenter’s cost and time to build: at the end of 2005, Nippon Cargo Airlines ordered eight Boeing 747-8 cargo planes, paying about $277 million each. The first is scheduled to enter service in the first quarter of 2009.
    If history is any guide, those 747s will be in service for 25+ years and will require several overhauls of major components. Is there any data on the service life of a datacenter?

  4. It’s only time for a tectonic shift (as opposed to a Teutonic shift, we’ve had enough crusading 😉 ) for people who saw monolithic data centres as the only answer. That’s a blight that has beset the IT industry since it kicked off – or at least in the past 20 years that I’ve been involved. Consolidation projects are a wise move with a valid business case, and I’m sure you’re delighted at the revenue. Meanwhile, however, research suggests IT fragmentation outside the data centre; older blades are incompatible with newer models; M&A activity causes the same old integration issues; application rationalisation comes up against as many political issues as technical ones. Perhaps the main shift that needs to happen is in the thinking that there will ever be a single solution to what should not be considered a solvable problem. IT is a continuous process of change, and should be treated as such.
    Best, Jon

  5. Jason

    I bet the CFO is tweaked. So they are taking up to three years to build a datacenter... how do you accurately calculate the capacity of such a beast? Sure, there are different ways to measure: watts/sq. ft., kW per rack, and so on. From a cost perspective, if it takes three years to build and it is expected to serve ten years, how much oversizing has been added based on the unknown? Smaller, right-sized datacenters would certainly shorten the design, build, operate cycle (time to market) as well as curb the financial risk.
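
    One way to put rough numbers on that oversizing question is the Python sketch below. Every figure in it (cost per kW, day-one load, growth rate) is an illustrative assumption, not a number from the post or from the CIO’s project:

        # Back-of-envelope sketch: how much of a build-for-peak facility sits idle
        # on day one. All figures are assumptions chosen only to illustrate the point.
        BUILD_COST_PER_KW = 15_000   # assumed construction cost, $ per kW of IT load
        INITIAL_LOAD_KW = 5_000      # assumed day-one IT load
        ANNUAL_GROWTH = 0.20         # assumed load growth per year
        SERVICE_LIFE_YEARS = 10      # the ten-year horizon mentioned above

        # A monolithic facility has to be sized for the projected end-of-life peak.
        peak_kw = INITIAL_LOAD_KW * (1 + ANNUAL_GROWTH) ** SERVICE_LIFE_YEARS
        build_cost = peak_kw * BUILD_COST_PER_KW

        year_one_utilization = INITIAL_LOAD_KW / peak_kw
        idle_capex = build_cost * (1 - year_one_utilization)

        print(f"Sized-for-peak capacity: {peak_kw:,.0f} kW, costing ${build_cost / 1e6:,.0f}M")
        print(f"Year-one utilization:    {year_one_utilization:.0%}")
        print(f"Capex idle on day one:   ${idle_capex / 1e6:,.0f}M")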

  6. Sys Admn

    Like many companies, we saw a tremendous drop in costs when we were able to set Service Level Agreements around applications, rather than servers. It requires applications built to understand redundancy and tolerate failures, but it can be done. Compare the price of 24 CPUs in a high-end Sun with the price of twelve volume servers – it can easily be an order of magnitude if you factor in backup, cluster managers, redundant hardware, and dual power sources.
    If the reliability of an individual server doesn’t have much impact on my SLA, why am I aggregating them in gold-plated datacenters? At what point are networking, labor, and power cheap enough that I can run 4-5 inexpensive, slightly-better-than-a-warehouse data centers with smart infrastructure and applications, replacing fully redundant servers in fully redundant machine rooms in the same building?
    I wonder which future billionaires are working on this problem right now…
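
    A minimal Python sketch of the availability arithmetic behind that SLA argument, assuming independent failures and made-up per-server figures (99.99% for one premium box versus 99.5% for each of twelve volume servers, of which eight are needed to carry the load). None of these numbers come from the comment itself:

        from math import comb

        def availability_at_least(p_up, servers, needed):
            """Probability that at least `needed` of `servers` independent boxes are up."""
            return sum(
                comb(servers, k) * p_up**k * (1 - p_up)**(servers - k)
                for k in range(needed, servers + 1)
            )

        MINUTES_PER_YEAR = 365 * 24 * 60

        premium = 0.9999                                              # one gold-plated server
        cluster = availability_at_least(0.995, servers=12, needed=8)  # cheap boxes, 8 needed

        for label, a in [("single premium server", premium),
                         ("12 volume servers, 8 required", cluster)]:
            print(f"{label}: {a:.7f} available, ~{(1 - a) * MINUTES_PER_YEAR:.1f} min/yr down")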

  7. This comment is for your previous entry (Oct. 10th), mainly your description of collecting data from automated drills. I thought you might be interested in knowing that, in addition to simply collecting data, research is now being done at the University of Tennessee to apply methods similar to those your researchers use for monitoring your enterprise-class servers (Dr. Kenny Gross – http://research.sun.com/minds/2004-1007/) to detect and diagnose faults and, eventually, to predict the remaining useful life of the drill’s steering system from the collected data.
    I also wanted to let you know how much I appreciate this blog. Two years ago I didn’t know anything about Sun. I was drawn to learn more when I found out that Dr. Kenny Gross had moved from the nuclear industry to your company (I am currently a Ph.D. student in the nuclear engineering department at UT). Since then, your blog has helped me understand your company more and gotten me very excited about what you are trying to accomplish.

  8. There might be some comparing of apples and oranges in matching the requirements of a financial institution against something like YouTube. A financial institution has a completely different level of responsibility to protect its data than something like YouTube. Losing a server at a bank, and all of the records of any transaction made through that server since it was last backed up, is much different than losing a server with user-submitted movies. You can ask people to repost their movies when they are using a free service; if a bank asked all of the people using its services to redo the last 24 hours, it would suffer a serious loss, in both money and reputation. I also suspect that a financial institution has a greater number of legacy applications running on older hardware that must be supported than a completely new organization, which can take advantage of being able to buy from different vendors and run on a wider range of hardware. This is all assuming something like YouTube is actually hosting everything themselves and not just leasing (or paying for amounts used, or any of a number of options) drive space and network access from someone else.

  9. The New York Times article mentioned below, about Friendster, was an eye-opener. They let slow web services drive them out of business!
    Wallflower at the Web Party
    By GARY RIVLIN
    Published: October 15, 2006
    How Friendster missed a billion-dollar break.

  10. Manuel Vidal

    Hi Jonathan,
    I am Spanish, and when I first saw the investment it seemed a huge one.
    Then I looked up the 2005 net margins of the two biggest Spanish banks, BBVA and Santander: BBVA made about 5 billion dollars (almost 4 billion euros) and Santander about 6 billion dollars, so they could invest in 20 datacenters every year.
    Taking into consideration that datacenters are necessary to support most of the business processes, the investment no longer seems as huge as it did at first.
    But if we think about the time, three years is too long for businesses in the 21st century. Santander has doubled its business (and margin) during the last three years! Does Sun (or the industry) have any solution to cut that amount of time dramatically?

  11. How much is the business of this particular customer changing with all this technology update? Some financial institutions generate annual revenues of $70 Billion and small optimization of business processes can justify these data centers. Do you see a lot of companies confusing technology updates with business updates? It would be nice to know what the source of frustration is.

  12. Lee Hepler

    Jonathan, I think I know one of the best answers to this problem: use the Sun Grid. But what I don’t understand is why you aren’t offering the entire Sun software stack as a service on the grid. It could be offered as a disaster failover and recovery service for anyone who uses Sun software, and also as a temporary capacity expansion for those with surges in computing usage. You just have to make it very easy to implement from an already installed server running the software at the local site. Make it as transparent to the user as filling out a Java form with a contract number and minimal configuration preferences. The grid container should be able to get most of the configuration data from the currently running system. I bet Larry Ellison would be willing to host the Oracle stack on it too, if your customers are happy with the service.

  13. It’s getting faster and faster.
    Is it good or bad?
    I made a small cartoon.
    Bye,
    Oliver

  14. UX-admin

    Hello Mr. Schwartz

    I see that there is now an option on your blog to read it in several different languages.
    May I translate your blog into yet another language? (I’m both willing and able to do it ‘free as in beer’.)
    Should you be interested, please have your staff contact me at the e-mail address listed.

  15. Great idea! I think that with the new technology plus the virtualization SW, all the Windows machines could be kept in the trailer. If you need volunteers for this project, please let me know. Best regards.

  16. John

    These posts were disingenuous in light of today’s Sun datacenter announcement… if Mr. Schwartz knew that Sun was going to make an announcement of this type, he should have just said what Sun was working on. Instead, these two posts seem to be part of an orchestrated marketing campaign, and go against the spirit of this blog.

  17. anaborg

    What I said could be very real. I do not want to advertise x64 multicore technology, nor the new HW virtualization, nor the possibility of putting 25-50 VMs (Windows NT) in a 4-socket blade server with the new virtualization SW for x64; you should know it! I am working as a customer with virtualization technologies and datacenters and I really appreciate this idea. Sorry if you don’t think like me.

  18. Ex-Sun lover...

    <SOLARIS SPARC SOLUTIONS COST *DOUBLE* FOR *HALF* THE PERFORMANCE!>
    Dear Jonathan,
    We’re a large Sun shop with thousands of Sun servers and have had a long, fruitful relationship with Sun. Unfortunately we’re considering a divorce, to finally consider dating someone else (Dell, IBM, HP, etc.).
    Why? Sun still doesn’t get it enough… Or if it does, it’s not executing well enough… Sun has one last chance with Solaris x86, but doesn’t appear to be pushing it hard enough with the ISVs.
    (maybe Sun’s distracted by all those other fun things like big black container boxes?)
    We want a lot of different things from a server vendor, but 1st! the applications have to run on the OS…
    From: http://store.sun.com
    X4600 – 4 CPU (dual core), 16GB RAM, 2x 73GB w/ 6x PCI-E slots = $36K List.
    V490 – 4 CPU (dual core), 16GB RAM, 2x 146GB w/ 6x PCI slots = $79K List.
    From: http://www.spec.org/cpu2000/results/rint2000.html
    Sun Fire V490 (2 processors), 4 cores, 2 chips, 2 cores/chip: 29.8 base / 32.9 peak
    Sun Fire V490 (4 processors), 8 cores, 4 chips, 2 cores/chip: 59.0 base / 65.2 peak
    Sun Fire X4200, 4 cores, 2 chips, 2 cores/chip: 64.8 base / 70.0 peak
    Sun Fire X4600, 16 cores, 8 chips, 2 cores/chip: 239 base / 279 peak
    Estimated performance (X4600 with 4 chips populated):
    Sun Fire X4600, 8 cores, 4 chips, 2 cores/chip: 119.5 base / 139.5 peak
    Problem:
    Solaris x86 doesn’t run the applications we need to run (no PeopleSoft, and hundreds of other applications missing).
    Application profile precludes running on T2000, and US-IIIi performance is a joke.
    PROBLEM:
    Solaris SPARC solutions cost double for half the performance.
    SOLUTION:
    Ditch Solaris / Sun, go Linux, and let Sun play price war with Dell/IBM/HP etc. on x86 box choices. No more special favors for Sun, no matter how sexy DTrace may look.
    Good bye…
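
    For what it’s worth, a quick Python check of the price/performance arithmetic above, using only the list prices and SPECint_rate2000 peak figures quoted in this comment (the 8-core X4600 figure is the commenter’s own estimate, not a published result):

        # Figures as quoted above: (list price in USD, SPECint_rate2000 peak).
        systems = {
            "Sun Fire X4600 (4 sockets, 8 cores, estimated)": (36_000, 139.5),
            "Sun Fire V490 (4 sockets, 8 cores)": (79_000, 65.2),
        }

        for name, (price, rate) in systems.items():
            print(f"{name}: ${price / rate:,.0f} per unit of SPECint_rate2000 peak")

        (x_price, x_rate), (v_price, v_rate) = systems.values()
        ratio = (v_price / v_rate) / (x_price / x_rate)
        print(f"On these figures the V490 costs about {ratio:.1f}x more per unit of throughput.")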

  19. zoly

    This comment does not have anything to do with your post, but I felt like responding to the comment from Ex-Sun lover...
    There are a lot of people like Ex-Sun lover out there...
    This is the reason why every penny that Sun spends on the teams that compete on benchmarks with IBM and HP is well spent. I am not sure any benchmark is useful at all in giving you an idea of real-life application performance. However, IMAGE IS EVERYTHING, and that applies to technology like anything else in our lives. (Why did BMW enter Formula 1 racing some years ago? Because Mercedes had done so two years before them and started to eat into their market share.)
    One only has to look at the detail of the benchmark system configurations: are those real-life systems? How many people deploy databases with a total storage/database size ratio of 11.78 (the ratio used in the TPC-H benchmark by the top system from IBM, on a 10 TB database)?
    Jonathan, continue to spend money on kicking competitor ass in benchmarks. Not only will it improve the image of your company, it will improve your products too. And that will show up in more sales, guaranteed. The TPC-H 10 TB top result is from IBM right now; kick them out of the top spot.

  20. TheProblemIsPeopleNotBoxes

    Comparing YouTube scale-out to the needs of a large financial institution? One has zero reliability or legal constraints; the other is incredibly constrained.
    Why is he spending $250 million on the data center? To meet all the required goals, because if he fails, I’ll be leading the class action suit against him for $500 million.
    Can you achieve all the requirements of the $250 million center with steel boxes? Maybe. But I’m not sure you’ve listed all the requirements for that center yet. Throw away half of them, and I’m sure the cost will come down.
    So the real problem is not the $250 million. The real problem is that there are $250 million of requirements that would need to be dismissed.
    I mean, assuming you have employees managing the machines, you have to build bathrooms. And bathrooms have building code requirements. You can’t just pee in the parking lot. Etc. (extrapolate from my simple case).
    Better solution: figure out how you can fire everyone.
    The #1 problem: none of the computing infrastructure is as reliable or stable over time, feature-wise, as vendors advertise. Therefore people get jobs, and bathrooms get built.
