The Death of Yesterday’s Datacenter

An executive from a mobile phone company recently told me the feature most requested by buyers in their fastest growing geography (India) was an LED flashlight. Not a camera, but a flashlight. Edison would never have guessed (obviously). Nor that electricity would one day be on airplanes, lunar landers or deep sea submarines.


Nor would we have imagined that network connectivity and computation would end up on drill bits. Or on ocean-going supertankers. And that’s exactly what I was told last week by the CTO of a global energy company. They use sensors on spinning drill bits to extract seismic data, which then guides the bits as they descend into the earth (I had no idea you could actually steer a drill bit). And they do this on offshore drilling platforms. And after they pump crude into supertankers, they use data from sensors spread throughout the ships to monitor vibration, fluid dynamics and rotational physics – to keep the ships, and their precious sloshing cargo, moving safely in the right direction.


I was similarly surprised to hear a global relief agency describe the IT challenges of managing a disaster – starting with a need to supply computing capacity to remote disaster locations without power. More painfully, without desktop system administrators.


And then there’s what Disney’s up to, passing out trackable stuffed dolls to kids in their theme parks, so parents can follow them (as Scott would say, “that’s not Big Brother, that’s Dad…”). By tracking clusters of dolls, the operator can tell parents how long the lines are for a ride, and determine where to place concession stands (in front of waiting patrons, of course). And there’s the wave of DVD players and consumer electronics all becoming network computers – check out the logo on the far right of Panasonic’s new DVD player…




All of the above are examples of putting computing closest to the source of value – and responding in near real time to a changing physical world. No one projected those applications 50 years ago.


So where’s it all headed?


As usual, Greg has some really good thoughts about the biggest issues. But among the most interesting questions about where computing is headed is one that concerns the value of the one place we all thought to be the perfect location for computing: the datacenter.




Now I understand that IT infrastructure has to be put somewhere. But the whole concept of a datacenter is a bit of an anachronism. We certainly don’t put power generators in precious city center real estate, or put them on pristine raised flooring with luxuriant environmentals, or surround them with glass and dramatic lighting to host tours for customers. (But now you know why we put 5-foot logos on the sides of our machines.)


Where do we put power generators? In the engine room. In the basement. Or on the roof. And we don’t host tours (at least in the developed world).


The original intent of the datacenter was to accommodate not computer equipment, but the people who managed it. Operators who needed to mount tapes, sweep chad, feed cards, and physically intervene when things went wrong. Swap a failed board or disk drive, or reboot a system.


Therein lies an interesting quandary – at least from our internal analysis, the availability of IT infrastructure is inversely correlated with foot traffic. The more people allowed in a datacenter, the more likely they are to kick a cord out of the wall, break something trying to fix it, or just bump into things trying to add value.


As the best systems administrators will tell you, the most reliable services are built from infrastructure allowed to fail in place, with resilient systems architecture taking the place of hordes of eager datacenter operators. Instead of sweeping chad, they do periodic sweeps of dead components – or simply let them occupy space until the next facility is brought on-line. (Known as “failing over a datacenter.”)
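
To make “fail in place” concrete: the operational loop reduces to a periodic health sweep that records dead components rather than dispatching someone to fix them. Here is a minimal sketch, assuming a hypothetical inventory of node hostnames and a made-up /health endpoint; nothing below refers to any particular product’s tooling.

```python
# Minimal "fail in place" sweep. The hostnames and the /health endpoint
# are invented for illustration; they don't refer to any real system.
import urllib.request

NODES = ["node-%03d.rack1.example" % i for i in range(48)]  # hypothetical fleet

def is_alive(host, timeout=2):
    """Probe a node's (assumed) health endpoint; a dead node simply fails the probe."""
    try:
        with urllib.request.urlopen("http://%s:8080/health" % host, timeout=timeout):
            return True
    except Exception:
        return False

def sweep(nodes):
    """Record dead components instead of paging an operator to swap them."""
    dead = [n for n in nodes if not is_alive(n)]
    remaining_capacity = 1 - len(dead) / len(nodes)
    # Nobody enters the room: dead nodes occupy space until the next
    # scheduled maintenance window, or until the next facility comes online.
    return dead, remaining_capacity

if __name__ == "__main__":
    dead, capacity = sweep(NODES)
    print("dead nodes:", len(dead))
    print("remaining capacity: %.0f%%" % (capacity * 100))
```

The point isn’t the code; it’s that nothing in that loop requires a human inside the room.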


Very few operators involved, yielding very high service levels. (When there are nearly 1,000,000 people in the world who make their living off eBay, downtime goes well beyond an annoyance.)


Which again begs the question, where’s computing headed? As highlighted by some of the scenarios above, into the real world, certainly.


Perhaps a more interesting question should be – why bother with datacenters at all? Surely it’s time we all started revisiting some basic assumptions…

58 Comments

Filed under General

58 responses to “The Death of Yesterday’s Datacenter”

  1. The LED light is there (at least on my cellphone) to act as a flash for the camera — it comes on just before a photo is taken. So “Not a camera, but a flashlight” seems to be missing the point, if (as on my phone) the light is wanted to make the camera more useful.

  2. Prince

    Jonathan, last month I purchased an LED flashlight from Costco. One needs to crank a handle to generate electricity. The unit also has an alarm & FM radio. This contraption is a big hit with my parents in India (where there are scheduled and unscheduled blackouts).

  3. Kevin Hutchinson

    According to news.com, Salesforce shares your vision for a hosted datacenter – with its new “Apex” Java-like language enabling fancy customizations. But Salesforce seems to be opting for a whole bunch of big slow Dell servers running Intel chips? I thought these guys were your buddies?

  4. MJ

    Hey, I’m not the sharpest tool in the shed, but it took me 45 seconds to find the Java logo on the Panasonic box; a black background is something I normally block out when doing a quick glance search. Just a marketing tip: don’t let the manufacturers put a dark background on Sun’s logos. You’ll notice the bar code has a white background, black trim with black bars, so the cashiers can locate it quickly and get to lunch break. MJ in Sunnyvale

  5. Christopher Mahan

    What you’re saying is that a bunch of waterproof sheds with a 5-year life expectancy, full of servers, out on the Wyoming plains next to 200-foot wind-powered generators and right alongside continental fiber connections, makes more sense than servers in office buildings? Can’t argue with that.

  6. Haven’t you ever been on a tour at Hoover Dam?🙂

  7. Bharath R

    RE: The comment on the camera, the cell phone with the flashlight (without the camera or any other fancy paraphernalia) is indeed the most useful (low cost) phone here in India.

  8. Cell phones with a flashlight are very useful; my girlfriend has one, and she finds it immensely useful, and we live in the USA! The only problem she has is finding a new phone with one… Convergence is very nice at times, and there is no reason not to have something so simple and useful on a phone. More annoying are phones that have an LED flash for the camera but don’t let the user use it as a flashlight!

  9. Pedant

    Which again begs the question, where’s computing headed?

    No, it doesn’t. It raises the question.

    For more info:
    http://www.qwantz.com/index.pl?comic=693

  10. HAC

    Once again, you show yourself typical of the “back-of-the-seat airline magazine”-educated CEO, with no concept of what a datacentre offers. To quote Inigo Montoya, “there is no time to explain, let me sum up”:
    – Security
    – Redundancy
    – Storage
    – Backup
    Mainframes are still around, even though they now look a lot different. Same with the centralized datacenter. Until you can meet the service levels demanded by most large business customers, distributed computing at the scale you suggest is simply a pipe dream.
    I’ve spent 33 years in IT, mostly in mainframe architecture and support, but also most recently in the open systems side of things. You don’t really seem to have any grasp of what a datacentre actually does. It can provide the kind of reliability, availability and security that the loosely connected systems you envision can’t.
    Oh – and the datacentre really doesn’t need that much in terms of staff anymore. Folks don’t mount tapes; robots and virtual tape replaced that years ago (I know, I installed and managed the first robotic tape library in Canada). “Lights out” operations are coming much closer to being a 100% thing.
    I suggest you spend more time learning, and less time blogging.

  11. Lazy Genius

    Why bother with datacenters at all? Because I need somewhere with redundant power and 100% uptime to host my business-critical systems!! I suppose I could host them on my PDA, but what happens when I forget to charge it over the weekend… Seriously, what a ridiculous thing to say.

  12. Rush

    I can somewhat agree with what you are saying, but with some of it I cannot. I am currently involved in a project to build a 20,000 sq. ft. co-location datacenter in Montana and have found the demand to be huge. All research points to the fact that many companies (such as American Express, government offices, Fortune 500 companies, etc.) have a need for these co-locations for data security and backups. Katrina and 9/11 have underlined the need for such facilities.

  13. IT Matters

    I think Jonathan is forward-thinking (as he does so well). There are also items above that I don’t entirely subscribe to, like “why bother with datacenters at all?” The datacenter will always exist, just as a bank maintains a vault to secure $$/private information.

    The term “datacenter” will likely change in its definition, as Jonathan is implying. And hopefully in the directions that he has pointed out above.

  14. Brian Summers

    I think someone has been reading too many “CEO Pipedreams^H^H^H^H^H^H^H^H^H^HArticles”. Yes, we know you want to cut costs. IT has lots of costs for facilities that actually “work”. But the “data centers” certainly will not go away if you want “data”. Hear me out a bit.

    Yes, data centers started because lots of people were needed to monitor and maintain the original systems. Heck, people used to need to physically “load” the punch cards/paper that contained the program that would run (i.e. pick up the box that stored all the cards, place them in a hopper of some sort, and set the switches and buttons to tell the computer to follow that program). We don’t do that anymore. But we do a LOT more than that.

    Sure, we can get rid of the data centers, but how will you handle a power outage? Oh, that is correct, you are working in an environment where the data is on the desktops. I guess you don’t have people from one building working with people in another building who share common data. (Oh, you do have that? Well, I guess that means you need to be able to store and transmit all that common data reliably, in real time, to every system that needs it, so I hope you have lots of network bandwidth and even more disk in each of those computers… Oh, you don’t have that? Well, I hope that no one else needs access to that data the day the power goes, or the individual system is down…)

    Think about this for a moment. As the number of systems increases, so does the probability that you will have systems in a failed state (be it hardware, software, or act of nature). Having the “data” at the “ends” of the network is a disaster in the making (the “ends” being individual desktop or other computing systems). You would then need “disaster recovery”, “backup”, and “redundancy” for all the critical “ends”. So instead of needing to spend money on a few “data centers”, that money will now need to be spent on creating hundreds of “micro data centers”. Worse, you will need to split the IT personnel to cover all the other locations instead of being able to consolidate and co-locate personnel with the major data centers. You will lose the benefits of a pool of personnel who collectively know how to fix different problems as they arise. You will be relying on one or two people to solve the problems as they come up. And because the data center is dispersed, there will probably be more physical systems, which means more of them will be broken and in need of fixing at any given time, which means more personnel needed to fix them.

    Good data centers are still a necessary cost of using computing power. The power (redundancy), heating/cooling, air handling, data redundancy, system redundancy, and backup/recovery requirements for reliable computing still exist and will exist for the foreseeable future (until a major paradigm shift occurs, much like what the introduction of networking did to old mainframe systems, making today’s micro, blade, and cluster computing possible).

  15. For the Naysayers

    Jonathan, thank you for doing two things: 1. Actually daring people to think beyond current paradigms (and citing real world examples); and 2. Looking at potential new opportunities for Sun Microsystems. THAT’S what a CEO does best: Keeping his/her company moving forward into new growth areas, new monetary opportunities.

    Keep on blogging. You’re continuing to create transparencies that Sun desperately needs to help some folks understand company/industry direction. Naysayers are welcome to their opinion. In fact, bring it on! The dialogue is refreshing!

  16. james

    I really wish you’d stop talking… Reading your blogs has made me go from thinking Sun was pointless to thinking Sun is both worthless and ignorant. You simply have no idea what you’re talking about… Aren’t data centers the only market where something Sun sells is still relevant?!
    What about security? What about the volume of data most web 2.0 app providers rely on?
    Get a clue man…

  17. Data centers as a point on the walking tour should be a thing of the past, but I think there are a few more hurdles before datacenters go away entirely. Distributed shared processing and distributed databases, at the very least. Moving some computing power to the edge is another step, but there are still going to be services accessed via a network that the edge devices rely upon. Using the same tools for the edge devices as are used to develop for the traditional computing environment is not the same as there only being edge devices. Until then, having collections of hardware concentrated in one or more locations makes sense for efficiently planning the power, networking, disaster recovery, etc. that is required to protect the information assets of a company and guarantee the service they provide to the public will always be available.

  18. Elmer

    Since data centers are going away, Sun should stop making servers, to get the ball rolling.

  19. me

    Mmmm, not to nitpick, but if you have a lot of data in one source, then that is what a datacenter is. eBay is a datacenter because data is stored in a place where it can be easily looked up. That’s called centralization; if you decentralize it, you push the data to the clients instead. If eBay didn’t have a datacenter, then your lookups of items would have to go out to the clients – which means that every time you searched for “lamp” you’d only get results from a person who posted “lamp” if he was currently logged into eBay. Obviously, no one would actually do that.

    To make the point non-technically, let’s switch electronics for hardware: if I were to notice that a lot of people had screwdrivers and toolkits at home, would that lead to the assumption that the modern assembly line was nearing its end? That sounds foolish, but to me it sounds like what you were saying.

  20. Matthew Simmons

    I can assure you that datacenters, as in centralized locations for servers, are not going away any time soon.

    You spoke of the pervasive technology, such as the tracking of stuffed animals at Disney, and DVD players having Java, presumably to interact with online data.

    Those are revolutionary ideas made commonplace, and computers are getting infinitely smaller, but there’s a missing variable in the equation. Where does the data go?

    Almost every new gee-whiz technology that fits in the palm of your hand has multiple hard-core servers behind it, somewhere in a datacenter. Phones with flashlights are great, but as long as there are cellphone companies, there will be databases of customers, and it makes no sense for those databases to live outside the datacenter.

    In the Disney example, don’t you think the tracking information is stored somewhere? I’m willing to bet Disney’s datacenter is outrageously impressive, just because of all the systems they have online.

    Datacenters will be around for a long, long time.

  21. Working Genius

    Lazy Genius, you’re missing the point. If your network is dispersed enough and redundant enough, you mitigate the requirement for redundant power and 100% uptime of any given component. Ask Google how worried they are about losing a node, or 100, or a thousand. Does their business stop? Nope. Have you seen the photos of the new Oregon facility? It’s a huge development, and will no doubt contain thousands of servers, but if you examine the construction, it’s not anything like your typical data center bunker. It’s inexpensive, because even at its huge scale, it is not individually critical for Google. Likewise, if your data center ran on 1,000,000 PDAs (an extreme example), then a few with dead batteries is not an issue.
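
    A back-of-the-envelope illustration of that argument (the per-node availability and replica counts below are invented for illustration, not anyone’s measurements): with per-node availability p and r independent replicas, the service is unavailable only when every replica is down, i.e. with probability (1 - p)^r.

    ```python
    # Invented numbers: 99%-available commodity nodes, varying replica counts.
    def service_availability(node_availability, replicas):
        """Probability that at least one replica is up, assuming independent failures."""
        return 1 - (1 - node_availability) ** replicas

    for r in (1, 2, 3):
        print("replicas=%d -> %.4f%% available" % (r, service_availability(0.99, r) * 100))
    # replicas=1 -> 99.0000%, replicas=2 -> 99.9900%, replicas=3 -> 99.9999%
    ```

    Three cheap, individually unreliable copies beat one carefully tended box, which is the heart of the “dispersed and redundant” argument.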

  22. Michael

    Let’s take a step back and start with the saying, “The network is the computer.” Outside of the RAS issues that datacenters attempt to address (and, yes, I did say attempt, owing to years in the industry) one of the main things they provide is high-speed connectivity. Having worked with everything from enterprise-class systems, open systems and Grid computing, I would argue that it’s the plumbing that counts most. And until a high-speed network is ubiquitous I don’t see us moving from the datacenter. Certainly it will happen, but considering how long it has been in development and deployment (FIOS? WiMAX? HFC? anyone?) I don’t see it happening in the next 5-10 years. And given that those numbers represent several technology cycles per Moore’s law, who knows what we’ll be doing then and what the requirements will be.

  23. Bosco Hern

    Central power generation is doomed with internal combustion engines so small and easy to implement, they’re showing up in personal vehicles and even handheld devices like weed-whackers. There’s no reason to build all that infrastructure of central powerplants any more — anyone who wants electricity can just run a small motor to generate it locally.

  24. Jonathan, what you’re describing is the slow move towards ubiquitous computing that was put forward by Mark Weiser (the ‘father’ of ubicomp) in 1991.
    What will make Weiser’s vision a reality is the availability of computing devices that range from ‘inches’ and ‘feet’ to ‘yards’ (mm, cm, m for us metric kids). What Weiser is saying here is that there isn’t going to be one major form factor for computing (as was the case with the mainframe, the desktop PC and the laptop), but multiple form factors.
    Some devices will be measured in inches and be able to perform a specific task, and others will be measured in feet and perform various other tasks and so on. And yes, all of these devices will be networked. However some devices will be better at certain things like sensing information and others better at things like processing the sensed information to make a decision. As a result, there will be room for the drill bits, the Disney dolls and the datacenter. And as all these devices are networked, you’re going to rely on the datacenter even more. You’ll be able to create smaller endpoint devices, for cheaper, using less power, if you can offload the real computing work to a datacenter, and rely on the network (vs. doing all the work on the device if the network is not fragile).
    Savio

  25. OK, so I know something about the company which does connectivity and computation on drill bits.🙂 Sure, we are going to see a tremendous increase in processing power and data throughput across a multitude of distributed, decentralized systems. But there is still a place for some centralized application hosting – we need those networked drill bits and logging tools to send their data somewhere, right?
    I think the old corporate data center concept is a relic of the past, as it really is more about bringing the support staff under one roof than about facilitating the highly networked and decentralized Internet that we know and love and depend upon every day. Mainly we need to centralize hosting to aggregate supply-side connectivity, i.e. maximize the performance that is made available for end users. We also want to ensure that applications are source-network agnostic; this means that we don’t care what network you the user are connected to, but we want to make sure that you get the best performance that we can give you. Another consideration is the ability to run a “lights out” operation – meaning that you don’t want any full-time application specialists co-located with your servers – touching equipment unnecessarily reduces uptime.

  26. Mark

    Boy, some people are so narrow-minded they don’t understand a proposal designed to get one to think.
    Jonathan is exactly correct. There are many physical datacenter facilities out there which have been there since the mainframe days, and are ill-suited for today’s computing. The best example of this is in power and cooling. What good is all of that raised floor space if you can only half-populate a rack because the air cooling system is insufficient to support more? What good is it to have to leave two tiles of empty floorspace between each fully populated rack? What good is it to have to spread out computers over the whole datacenter because the electrical wiring was last done 15 years ago and cannot provide the amps per tile required for fully populated racks filled with modern blade servers?
    Why is it that computing is purchased piecemeal? Power generation, or telco switches, provides a good comparison. Power companies buy a big, honkin’ generator, and run it at the capacity matched to demand. They don’t buy a little generator, and then when it gets tapped out, add another one, then add another one.
    What Jonathan is saying is, the massive datacenters run by service providers look very different from those run by enterprises. eBay and Salesforce.com have not been around forty years. So they use new buildings to hold their computers. They don’t have legacy software, so their apps are built on an n-tier, Java-based architecture. They are indeed compute utilities, more akin to mobile phone voicemail services than enterprise client-server applications.
    And the truth is, while there are plenty of lights-out datacenters, there are many, many enterprises out there who still run a very hands-on datacenter. Fiddling, experimenting, piecemealing their way along. Too hands-on where they should not be, and too hands-off where they should be more involved (patch management, anyone?).
    I know of a situation which occurred about 10-12 years ago where a cluster suffered downtime because somebody “borrowed” a SCSI cable connecting the standby server to the disk array, and after being rendered non-redundant, the primary server, still connected to the storage, failed. The cable was replaced by the culprit, who reconnected it unaware of what had happened while he was borrowing it. It took months to figure out what went wrong (the culprit finally ’fessed up). A classic case of human interaction causing a failure, albeit unintended.

  27. Winston

    Personally, I believe the future is rosy for datacenters . . . services such as http://www.xdrive.com/, http://www.mightykey.com/, http://www.omnidrive.com/, and http://www.anonymizer.com/ offer personal storage and protection for public internet use, and this is a huge untapped market which requires secure, redundant, remote backup facilities. Given the personal nature of the information, this business market seems to be at an inflection point and poised for dramatic growth, with a related rise in datacenter needs.

  28. Eric Ross

    There is another aspect that is being neglected. What is the value of what is in the datacenter? Sure, there is some value in the hardware and software, but the primary value is in the intellectual property and other informational assets. These need to be protected and secured. You don’t scatter your gold all over the place; you tend to keep it in well-reinforced locations. Yes, there is quite a bit of data on the desktops, laptops, and drill presses, but the soul of the company/business is contained in these informational assets, and they need to be protected. Only the datacenter (multiple locations) can, today, provide this level of security (both integrity and protection from unauthorized disclosure).

  29. sebnem

    Hey guys, don’t shoot the messenger: since there is something as real as P2P, serverless communication is possible and should become secure. Shall we stop ourselves and others from envisioning just because we get good money and have been doing something for 30+ years?
    Chapeau! What a marvellous company that does not squeeze their thinking into the square of a logo.

  30. Passerby

    Regarding the suggestion that Sun should stop making servers: Sun will just change its slogan from “The Network is the Computer” to “Sun Grid is the Computer”, to get the ball rolling. The same will apply to IBM and HP. So, your advice doesn’t work at all…

  31. Daiichi

    I read the comments and sometimes wonder if people actually read articles carefully… or for that matter, whether I do. It seems to me that what Jonathan is saying is that the datacenter ROOM will change. I agree with him: most datacenters have expensive lighting, expensive flooring, expensive cooling, expensive locks and expensive people to run around inside the room feeding the iron dragons. But eventually, I should hope, servers will become truly ubiquitous and the environmental needs of a datacenter become less important.

    To use an analogy… when I first started playing with Linux in 1995-ish, the first really useful thing I did with it was share my ISDN (yes, 128 Kbps rocked in those days)–and later, the cable modem–connection with the other computers in the house. Remember IP Masquerading anyone? I had to carefully care for and feed that Linux server to make the other users of my household computers happy. It was an expensive solution to a problem in both capital and time.

    Do I use IP masquerading (or ipchains) anymore? No; does anyone? You can get the exact same functionality nowadays by buying a cheap $30 Linksys router. The server that used to connect my household to the outside world is now relegated to a 2x7x4 inch box stuck under a pile of boxes somewhere. Does it ever go down? Well, occasionally; I just fumble about and flip the switch and go on my merry way again.

    The same thing happened to the server that was my “Printer Server.” Who uses printer servers anymore? (I’m sure there are some out there–dinosaurs each and every one of ya!)

    The same thing appears to be happening to my “File Server…” Two of the three “file servers” on my network are now individual appliances with no monitor or keyboard.

    Can’t anyone else foresee a day in which servers in general may become so reliable, so well-defined, and so ubiquitous that we no longer have to put it in an expensive room to showcase it, but we just need to find a dark corner in the office building somewhere where no one will kick the wires?

    ——
    Good job with the blog. It’s the first time I’ve been here. I think I’ll stay awhile.

  32. Ted Kosan

    Jonathan wrote:
    “…they use data from sensors spread throughout the ships to monitor vibration, fluid dynamics and rotational physics”
    How unfortunate that Sun’s upper management seems to understand that the interface between the edge of the net and the physical world is going to be extremely important going forward, and yet the organization as a whole seems unable to take advantage of this opportunity.
    In 2002, Greg Papadopoulos published this excellent projection of how the telemetry wave was going to be the next big technology wave to hit the Internet:
    https://embeddedjava.dev.java.net/resources/waves_of_the_internet_telemetry.pdf
    To me, this lucid document indicates that Sun’s upper management understands the opportunities that the telemetry wave will provide. Unfortunately, the other parts of the organization have been pursuing this opportunity using ineffective methods.
    I think this because I have been the community leader for the soon to be deleted Embedded Java community on java.net for the past couple of years. This position has given me a good opportunity to observe the internal workings of Sun’s organization and analyze why Sun has been unable to meet the opportunities that the telemetry wave is providing. I have also been able to develop some unconventional strategies for fixing these problems.
    So far the research I have done, and the surprising conclusions I have drawn, have not been completely understood by the parts of your organization that I have attempted to explain them to. I think, however, that the information might be better received by the people who are responsible for Sun’s strategy. If the research and conclusions are correct, they indicate that Sun is going to miss most of the opportunities that the telemetry wave has to offer unless it changes its current approach.
    Respectfully,
    Ted Kosan
    java.net Embedded Java community leader
    tkosan@dev.java.net

  33. Are you implying that putting more processors “closer to the source of value” eliminates processors elsewhere? I would argue the reverse. The main reason we put processors in drill bits and Disney swag is to gather lots more data with higher accuracy. That implies more computing capacity elsewhere to aggregate and interpret results. The stuffed toy leaves Disneyland, but the data stays behind. I do agree that we need to make data center ecology a high priority, and there are lots of ways to improve efficiency. Thanks for pointing that out.

  34. Abhishek

    The more computing power moves to the edge, the more there will be a need for a central place to churn the data coming off the edges and to distribute it elsewhere in a uniform fashion. I only see the need for data centers increasing, not decreasing. Even if data centers decrease, the need for system integrators is only going to increase. Just count the number of computing applications we have now compared to 5 years back.

  35. Folks,

    some small additions/thought-provoking facts for those who believe DCs are the way of the future… (yes, I still agree they will stay here for much longer, but is this the way to KEEP it?):

    We (Sun) invented Jini way back. Jini was supposed to enable communications among “pieces” on the network, without “knowing” these pieces and their locations and their capabilities in advance. Still a concept that needs much further exploration!
    In a talk with Adrian Cockroft, now at eBay, and partially responsible, for example, for Skype, he explained that Skype is a network app. The call routing, hosting, et al. are done on CUSTOMERS’ systems, so there is no real need for a DC anywhere!
    And, one of my famous anachronisms, which I like to remind everybody of again and again: with the compute power of every single cell phone nowadays, NASA landed people on the moon safely! So, as every single one of us TODAY holds in his hands, for multiple minutes (some even hours!) a day, the compute power that NASA needed to fly to the moon, WHY NOT MAKE USE OF THAT IMMENSE CAPABILITY? So, yes, RE-THINK the idea of centralizing compute power (and LIMITING ITS USE) in DCs, as we ALREADY have way more power OUTSIDE the DCs than INSIDE. And, in an eco-responsible environment, we should all gladly grow into the notion that building new DCs is ONLY needed BECAUSE our APPS are not smart enough to take advantage of all the compute power that is already out there, un-harvested!

    So, yes, re-thinking is needed!

    Thanks, Jonathan, for pointing this out to the public!

    Matthias

  36. Steve

    As a network administrator, I’ll have to agree with some of this theory, and point out some holes from my (your potential customer) point of view.

    Distributed and clustered is indeed the way to go. Unfortunately, the technology just isn’t there to meet the end user desires. It is getting there, and is way ahead of where it was 10 years ago.

    However, there is a major fallacy here as well.
    Environmental controls and backup generators are a necessity for 24×7 operations.
    I can recount tales of HVAC techs turning off AC units for maintenance and forgetting to turn them back on. Two to three hours later you start getting alarms on equipment. Then for the next 2-3 weeks you are swapping out hard drives and fans on equipment that is only 2 years old!

    The reality here is that the hardware currently being sold for use in servers is high performance, with very tight specifications. Read your SCSI drive specs. You’ll find those drives are only rated for 1M hours MTBF (Mean Time Between Failures) when the temperatures do not exceed about 78F (varies a little) as measured not at the air inlet to the server, but at the hottest point, which is the SCSI connector at the back of the drive, halfway through the server. I assure you from repeated personal experience, (as well as commiserating with other server administrators) that if your air inlet temperature exceeds about 70F, and your racks are full or near full, you will see a significant decrease in drive life.

    You will also have a very unpleasant experience later. Shut down a data center for a prolonged outage (like an ice storm that knocks out power for days, when you can’t get a tanker in to refill the diesel generator). Now warm your servers back up, then turn them on. Expect a lot of drive failures.
    Reason? Grease a bearing, even a sealed bearing, then bake it at 85-90F for a year or two while turning at 10K or 15K RPM. You’ll find that when cooled, that grease will congeal. Result: the drive won’t spin up to speed in time, and the drive controller throws an error.

    Nope, when better components are available, you can look at getting rid of environmentally controlled, power conditioned data centers. Until then, goals are nice, but I’m not prepared to move yet.
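
    To put the MTBF figure in that comment into fleet terms (the fleet size and derating factor below are invented for illustration): under an exponential-failure approximation, a fleet of N drives rated at M hours MTBF sees roughly N * 8760 / M failures per year, and running hot effectively shrinks M.

    ```python
    # Rough fleet math for a 1,000,000-hour MTBF rating; the fleet size is an
    # invented example, and "derating" crudely models running hotter than spec.
    MTBF_HOURS = 1_000_000
    HOURS_PER_YEAR = 24 * 365

    def expected_failures_per_year(drives, mtbf_hours=MTBF_HOURS, derating=1.0):
        """Expected annual drive failures under an exponential-failure approximation."""
        return drives * HOURS_PER_YEAR * derating / mtbf_hours

    print(expected_failures_per_year(500))               # ~4.4 drives/year at spec
    print(expected_failures_per_year(500, derating=2.0)) # ~8.8 if heat halves the MTBF
    ```

    Whether those failures are an emergency or just “fail in place” inventory is exactly the architectural question the post raises.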

  37. Bo Weaver

    Boy, this is the dumbest thing I have ever heard. I had to check the address bar to be sure this was coming from Sun. You REALLY work for Sun? You are the guys that came up with my favorite phrase, “The network is the computer”. In the future, data centers will be needed more, not less. Where do you think the data on your Blackberry or PDA comes from? Where exactly is that Sun Mobile Messaging Server? In a secure data center, I bet. Or it should be.
    The need for a data center is even greater today, with people expecting to be able to retrieve their data from their phone, PDA, laptop, desktop or drill bit. (Yes, check it out: that drill you are talking about is sending its data to a DB in a data center.) The data has to be stored in a central location. All these things need servers to make them work together.
    I work for a company where this is what we sell: managed services. Nothing is on the desktop machine. All the work from an office is done on either a Terminal Server or a Tarantella Server at our office. We’ll even give you a CD that you can pop into any machine with an Internet connection, boot up, and you get your desktop. This will even work on a machine without a hard drive! Yes, you can even get your desktop on your phone.
    Where is the magical desktop really at? On a server at the NOC on a fiber connection, protected by halon fire systems, backed up in real time to servers at a NOC in another location, and protected by 3 security checkpoints and cameras everywhere.
    No, this NOC is not filled with techs running around anymore like in the old days. We manage all this from our homes. When something breaks, a redundant system takes over until someone goes down to the NOC to do the repair. Because the system is fault tolerant, this repair can wait; nothing really goes down. We fix it at our leisure. We may have a tech there once a week; other than that, all management happens over the network.
    Maybe we can get more servers into a cabinet these days, but NOCs aren’t going away. Maybe they won’t be “downtown”, but they will be somewhere.
    My advice to you is to come down off your lofty CEO cloud and hang with the engineers downstairs who make it all happen. You might learn something. Or, with this kind of thinking, maybe Microsoft has a job for you. PLEASE keep this line of thinking OUT of Sun’s development process. Your stuff works great; don’t screw it up.
    Remember… The computer is the network!
    NOC = Network Operations Center (Maybe this is where it all operates from)
    Bo Weaver
    Senior Engineer
    Astila Corp
    Atlanta, GA
    Again I am amazed that someone from Sun of all places wrote this.

  38. Paul Boudreaux

  38. Any new laptop could be the datacenter of the past. The datacenter model worked and stuck. Just look at the internet: BitTorrent-like technologies potentially offer a cheaper, faster, and more reliable way to share information. Yet the technology is hardly being used to its potential. Encryption is strong enough to protect data not stored centrally. I wonder: if the technology were already in place and you distributed information redundantly across a span of 1,000 computers, would you get more storage capacity than from a datacenter? Would the centralized datacenter provide more stability than the distributed grid? If the data is encrypted and stored in the equivalent of a RAID array, which model is more secure? Have distributed computers demonstrated they can do advanced calculations? Perhaps the best question is which model costs less in the end. It’s nice to see that we can distribute a J2EE application over multiple servers, but why shouldn’t a company share the load over every computer that’s connected to the network?
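
    A toy sketch of the “RAID across the network” idea this comment gestures at (the peer list, chunk size, and three-copy policy below are arbitrary illustration, not a real protocol or product):

    ```python
    # Toy placement of redundant chunks across peers; everything here
    # (peer names, chunk size, copy count) is invented for illustration.
    import hashlib
    import os
    import random

    PEERS = ["peer-%02d" % i for i in range(20)]   # hypothetical machines

    def place_chunks(blob, chunk_size=64 * 1024, copies=3):
        """Split a blob into chunks and assign each chunk to `copies` distinct peers."""
        placement = {}
        for offset in range(0, len(blob), chunk_size):
            chunk = blob[offset:offset + chunk_size]
            chunk_id = hashlib.sha256(chunk).hexdigest()
            placement[chunk_id] = random.sample(PEERS, copies)
        return placement

    if __name__ == "__main__":
        data = os.urandom(300_000)                 # stand-in for an encrypted backup
        for chunk_id, holders in place_chunks(data).items():
            print(chunk_id[:12], "->", holders)
    ```

    Losing any one machine still leaves two copies of every chunk; the open questions the comment raises (cost, trust, recovery time) are the ones a real system would have to answer.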

  39. Jonathan, from the number of comments and the apparent astonishment your blog seems to have provoked, it seems you’re on the right track. If there are people arguing about a topic, you’re almost certainly onto something.
    We’ve had the same discussions on our campus (conveniently *or* ironically the one that just got the new big Sun teraflop thingamajigger) about the need for a second “datacenter” as it was originally presented (naturally by the folks charged with operating the first datacenter). It was clear to most of campus IT folk that the old way just isn’t necessary. Most of the space in the raised-floor “center” is used up by people’s desks and chairs (no more room to put them and their red staplers). The “mission critical” stuff literally fits in a small office. Oh, and since everything vents front-to-back, the raised floor is nothing but a very cold cable chase. So when we were “warned” that we needed to pay more for services in order to afford a new datacenter, most of us balked. A far better play is to distribute resources, carving out smaller spaces in the many new buildings popping up on campus. Thus, we have instant campus redundancy. There are multiple power generators and chilled water stations on campus, so an outage in any one will only take out the most local facilities. The “fail in place” idea makes sense.
    Those of you who claim that power stations and such aren’t the same because datacenters need “security” and “redundancy” are obviously not familiar with such utilities. You’d better believe there are redundant systems throughout our campus for power, chilled water, communication (voice & data), fire suppression, etc. And those installations have armed guards just yards away.

  40. charlesbutton

    Do I need to provide my email address to comment?

  41. I think a lot of the comments on this blog come from those who feel their jobs are in jeopardy if, someday, the data center were to CHANGE, not disappear. I think the point is… to assume the data center, as we know it, will remain on high-priced urban real estate, dedicated to one enterprise’s tasks, and close enough to comfortable living so that data center jobs are not outsourced, is myopic. That would be like assuming the power of the global internet will always be for downloading webpages, checking bank balances, and loading iPods with music.
    By the way, I want the CEO of a company to always be assessing the threats and challenges to the company’s business. The last thing we need is a CEO who puts his/her head in the sand and pushes only thoughts that generate sales. Can you imagine if IBM had only pushed typewriters and not considered the mainframe or PC? They did make some mistakes (telling Bill to go on his own in 1980 was a big one). I am sure there was someone typing/mimeographing paper memos that read:
    “I really wish you’d stop talking… Reading your memos has made me go from thinking IBM was pointless to thinking IBM is both worthless and ignorant. You simply have no idea what you’re talking about… Aren’t typewriters the only market where something IBM sells is still relevant?! What about JOB security? What about the volume of typewriter ribbons IBM can sell? Get a clue man…”

  42. Servers may not correlate so well with power generators. A generator makes only one thing: alternating current. If one generator breaks, another can take its place and doesn’t have to have many, or any, of the same attributes that the first one had; all it has to do is generate AC.
    Servers, however, are totally unique, from the configuration of the OS to the data that resides on them. So while the idea of ‘distributed servers’ throughout the world is noble, I don’t think it is feasible given how unique each server is.
    Even companies that specialize in highly redundant server architectures like Google find that they need huge datacenters.
    There is also the issue that servers are power-hungry, and power is less expensive when delivered to a centralized facility than to a decentralized one.
    Steve Lerner, Practice Leader, Content Delivery and Hosting
    RampRate Sourcing Advisors / Strategic Research
    http://www.ramprate.com
    steve@ramprate.com

  43. I think Jonathan is not suggesting that we do away with data centers altogether, but that we do away with data centers outside of utilities. Think Sun Grid. Why maintain your own data center when you can pay someone else to do it better? That’s the question. Yes, you’d still need networking equipment, or maybe not, what with wireless technology advancing as it does.

  44. Prince

    Hi Jonathan,
    Nobody imagined that downloading cell phone ring tones would become a multi-billion dollar business. Who knows what can happen in the next 10 years? Maybe my cell phone will have terabytes of storage? So, people can’t completely trash your opinions.

  45. Steve, you prove the point by mentioning Google — we could argue that Google’s datacenters are amongst the first compute grids, they just aren’t generic — you are using Google’s data centers and they are replacing functionality in yours. The Sun Grid is a step beyond: it’s a generic rent-a-data-center.

  46. “We certainly don’t put power generators in precious city center real estate, or put them on pristine raised flooring with luxuriant environmentals, or surround them with glass and dramatic lighting to host tours for customers. (But now you know why we put 5 foot logos on the sides of our machines.)”
    We certainly do put power generators in precious inner-city real estate. For some reason I seem to recall Larry Ellison’s “Network Computer” hype a few years back.
    After re-reading this blog posting 1/2 dozen times, the major points that I’m reading between the lines are:
    – Sun’s grid service is still bleeding money
    – Sun would like to brag about its better power supplies and low-power Niagara processor
    – Sun likes RFID tags

  47. Most IT people just don’t get it. Data centers aren’t going to just disappear. They will, however, slowly fade away as it becomes no longer necessary to manage servers physically or to concentrate them in one location for raw CPU/disk/network power.
    I think Jonathan is pointing to the coming of age of clusters of cheap, easy-to-deploy, easy-to-manage virtual servers running on cost-effective 2-, 4-, 8-, and 16-way servers that can be stuck just about anywhere.
    All you have to do is have two electrical closets, one at each end of the building. Each closet fans out in a star topology. Wherever a server is located, you place two power drops from two different parts of the electrical grid. You can then run dual-loop fiber links at 10 GigE (or the upcoming 100 GigE) to a networking closet (or even between major nodes of need, like a cube farm or call center).

    Concentrating a LOT of power, heat, and cooling in one room is no longer necessary. The quandary of power and cooling per sq. ft. becomes yesterday’s problem. You have effectively spread out the loads in a distributed manner.

    Some will say, “but that’s no good for a web server farm!” You should ask yourself: how many servers does each website for each branch or department really need? Chances are, not really all that many.

    I’ll bet your next question is “But what happens when a server fails?” Easy answer. We are slowly moving towards 100% hot-pluggable systems in whitebox formats that will work with just about any brand (or non-brand) of hardware. A CPU fails? Just turn off that core. The entire chip fails? Turn off that chip and leave the other one or three running. Memory fails? Same thing. Disk fails? The hot spare takes over till the tech can come by and swap it out. In the next few years you won’t even need to power the machine off to do this. Downtime becomes a thing of the past. Heck, we even have failure prediction that can take a virtual server and migrate it in REAL TIME to a different physical server TODAY. How many mainboards today come with DUAL gig-E? Think it’s there for geek points?
    OK, let’s talk major disaster. With hot remote replication to a backup server hosted in a different department or a different branch office, it’s no biggie if the office burns down. Since hot backups don’t take anything more than idle RAM and disk space, they are dirt cheap to deploy.
    So how does one manage all of this? Well, XenSource is one of the few companies that has already taken the first step by making new virtual server deployment brain-dead point-and-click easy. I’ll bet my bacon to dollars that someone else out there is working on the storage system management and integration into Xen, VMware, or Windows Virtual Server right now.
    So before anyone says “what a kook! Go get a real brain!”, take a second and think about things. If you can’t see the potential for this, then you have NOT been keeping up with the computing industry nearly as well as any good IT person must.
    He’s right to ask the question…. why DO we still have datacenters?
    I built a server closet for ~$1,250 in parts and one week of full-time work. AC cooling, 2x 20A circuits, one full rack, and one half-depth rack. Total of maybe 6 sq. ft. It’s used by six different people around the world every day for their daily business. It also provides remote backup for a couple of other friends.

    I could add a SONET double loop of 100 Mbps for $2.5k/mo (or two links from two different companies at $1.5k each/mo for multipath redundancy vs. loop redundancy), plus a pair of 40A portable generators at $1,000/ea., and match the reliability of almost any colocation center. Look, ma! A mini-datacenter!

    All my servers are now virtual. If any one of them failed, I would redirect to a backup mini-datacenter somewhere else. Or I could instantly lease a couple of machines at one of the well-known dirt-cheap companies around the world and shuffle the images there in a matter of hours if non-live, or seconds with live backup.
    Simply put, if it’s that cheap for me to deploy a mini-center, then imagine how cheap it will be once the parts are standardized and offered for purchase out of a magazine.
    So, repeat after me… replication, hot-swap, distributed, modularized, virtualized, and commoditized.
    These are the magic words that will set us all free from the dreaded datacenters. I really do believe that eventually (maybe before I retire) they will go the way of the dinosaurs.
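
    The “redirect to a backup mini-datacenter” step in that comment can be as simple as a watchdog that flips traffic to a replica when the primary stops answering. A minimal sketch; the hostnames and the promotion step are placeholders invented here, not any vendor’s tooling:

    ```python
    # Failover watchdog sketch; hostnames are invented, and "redirecting"
    # is only printed rather than actually updating DNS or a virtual IP.
    import socket
    import time

    PRIMARY = "vm-web.site-a.example"   # hypothetical primary virtual server
    BACKUP = "vm-web.site-b.example"    # hypothetical replica at another site

    def reachable(host, port=80, timeout=3):
        """True if a TCP connection to the host succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def watchdog(check_interval=30, failures_before_flip=3):
        misses = 0
        while True:
            misses = 0 if reachable(PRIMARY) else misses + 1
            if misses >= failures_before_flip:
                # A real setup would promote the replica and update DNS/VIP here.
                print("primary unreachable; redirecting traffic to", BACKUP)
                return
            time.sleep(check_interval)

    if __name__ == "__main__":
        watchdog(check_interval=1, failures_before_flip=2)
    ```

    The hard engineering is in keeping the replica’s data current (the hot replication the comment describes); the traffic flip itself is the easy part.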

  50. I think the real question is whether processing power and computing will be centralized or distributed in the future. This, in turn, should structure the way we build datacenters, if at all.

  51. I agree with Bharath… the phone is Nokia 1100.

    Nokia had launched the phone under its “Made for India” ad-campaign.

    http://www.thehindubusinessline.com/catalyst/2004/05/06/stories/2004050600100300.htm

    Check out the phone:
    http://www.nokia.co.in/nokia/0,,45346,00.html

  52. I’ve got one of those phones with flashlights. No camera, just an extremely handy way to carry around a rechargeable light. Particularly good when trying to get into your house at night and you can’t see the lock.

  53. To Chris,
    Jonathan is not talking about the camera flash. He is talking about an LED flashlight which works like a torch and emits a light beam (to see in the dark). It’s very useful in India, where there are too many blackouts.

  54. Computing power is indeed showing up everywhere, closely followed by the question “yes, but does it run Linux?” Your examples of computers in drill bits, cargo ships and stuffed animals are interesting. Although I’m not sure I like Disney tracking my kids, but that is just the paranoid freak in me.

    Not to be contrary, but I must disagree with the assertion that datacenters are going the way of the buggy whip. It is a similar argument we used to hear about how Internet businesses were running bricks-and-mortar businesses out of business. Today, they can’t build Walmarts and lots of other shops fast enough to keep up with demand. Some brick-and-mortar shops went out of business, but so have a lot of Internet businesses. The brick-and-mortar shops have had to adapt, many having a business model that involves both a physical store front and a virtual one.

    Datacenters, while not needing to provide the service they once did, do still provide needed services:
    Security. The data on our boxes is federally mandated to be kept private. People have to stay out of the datacenter; only certain people even have access. That includes restricting disgruntled employees from having physical access to the boxes. It also prevents the “tripping over the power cord” syndrome.
    Power. The computers in the datacenter are supposed to be available always. The other computers, not in the datacenter, are not so important. The datacenter has a “reliable” power source not dependent on city power. We don’t need that sort of power for the entire facility, which keeps the costs down for the not-so-important boxes, like my laptop.
    Environment. Until we are all able to move to green Sun boxes, computers are going to need to be cooled. This need may go away if manufacturers start making computer cases that keep the computer running independent of the environment it is in. I don’t think this will be the case, due to the costs to the manufacturers. There will always be some sort of environmental consideration when running lots of computers in a secure manner.

    I have been joking, or half joking anyway, with the IT staff that they should rip out all the servers and replace them with laptops. They don’t take much space, use little power and will run along quite happily for some time if the power ever fails, which it does in the real world.

