I had no idea the Hubble telescope could see only 12 billion years into the past.
Frankly, I’d never really thought about telescopes looking into the past until Dr. Michael Norman, a researcher from UCSD, gave me a basic education in astronomy – and explained that the Hubble looks at celestial bodies whose light is just now reaching us. But it can “only” see 12 billion years into the past – and that was a veil he’d like to pierce. (I asked him what he did for a living; he said, “I simulate the universe.” Trump that job description.)
The question he was interested in answering was, “What about the prior 1.7 billion years?” The universe is roughly 13.7 billion years old, and given the Hubble’s limits, he was using the world’s largest open supercomputer, the Ranger platform at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin, to simulate those first 1.7 billion years. (He later confided he was most interested in the last 1.5 billion years of that span – the first 200-300 million were characterized by a hydrogen fog that had yet to clump into the gravitational wells that enable stars to be born.)
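For a flavor of what “I simulate the universe” means in practice: you seed a volume with near-uniform matter, let gravity act on it step by step, and watch it clump. Here’s a toy sketch of that idea – every constant in it is made up for illustration, and the real codes in Dr. Norman’s field add hydrodynamics, adaptive meshes and the expansion of space, which is why they need a machine like Ranger:

```python
# Toy "structure formation": point masses under softened gravity
# clump over time, a cartoon of how a near-uniform fog collapses
# into the wells where stars form. Illustrative only -- all
# constants are invented, and real cosmological codes are vastly
# more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
n, dt, soften = 200, 0.01, 0.05
pos = rng.uniform(0, 1, size=(n, 2))   # start as near-uniform "fog"
vel = np.zeros((n, 2))

def accel(p):
    # pairwise softened gravitational acceleration, with G*m = 1e-4
    d = p[None, :, :] - p[:, None, :]      # displacement vectors i -> j
    r2 = (d ** 2).sum(-1) + soften ** 2    # softened squared distances
    np.fill_diagonal(r2, np.inf)           # no self-force
    return 1e-4 * (d / r2[..., None] ** 1.5).sum(axis=1)

for step in range(2000):                   # symplectic kick-drift steps
    vel += accel(pos) * dt
    pos += vel * dt

# crude clumpiness measure: variance of particle counts in a 5x5 grid
counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=5)
print(f"clumpiness (count variance): {counts.var():.1f}")
```

Scale the particle count from a few hundred to billions, and the flat grid to an adaptive mesh, and you begin to see why this is a supercomputer problem.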
I was asked to give a keynote to celebrate Ranger’s opening, and this was only one example of the flood of basic research and science that will now be performed on the world’s largest open computing platform. Open? The facility was funded by the National Science Foundation, and is committed to providing large-scale supercomputing as a service to any researcher or scientist within the US (submit proposals here). Ranger is built entirely on Sun – to dip into geekspeak for a moment, here are the stats:
- In around 6,000 square feet of datacenter space, consuming less than 3 megawatts…
- More than 4,000 quad-core AMD Opteron-based Sun blades, 120+ TB of DRAM, running CentOS
- Delivering more than 500 teraflops of computing capacity
- Jobs scheduled by Sun’s Grid Engine (a sketch of what such a job looks like follows this list)
- Interconnected by two 100-terabit, non-blocking Magnum switches (horns optional)
- Data managed by the Lustre file system, on Thumpers
- More than 2 petabytes of storage
- Managed by SAM-FS, our hierarchical storage management product, and archived to Sun tape platforms
- With overall systems managed and monitored by xVM OpsCenter (the world’s largest installation).
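To make the geekspeak a bit more concrete, here’s the shape of a job those blades actually run: the scheduler hands a program thousands of cores, each core works on its own slice of the problem, and the interconnect stitches the answers back together. A minimal sketch, assuming Python with the mpi4py bindings – real Ranger codes are typically C or Fortran MPI, and a Grid Engine submission script would do the actual launching:

```python
# Toy MPI job: estimate pi by splitting a numerical integration
# across every core the scheduler hands us. Illustrative only, not
# TACC code; assumes the mpi4py package is installed.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this process's id (0..N-1)
size = comm.Get_size()       # total processes across all blades

n = 10_000_000               # total integration slices
h = 1.0 / n
local_sum = 0.0
# Each rank integrates every size-th slice of 4/(1+x^2) on [0,1].
for i in range(rank, n, size):
    x = h * (i + 0.5)
    local_sum += 4.0 / (1.0 + x * x)

# Combine the partial sums on rank 0 over the interconnect.
pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~= {pi:.10f} across {size} cores")
```

The point of the non-blocking switches and the Lustre file system is that the same program runs unchanged whether `size` is 16 or tens of thousands.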
An enormous amount of engineering went into the construction of the facility and the technology behind it, which Sun can now replicate across the world in smaller (and larger, of course) installations, public and commercial alike. Beyond governments and research facilities, industries across the planet are turning to high performance computing for business advantage, not simply scientific endeavor. And this system consumes a fraction of the power budget a comparable machine would have required just a few years ago – making it among the greenest supercomputing facilities on earth, too.
To give you a sense of how significant Ranger actually is, check out this chart (click for live version):
Ranger’s capacity exceeds that of all other National Science Foundation granted supercomputing facilities combined. When they say big in Texas, they mean big.
As the director of Cyberinfrastructure at the NSF pointed out during his congratulatory speech, computational simulation is now considered a legitimate field of scientific exploration. From drug discovery to climate modeling, fluid dynamics to simulating the universe, epidemiology to materials science – a facility of this size will revolutionize science, in the US and across the world. There are already more than 500 research projects using Ranger – it’s already changing the world. And because it’s part of the NSF TeraGrid, output from the studies will be shared throughout the world. Open means open. Jay Boisseau, TACC’s director, let me know they’re dangerously close to receiving more applications for time on Ranger than they have available – they have about 500 million CPU-hours to allocate each year, or 125 million per quarter. For folks like Jay and Dr. Norman, increasing capacity increases appetite – unlike much of enterprise computing, where surpluses are often consolidated away (the heart of Greg’s redshift theory).
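Those allocation numbers are worth a quick sanity check. A back-of-envelope sketch – assuming the “4,000 quad-core blades” above means four quad-core sockets per blade, roughly 16 cores each, which is my reading rather than TACC’s official figure:

```python
# Back-of-envelope check on the allocation numbers. Assumptions:
# ~4,000 blades at 16 cores each (four quad-core Opterons per blade).
blades, cores_per_blade = 4_000, 16
hours_per_year = 365 * 24                     # 8,760
physical = blades * cores_per_blade * hours_per_year
allocated = 500_000_000                       # CPU-hours/year, per TACC
print(f"physical core-hours/year: {physical:,}")          # ~560 million
print(f"allocated fraction: {allocated / physical:.0%}")  # ~89%
print(f"per quarter: {allocated // 4:,}")                 # 125 million
```

Under those assumptions, demand approaching roughly 90% of the machine’s theoretical core-hours within its first year is exactly the appetite Jay is describing.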
How did Ranger come together? It resulted from a commitment to basic science by the National Science Foundation; a passionate set of people at the University of Texas, inspired by a driven technical leader in Jay; and commitments from an exceptional (truly exceptional) team of TACC, Sun and AMD employees – all three groups in a mad scramble to stand up the facility, in record time, as the world’s largest open supercomputing facility. The world’s largest, by a factor of 4.
Ranger will transform academia, industry and ultimately society. Why do I believe that?
As I pointed out during my speech, there was a point at which the Niagara Falls power plant in the United States supplied fully 30% of America’s electrical requirements. The engineering and basic science that went into that work parallels the work required to build Ranger. It was truly fundamental research.
Did electricity transform society? Unquestionably. Will knowing what happened in the first 1.7 billion years of the universe transform our lives? We don’t know yet. That’s what Dr. Norman is figuring out – a question Sun, AMD and the University of Texas will now be able to help him answer, with a platform Sun will now be making generally available to the commercial market. (I was going to write something like “parting the clouds of cloud computing,” but even I winced when I read that.)
(For those interested, this is a great summary of Dr. Norman’s approach to computational astrophysics – notably, like pretty much all the work I’m seeing in high performance computing across the world, predicated upon free and open source software.)