By 2020, supercomputers are expected to be about a thousand times faster than the quickest machines available today. But will software be able to keep up?
The current world record for computer speed is held by a system developed at China’s National University of Defense Technology. The machine, named Tianhe-2 (Milky Way-2), ran at 33.86 petaflops in a speed test last month. FLOPS stands for floating-point operations per second -- essentially a measure of how many arithmetic calculations a computer can perform each second. Tianhe-2 can perform nearly 34 quadrillion calculations a second. For comparison, your desktop computer probably operates at the gigaflops level, performing billions of calculations per second.
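To get a feel for those scales, here is a back-of-envelope comparison in Python. The desktop figure is an assumption for illustration; only the Tianhe-2 number comes from the speed test above.

```python
# Illustrative comparison of compute scales. The desktop figure is an
# assumed ballpark, not a benchmark; Tianhe-2's is the measured record.
GIGA = 1e9
PETA = 1e15

desktop_flops = 10 * GIGA        # assumed: a ~10-gigaflop desktop
tianhe2_flops = 33.86 * PETA     # Tianhe-2's measured speed

work = 1e18  # one quintillion floating-point operations

desktop_seconds = work / desktop_flops
tianhe2_seconds = work / tianhe2_flops

print(f"Desktop:  {desktop_seconds / 86400 / 365:.1f} years")
print(f"Tianhe-2: {tianhe2_seconds:.1f} seconds")
```

A job that would tie up the hypothetical desktop for roughly three years finishes on Tianhe-2 in about half a minute.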
Advanced computers are expected to cross the exaflops threshold by the end of this decade -- getting up to quintillions of calculations per second. Today’s supercomputers have millions of processors; the exascale computer will likely have at least a billion processors humming along. When exascale computing is fully realized, many possibilities open up. Hurricane predictions could become more precise; experiments could run faster and yield better answers. But effectively putting all that power to work is going to require software that’s much more advanced than what we have now, and it needs to be developed soon, according to University of Tennessee, Knoxville computer scientist Jack Dongarra.
“We can’t wait until 2020 to start thinking about this,” Dongarra said in a phone interview.
But what’s so difficult about programming for a faster computer? To better visualize the problem, imagine you’ve hired a crew to build a house. A single worker would take a very long time to build the house alone. With 10 people on the crew, work goes much more quickly, but you have to be sure the team is coordinated -- you don’t want someone putting up a roof that knocks over the walls. Now imagine the logistics of building a house with 100 (or 1,000, or one quadrillion … you get the picture) workers. The bigger the team, the more precise your instructions have to be.
“You have to be very careful on how you coordinate those workers, so that they don’t interfere with each other and take longer to build that house,” Dongarra said.
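The same tension shows up in software whenever many workers touch shared state. Below is a minimal sketch using Python threads as the "crew": the lock keeps workers from interfering with one another, but it also forces them to take turns -- precisely the coordination cost that grows with the size of the team.

```python
# A minimal sketch of the coordination problem: many "workers" (threads)
# updating one shared counter. The lock prevents updates from clobbering
# each other, but it serializes the workers -- coordination has a cost.
import threading

counter = 0
lock = threading.Lock()

def worker(n_steps):
    global counter
    for _ in range(n_steps):
        with lock:          # coordinate: only one worker at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 100000 -- correct only because the workers coordinated
```

With the lock removed, the final count can silently come up short -- the software equivalent of a roof knocking over the walls.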
Dongarra and his team are working on building software that can run on existing computers but which can handle the hardware of the future. He just received a $1 million grant from the U.S. Department of Energy for the project, which is called the Parallel Runtime Scheduling and Execution Controller, or PaRSEC. Exascale software, Dongarra thinks, will have to get away from the current programming technique called loop-level parallelism, in which the iterations of a loop are split up across different processors. With the number of processors involved in an exascale computer, it may be more efficient for programs to execute functions as soon as their individual inputs are ready, rather than waiting for an entire loop to complete.
“We may have to do things more asynchronously,” Dongarra said.
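The contrast between the two styles can be sketched with Python's standard `concurrent.futures` module. This is a toy illustration of the general idea, not PaRSEC's actual runtime: in the loop-level version there is a barrier between stages, while in the task-based version each item's second stage fires as soon as its own first-stage result is ready.

```python
# Toy contrast: loop-level parallelism (barrier between stages) vs.
# asynchronous task-based execution (each task fires when its own
# input is ready). Not PaRSEC's API -- just an illustration.
from concurrent.futures import ThreadPoolExecutor, as_completed

def stage1(x):
    return x * x

def stage2(y):
    return y + 1

data = list(range(8))

# Loop-level style: every stage1 task must finish before any stage2
# work begins.
with ThreadPoolExecutor() as pool:
    squared = list(pool.map(stage1, data))        # implicit barrier here
    loop_result = list(pool.map(stage2, squared))

# Asynchronous style: stage2 runs per item, as soon as that item's
# stage1 output exists, with no global barrier.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(stage1, x) for x in data]
    async_result = sorted(
        pool.submit(stage2, f.result()).result() for f in as_completed(futures)
    )

print(sorted(loop_result) == async_result)  # same answer, different scheduling
```

Both versions compute the same answers; the difference is that the second never makes a fast task wait on the slowest member of the loop -- the kind of idle time that becomes ruinous with a billion processors.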
Another big concern is the potential cost of running the computer. There are supercomputers today that draw up to 20 megawatts of power, which results in an electricity bill of about $20 million a year. If no new advances in technology are put in place, the exascale computer will likely require $200 million worth of electricity every year, according to Dongarra. But that challenge is more for the hardware designers to solve.
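The arithmetic behind those bills is simple to check. The rate of roughly $1 million per megawatt per year is implied by the article's figures, not an official tariff:

```python
# Back-of-envelope version of the power-bill arithmetic. The per-megawatt
# rate is implied by the $20M / 20 MW figures above, not an actual tariff.
dollars_per_megawatt_year = 20_000_000 / 20   # ~$1M per MW per year

today_mw = 20
exascale_mw = 200   # ~10x today's draw, absent new technology

print(f"Today:    ${today_mw * dollars_per_megawatt_year:,.0f}/year")
print(f"Exascale: ${exascale_mw * dollars_per_megawatt_year:,.0f}/year")
```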
Roxanne has liked science ever since she started watching "Bill Nye the Science Guy" on Saturday mornings over a bowl of sucrotic O's.