My first professional job was with IBM. In those days, the IBM 360 generation of computers was on the way out, being replaced by the 370. IBM’s Mainframes, which were sold into the commercial market, were General Purpose Computers. They were fast, but not very good at the kinds of mathematical calculations used to model weather, aircraft, or even some weapons systems. Control Data Corporation was king in that field, until their principal designer, Seymour Cray, left to start Cray Research. Cray is still around despite the tragic death of its founder in an auto accident. IBM had a habit of building special purpose computers for big customers. From that came the first IBM Supercomputer, the IBM 360/195. It was blazingly fast for its day, operating at 10 Million Instructions Per Second (MIPS). It also was the first supercomputer I actually got to lay hands on. IBM only built 28 of these, as I recall. It was the first IBM Computer to use a technique called “pipelining,” which lets the processor begin fetching and decoding the next instructions while earlier ones are still executing, instead of finishing each instruction completely before starting the next. Unless one of my “Geek Qualified” Readers asks, that is as far as I will get into Supercomputer internals and how they work.
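For the curious, here is a rough sketch of why pipelining pays off. This is a toy cycle-count model, not a simulation of anything in the actual 360/195; the stage count is an arbitrary assumption for illustration.

```python
# Toy illustration of pipelining: with S stages, N instructions take
# S * N cycles run one-at-a-time, but only S + N - 1 cycles when each
# stage works on a different instruction every cycle.

def cycles_sequential(n_instructions: int, n_stages: int) -> int:
    """Each instruction runs all its stages before the next one starts."""
    return n_instructions * n_stages

def cycles_pipelined(n_instructions: int, n_stages: int) -> int:
    """Stages overlap: once the pipeline fills, one instruction finishes per cycle."""
    return n_stages + n_instructions - 1

if __name__ == "__main__":
    n, s = 1_000_000, 5  # a million instructions through a 5-stage pipeline
    print(cycles_sequential(n, s))  # 5,000,000 cycles
    print(cycles_pipelined(n, s))   # 1,000,004 cycles -- nearly 5x fewer
```

For long instruction streams the speedup approaches the number of stages, which is why the technique mattered so much.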
The next real “Monster” came in the form of the Cray-1.
The new Cray operated at 80 MFLOPS. It was the most popular model of Supercomputer ever built, selling over 80 units. The round design minimized the distance signals had to travel through the wiring between any two points inside the computer. It was also heavily cooled to further increase speed, since electrical resistance drops as temperature falls. The machine operated in the nanosecond range. Electricity has time to travel only about a foot through a copper or gold wire in a nanosecond.
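That “about a foot per nanosecond” figure is easy to check. A minimal sketch (the 0.7c velocity factor for a real wire is a common rule of thumb, not a measurement):

```python
# Sanity check of the "about a foot per nanosecond" figure.
C = 299_792_458       # speed of light in a vacuum, m/s
NANOSECOND = 1e-9     # seconds
METERS_PER_INCH = 0.0254

distance_vacuum_m = C * NANOSECOND                       # ~0.30 m
distance_vacuum_in = distance_vacuum_m / METERS_PER_INCH  # ~11.8 inches

# Signals in real copper wiring propagate slower than light in a vacuum;
# ~0.7c is a rule-of-thumb velocity factor (an assumption for illustration).
distance_wire_in = distance_vacuum_in * 0.7               # ~8.3 inches

print(f"Light in vacuum per ns: {distance_vacuum_in:.1f} inches")
print(f"Signal in wire per ns:  {distance_wire_in:.1f} inches")
```

So at nanosecond cycle times, every foot of wire costs you a whole clock tick, which is exactly why the Cray-1 was round.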
The Cray-2, which followed, was roughly ten times faster.
The next major step was to develop machines with multiple processors, instead of just one. The best in the US is the Titan, which has 552,960 processors running at 17.6 petaflops. And yes, they like to make them “pretty” now. Here is a list of the “Bad boys” over the last 20 years.
Then came the Chinese…
Since 2013, they have had the fastest Supercomputer on the block: the Tianhe-2 (MilkyWay-2), owned by the Chinese Army.
Which isn’t something that makes the US Government happy.
To give a reference as to why these things are important: some years ago an associate of mine was doing work on detecting planets around stars light-years away. This was done then by measuring the “wobble” of the star (no, I don’t totally understand it). On his University system, it took nearly 20 hours to run one calculation – and they had a fairly big system. It also stopped other users from doing their work. He got access to one of the supercomputers, which did the calculations in just under 1 second. This is critical to maintaining America’s global lead in research and science.
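The “wobble” idea itself can be sketched with a back-of-the-envelope estimate: by momentum conservation, a star and its planet circle their common center of mass, so the star moves at the planet’s orbital speed scaled down by the mass ratio. This toy version assumes a circular, edge-on orbit; the real calculations (and what ate those 20 hours) fit far more detailed models to noisy data.

```python
import math

# Back-of-the-envelope "wobble" estimate, assuming a circular, edge-on orbit.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUPITER = 1.898e27 # kg
AU = 1.496e11        # meters (Earth-Sun distance)

def star_wobble_speed(m_star: float, m_planet: float, orbit_radius: float) -> float:
    """Star's reflex velocity: planet's orbital speed times the mass ratio."""
    v_planet = math.sqrt(G * m_star / orbit_radius)  # planet's orbital speed
    return v_planet * m_planet / m_star              # star's much smaller wobble

if __name__ == "__main__":
    k = star_wobble_speed(M_SUN, M_JUPITER, 5.2 * AU)
    print(f"Sun's wobble due to Jupiter: ~{k:.1f} m/s")  # roughly 12-13 m/s
```

A Jupiter makes its star sway at barely a dozen meters per second — a walking pace, measured from light-years away — which hints at why the real data analysis is so computationally brutal.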
And if Obama gets this done…It is going to be a Monster.
PRESIDENT BARACK OBAMA has signed an executive order authorizing the creation of a new supercomputing research initiative called the National Strategic Computing Initiative, or NSCI. Its goal: pave the way for the first exaflop supercomputer—something that’s about 30 times faster than today’s fastest machines.
Supercomputers are at the heart of a huge number of important scientific and defense research projects. They’re used by aerospace engineers to model planes and weapons, and by climatologists to predict the near-term impact of hurricanes and the long-term effects of climate change. Researchers involved in the White House’s Precision Medicine initiative believe exaflop-speed supercomputers could aid the creation of personalized drugs, while the European Commission’s Human Brain Project hopes they will help unlock the secrets of the human brain.
Several government agencies, most notably the Department of Energy, have been deeply involved in the development of supercomputers over the last few decades, but they’ve typically worked separately. The new initiative will bring together scientists and government agencies such as the Department of Energy, Department of Defense and the National Science Foundation to create a common agenda for pushing the field forward.
The specifics are thin on the ground at the moment. The Department of Energy has already identified the major challenges preventing “exascale” computing today, according to a fact sheet released by the government, but the main goal of the initiative, for now, will be to get disparate agencies working together on common goals.
It’s hard not to see the initiative as a response to China’s gains in supercomputing. Earlier this month TOP500, an organization that ranks supercomputers by performance, announced that China’s 33.86 petaflop Tianhe-2 is still the fastest supercomputer in the world. The US still has more computers on the TOP500 list than any other country in the world, but researchers have worried for years about falling behind China.
An exaflop is about 1,000 petaflops, and would represent a massive leap forward in computing power. But creating an exaflop computer is about more than just finding a way to build faster hardware. Creating applications that can take advantage of such an architecture is a challenge in its own right. NSCI will also prioritize the creation of supercomputers that can handle vast quantities of rapidly changing data.
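The arithmetic behind that “about 30 times faster” figure checks out in a couple of lines, using Tianhe-2’s 33.86 petaflops from the TOP500 announcement above:

```python
# One exaflop is 10^18 floating-point operations per second,
# i.e. 1,000 petaflops (10^15 each).
EXAFLOP_IN_PETAFLOPS = 1e18 / 1e15   # = 1000.0
TIANHE_2_PETAFLOPS = 33.86           # fastest machine at the time

speedup = EXAFLOP_IN_PETAFLOPS / TIANHE_2_PETAFLOPS
print(f"An exaflop machine would be ~{speedup:.0f}x Tianhe-2")  # ~30x
```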