Supercomputer Explained

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. The IBM Roadrunner, located at Los Alamos National Laboratory, is currently the fastest supercomputer in the world.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s, most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Common uses

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.

A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires near-unlimited computing resources.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.

Hardware and software design

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
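
To illustrate why the memory hierarchy matters so much, here is a minimal C sketch (hypothetical, not drawn from any particular machine's code) comparing a naive matrix multiplication with a cache-blocked version. Both perform exactly the same arithmetic; the blocked loop order simply reuses small tiles of the matrices while they are still in cache, keeping the floating-point units fed. The matrix size N and block size BS are arbitrary choices for the example.

    #include <stddef.h>

    #define N   1024      /* matrix dimension (assumed, for illustration) */
    #define BS  64        /* block size chosen so a few BSxBS tiles fit in cache */

    /* Naive triple loop: for large N, B is streamed from main memory
       column by column, so the CPU stalls waiting for data. */
    void matmul_naive(const double *A, const double *B, double *C)
    {
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < N; k++)
                    sum += A[i * N + k] * B[k * N + j];
                C[i * N + j] = sum;
            }
    }

    /* Blocked (tiled) version: the same arithmetic, reordered so that each
       BS x BS tile of A, B and C is reused from cache many times before
       being evicted.  The flop count is identical; only the memory-access
       pattern changes, which is where most of the speedup comes from. */
    void matmul_blocked(const double *A, const double *B, double *C)
    {
        for (size_t i = 0; i < N * N; i++)
            C[i] = 0.0;

        for (size_t ii = 0; ii < N; ii += BS)
            for (size_t kk = 0; kk < N; kk += BS)
                for (size_t jj = 0; jj < N; jj += BS)
                    for (size_t i = ii; i < ii + BS; i++)
                        for (size_t k = kk; k < kk + BS; k++) {
                            double a = A[i * N + k];
                            for (size_t j = jj; j < jj + BS; j++)
                                C[i * N + j] += a * B[k * N + j];
                        }
    }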

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
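
Amdahl's law can be stated concretely: if a fraction p of a program's work can be parallelized and the remaining 1 - p is serial, the best possible speedup on n processors is 1 / ((1 - p) + p/n). The short C sketch below simply evaluates that bound for a few illustrative values (the specific fractions and processor counts are arbitrary) to show why eliminating software serialization matters so much.

    #include <stdio.h>

    /* Amdahl's law: upper bound on speedup with n processors when a
       fraction p of the work is parallelizable and (1 - p) is serial. */
    static double amdahl_speedup(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / (double)n);
    }

    int main(void)
    {
        /* Even with 95% of the work parallelized, 1024 processors give
           less than a 20x speedup; the serial 5% dominates. */
        double fractions[] = { 0.50, 0.90, 0.95, 0.99 };
        int    procs[]     = { 16, 256, 1024, 65536 };

        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                printf("p=%.2f n=%6d speedup=%8.2f\n",
                       fractions[i], procs[j],
                       amdahl_speedup(fractions[i], procs[j]));
        return 0;
    }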

Supercomputer challenges, technologies

Technologies developed for supercomputers include:

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
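
As a concrete illustration of the SIMD style that trickled down from vector machines, the following C sketch contrasts a scalar SAXPY loop with one written using x86 SSE intrinsics, which process four single-precision values per instruction. The alignment and size assumptions noted in the comments are simplifications made for the example.

    #include <xmmintrin.h>   /* SSE intrinsics (x86) */

    /* y[i] = a * x[i] + y[i]  -- the classic SAXPY kernel.
       The scalar loop processes one element per iteration; the SIMD loop
       processes four floats per iteration with a single packed multiply
       and a single packed add.  n is assumed to be a multiple of 4 and
       the arrays 16-byte aligned, to keep the example short. */
    void saxpy_scalar(int n, float a, const float *x, float *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    void saxpy_sse(int n, float a, const float *x, float *y)
    {
        __m128 va = _mm_set1_ps(a);              /* broadcast a into all 4 lanes */
        for (int i = 0; i < n; i += 4) {
            __m128 vx = _mm_load_ps(&x[i]);      /* load 4 floats from x */
            __m128 vy = _mm_load_ps(&y[i]);      /* load 4 floats from y */
            vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
            _mm_store_ps(&y[i], vy);             /* store 4 results back */
        }
    }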

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TeraFLOPS. The range of applications to which this power could be applied was initially limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).

Operating systems

Supercomputer operating systems, today most often variants of Linux, are at least as complex as those for smaller machines. Historically, their user interfaces tended to be less developed, as the OS developers had limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). These computers, often priced at millions of dollars, are sold to a very small market, so the R&D budget for the OS was often limited. The advent of Unix and Linux has allowed the reuse of conventional desktop software and user interfaces.

Interestingly, this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to companies such as AMD and NVIDIA, who have been able to produce cheap, feature-rich, high-performance, and innovative products due to the vast number of consumers driving their R&D.

Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos) and today's Linux.

In the future, the highest performance systems are likely to use a variant of Linux but with incompatible system-unique features (especially for the highest-end systems at secure facilities).

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is generally Fortran or C, using special libraries to share data between nodes. Most commonly, environments such as PVM and MPI are used for loosely connected clusters, and OpenMP for tightly coordinated shared-memory machines. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.
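
A minimal sketch of this style, assuming an MPI implementation and (optionally) OpenMP are available, is shown below: each MPI rank computes a partial sum over its own slice of the index range, OpenMP spreads the local loop across the cores of a node, and a single MPI_Reduce gathers the partial results so that processors spend as little time as possible waiting on one another. The problem size and the "work" inside the loop are placeholders.

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000   /* total problem size (arbitrary for this sketch) */

    int main(int argc, char **argv)
    {
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id    */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* number of processes  */

        /* Each rank works on its own contiguous slice of the index range. */
        long chunk = N / nprocs;
        long lo = rank * chunk;
        long hi = (rank == nprocs - 1) ? N : lo + chunk;

        double local = 0.0;

        /* Shared-memory parallelism within a node (OpenMP), if enabled. */
        #pragma omp parallel for reduction(+:local)
        for (long i = lo; i < hi; i++)
            local += 1.0 / ((double)i + 1.0);    /* stand-in for real work */

        /* Combine the partial sums across the cluster. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %.12f\n", nprocs, total);

        MPI_Finalize();
        return 0;
    }

A typical build and launch (the exact commands vary by installation) would be something like "mpicc -fopenmp sum.c -o sum" followed by "mpirun -np 64 ./sum".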

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source software solutions such as Beowulf, WareWulf and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community, which often creates disruptive technology in this arena.

Modern supercomputer architecture

As of November 2006, the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:

As of November 2008 the fastest machine is IBM Roadrunner. This machine is a cluster of 3240 computers, each with 40 processing cores. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.

As of February 2009, IBM has announced work on "Sequoia", which will be a 20 petaflops supercomputer. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in 2011.[1]

Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a ten-year-old supercomputer, and the design concepts that allowed past supercomputers to outperform contemporaneous desktop machines have since been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production. A current quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars. Supercomputing density is also increasing, to the point that "desktop supercomputers" are becoming available: compute power that in 1998 required a large room now fits in little more than a desktop footprint. For example, a Supermicro SuperBlade enclosure measuring 21.65"W x 34.65"D x 30.64"H can house up to 40 CPUs (160 processor cores) and 640 GB of memory in its 4-way configuration.

Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer.

Special-purpose supercomputers

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, achieving better price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically, a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular set of problems.

Examples of special-purpose supercomputers include the GRAPE family of machines built for astrophysical N-body simulation (such as the GRAPE-6 mentioned above) and dedicated codebreaking hardware such as the EFF's Deep Crack DES cracker.

The fastest supercomputers today

Measuring supercomputer speed

The speed of a supercomputer is generally measured in "FLOPS" (FLoating point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). This measurement is based on a particular benchmark which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
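
As a rough picture of what such a benchmark measures, the following C sketch (a naive, unpivoted LU factorization written only for illustration; the real LINPACK/HPL code is blocked, pivoted, and far more elaborate) factors a random diagonally dominant matrix, counts the roughly (2/3)n^3 floating-point operations involved, and divides by the elapsed time to report a FLOPS figure. The matrix size is arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* In-place LU factorization without pivoting (illustration only). */
    static void lu_factor(double *a, int n)
    {
        for (int k = 0; k < n; k++)
            for (int i = k + 1; i < n; i++) {
                double m = a[i * n + k] / a[k * n + k];
                a[i * n + k] = m;
                for (int j = k + 1; j < n; j++)
                    a[i * n + j] -= m * a[k * n + j];
            }
    }

    int main(void)
    {
        int n = 1000;   /* small matrix; HPL runs use far larger ones */
        double *a = malloc((size_t)n * n * sizeof *a);
        if (!a) return 1;

        /* Random matrix, boosted on the diagonal so no pivot is tiny. */
        srand(1);
        for (int i = 0; i < n * n; i++)
            a[i] = (double)rand() / RAND_MAX + (i % (n + 1) == 0 ? n : 0.0);

        clock_t t0 = clock();
        lu_factor(a, n);
        double seconds = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* LU factorization costs roughly (2/3) n^3 floating-point operations. */
        double flops = (2.0 / 3.0) * (double)n * n * n / seconds;
        printf("n=%d  time=%.3f s  ~%.2f GFLOPS\n", n, seconds, flops / 1e9);

        free(a);
        return 0;
    }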

"Petascale" supercomputers that can process 1000 trillion FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one million teraflops.

The Top500 list

See main article: TOP500. Since 1993, the fastest supercomputers have been ranked on the Top500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.

Current fastest supercomputer system

On June 8, 2008, the Cell/AMD Opteron-based IBM Roadrunner at the Los Alamos National Laboratory (LANL) was announced as the fastest operational supercomputer, with a sustained processing rate of 1.026 PFLOPS.[2] The Roadrunner hardware and software were then optimized, and the benchmark was re-run and submitted for the November 2008 TOP500 with an Rmax of 1.105 PFLOPS, barely surviving a challenge from the Cray XT5 Jaguar to remain the fastest computer on the "official" list.[3]

Quasi-supercomputing

Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme.

One such example is the BOINC platform, a host for a number of distributed computing projects. BOINC has recorded a processing power of over 1.7 petaflops through over 530,000 active computers on the network.[4] The largest project, SETI@home, reported processing power of over 508 teraflops through almost 317,000 active computers.[5]

Another distributed computing project, Folding@home, reported over 4.5 petaflops of processing power as of December 2008. A little over 1.5 petaflops of this processing power is contributed by clients running on PlayStation 3 systems and another 2.6 petaflops is contributed by their newly released GPU2 client.[6]

GIMPS's distributed Mersenne prime search currently achieves around 29 teraflops.

Google's search engine system is also a kind of "quasi-supercomputer", with an estimated total processing power of between 126 and 316 teraflops as of April 2004.[7] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[8] According to more recent estimates, the processing power of Google's cluster might reach 20 to 100 petaflops.[9]

The PlayStation 3 Gravity Grid uses a network of 16 machines and exploits the Cell processor for its intended application, which is simulating binary black hole coalescence using perturbation theory.[10] [11] Each Cell processor has a main CPU and six floating-point vector processors, giving the cluster a total of 16 general-purpose processors and 96 vector processors. The machine has a one-time cost of $9,000 to build and is adequate for black-hole simulations which would otherwise cost $6,000 per run on a conventional supercomputer. The black hole calculations are not memory-intensive and are highly localizable, so they are well suited to this architecture.

Research and development

IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".

Other PFLOPS projects include one by Narendra Karmarkar in India,[12] a CDAC effort targeted for 2010,[13] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[14]

In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[15]

Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18 FLOPS) in 2019.[16] Futurist Ray Kurzweil expects supercomputers capable of simulating the human brain's neural activity, which he estimates would require 10 exaflops (10^19 FLOPS), by 2025.

Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21 FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[17] Such systems might be built around 2030.[18]

Timeline of supercomputers

This is a list of the record-holders for fastest general-purpose supercomputer in the world, and the year each one set the record. For entries prior to 1993, the list is compiled from various sources.[19] From 1993 to the present, it reflects the Top500 listing,[20] and the "Peak speed" is given as the "Rmax" rating.

Year | Supercomputer | Peak speed (Rmax) | Location
1942 | Atanasoff–Berry Computer (ABC) | 30 OPS | Iowa State University, Ames, Iowa, USA
     | TRE Heath Robinson | 200 OPS | Bletchley Park, Bletchley, UK
1944 | Flowers Colossus | 5 kOPS | Post Office Research Station, Dollis Hill, UK
1946 | UPenn ENIAC (before 1948+ modifications) | 100 kOPS | Department of War, Aberdeen Proving Ground, Maryland, USA
1954 | IBM NORC | 67 kOPS | Department of Defense, U.S. Naval Proving Ground, Dahlgren, Virginia, USA
1956 | MIT TX-0 | 83 kOPS | Massachusetts Inst. of Technology, Lexington, Massachusetts, USA
1958 | IBM AN/FSQ-7 | 400 kOPS | 25 U.S. Air Force sites across the continental USA and 1 site in Canada (52 computers)
1960 | UNIVAC LARC | 250 kFLOPS | Atomic Energy Commission (AEC), Lawrence Livermore National Laboratory, California, USA
1961 | IBM 7030 "Stretch" | 1.2 MFLOPS | AEC-Los Alamos National Laboratory, New Mexico, USA
1964 | CDC 6600 | 3 MFLOPS | AEC-Lawrence Livermore National Laboratory, California, USA
1969 | CDC 7600 | 36 MFLOPS |
1974 | CDC STAR-100 | 100 MFLOPS |
1975 | Burroughs ILLIAC IV | 150 MFLOPS | NASA Ames Research Center, California, USA
1976 | Cray-1 | 250 MFLOPS | Energy Research and Development Administration (ERDA), Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide)
1981 | CDC Cyber 205 | 400 MFLOPS | (numerous sites worldwide)
1983 | Cray X-MP/4 | 941 MFLOPS | U.S. Department of Energy (DoE): Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing
1984 | M-13 | 2.4 GFLOPS | Scientific Research Institute of Computer Complexes, Moscow, USSR
1985 | Cray-2/8 | 3.9 GFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
1989 | ETA10-G/8 | 10.3 GFLOPS | Florida State University, Florida, USA
1990 | NEC SX-3/44R | 23.2 GFLOPS | NEC Fuchu Plant, Fuchu, Japan
1993 | Thinking Machines CM-5/1024 | 65.5 GFLOPS | DoE-Los Alamos National Laboratory; National Security Agency
     | Fujitsu Numerical Wind Tunnel | 124.50 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
     | Intel Paragon XP/S 140 | 143.40 GFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1994 | Fujitsu Numerical Wind Tunnel | 170.40 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1996 | Hitachi SR2201/1024 | 220.4 GFLOPS | University of Tokyo, Japan
     | Hitachi/Tsukuba CP-PACS/2048 | 368.2 GFLOPS | Center for Computational Physics, University of Tsukuba, Tsukuba, Japan
1997 | Intel ASCI Red/9152 | 1.338 TFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1999 | Intel ASCI Red/9632 | 2.3796 TFLOPS |
2000 | IBM ASCI White | 7.226 TFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
2002 | NEC Earth Simulator | 35.86 TFLOPS | Earth Simulator Center, Yokohama, Japan
2004 | IBM Blue Gene/L | 70.72 TFLOPS | DoE/IBM Rochester, Minnesota, USA
2005 | IBM Blue Gene/L (upgrade) | 136.8 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
     | IBM Blue Gene/L (upgrade) | 280.6 TFLOPS |
2007 | IBM Blue Gene/L (upgrade) | 478.2 TFLOPS |
2008 | IBM Roadrunner | 1.026 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
     | IBM Roadrunner (upgrade) | 1.105 PFLOPS |

See also

Supercomputer companies and manufacturers

Supercomputer companies in operation

These companies make supercomputer hardware and/or software, either as their sole activity, or as one of several activities.

Defunct supercomputer companies

These companies have either folded, or no longer operate in the supercomputer market.

Notes and References

  1. http://www.networkworld.com/news/2009/020409-ibm-to-build-new-monster.html?page=1 (Network World, February 2009).
  2. "June 2008". cnet.com. Retrieved 2008-06-09.
  3. "Jaguar Chases Roadrunner, but Can't Grab Top Spot on Latest List of World's TOP500 Supercomputers". TOP500, 2008-11-14. Retrieved 2008-11-18.
  4. "BOINCstats: BOINC Combined". BOINC. Retrieved 2008-12-22.
  5. "BOINCstats: SETI@Home". BOINC. Retrieved 2008-12-22.
  6. "Folding@home: OS Statistics". Stanford University.
  7. "How many Google machines". http://www.tnl.net/blog/2004/04/30/how-many-google-machines/
  8. Markoff, John; Hansell, Saul. "Hiding in Plain Sight, Google Seeks More Power". The New York Times, June 14, 2006. Retrieved 2008-03-16.
  9. "Google Surpasses Supercomputer Community, Unnoticed?". http://blogs.nmscommunications.com/communications/2008/05/google-surpasses-supercomputer-community-unnoticed.html
  10. Malik, Tariq. "PlayStation 3 tackles black hole vibrations". MSNBC, January 28, 2009. http://www.msnbc.msn.com/id/28895353/
  11. "PlayStation3 Gravity Grid". http://gravity.phy.umassd.edu/ps3.html
  12. Athley, Gouri Agtey; Adappa, Rajeshwari. "Tatas get Karmakar to make super comp". The Economic Times, 30 October 2006. Retrieved 2008-03-16.
  13. "C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010". http://www.flonnet.com/stories/20070518003711400.htm
  14. "National Science Board Approves Funds for Petascale Computing Systems". U.S. National Science Foundation, August 10, 2007. Retrieved 2008-03-16.
  15. "NASA collaborates with Intel and SGI on forthcoming petaflops super computers". Heise online, 2008-05-09.
  16. Thibodeau, Patrick. "IBM breaks petaflop barrier". InfoWorld, 2008-06-10.
  17. DeBenedictis, Erik P. "Reversible logic for supercomputing". Proceedings of the 2nd Conference on Computing Frontiers, 2005, pp. 391–402. ISBN 1595930191. http://portal.acm.org/citation.cfm?id=1062325
  18. "IDF: Intel says Moore's Law holds until 2029". Heise Online, 2008-04-04.
  19. CDC timeline at the Computer History Museum. http://www.computerhistory.org/VirtualVisibleStorage/artifact_main.php?tax_id=03.04.01.00#4
  20. Directory page for Top500 lists. http://www.top500.org/lists