Supercomputer Explained

A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), and later at Cray Research. While the supercomputers of the 1970s used only a few processors, in the 1990s, machines with thousands of processors began to appear and by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.[1] [2]

Systems with a massive number of processors generally take one of two paths. In one approach, e.g. grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available.[3] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. The use of multi-core processors combined with centralization is an emerging direction.[4] [5] Currently, Japan's K computer (a cluster) is the fastest in the world.[6]

Supercomputers are used for highly calculation-intensive tasks in fields such as quantum physics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulation (such as simulating airplanes in wind tunnels, the detonation of nuclear weapons, and nuclear fusion).

History

See main article: History of supercomputing. The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[7] The CDC 6600, released in 1964, is generally considered the first supercomputer.[8] [9]

Cray left CDC in 1972 to form his own company.[10] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[11] [12] The Cray-2, released in 1985, was an 8-processor liquid-cooled computer through which Fluorinert was pumped as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[13]

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaflops per processor.[14] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[15] [16] [17] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[18]

Hardware and architecture

See main article: Supercomputer architecture.

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance.[7] However, in time the demand for increased computational power ushered in the age of massively parallel systems.

While the supercomputers of the 1970s used only a few processors, in the 1990s, machines with thousands of processors began to appear and by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some being graphics units) connected by fast interconnects.[1] [2]

Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[19] [20] [21] The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components.[22] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures.[13]

Systems with a massive number of processors generally take one of two paths. In one approach, e.g. grid computing, the processing power of a large number of computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available.[3] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects.[23] [24] The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.[4] [5]

As the price/performance of general-purpose graphics processors (GPGPUs) has improved, a number of petaflop supercomputers such as Tianhe-I and Nebulae have started to rely on them.[25] However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs, and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate: while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application towards it.[26] However, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by replacing CPUs with GPUs.[27] [28] [29]

A number of "special-purpose" systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle,[30] Deep Blue, and Hydra,[31] for playing chess, Gravity Pipe for astrophysics,[32] MDGRAPE-3 for protein structure computationmolecular dynamics[33] and Deep Crack,[34] for breaking the DES cipher.

Energy usage and heat management

See also: Computer cooling and Green 500. A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity.[35] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour, or about $3.5 million per year.

Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways.[36] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.[37] [38] [39]

The packing of thousands of processors together inevitably generates significant heat density that needs to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[13] However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[40]

In the Blue Gene system, IBM deliberately used low-power processors to deal with heat density.[41] The IBM Power 775, released in 2011, has closely packed elements that require water cooling.[42] The IBM Aquasar system, on the other hand, uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.[43] [44]

The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, IBM's Roadrunner operated at 376 MFLOPS/W.[45] [46] In November 2010, the Blue Gene/Q reached 1684 MFLOPS/W.[47] [48] In June 2011, the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.[49]
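As a rough worked example of this metric, dividing the K computer's sustained LINPACK performance by its reported power draw (both figures are quoted in the TOP500 list section below) gives, assuming the Green 500 convention of pairing the Rmax figure with the average power drawn during the benchmark run:

$$\frac{10.51 \times 10^{15}\ \text{FLOPS}}{12{,}659.89 \times 10^{3}\ \text{W}} \approx 8.3 \times 10^{8}\ \text{FLOPS/W} \approx 830\ \text{MFLOPS/W}.$$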

Software and system management

Operating systems

See main article: Supercomputer operating systems.

Since the end of the 20th century, supercomputer operating systems have undergone major transformations, as sea changes have taken place in supercomputer architecture.[50] While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such as Linux.[51]

Given that modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux-derivative on server and I/O nodes.[52] [53] [54]

While in a traditional multi-user computer system job scheduling is in effect a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully dealing with inevitable hardware failures when tens of thousands of processors are present.[55]

Although most modern supercomputers use the Linux operating system, each manufacturer has made its own specific changes to the Linux-derivative they use, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.[50] [56]

Software tools

See also: Parallel computing and Parallel programming model. The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed.

In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA.
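As a concrete illustration of the message-passing style described above, the following is a minimal MPI sketch in C (not drawn from any particular supercomputer's codebase): each rank independently sums part of a series and the partial results are combined with MPI_Reduce, so processors spend little time waiting on data from other nodes. The problem size and the interleaved work split are illustrative choices.

```c
/* Minimal MPI sketch: each rank sums part of a series and the partial
 * sums are combined on rank 0 with MPI_Reduce. Compile with mpicc and
 * run with, e.g., "mpirun -np 4 ./partial_sum". The problem size N is
 * illustrative. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const long N = 1000000;          /* total number of terms (illustrative) */
    int rank, size;
    long i;
    double local = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles an interleaved slice of the index range. */
    for (i = rank; i < N; i += size)
        local += 1.0 / (double)(i + 1);

    /* Combine the partial sums; only rank 0 receives the result. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("harmonic sum over %ld terms = %f\n", N, global);

    MPI_Finalize();
    return 0;
}
```

On a shared-memory machine, the same loop could instead be parallelized with OpenMP by marking it with a directive such as #pragma omp parallel for reduction(+:local), with no explicit communication calls.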

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf.

Distributed supercomputing

Opportunistic approaches

See main article: Grid computing.

Opportunistic supercomputing is a form of networked grid computing whereby a “super virtual computer” of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing-scale performance. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations.
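To make the distinction concrete, here is a minimal, hypothetical sketch in C of an embarrassingly parallel workload of the kind volunteer grids handle well: each work unit is an independent Monte Carlo estimate of pi that needs no communication with any other unit, so units can be farmed out to whatever machines happen to be available. A fluid dynamics simulation, by contrast, requires neighbouring subdomains to exchange boundary data at every time step, which is why such tasks suit tightly coupled clusters rather than loosely coupled volunteer computing. The unit count and sample sizes below are illustrative.

```c
/* Illustrative sketch of an "embarrassingly parallel" workload: each work
 * unit is an independent Monte Carlo estimate of pi. In a real grid system
 * each call to run_work_unit() could run on a different volunteer machine;
 * here the units are simply run in a loop on one machine. */
#include <stdio.h>
#include <stdlib.h>

/* One independent work unit: estimate pi from `samples` random points. */
static double run_work_unit(unsigned seed, long samples)
{
    long hits = 0;
    srand(seed);                      /* each unit gets its own seed */
    for (long i = 0; i < samples; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }
    return 4.0 * (double)hits / (double)samples;
}

int main(void)
{
    const int units = 8;            /* independent work units (illustrative) */
    const long samples = 1000000;   /* samples per unit (illustrative) */
    double total = 0.0;

    for (int u = 0; u < units; u++)
        total += run_work_unit(1234u + (unsigned)u, samples);

    printf("pi estimate from %d independent units: %f\n", units, total / units);
    return 0;
}
```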

The fastest grid computing system is the distributed computing project Folding@home (F@h), which has reported 8.1 petaflops of x86 processing power. Of this, 5.8 petaflops are contributed by clients running on various GPUs, 1.7 petaflops come from PlayStation 3 systems, and the rest from various CPU systems.

The BOINC platform hosts a number of distributed computing projects. BOINC has recorded a processing power of over 5.5 petaflops through over 480,000 active computers on the network.[57] The most active project (measured by computational power), MilkyWay@home, reports a processing power of over 700 teraflops through over 33,000 active computers.[58]

GIMPS's distributed Mersenne prime search currently achieves about 60 teraflops through over 25,000 registered computers.[59] The Internet PrimeNet Server has supported GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.

Quasi-opportunistic approaches

See main article: Quasi-opportunistic supercomputing.

Quasi-opportunistic supercomputing is a form of distributed computing whereby the “super virtual computer” of a large number of networked, geographically dispersed computers performs computing tasks that demand huge processing power.[60] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and by using intelligence about the availability and reliability of individual systems within the supercomputing network. Quasi-opportunistic distributed execution of demanding parallel computing software in grids is achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries, and data pre-conditioning.

Performance measurement

Capability vs capacity

Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.

Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve a small number of somewhat large problems or a large number of small problems, e.g. many user access requests to a database or a web site.[61] Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.[61]

Performance metrics

See also: LINPACK benchmarks. In general, the speed of supercomputers is measured and benchmarked in "FLOPS" (FLoating Point Operations Per Second), and not in terms of MIPS, i.e. "instructions per second", as is the case with general-purpose computers.[62] These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10^15) FLOPS. Exascale is computing performance in the exaflops range; an exaflop is one quintillion (10^18) FLOPS (one million teraflops).

No single number can reflect the overall performance of a computer system, yet the goal of the LINPACK benchmark is to approximate how fast the computer solves numerical problems, and it is widely used in the industry. The FLOPS measurement is either quoted as the theoretical floating point performance of a processor (derived from the manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or as the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list. The LINPACK benchmark typically performs LU decomposition of a large matrix. LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which may, for example, require more memory bandwidth, better integer computing performance, or a high-performance I/O system.
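Since LINPACK essentially times the LU decomposition of a large dense matrix, the following single-core C sketch (not the actual HPL benchmark) shows how such a measurement is obtained in principle: factor a matrix, time it, and divide an approximate operation count of (2/3)n^3 by the elapsed time. Real LINPACK/HPL runs use partial pivoting, blocked and distributed algorithms, and far larger matrices; the matrix size and the diagonally dominant fill used here are illustrative assumptions that keep the unpivoted factorization numerically stable.

```c
/* Minimal sketch of the kind of computation LINPACK measures: LU
 * factorization of a dense n-by-n matrix, timed, with the achieved rate
 * estimated from the roughly (2/3)*n^3 operation count of the factorization. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const int n = 600;                       /* illustrative matrix size */
    double *a = malloc((size_t)n * n * sizeof *a);
    if (!a) return 1;

    /* Fill with pseudo-random entries; make the diagonal dominant so the
     * unpivoted factorization below stays numerically sane. */
    srand(42);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            a[i * n + j] = (double)rand() / RAND_MAX + (i == j ? n : 0.0);

    clock_t start = clock();

    /* In-place Doolittle LU factorization without pivoting: afterwards the
     * strict lower triangle holds L (unit diagonal), the upper triangle U. */
    for (int k = 0; k < n - 1; k++) {
        for (int i = k + 1; i < n; i++) {
            a[i * n + k] /= a[k * n + k];
            for (int j = k + 1; j < n; j++)
                a[i * n + j] -= a[i * n + k] * a[k * n + j];
        }
    }

    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    if (seconds > 0.0) {
        double flop = (2.0 / 3.0) * (double)n * (double)n * (double)n;
        printf("n=%d: %.3f s, about %.2f MFLOPS\n", n, seconds, flop / seconds / 1e6);
    }

    free(a);
    return 0;
}
```

On a real system, the rate measured this way would then be compared against the theoretical Rpeak of the hardware to judge how efficiently the machine is being used.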

The TOP500 list

See main article: TOP500. Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.

This is a recent list of the computers which appeared at the top of the TOP500 list,[63] and the "Peak speed" is given as the "Rmax" rating. For more historical data, see History of supercomputing.

Year | Supercomputer | Peak speed (Rmax) | Location
2008 | IBM Roadrunner | 1.026 PFLOPS | New Mexico, USA
2008 | IBM Roadrunner | 1.105 PFLOPS | New Mexico, USA
2009 | Cray Jaguar | 1.759 PFLOPS | Oak Ridge, USA
2010 | Tianhe-IA | 2.566 PFLOPS | Tianjin, China
2011 | Fujitsu K computer | 10.51 PFLOPS | Kobe, Japan

The K computer is the world's fastest supercomputer at 10.51 petaflops. It consists of 88,000 SPARC64 VIIIfx CPUs and spans 864 server racks. In November 2011, its power consumption was reported to be 12,659.89 kW.[64] The operating costs for the system are about $10 million per year.[65]

Applications of supercomputers

The stages of supercomputer application may be summarized in the following table:

Decade | Uses and computer involved
1970s | Weather forecasting, aerodynamic research (Cray-1).[66]
1980s | Probabilistic analysis,[67] radiation shielding modeling (CDC Cyber).[72]
1990s | Brute force code breaking (EFF DES cracker),[68] 3D nuclear test simulations as a substitute for live testing, in compliance with the Nuclear Non-Proliferation Treaty (ASCI Q).[69]
2010s | Molecular dynamics simulation (Tianhe-1A).[70]

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[71]

Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[73]

In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[74]

Research and development trends

IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip". IBM is also constructing a 20 PFLOPS supercomputer at Lawrence Livermore National Laboratory, named Sequoia, based on the Blue Gene architecture and scheduled to go online in 2012.

Given the current speed of progress, supercomputers are projected to reach one exaflops (10^18 FLOPS, one quintillion FLOPS) in 2019.[75] Using the Intel MIC multi-core processor architecture, which is Intel's response to GPU systems, SGI plans to achieve a 500-fold increase in performance by 2018 in order to reach one exaflops.[76] Samples of MIC chips with 32 cores, which combine vector processing units with standard CPU cores, have become available.

On October 11, 2011, the Oak Ridge National Laboratory announced it was building a 20 petaflop supercomputer, named Titan, to become operational in 2012; the hybrid Titan system will combine AMD Opteron processors with Nvidia GeForce 600 "Kepler" graphics processing unit (GPU) technologies.[77] At about the same time, Fujitsu announced that the 20 petaflop follow-up system to the K computer, called the PRIMEHPC FX10, will use the same six-dimensional torus interconnect, but still only one SPARC processor per node.[78]

Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21 FLOPS, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[79] Such systems might be built around 2030.[80]

The Indian government has committed about $940 million to develop the world's fastest supercomputer by 2017. The Planning Commission of India has agreed to provide the funds to ISRO and to the Indian Institute of Science (IISc), Bangalore to develop a supercomputer with a performance of 132.8 exaflops, about 1,000 times faster than the 2012 fastest computers.[81]

Notes and References

  1. Supercomputers: directions in technology and applications by Allan R. Hoffman et al., National Academies, 1990 ISBN 0309040884 pages 35-47
  2. Readings in computer architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi 1999 ISBN 1558605398 pages 40-49
  3. Grid computing: experiment management, tool integration, and scientific workflows by Radu Prodan, Thomas Fahringer 2007 ISBN 3540692614 pages 1-4
  4. Performance Modelling and Optimization of Memory Access on Cellular Computer Architecture Cyclops64 K Barner, GR Gao, Z Hu, Lecture Notes in Computer Science, 2005, Volume 3779, Network and Parallel Computing, Pages 132-143
  5. Analysis and performance results of computing betweenness centrality on IBM Cyclops64 by Guangming Tan, Vugranam C. Sreedhar and Guang R. Gao The Journal of Supercomputing Volume 56, Number 1, 1–24 September 2011
  6. http://www.nytimes.com/2011/06/20/technology/20computer.html
  7. Hardware software co-design of a multimedia SOC platform by Sao-Jie Chen, Guang-Huei Lin, Pao-Ann Hsiung, Yu-Hen Hu 2009, pages 70-72
  8. History of computing in education by John Impagliazzo, John A. N. Lee 2004 ISBN 1402081359 page 172 http://books.google.com/books?id=J46GinHakmkC&pg=PA172&dq=history+of+supercomputer+cdc+6600&hl=en&ei=PeAcTv_eI8uf-wb3y9jvCA&sa=X&oi=book_result&ct=result&resnum=7&ved=0CEYQ6AEwBjgK#v=onepage&q=history%20of%20supercomputer%20cdc%206600&f=false
  9. The American Midwest: an interpretive encyclopedia by Richard Sisson, Christian K. Zacher 2006 ISBN 0253348862 page 1489 http://books.google.com/books?id=n3Xn7jMx1RYC&pg=PA1489&dq=history+of+supercomputer+cdc+6600&hl=en&ei=nt8cTo-RFc2r-gaDiPHLCA&sa=X&oi=book_result&ct=result&resnum=6&ved=0CEkQ6AEwBQ#v=onepage&q=history%20of%20supercomputer%20cdc%206600&f=false
  10. Wisconsin Biographical Dictionary by Caryn Hannan 2008 ISBN 1878592637 pages 83-84 http://books.google.com/books?id=V08bjkJeXkAC&pg=PA83&dq=cdc+6600+7600+cray&hl=en&ei=7LMZTozDIInX8gP0xIkM&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCgQ6AEwAA#v=onepage&q=cdc%206600%207600%20cray&f=false
  11. Readings in computer architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi 1999 ISBN 9781558605398 pages 41-48
  12. Milestones in computer science and information technology by Edwin D. Reilly 2003 ISBN 1573565210 page 65
  13. Parallel computing for real-time signal processing and control by M. O. Tokhi, Mohammad Alamgir Hossain 2003 ISBN 9781852335991 pages 201-202
  14. http://www.netlib.org/benchmark/top500/reports/report94/main.html TOP500 Annual Report 1994.
  15. H. Fujii, Y. Yasuda, H. Akashi, Y. Inagami, M. Koga, O. Ishihara, M. Kashiyama, H. Wada, T. Sumimoto, Architecture and performance of the Hitachi SR2201 massively parallel processor system, Proceedings of 11th International Parallel Processing Symposium, April 1997, Pages 233-241.
  16. Y. Iwasaki, The CP-PACS project, Nuclear Physics B - Proceedings Supplements, Volume 60, Issues 1-2, January 1998, Pages 246-254.
  17. A.J. van der Steen, Overview of recent supercomputers, Publication of the NCF, Stichting Nationale Computer Faciliteiten, the Netherlands, January 1997.
  18. Scalable input/output: achieving system balance by Daniel A. Reed 2003 ISBN 9780262681421 page 182
  19. The TianHe-1A Supercomputer: Its Hardware and Software by Xue-Jun Yang, Xiang-Ke Liao, et al in the Journal of Computer Science and Technology, Volume 26, Number 3, pages 344-351 http://www.springerlink.com/content/h70244371pr727g0/
  20. The Supermen: Story of Seymour Cray and the Technical Wizards Behind the Supercomputer by Charles J. Murray 1997 ISBN 0471048852 pages 133-135
  21. Parallel Computational Fluid Dynamics: Recent Advances and Future Directions edited by Rupak Biswas 2010 ISBN 160595022X page 401
  22. Supercomputing Research Advances by Yongge Huáng 2008 ISBN 1604561866 pages 313-314
  23. Knight, Will: "IBM creates world's most powerful computer", NewScientist.com news service, June 2007
  24. N. R. Agida et al. 2005 Blue Gene/L Torus Interconnection Network, IBM Journal of Research and Development, Vol 45, No 2/3 March–May 2005 page 265. http://www.cc.gatech.edu/classes/AY2008/cs8803hpc_spring/papers/bgLtorusnetwork.pdf
  25. Prickett, Timothy Top 500 supers – The Dawning of the GPUs The Register May 31, 2010 http://www.theregister.co.uk/2010/05/31/top_500_supers_jun2010/
  26. "Considering GPGPU for HPC Centers: Is It Worth the Effort?" by Hans Hacker et al in Facing the Multicore-Challenge: Aspects of New Paradigms and Technologies in Parallel Computing by Rainer Keller, David Kramer and Jan-Philipp Weiss 2010 ISBN 3642162320 pages 118-121 http://books.google.it/books?id=-luqXPiew_UC&pg=PA118&dq=GPGPU+supercomputer&hl=en&sa=X&ei=NKYyT-XTCYSk4gTf342XBQ&redir_esc=y#v=onepage&q=GPGPU%20supercomputer&f=false
  27. Cray's Titan Supercomputer for ORNL Could Be World's Fastest by Damon Poeter PC Magazine, October 11, 2011 http://www.pcmag.com/article2/0,2817,2394515,00.asp
  28. GPUs Will Morph ORNL's Jaguar Into 20-Petaflop Titan by Michael Feldman HPC Wire, Oct 11, 2011 http://www.hpcwire.com/hpcwire/2011-10-11/gpus_will_morph_ornl_s_jaguar_into_20-petaflop_titan.html
  29. Oak Ridge changes Jaguar's spots from CPUs to GPUs by Timothy Prickett Morgan, The Register Oct 11, 2011 http://www.theregister.co.uk/2011/10/11/oak_ridge_cray_nvidia_titan/
  30. Condon, J.H. and K.Thompson, "Belle Chess Hardware", In Advances in Computer Chess 3 (ed.M.R.B.Clarke), Pergamon Press, 1982.
  31. C. Donninger, U. Lorenz. The Chess Monster Hydra. Proc. of 14th International Conference on Field-Programmable Logic and Applications (FPL), 2004, Antwerp – Belgium, LNCS 3203, pp. 927 – 932
  32. J Makino and M. Taiji, Scientific Simulations with Special Purpose Computers: The GRAPE Systems, Wiley. 1998.
  33. RIKEN press release, Completion of a one-petaflops computer system for simulation of molecular dynamics
  34. Book: Cracking DES - Secrets of Encryption Research, Wiretap Politics & Chip Design. Electronic Frontier Foundation. 1-56592-520-3. Oreilly & Associates Inc. 1998.
  35. NVIDIA Tesla GPUs Power World's Fastest Supercomputer. Nvidia. 29 October 2010.
  36. Better Computing Through CPU Cooling by Alexander A. Balandin in IEEE Spectrum, October 2009 http://spectrum.ieee.org/semiconductors/materials/better-computing-through-cpu-cooling/0
  37. Web site: The Green 500.
  38. Web site: Green 500 list ranks supercomputers. iTnews Australia.
  39. Wu-chun Feng, 2003 Making a Case for Efficient Supercomputing in ACM Queue Magazine, Volume 1 Issue 7, 10-01-2003 doi 10.1145/957717.957772 http://sss.lanl.gov/pubs/031001-acmq.pdf
  40. Computational science -- ICCS 2005: 5th international conference edited by Vaidy S. Sunderam 2005 ISBN 3540260439 pages 60-67
  41. Web site: IBM uncloaks 20 petaflops BlueGene/Q super. The Register. 2010-11-22. 2010-11-25.
  42. http://www.theregister.co.uk/2011/07/15/power_775_super_pricing/ The Register: IBM 'Blue Waters' super node washes ashore in August
  43. http://www.hpcwire.com/hpcwire/2010-07-02/ibm_hot_water-cooled_supercomputer_goes_live_at_eth_zurich.html HPC Wire July 2, 2010
  44. http://news.cnet.com/8301-11128_3-20004543-54.html CNet May 10, 2010
  45. News: Government unveils world's fastest computer. CNN, 2008-06-10. "...performing 376 million calculations for every watt of electricity used." http://web.archive.org/web/20080610155646/http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html
  46. Web site: IBM Roadrunner Takes the Gold in the Petaflop Race.
  47. Web site: Top500 Supercomputing List Reveals Computing Trends. "IBM... BlueGene/Q system... setting a record in power efficiency with a value of 1,680 Mflops/watt, more than twice that of the next best system."
  48. Web site: IBM Research A Clear Winner in Green 500.
  49. http://www.green500.org/lists/2011/06/top/list.php Green 500 list
  50. Encyclopedia of Parallel Computing by David Padua 2011 ISBN 0387097651 pages 426-429
  51. Knowing machines: essays on technical change by Donald MacKenzie 1998 ISBN 0262631881 page 149-151
  52. Euro-Par 2004 Parallel Processing: 10th International Euro-Par Conference 2004, by Marco Danelutto, Marco Vanneschi and Domenico Laforenza ISBN 3540229248 pages 835
  53. Euro-Par 2006 Parallel Processing: 12th International Euro-Par Conference, 2006, by Wolfgang E. Nagel, Wolfgang V. Walter and Wolfgang Lehner ISBN 3540377832 page
  54. An Evaluation of the Oak Ridge National Laboratory Cray XT3 by Sadaf R. Alam etal International Journal of High Performance Computing Applications February 2008 vol. 22 no. 1 52-80
  55. Open Job Management Architecture for the Blue Gene/L Supercomputer by Yariv Aridor et al in Job scheduling strategies for parallel processing by Dror G. Feitelson 2005 ISBN 978-3-540-31024-2 pages 95-101
  56. Web site: Top500 OS chart. Top500.org. 2010-10-31.
  57. Note: this link gives current statistics, not those on the date last accessed.
  58. Note: this link gives current statistics, not those on the date last accessed.
  59. Web site: Internet PrimeNet Server Distributed Computing Technology for the Great Internet Mersenne Prime Search. GIMPS. June 6, 2011. .
  60. Web site: Kravtsov. Valentin; Carmeli, David; Dubitzky, Werner; Orda, Ariel; Schuster, Assaf; Yoshpa, Benny. Quasi-opportunistic supercomputing in grids, hot topic paper (2007). IEEE International Symposium on High Performance Distributed Computing. IEEE. 4 August 2011.
  61. The Potential Impact of High-End Capability Computing on Four Illustrative Fields of Science and Engineering by Committee on the Potential Impact of High-End Computing on Illustrative Fields of Science and Engineering and National Research Council (Oct 28, 2008) ISBN 0309124859 page 9
  62. Performance Evaluation, Prediction and Visualization of Parallel Systems by Xingfu Wu 1999 ISBN 0792384628 pages 114-117 http://books.google.it/books?id=IJZt5H6R8OIC&pg=PA116&dq=supercomputer+Mips+flops&hl=en&sa=X&ei=teUzT6C7OY3qOZ2qhYgC&redir_esc=y#v=onepage&q=supercomputer%20Mips%20flops&f=false
  63. Web site: Intel brochure - 11/91. Directory page for Top500 lists. Result for each list since June 1993. Top500.org. 2010-10-31.
  64. Web site: K computer, SPARC64 VIIIfx 2.0GHz, Tofu interconnect. 2011. 11. www.TOP500.org.
  65. News: Japanese supercomputer 'K' is world's fastest. 20 June 2011. The Telegraph. 20 June 2011. London. Tom. Chivers.
  66. Web site: The Cray-1 Computer System. PDF. Cray Research, Inc. May 25, 2011.
  67. Web site: Joshi. Rajani R.. 9 June 1998. A new heuristic algorithm for probabilistic optimization. Department of Mathematics and School of Biomedical Engineering, Indian Institute of Technology Powai, Bombay, India. 2008-07-01. Subscription required.
  68. Web site: EFF DES Cracker Source Code. https://www.cosic.esat.kuleuven.be/des/ Cosic.esat.kuleuven.be. 2011-07-08.
  69. Web site: Disarmament Diplomacy: - DOE Supercomputing & Test Simulation Programme. Acronym.org.uk. 2000-08-22. 2011-07-08.
  70. Web site: China’s Investment in GPU Supercomputing Begins to Pay Off Big Time!. Blogs.nvidia.com. 2011-07-08.
  71. Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 65.
  72. Web site: Abstract for SAMSY - Shielding Analysis Modular System. OECD Nuclear Energy Agency, Issy-les-Moulineaux, France. May 25, 2011.
  73. Web site: Faster Supercomputers Aiding Weather Forecasts. News.nationalgeographic.com. 2010-10-28. 2011-07-08.
  74. http://www.washingtonpost.com/business/technology/petaflop-computer-flap-ibm-unplugs-itself-from-supercomputer-project-at-univ-of-illinois/2011/08/08/gIQAuiFG3I_story.html Washington Post August 8, 2011
  75. News: Patrick. Thibodeau. IBM breaks petaflop barrier. InfoWorld. 2008-06-10.
  76. http://www.computerworld.com/s/article/9217763/SGI_Intel_plan_to_speed_supercomputers_500_times_by_2018?taxonomyId=67 SGI, Intel plan to speed supercomputers 500 times by 2018, ComputerWorld, June 20, 2011
  77. Cray's Titan Supercomputer for ORNL Could Be World's Fastest by Damon Poeter, PC Magazine, October 11, 2011 http://www.pcmag.com/article2/0,2817,2394515,00.asp
  78. Fujitsu Unveils Post-K Supercomputer HPC Wire Nov 7 2011
  79. Book: DeBenedictis, Erik P.. Reversible logic for supercomputing. Proceedings of the 2nd conference on Computing frontiers. 2005. 1595930191. 391–402. http://portal.acm.org/citation.cfm?id=1062325.
  80. News: IDF: Intel says Moore's Law holds until 2029. Heise Online. 2008-04-04.
  81. News: India to make World's Fastest Supercomputer.