Overview

Supercomputers can be defined as the most advanced and powerful computers, or arrays of computers, in existence at the time of their construction. They are used to solve problems too complex or too massive for standard computers, such as calculating how individual molecules move in a tornado or forecasting detailed weather patterns. Some supercomputers are single machines containing multiple processors; others are clusters of computers that work together.

History

Supercomputers emerged in the mid-1970s, when Seymour Cray introduced the Cray-1. Because microprocessors were not yet available, its processor was built from individual integrated circuits. Successive generations of Cray supercomputers grew more powerful with each version, and after the Cray-1's introduction, other companies such as IBM, NEC, Texas Instruments, and Unisys began designing and manufacturing faster, more powerful machines.

Today's fastest supercomputers include IBM's Blue Gene and ASCI Purple, SCC's Beowulf, and Cray's SV2. These machines are usually designed to carry out specific tasks. For example, IBM's ASCI Purple is a $250 million supercomputer built for the Department of Energy (DOE). With a peak speed of 467 teraflops, it is used to simulate the aging and operation of nuclear weapons. Future supercomputer designs may incorporate entirely new circuit-miniaturization technologies, including new storage devices and data-transfer systems. Scientists at UCLA are currently working on processor and circuit designs built from molecules that behave like transistors. By incorporating this technology, new processors might be 10,000 times smaller, yet far more powerful, than any current models.

Processing Speeds

Supercomputer computational power is rated in FLOPS (floating-point operations per second). The first commercially available supercomputers reached speeds of 10 to 100 million FLOPS. The next generation of supercomputers, some of which are already in the early stages of development, is predicted to break the petaflop barrier, representing computing power 1,000 times that of a teraflop machine. To put these speeds in perspective, a relatively old supercomputer such as the Cray C90 (introduced in the early 1990s) has a processing speed of only 8 gigaflops, yet it can solve in seconds a problem that would occupy a personal computer for hours. The TOP500 project tracks the 500 sites with the fastest supercomputers; both the list and the site's content are updated regularly, providing a wealth of information about developments in supercomputing technology.
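The relationship between these units is plain arithmetic: runtime is the number of floating-point operations in a job divided by the machine's FLOPS rating. The figures below, a 100-megaflop personal computer, the 8-gigaflop C90, and a 1-petaflop machine, are illustrative assumptions for the sketch, not measurements:

```python
def seconds_to_solve(operations, flops):
    """Runtime in seconds = total floating-point operations / speed in FLOPS."""
    return operations / flops

# Hypothetical job: 1.44 trillion floating-point operations
# (chosen so it takes an assumed 100-megaflop PC exactly 4 hours).
job = 1.44e12

pc_time   = seconds_to_solve(job, 1e8)   # 100 MFLOPS PC -> 14,400 s (4 hours)
c90_time  = seconds_to_solve(job, 8e9)   # 8 GFLOPS C90  -> 180 s (3 minutes)
peta_time = seconds_to_solve(job, 1e15)  # 1 PFLOPS      -> 0.00144 s

# On any fixed job, a petaflop machine is 1,000x faster than a teraflop machine.
speedup = seconds_to_solve(job, 1e12) / seconds_to_solve(job, 1e15)
```

The same division shows why each prefix jump (giga to tera to peta) matters: every factor of 1,000 in FLOPS turns hours of runtime into seconds.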

Supercomputer Architecture

Supercomputer design varies from model to model, but machines generally fall into two categories: vector computers and parallel computers. Vector computers use a very fast data "pipeline" to move data from the computer's components and memory to a central processor. Parallel computers use multiple processors, each with its own memory bank, to split up data-intensive tasks.

A good analogy for contrasting vector and parallel computers: a vector computer is like a single person solving a series of 20 math problems in consecutive order, while a parallel computer is like 20 people, each solving one problem in the series. Even if the single person (vector) were a master mathematician, the 20 people would finish the series much more quickly. Other major differences between vector and parallel processors include how data is handled and how each machine allocates memory. A vector machine is usually a single super-fast processor with all of the computer's memory allocated to its operation. A parallel machine has multiple processors, each with its own memory. Vector machines are easier to program, while parallel machines, which must coordinate data across many processors (in some cases more than 10,000), can be tricky to orchestrate. To continue the analogy, 20 people working together (parallel) could have trouble communicating data among themselves, whereas a single person (vector) avoids these communication complexities entirely.
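The 20-problems analogy can be sketched in a few lines of Python. The "problems" and the worker count here are illustrative stand-ins, not real supercomputer code; the point is only that a batch of independent problems split across workers yields the same answers as solving them consecutively, and that the splitting itself is what a parallel machine's scheduler must orchestrate:

```python
from concurrent.futures import ThreadPoolExecutor

def solve(n):
    """Stand-in for one math problem: the sum of the first n integers."""
    return n * (n + 1) // 2

problems = list(range(1, 21))  # the series of 20 problems from the analogy

# Single worker (the lone mathematician): problems solved in consecutive order.
sequential = [solve(p) for p in problems]

# 20 workers (the 20 people): the series is split up, one problem each.
with ThreadPoolExecutor(max_workers=20) as pool:
    parallel = list(pool.map(solve, problems))

# Both strategies reach identical answers; only the scheduling differs.
assert sequential == parallel
```

Real vector hardware does not literally loop one problem at a time, of course; it streams array data through a pipelined processor. But the coordination cost the analogy points at is real: distributing work and gathering results is exactly what makes large parallel machines tricky to program.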

Recently, parallel vector computers have been developed to take advantage of both designs.

Uses of Supercomputers

Supercomputers are called upon to perform the most compute-intensive tasks of modern times, and as the machines have developed over the last 30 years, so have the tasks they typically perform. Modeling real-world complex systems such as fluid dynamics, weather patterns, seismic activity, and nuclear explosion dynamics represents the most modern use of supercomputers. Other tasks include human genome sequencing, credit card transaction processing, and the design and testing of modern aircraft.

Manufacturers

Although numerous companies manufacture supercomputers, information about purchasing one is not always easy to find on the Internet. The price tag for a custom-built supercomputer can range from about $500,000 for a Beowulf system up to millions of dollars for the newest and fastest machines. Scientific Computing publishes a list of the world's top 10 most powerful commercially available computer systems. Cray provides an informative Web site with product descriptions, photos, company information, and an index of current developments.

Scyld Computing Corporation (SCC) provides a Web site with detailed information about its Beowulf operating system and the systems developed to allow multiple computers to operate under one platform.

IBM has produced, and continues to produce, some of the most cutting-edge supercomputer technology. Its "Blue Gene" supercomputer, being constructed in collaboration with Lawrence Livermore National Laboratory, is expected to run 15 times faster (at 200 teraflops) than IBM's current supercomputers. IBM is also working on what it calls a "self-aware" supercomputer, named "Blue Sky," for the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. Blue Sky will be used to work on colossal computing problems such as weather prediction, and it is designed to repair itself without human intervention.

Intel has developed a line of supercomputers known as Intel TFLOPS, which use thousands of Pentium Pro processors in a parallel configuration to meet customers' supercomputing demands. Information about these machines can be found on Intel's Web site.