Early Machines

The early supercomputer architectures invented by Seymour Cray relied on compact designs and local parallelism to reach high performance. He realized that increasing processor speed helped little unless the rest of the system improved as well. The first mass-produced supercomputer, the CDC 6600, addressed this problem by providing ten simple peripheral processors that handled all reading and writing of data to and from main memory, allowing the CPU to concentrate solely on processing the data. As time went on, the CDC 6600’s spot as the fastest computer was taken by its successor, the CDC 7600, which was similar to the 6600 in general organization but added instruction pipelining to improve performance.

Energy Usage

A typical supercomputer consumes large amounts of power, almost all of which is converted into heat that requires cooling. The Tianhe-1A, for example, consumes 4.04 megawatts of electricity, at an estimated cost of $400 an hour to power and cool the machine. The energy efficiency of computer systems is generally measured in FLOPS per watt. Heat management is a major issue in complex electronic devices and affects these systems in multiple ways. The design power and CPU power dissipation in supercomputing surpass those of conventional computers.
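The FLOPS-per-watt metric and the hourly cost figure above can be checked with a little arithmetic. A minimal sketch, assuming the machine's published Linpack score of roughly 2.57 petaflops (a figure not stated in the text):

```python
# Rough energy-efficiency figures for the Tianhe-1A.
# Assumptions: 4.04 MW power draw (from the text) and a Linpack
# score of roughly 2.57 petaflops (assumed, not in the text).
power_watts = 4.04e6   # 4.04 megawatts
rmax_flops = 2.57e15   # ~2.57 petaflops

# Energy efficiency in FLOPS per watt
flops_per_watt = rmax_flops / power_watts
print(f"{flops_per_watt / 1e6:.0f} MFLOPS per watt")  # 636 MFLOPS per watt

# The $400/hour estimate implies an electricity price of about
# $0.10 per kilowatt-hour:
cost_per_hour = 400.0
kwh_per_hour = power_watts / 1000.0  # 4040 kWh consumed each hour
print(f"${cost_per_hour / kwh_per_hour:.3f} per kWh")
```

By this measure the machine delivers on the order of hundreds of megaflops per watt, which is why FLOPS per watt, rather than raw FLOPS, is the figure of merit for large installations.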

Operating Systems

Since the end of the 20th century, supercomputer operating systems have undergone many changes, all driven by changes in supercomputer architecture. Early machines ran custom operating systems in order to gain speed; the trend since then has been toward generic operating systems such as Linux. Even though modern supercomputers generally run Linux, each supercomputer manufacturer has its own Linux derivative. No industry standard exists, partly because different hardware designs require operating-system changes tailored to each machine.

Software Tools

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Commonly distributed software tools include standard APIs such as MPI and PVM, VTL, and open-source solutions such as Beowulf. In the most common scenario, environments such as PVM and MPI are used for loosely connected clusters, while OpenMP is used for tightly coordinated shared-memory machines. Parallel programs are quite difficult to debug and test, and special techniques are needed for testing and debugging such applications.
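The two programming models mentioned above can be illustrated in miniature. The sketch below uses Python's standard library as a stand-in (not real MPI or OpenMP code): explicit message passing through queues, in the style of MPI, versus lock-protected updates to shared state, in the style of OpenMP.

```python
# Illustration of the two dominant parallel-programming models using
# Python threads and queues as stand-ins; not actual MPI/OpenMP code.
import threading
import queue

data = list(range(100))  # work to be summed in parallel

# --- Message passing (MPI style): workers exchange data only via
# --- explicit sends and receives, here modeled with queues.
def mp_worker(rank, inbox, outbox):
    chunk = inbox.get()             # "receive" a chunk of the data
    outbox.put((rank, sum(chunk)))  # "send" the partial sum back

inboxes = [queue.Queue() for _ in range(4)]
results = queue.Queue()
workers = [threading.Thread(target=mp_worker, args=(r, inboxes[r], results))
           for r in range(4)]
for w in workers:
    w.start()
for r in range(4):
    inboxes[r].put(data[r * 25:(r + 1) * 25])  # distribute the chunks
for w in workers:
    w.join()
total = sum(results.get()[1] for _ in range(4))
print(total)  # 4950, the sum of 0..99

# --- Shared memory (OpenMP style): workers update one shared value,
# --- serialized by a lock to avoid a race condition.
shared = {"total": 0}
lock = threading.Lock()

def sm_worker(chunk):
    partial = sum(chunk)
    with lock:                      # critical section
        shared["total"] += partial

workers = [threading.Thread(target=sm_worker,
                            args=(data[r * 25:(r + 1) * 25],))
           for r in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(shared["total"])  # also 4950
```

Both versions compute the same sum; the difference is that the message-passing version never shares state between workers, which is why that model scales to clusters where no common memory exists, while the shared-memory version depends on all workers seeing the same address space.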


