Most people around the world have owned a computer at one point or another, but few are fortunate enough to own a supercomputer. A supercomputer is a computer with a high level of performance compared to a regular computer. The performance of a supercomputer is measured in FLOPS, which stands for floating-point operations per second. As of 2017, every computer on the list of the 500 fastest computers in the world ran the Linux operating system. Many countries are now doing extra research to build even faster, more powerful, and technologically superior supercomputers. Supercomputers play a big role in computational science and are used for a wide array of tasks in fields such as quantum mechanics, weather forecasting, climate research, and oil and gas exploration.
Supercomputers first appeared in the 1960s, with the Atlas at the University of Manchester, the IBM 7030 Stretch, and a series of computers at the Control Data Corporation (CDC) designed by Seymour Cray. The Atlas was a joint venture between Ferranti and Manchester University, and was designed to operate at processing speeds approaching one microsecond per instruction, or about one million instructions per second. The first Atlas was commissioned on December 7, 1962 as one of the world's first supercomputers, and at the time it was considered the most powerful computer in the world. In 1972 Cray left CDC to form his own company, Cray Research. Four years later he delivered the 80 MHz Cray-1, which became one of the most successful supercomputers in history. It used integrated circuits and an increased word size to reach 136 megaflops, far faster than the 3-megaflop CDC 6600. In 1985 the Cray-2 was released, an eight-processor liquid-cooled machine with Fluorinert pumped through it as it operated. It performed at 1.9 gigaflops and was the world's second fastest machine, after the M-13 supercomputer in Moscow.
A supercomputer has one essential feature: it is a general-purpose machine you can use in all kinds of different ways. You can send emails, play games, or do many other things simply by running a different program. A high-end cellphone, such as an Android phone or an iPhone, is a powerful little pocket computer that runs programs by loading different applications, which are simply computer programs by another name.
Some supercomputers are engineered to do very specific jobs, and two of the most famous supercomputers of recent times were engineered this way. IBM's Deep Blue machine, built in 1997, was made specifically to play chess against Russian grandmaster Garry Kasparov. Later the Watson machine, named after IBM's founder Thomas Watson and his son, was engineered to play the game of Jeopardy!. Specially designed machines like these can be optimized for particular problems. For example, Deep Blue was designed to search through huge databases of potential chess moves and evaluate which move was best in a particular situation, while Watson was designed to analyze tricky general-knowledge questions phrased in regular human language.
Twice a year, the TOP500 project publishes a list ranking the 500 most powerful supercomputers in the world. Names on the list include the Sunway TaihuLight and the Tianhe-2, both Chinese computers; the Sunway TaihuLight was at that time ranked the fastest supercomputer in the world. The machines in the top five come from China, Switzerland, Japan, and the United States.
The early supercomputer architectures pioneered by Seymour Cray relied on compact designs and local parallelism to reach high performance. He realized that increasing processor speed did not help unless the rest of the system improved as well. The first mass-produced supercomputer, the CDC 6600, addressed this problem by providing ten simple peripheral processors whose only job was to read and write data to and from main memory, allowing the CPU to concentrate on processing the data. The CDC 6600 was later displaced as the fastest computer by its successor, the CDC 7600, which kept the 6600's general organization but added instruction pipelining to improve performance.
A typical supercomputer consumes large amounts of power, almost all of which is converted into heat that requires cooling. The Tianhe-1A consumes 4.04 megawatts of electricity, and the estimated cost to power and cool the machine is $400 an hour. The energy efficiency of computer systems is measured in FLOPS per watt. Heat management is a big issue in electronic devices and affects these systems in multiple ways; the design power and CPU power dissipation in supercomputing surpass those of normal computers.
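The FLOPS-per-watt figure is just sustained performance divided by power draw. As a sketch, the 4.04 MW power figure comes from the text above, while the roughly 2.566-petaflop Linpack score for the Tianhe-1A is an assumed figure used here only for illustration:

```python
# Energy efficiency in FLOPS per watt.
# power_watts is from the text; performance_flops is an assumed
# Linpack figure for the Tianhe-1A, used only as an illustration.
power_watts = 4.04e6          # 4.04 megawatts
performance_flops = 2.566e15  # ~2.566 petaflops (assumed)

efficiency = performance_flops / power_watts  # FLOPS per watt
print(f"{efficiency / 1e6:.0f} megaflops per watt")
```

By this arithmetic the machine delivers on the order of 635 megaflops for every watt it consumes, which is why efficiency rather than raw speed often limits supercomputer designs.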
Since the end of the 20th century, supercomputer operating systems have changed many times, as a result of changes in supercomputer architecture. Early machines had their own custom operating systems in order to gain speed, but the trend now is toward generic operating systems such as Linux. Even though modern supercomputers mostly run Linux, each supercomputer manufacturer has its own Linux derivative, and no industry-wide standard exists, because each design's differing hardware requirements push the operating system to be tailored to that design.
The parallel architecture of supercomputers often dictates the use of special programming techniques to exploit their speed. Distributed software tools include standard APIs such as MPI and PVM, VTL, and open-source solutions such as Beowulf. In the most common scenario, environments such as PVM and MPI are used for loosely connected clusters, while OpenMP is used for tightly coordinated shared-memory machines. Parallel programs are quite difficult to debug and test, and special techniques are needed for testing and debugging such applications.
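The basic pattern these tools support is the same: split the work, compute the pieces concurrently, then combine the partial results. A minimal sketch of that scatter/compute/reduce pattern, using Python's standard multiprocessing module as a stand-in (real MPI code would need the mpi4py library and an MPI runtime, which this example does not assume):

```python
# Data-parallel scatter/compute/reduce, the pattern MPI programs follow.
# Python's multiprocessing stands in for a real message-passing runtime.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one worker (one 'rank'): sum the squares of its slice."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000))
    # "Scatter": deal the data out across 4 workers.
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(partial_sum, chunks)  # compute in parallel
    total = sum(partials)  # "Reduce": combine the partial results
    print(total)
```

The debugging difficulty mentioned above comes largely from this structure: correctness depends on how the work is split and recombined, not just on each worker's code.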
As of 2018, IBM is considered to be the leader in the production of supercomputers, with HP in second place. IBM is expected to deliver its newest system, named Summit, to the Oak Ridge National Laboratory. This machine is capable of reaching a peak of 200 petaflops, which would make Summit about two times as fast as the TaihuLight if the report is true. Summit will use IBM Power9 CPUs and Nvidia Volta GPUs. With only about 3,400 nodes, Summit will deliver over five times the computational performance of Titan's 18,688 nodes. Each node will have over a terabyte of coherent memory, and Summit will also have 800 GB of non-volatile RAM that serves as extended memory.
Back in late 2016, HP unveiled a working prototype of a new supercomputer named "the Machine". HP claimed that at the time this was the world's first demonstration of memory-driven computing. HP's simulations showed that a memory-driven computer can reach speeds up to 8,000 times faster than regular computers. At its core the Machine uses photonics, transmitting information via light rather than the electrons used in normal PCs. The prototype used 8 terabytes of memory in total, about 30 times the amount a regular server may have. HP plans to eventually develop computers with hundreds of terabytes of memory, which could make the Machine even more powerful.
In the world of computing, floating-point operations per second, or FLOPS, is the standard way to measure computer performance. This unit of measurement is useful in fields of scientific computation that require floating-point calculations. The term FLOP, for floating-point operation, is the unit being counted. Floating-point arithmetic is a tool used for very small or very large real numbers, or for computations that require a large dynamic range. A floating-point representation is similar to scientific notation, except that everything is carried out in base two rather than base ten. Floating-point representations can support a wider range of values than fixed point, with the capability to represent both very small and very large numbers.
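The base-two scientific notation described above can be seen directly by pulling apart the bits of a standard IEEE 754 double-precision float, which stores a number as sign × 1.fraction × 2^(exponent − 1023). A short sketch using only Python's standard struct module:

```python
# Decompose a double into its IEEE 754 fields: sign bit, unbiased
# exponent, and 52-bit fraction -- base-two scientific notation.
import struct

def decompose(x):
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF      # 11 biased exponent bits
    fraction = bits & ((1 << 52) - 1)    # 52 fraction bits
    return sign, exponent - 1023, fraction

# 6.5 in binary is 110.1, i.e. 1.101 x 2**2
sign, exp, frac = decompose(6.5)
print(sign, exp, bin(frac))
```

Here 6.5 comes out with sign 0, exponent 2, and fraction bits 101 followed by 49 zeros, which is exactly 1.101 × 2² in base two.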
MIPS, or millions of instructions per second, is another metric that measures computer performance; it measures the integer performance of a computer. Examples of integer operations include data movement or value testing. MIPS is a good performance benchmark when a computer is used for database queries, word processing, spreadsheets, or running many virtual operating systems. Frank McMahon invented the terms FLOPS and MFLOPS so that he could compare the supercomputers of the day by the number of floating-point calculations they performed per second. This was a much better method than using MIPS to compare computers, as that statistic usually had little bearing on the arithmetic capability of the machine.
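A machine's theoretical peak FLOPS, the number quoted in rankings like the TOP500, is conventionally estimated as cores × clock rate × floating-point operations per cycle per core. A sketch of that formula with purely illustrative numbers (the 16-core, 3 GHz, 16-FLOPs-per-cycle machine below is hypothetical, not a real spec sheet):

```python
# Theoretical peak FLOPS: cores * clock (Hz) * FLOPs per cycle per core.
def peak_flops(cores, clock_hz, flops_per_cycle):
    return cores * clock_hz * flops_per_cycle

# Hypothetical machine: 16 cores at 3 GHz, 16 FLOPs/cycle via vector units.
peak = peak_flops(16, 3.0e9, 16)
print(f"{peak / 1e9:.0f} gigaflops peak")
```

Real benchmarks such as Linpack then measure how much of that theoretical peak a machine actually sustains, which is why measured FLOPS are always lower than this product.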
According to Dictionary.com, "semantics refers to the correct interpretation of the meaning of a word or sentence." To use a word semantically is to use it in line with its proper meaning; misusing a word means using it in a way that is not semantically correct. Most HTML tags have semantic meaning: the element says something about the information between its opening and closing tags. For example, when a browser encounters an h1 heading, it treats that heading as the most important heading on the page.
When writing semantic markup, HTML tags are used to tell the browser what kind of content an element contains. Tags have become a way to tell any machine something about the meaning of the content. To write semantic markup, people must use HTML tags correctly so that the markup is both human and machine readable.
On an average website, good CSS can make bad markup invisible to human visitors, but no amount of styling will make bad markup more meaningful to a computerized visitor such as a search engine's web crawler. According to Bruce Lawson, semantic use of HTML "enhances accessibility, searchability, internationalization, and interoperability." Writing semantic markup is mandatory if you want to be accessible to all visitors and available to visitors from around the world. When the web can be read by both humans and computers, it becomes more accessible, since computers are better able to analyze its contents.
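To make the "machine readable" point concrete, here is a small sketch of what a crawler-like program can do with semantic markup: using only Python's standard html.parser module, it extracts the h1 heading, the element that semantically marks the page's main topic. The sample page is invented for illustration:

```python
# A tiny crawler-style parser: because <h1> semantically marks the main
# heading, a machine can extract the page's topic without any styling cues.
from html.parser import HTMLParser

class HeadingFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.headings.append(data)

finder = HeadingFinder()
finder.feed("<h1>Supercomputers</h1><p>A short history.</p>")
print(finder.headings)
```

Had the author styled a plain div to look like a heading instead of using h1, this program (and a real search engine) would learn nothing about the page's structure, which is exactly Lawson's point.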