High Performance Computing (HPC) generally refers to the practice of aggregating computing power to deliver much higher performance than a typical desktop computer or workstation can attain, in order to solve complex problems in science, engineering, weather forecasting, simulation, and other demanding fields. The terms supercomputing and High Performance Computing are used interchangeably, and the technology has evolved over the years: the large monolithic mainframes of the past have been replaced by thousands of processors clustered together to harness their combined processing power. The synergy of a large number of computing nodes has enhanced the performance of computer systems, enabling them to do more in less time and at lower cost. This way of computing relies on parallel processing, so that applications run more efficiently and quickly.
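The idea of parallel processing described above can be sketched in a few lines: split one large job into chunks and hand each chunk to a separate worker, much as a cluster hands work to its nodes. This is a minimal illustration using only the Python standard library, with a deliberately CPU-bound toy task (counting primes); the function names and chunk sizes are illustrative choices, not part of any particular HPC system.

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split one big range into four chunks and process them in parallel,
    # the way a cluster splits a problem across its nodes.
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # total primes below 100,000
```

On a multi-core machine the four chunks are processed simultaneously, so the wall-clock time shrinks roughly with the number of workers; scaling the same pattern to thousands of nodes is what gives a cluster its power.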
High Performance Computing is not limited to any one discipline; any domain in need of high processing power can make use of it. Each individual computer, or node as it is typically called, has two to four cores. A cluster can range from as few as four nodes to forty thousand, and some clusters even boast hundreds of thousands of cores. An African proverb says that one finger cannot annihilate a louse, which neatly captures the effect of combined effort: numerous cores working together achieve more. How do they communicate in their thousands? Cluster interconnect options such as InfiniBand and Fibre Channel allow for efficient message passing between nodes. The operating system used on most clusters is Linux. High Performance Computing has brought us to an age where we can process data at the speed of innovation.
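The message-passing pattern mentioned above can be sketched without a real cluster: workers exchange messages through channels, each computes a partial result, and a coordinator combines the replies. In this standard-library sketch, threads and queues stand in for nodes and the interconnect; on a real cluster this role is played by a message-passing library (commonly an MPI implementation) running over InfiniBand or a similar fabric. All names here are illustrative.

```python
import queue
import threading

def worker(rank, inbox, outbox):
    # Each "node" waits for a message containing its chunk of work,
    # computes a partial sum, and sends the result back with its rank.
    data = inbox.get()
    outbox.put((rank, sum(data)))

def parallel_sum(chunks):
    """Distribute chunks to workers by message passing and combine replies."""
    outbox = queue.Queue()
    threads = []
    for rank, chunk in enumerate(chunks):
        inbox = queue.Queue()
        t = threading.Thread(target=worker, args=(rank, inbox, outbox))
        t.start()
        inbox.put(chunk)          # send the work out
        threads.append(t)
    partials = [outbox.get() for _ in chunks]  # gather the replies
    for t in threads:
        t.join()
    return sum(value for _rank, value in partials)

if __name__ == "__main__":
    chunks = [list(range(i * 10, (i + 1) * 10)) for i in range(4)]
    print(parallel_sum(chunks))   # same result as summing 0..39 directly
```

The design point is that workers share nothing and communicate only by messages, which is exactly why the pattern scales from four threads on one machine to tens of thousands of nodes on a cluster.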