In this article, you will read about:
- how Telekom combines cloud resources with supercomputing capacities in Stuttgart,
- how companies can benefit from combining high-performance computing from the public cloud with supercomputing,
- and what makes this combination currently unique on the market.
Would you like a little bit more? Some workloads are too large even for high-performance computing (HPC) in the scalable public cloud, for example when highly complex calculations or analyses must run at very high speed. In such cases, the capacities of a highly specialized, classic supercomputer can help. But what is the difference between HPC from the public cloud and a supercomputer?
"In classic cloud environments such as the Open Telekom Cloud, the technology currently in use reaches its natural limits at around 1,000 x86 cores simultaneously assigned to a single problem," says Alfred Geiger, Managing Director of Höchstleistungsrechner für Wissenschaft und Wirtschaft GmbH (HWW), the joint operating company of the High-Performance Computing Center Stuttgart (HLRS), T-Systems, Porsche AG and the Karlsruhe Institute of Technology (KIT).
One reason for the limit is the network: the computing cores must be able to communicate with each other at all times without blocking, with low latency and sufficient bandwidth, so that the cluster can work efficiently. The required packing density and communication infrastructure would go far beyond the scope of a general-purpose cloud environment. Geiger: "At the High-Performance Computing Center in Stuttgart, we can currently process workloads with up to 180,000 cores, i.e. up to 180 times more complex or faster. This is due to the architecture of the hardware, which is completely designed for supercomputing."
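Why raw core count alone does not guarantee the speedup Geiger describes can be sketched with Amdahl's law, a standard model from parallel computing (not from the article itself): the part of a workload that cannot be parallelized caps the benefit of adding cores, which is why only highly parallel problems on a non-blocking interconnect profit from 180,000 cores. The figures below are illustrative assumptions, not Telekom or HLRS benchmarks.

```python
# Illustrative sketch: Amdahl's law explains why jumping from ~1,000
# cloud cores to ~180,000 supercomputer cores only pays off for
# workloads whose serial (non-parallelizable) share is tiny.
# The parallel fractions chosen here are hypothetical examples.

def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Theoretical speedup over one core, given the parallelizable share."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.99, 0.999, 0.99999):
    cloud = amdahl_speedup(1_000, p)
    supercomputer = amdahl_speedup(180_000, p)
    print(f"parallel fraction {p}: "
          f"1,000 cores -> {cloud:,.0f}x, "
          f"180,000 cores -> {supercomputer:,.0f}x")
```

With a 99% parallel share, 180,000 cores are barely faster than 1,000; only at 99.999% does the supercomputer's core count translate into a dramatic advantage, which is exactly the class of workload the Stuttgart architecture targets.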