
High Performance Computing

Simulation and modeling need powerful computing resources that fit the respective model. Since such calculations are usually of a temporary nature, it is not worth purchasing a costly high-performance computer for selective use. The cloud offers needs-oriented access to computing resources that meet these requirements – whether CPU, GPU, Bare Metal Server or Super Computing – and users only pay for what they use.

Consult us now
Book Open Telekom Cloud now
 

0 Waiting time

Expand the limits of your own environment and reduce waiting times

25.95 PetaFlops

Grow flexibly from 1 to 720,000 cores and scale up or down in individual hybrid scenarios

Save up to 30%

Including all operating costs from just EUR 0.015 per core per hour

 
Icon of a cloud with server in the background

What are the advantages of HPC from the public cloud?

  • Reduce run times of technical simulations and genome analyses with flexible scaling
  • Resources can be used as needed, and only the services actually used are billed – no investment or operating costs for you
  • Choose from a wide range of state-of-the-art technologies for your projects – CPU, GPU, Bare Metal, FPGA etc.
Icon of a factory with a tile icon in the background

Which sectors can you support?

  • As a large systems house, we are familiar with every sector
  • With profound expertise in the industry, our experts support you at every stage of a project
  • Consulting – implementation – operation
Icon of a speedometer with diagram in the background

How can I access resources quickly and securely?

  • The Open Telekom Cloud gives you access to the latest CPUs and GPUs
  • Resources can be provided automatically in minutes
  • Eliminate limits – supercomputing resources can be activated in next to no time
 
 
Satellite image from the Copernicus Data Space Ecosystem

HPC for Copernicus Data Space Ecosystem

The Copernicus Data Space Ecosystem is one of the largest public platforms for Earth observation data in the world. For secure and smooth data processing and analysis, the Copernicus Data Space Ecosystem relies on the powerful storage and computing resources from the Open Telekom Cloud.

Learn more
 
 

T-Systems and High Performance Computing – Use cases

Manufacturing & Engineering

Cloud-based HPC resources allow manufacturing companies to use flexible and efficient clusters for numerical simulation. These can be provided as needed in order to achieve an optimal total cost of ownership (TCO).


Use case:

  • Product design
  • Design simulation and results analysis
  • Simulation of complex problems


Infrastructures:

Open Telekom Cloud and hww logos


Platform:

  • Infrastructure only
  • HPCaaS – customized Service by T-Systems


Software / Application:

Health

Cloud-based HPC resources enable flexible and secure access to unlimited performance for every development. AI and GPU-supported development can be accelerated ten-fold.


Use case:

  • Life science and personalized medicine
  • Cardiovascular simulations
  • Early detection of tumors


Infrastructures:

Open Telekom Cloud and hww logos


Platform:

  • Infrastructure only
  • HPCaaS – customized Service by T-Systems


Software / Application:

  • Partner
Pharma

Cloud-based HPC resources enable flexible and secure access to unlimited performance for every development. AI and GPU-supported development can be accelerated ten-fold.


Use case:

  • Genome analysis and protein folding
  • Chemical simulation
  • Life science and personalized medicine


Infrastructures:

Open Telekom Cloud and hww logos


Platform:

  • Infrastructure only
  • HPCaaS – customized Service by T-Systems


Software / Application:

  • Partner
Science & Research

Cloud-based HPC resources enable flexible and secure access to unlimited performance for every research project.


Use case:

  • Geoinformatics
  • Simulation in chemistry and quantum mechanics
  • Materials research


Infrastructures:

Open Telekom Cloud and hww logos


Platform:

  • Infrastructure only
  • HPCaaS – customized Service by T-Systems


Software / Application:

  • Scientific Computing by T-Systems
Finance & Insurance

Cloud-based HPC resources allow financial companies, fintech and insurance providers to access flexible and efficient grid clusters.


Use case:

  • Risk and market analyses
  • Portfolio stress tests and asset allocation
  • Exploitation of new risk algorithms


Infrastructures:

Open Telekom Cloud and hww logos


Platform:

  • Infrastructure only
  • HPCaaS – customized Service by T-Systems


Software / Application:

  • Partner
Automotive

Cloud-based HPC resources allow manufacturing companies to use flexible and efficient clusters for numerical simulation. These can be provided as needed in order to achieve an optimal total cost of ownership (TCO).


Use case:

  • Training of self-driving cars
  • Product design
  • Design simulation and results analysis


Infrastructures:

Open Telekom Cloud and hww logos
EDGE logo


Platform:

  • Infrastructure only
  • HPCaaS – customized Service by T-Systems


Software / Application:

Media & Gaming

Cloud-based GPU and CPU resources allow media and gaming companies to access flexible and efficient clusters. Development, post-processing, rendering and streaming are served from a single environment.


Use case:

  • Rendering of films and 3D productions
  • Transcoding of TV and video streams


Infrastructures:

Open Telekom Cloud Logo


Platform:

  • Infrastructure only
  • HPCaaS – customized Service by T-Systems


Software / Application:

  • Partner
Geology & Oil & Gas

Ready for any and all seismic applications: cloud storage with InfiniBand connectivity for high-performance parallel file systems (Lustre, etc.), or servers with dedicated CPUs for higher performance.


Use case:

  • Exploration of the ocean floor, mountains or caves
  • Earthquake forecasting
  • Investigation and prediction of new raw material deposits


Infrastructures:

Open Telekom Cloud and hww logos


Platform:


Software / Application:

  • Partner
 

Our customers

CERN Logo
Copernicus Logo
European Synchrotron Radiation Facility Logo
 

Our partners

Intel Xeon Logo
Adaptive Computing Logo
Mundi Web Services Logo
hww Logo
NVIDIA Logo
OpenStack Logo
 
 

High Performance Computing: Frequently asked questions

What is High Performance Computing?

The term High Performance Computing (HPC) refers to technologies and processes that can be used to execute complex computing tasks at high performance. For this purpose, the tasks are usually "parallelized": Many computers are connected to form a cluster and simultaneously work on the calculation. Typical fields of use are science and research, simulation technology, artificial intelligence, or graphics calculations.
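The principle of parallelization can be sketched in a few lines (a toy workload using Python's standard multiprocessing module, not any specific HPC framework):

```python
from multiprocessing import Pool

def simulate_cell(params: int) -> int:
    """Stand-in for one independent piece of a larger simulation."""
    return params * params  # hypothetical per-task computation

if __name__ == "__main__":
    tasks = range(8)
    # Serial: one worker handles every task in turn.
    serial = [simulate_cell(t) for t in tasks]
    # Parallel: the same tasks are spread over a pool of workers,
    # analogous to cluster nodes working on the calculation simultaneously.
    with Pool(processes=4) as pool:
        parallel = pool.map(simulate_cell, tasks)
    assert serial == parallel  # same result, computed concurrently
```

The pattern only pays off when the tasks are genuinely independent, which is exactly what "parallelizing" a workload means.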

Under which circumstances is it worthwhile for companies to use HPC resources?

Higher performance means higher costs, of course. The prices of servers for High Performance Computing are higher than those of general-purpose virtual machines. The question companies must ask themselves is: “At what point do these additional costs become worthwhile?”

Unfortunately, there is no formula that can be used to calculate the effectiveness of High Performance Computing compared to traditional IT resources. Ultimately, three aspects determine whether it is beneficial for companies to use special HPC technology:

  • The amount of data to be processed
  • The available time
  • The complexity of the task

It is therefore best to seek the advice of experts. Our consultants are available to share their experience and help your IT team get the cloud resources you need up and running.

What options are there for running HPC clusters?

To achieve the necessary processing power for High Performance Computing, a large number of servers are combined to form a so-called HPC cluster. Together, these can reach the computing power of a supercomputer. There are three implementation scenarios for companies that wish to operate such a cluster:

  • All cloud: The cluster runs entirely in the cloud.
  • On premise: The HPC cluster is operated entirely in the company's own data center.
  • Cloud bursting: The cluster runs in the company's own data center, but reserves in the public cloud are used to handle peak loads.

Companies most frequently use HPC resources in a so-called bursting scenario. In this case, they use the public cloud as a kind of overflow pool or supplement to their own IT resources in a hybrid cloud model, drawing on their own IT capacities until they are fully utilized. Resources that one’s own IT department cannot provide, as well as workloads that require highly specialized IT resources, are moved to the cloud as needed. In doing so, companies pay for HPC capabilities only as long as they are needed.
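The bursting decision described above amounts to a simple placement rule; a minimal sketch (all names are illustrative, not an Open Telekom Cloud API):

```python
def place_job(cores_needed: int, local_free_cores: int) -> str:
    """Toy cloud-bursting rule: prefer on-premises capacity and
    overflow to the public cloud only when it is exhausted."""
    if cores_needed <= local_free_cores:
        return "on-premises"
    return "public-cloud"

# With 128 free cores on premises, small jobs stay local...
assert place_job(64, 128) == "on-premises"
# ...while a peak-load job bursts into the cloud and is billed
# only for as long as it runs.
assert place_job(512, 128) == "public-cloud"
```

Real HPC schedulers apply far richer policies (queues, priorities, data locality), but the overflow principle is the same.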

What is the difference between traditional cloud resources and HPC services?

High Performance Computing places special demands on a cloud infrastructure that general-purpose services usually cannot meet.

Non-specialized cloud services are optimized to be attractive and cost-efficient for the largest possible number of customers. Generally speaking, these customers place great emphasis on being able to quickly and easily adapt the cloud services they have booked to their changing needs. The providers achieve this through a high degree of virtualization – that is, the customer does not book a physical server, but a virtual machine simulated by software. In the process, several customers often share the computing power of a real, physical server. Virtualization makes the flexibility, cost efficiency and scalability of cloud services possible. However, it also limits performance, as the physical machines have more software layers to run.

To get as much performance out of the hardware as possible, cloud services for High Performance Computing usually rely on dedicated resources that are "closer to the metal", i.e., contain fewer virtualization layers. In addition, they usually provide optimized hardware, i.e., more powerful processors and network connections.

For example, servers are frequently equipped with additional Graphics Processing Units (GPUs) used for HPC tasks. These were originally developed to relieve the main computer processor of graphical calculations. Typical use cases for GPUs are, e.g., graphics-intensive applications from the entertainment industry. For this purpose, they contain a large number of processor cores, so-called shaders, which can execute many uniform calculations in parallel. This also makes them ideal for applications in the fields of machine learning or artificial intelligence. For example, the Open Telekom Cloud offers GPU-accelerated Elastic Cloud Servers of flavors p2, p2v, g6 and pi2, as well as Bare Metal Servers physical.p1 and physical.p2, which are based on Nvidia P100, V100, or T4 GPUs.

However, there are certain applications for which GPUs are not suitable due to their highly specialized architecture. For these cases, the Open Telekom Cloud offers virtual machines and bare metal servers in the High Performance category, in which the majority of the calculations are performed using a powerful CPU (central processing unit). They are intended, for example, for High Performance scenarios such as complex simulations. These include virtual experiments with chemical reactions, simulations of air flows, or crash tests. For these use cases, the Open Telekom Cloud has Bare Metal Server physical.h2 or Elastic Cloud Server flavors hl1 and h2 available, each containing a high number of CPU cores.

How fast does the network connection need to be for HPC?

In scenarios such as seismic surveys in the search for oil and gas deposits, data sets are generated that can reach hundreds of gigabytes or even terabytes. Uploading this volume of data to the cloud can take a long time, even with a fast connection. For example, it takes more than 22 hours to upload a terabyte of data to the cloud with a transfer speed of 100 Mbit/s.

If it needs to go quicker, companies should use a dedicated direct connection. This enables transfer rates of up to 10 Gbit/s to the Open Telekom Cloud. As a result, the time required to upload a terabyte of data is reduced from 22 hours to just 13 minutes.
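The transfer times quoted above follow directly from dividing the data volume by the line rate; for example:

```python
def upload_hours(terabytes: float, mbit_per_s: float) -> float:
    """Transfer time in hours for a given data volume and line rate
    (decimal units: 1 TB = 8e12 bits, 1 Mbit/s = 1e6 bit/s)."""
    bits = terabytes * 8e12
    return bits / (mbit_per_s * 1e6) / 3600

print(round(upload_hours(1, 100), 1))          # 22.2 hours at 100 Mbit/s
print(round(upload_hours(1, 10_000) * 60, 1))  # 13.3 minutes at 10 Gbit/s
```

Protocol overhead and link utilization push real-world times somewhat higher, so these figures are best-case estimates.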

It often turns out that the amount of data required is less in practice than in theory. For the simulation of new materials, for example, only around 100 to 200 MB of data need to be uploaded to the cloud. However, up to 100 gigabytes are generated during the calculation – of which only 10 GB are to be returned to the company as a result.

When does HPC from the cloud reach its limits?

Some tasks are so complex that they exceed the capabilities of HPC from the cloud. When the size of a workload demands more than 1,000 x86 cores, you hit a limit with the technology currently in use. The main reason for this limitation is the tiny delays in the networks connecting the processors: The more complex the system becomes, the more connections are needed and the more the delays add up – until adding more processing cores no longer leads to a noticeable increase in computing power. Ensuring delay-free networking in such complex systems is beyond the capabilities of a general-purpose cloud infrastructure. For this task, the Open Telekom Cloud can provide access to the High Performance Computing Center Stuttgart (HLRS).
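One common way to model why extra cores stop paying off is Amdahl's law: if a fraction p of a workload can run in parallel, the speedup on n cores is at most 1 / ((1 − p) + p/n). A quick illustration (the 95% parallel fraction is an assumed example value):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: upper bound on the speedup of a workload
    whose parallelizable fraction is p when run on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, 1,000 cores deliver
# a speedup of only about 20x, and further cores barely help:
print(round(amdahl_speedup(0.95, 1_000), 1))    # 19.6
print(round(amdahl_speedup(0.95, 100_000), 1))  # 20.0
```

Network latency adds a further overhead on top of this serial-fraction limit, which is why tightly coupled supercomputer interconnects still matter.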

Which supercomputers can be used via the Open Telekom Cloud?

Our data centers in Biere and Magdeburg are connected via a dedicated line to the High Performance Computing Center Stuttgart (HLRS), which is jointly operated by T-Systems, Porsche AG and the Karlsruhe Institute of Technology (KIT). Its architecture has been optimized for supercomputing and can currently process workloads with up to 180,000 cores, i.e., up to 180 times more complex or faster than an HPC cluster in the public cloud. Thus, the Open Telekom Cloud can give companies access to a supercomputer that scales flexibly with their needs. Just like the High Performance Computing resources from the Open Telekom Cloud, companies can book the HLRS resources on an as-needed basis.

The Open Telekom Cloud also offers access to one of the fastest computers in Germany. The Hawk supercomputer is equipped with 720,896 CPU cores and has been located on the campus of the University of Stuttgart since early 2020. According to the operator, it performs 27 quadrillion computing operations per second and has a main memory capacity of 1.4 petabytes.

Which HPC software does the Open Telekom Cloud support?

Specialized HPC software and middleware is required for HPC applications. If companies are already using HPC resources on-premises, they can often continue to use the corresponding software if they book additional capacities from the cloud. The prerequisite for this is that the software is also supported by their cloud provider.

The HPC infrastructure of the Open Telekom Cloud, for example, is compatible with applications from Altair. This platform is a kind of command center for HPC administrators: It enables them to deploy, manage and optimize HPC applications in any cloud – whether public, private or hybrid.

In addition, the Open Telekom Cloud supports Moab Cloud/NODUS Cloud Bursting, among others. Other HPC software supported includes UNIVA, SGE, IntelMPI and SpectrumMPI, as well as open-source services such as OpenMPI and SLURM.
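With SLURM, for instance, work is typically submitted as a batch script; a minimal, generic example (partition name, resource counts and the binary are placeholders, not Open Telekom Cloud specifics):

```shell
#!/bin/bash
#SBATCH --job-name=hpc-demo        # job name shown in the queue
#SBATCH --nodes=4                  # number of cluster nodes
#SBATCH --ntasks-per-node=32       # MPI ranks per node
#SBATCH --time=02:00:00            # wall-clock limit (hh:mm:ss)
#SBATCH --partition=compute        # placeholder partition name

# Launch an MPI application across all allocated ranks
# (OpenMPI and SLURM are both named as supported above).
srun ./my_simulation               # placeholder binary
```

Submitted with `sbatch job.sh`, the scheduler queues the job and allocates the requested nodes as they become free.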

What role do certificates play in cloud-based HPC solutions?

In addition to the right software, companies should also pay attention to important certificates. Companies from the automotive industry, for example, are not allowed to use IT capacities without certain certifications – including TISAX Level 3 (Trusted Information Security Assessment Exchange), which attests to particularly high IT security standards.

In addition to other certificates, the Open Telekom Cloud has also been awarded the CSA STAR Level 2 certificate from the Cloud Security Alliance and the Trusted Cloud certificate from the German Federal Ministry for Economic Affairs and Energy. You can find an overview of our certifications here.

How important are security and data protection for HPC?

Data processed using High Performance Computing is very often personal or critical to the company. Therefore, the cloud solutions used must meet a high level of IT security and data protection.

The Open Telekom Cloud, for example, holds certifications in accordance with TCDP 1.0 (Trusted Cloud Data Protection Profile) and BSI C5, making it currently one of the few cloud providers on the market certified as legally compliant in terms of data protection and information security.

The Open Telekom Cloud also complies with many industry-specific regulations. For example, it facilitates the secure processing of data from professional secrecy holders such as lawyers or doctors in accordance with Section 203 of the German Criminal Code (StGB) as well as social data, for example from health insurance companies or medical settlement agencies in accordance with Section 35 of the German Social Code (SGB I).

For many companies, the location of the cloud provider is also important for data protection reasons. This is because they want to avoid the risks arising from the Schrems II ruling of the European Court of Justice. It prohibits the transfer of data to third countries if data protection equivalent to EU standards is not guaranteed there. Not only the location of the servers is relevant, but also the location of the cloud provider's management.

For this reason, many companies prefer to entrust personal or competition-critical data to European providers such as the Open Telekom Cloud, which operates its own data centers in Saxony-Anhalt and the Netherlands and is under European management.

 

More information on the HPC Cloud for you


Relevant products

The Open Telekom Cloud Community

This is where users, developers and product owners meet to help each other, share knowledge and discuss.

Discover now

Free expert hotline

Our certified cloud experts provide you with personal service free of charge.

 0800 3304477 (from Germany)

 +800 33044770 (from abroad)

 24 hours a day, seven days a week

Write an E-Mail

Our customer service is available free of charge via e-mail.

Write an E-Mail

AIssistant Cloudia

Our AI-powered search helps with your cloud needs.