The Elastic Cloud Server now provides the Pi2 flavor for AI-related services and inference scenarios. The main use cases are GPU-based inference workloads such as scene recognition, speech recognition, and natural language processing. Light training workloads are also supported.
The most important features include:
- NVIDIA Tesla T4 GPUs, with up to four T4 cards in the largest flavor
- GPU passthrough
- Up to 8.1 TFLOPS single-precision (FP32) performance per GPU
- Up to 130 TOPS INT8 performance per GPU
- 16 GB GDDR6 memory per GPU
- 300 GB/s memory bandwidth per GPU
- Built-in hardware video engines: 1 NVENC encoder and 2 NVDEC decoders
The supported deep learning frameworks are:
- TensorFlow
- Caffe
- PyTorch
- MXNet
The following operating system images are currently supported:
- Windows Server 2016 Standard 64bit
- CentOS 7.5 64bit
- Ubuntu Server 16.04 64bit
- Support for further images, such as Ubuntu Server 18.04, is still being clarified
The flavor will be provided in AZ 1 with the following sizes:
- Pi2.2xlarge.4: 1 × T4 (16 GB GPU memory), 8 vCPUs, 32 GB memory
- Pi2.4xlarge.4: 2 × T4 (32 GB GPU memory), 16 vCPUs, 64 GB memory
- Pi2.8xlarge.4: 4 × T4 (64 GB GPU memory), 32 vCPUs, 128 GB memory
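To make the scaling across the sizes concrete, the per-GPU figures given above can be aggregated per flavor. This is a small sketch using only numbers stated in this announcement; the flavor table is hard-coded from the list above:

```python
# Per-GPU figures from the Pi2 feature list above.
TFLOPS_FP32_PER_GPU = 8.1
TOPS_INT8_PER_GPU = 130
GPU_MEM_GB_PER_GPU = 16

# (gpu_count, vcpus, memory_gb) per flavor, as listed above.
FLAVORS = {
    "Pi2.2xlarge.4": (1, 8, 32),
    "Pi2.4xlarge.4": (2, 16, 64),
    "Pi2.8xlarge.4": (4, 32, 128),
}


def aggregate(flavor):
    """Return total GPU compute/memory and per-GPU host resources for a flavor."""
    gpus, vcpus, mem_gb = FLAVORS[flavor]
    return {
        "gpus": gpus,
        "tflops_fp32": gpus * TFLOPS_FP32_PER_GPU,
        "tops_int8": gpus * TOPS_INT8_PER_GPU,
        "gpu_mem_gb": gpus * GPU_MEM_GB_PER_GPU,
        "vcpus_per_gpu": vcpus // gpus,
        "mem_gb_per_gpu": mem_gb // gpus,
    }


for name in FLAVORS:
    print(name, aggregate(name))
```

Note that every size keeps the same ratio of 8 vCPUs and 32 GB of memory per GPU, so inference workloads can be scaled up by flavor without rebalancing the CPU-side preprocessing.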
For details, see https://docs.otc.t-systems.com/ecs/index.html