The Open Telekom Cloud offers you access to various LLMs, a RAG service, and the ready-made T-Systems Smart Chat as part of the AI Foundation Services. The services are hosted and operated on the Open Telekom Cloud, so you do not need to provision your own GPU resources. All you need is an Open Telekom Cloud account and an API key to access them. Find the right LLM now in our Open Telekom Cloud Marketplace.


With the LLM Serving Service, we offer you the option of using the LLMs in a shared (more cost-effective) variant or in a “private”/dedicated variant reserved for your company alone. The available models vary: Meta Llama 3.3, Mistral Small 3, and DeepSeek R1 can be obtained via the Marketplace, while other LLMs such as GPT-4o, Claude 3, Gemini 1.5 Pro, and further Mistral variants are available on request. This makes you independent of any single LLM provider and lets you try out different LLMs to identify the most suitable one (the most cost-effective and capable) for your use case.
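Because the service speaks the OpenAI API dialect, switching between models is essentially a one-line change. A minimal sketch of the request body for an OpenAI-style chat-completions call; the base URL and the model identifiers shown are illustrative assumptions, not the documented service values (check the Marketplace for the real ones):

```python
import json

# Hypothetical endpoint: the actual base URL is provided with your
# Open Telekom Cloud API key (assumption, not the real URL).
BASE_URL = "https://llm-serving.example.otc/v1/chat/completions"

def chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style /chat/completions request body.

    Trying a different LLM means changing only `model`, e.g.
    "llama-3.3-70b-instruct" vs. "mistral-small-3" (names are
    illustrative; the Marketplace lists the exact identifiers).
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

body = chat_request("llama-3.3-70b-instruct", "Summarise our SLA terms.")
print(json.dumps(body, indent=2))  # POST this to BASE_URL with your API key
```

The same body works against any OpenAI-compatible backend, which is exactly what makes side-by-side model comparisons cheap.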
The RAG service is just as easy to reach via API. Connect your internal data sources to the LLM, choose from over 50 retrieval settings, and generate a vector database that runs in your company's own (private) instance on the Open Telekom Cloud. The RAG service supports various data formats (docx, pdf, xlsx, etc.) and can also extract data from diagrams. The RAG approach offers an additional level of security: your data stays with you.
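Conceptually, working with the RAG service is a two-step loop: ingest documents into the private vector database, then ask grounded questions against a collection. A sketch of what the two payloads could look like; all field names and values here are hypothetical assumptions for illustration, not the documented service schema:

```python
def rag_ingest(source_uri: str, collection: str, chunk_size: int = 512) -> dict:
    """Hypothetical ingestion payload: register a document source
    (docx, pdf, xlsx, ...) to be chunked, embedded, and stored in
    the private vector database."""
    return {
        "collection": collection,
        "source": source_uri,
        "chunk_size": chunk_size,  # one of many tunable retrieval settings
    }

def rag_query(question: str, collection: str, top_k: int = 5) -> dict:
    """Hypothetical query payload: retrieve the top_k most relevant
    chunks from the collection and use them to ground the LLM answer."""
    return {
        "collection": collection,
        "query": question,
        "top_k": top_k,
    }

ingest = rag_ingest("s3://my-bucket/handbook.pdf", "hr-docs")
query = rag_query("What is our parental-leave policy?", "hr-docs")
```

The key point survives any schema difference: documents and embeddings never leave your private instance.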
Want to try out AI even faster? Maybe our ready-to-use T-Systems Smart Chat is right for you. Based on the LLM Serving Service and the T-Systems Smart Chat API, T-Systems Smart Chat offers a browser-based chat interface that allows you to ask questions about your documents in natural language.
Our experts can adapt the chat to your specific application scenario as part of a project. With your confidential documents and information as a knowledge base, your specific AI assistant is created, e.g., for research in document pools, automated summaries, comparative analyses of documents, or automatic text creation and optimization.
Optimize your AI models quickly and efficiently with our LLM fine-tuning service. It enables you to customize open-source models such as Llama 3.3, DeepSeek R1, and Mistral to your individual requirements. Via our API, you can upload your own training data and then run the fine-tuning automatically, using either LoRA-based or DPO-based (RLHF) methods. The API follows the OpenAI standard, ensuring seamless integration into existing systems and workflows. In addition, the trained models can be flexibly deployed on a shared or a private instance, depending on what best suits your requirements. Experience how easily and quickly you can adapt and optimize your AI models.
Get in touch with us, get your OpenAI-compatible API key, and set up your own AI service.
Do you want to get started with AI, LLMs and RAG? Use the contact form or visit our Marketplace and get your API keys.