Open Telekom Cloud for Business Customers

Trusted AI: You reap what you sow 

by the editorial team
[Image: Pair of hands holding a green plant shoot in a pot.]
You reap what you sow: Only intelligent algorithms whose evaluations are unprejudiced and discerning create a basis of trust.

In this article, you'll read about:

  • why artificial intelligence must be trustworthy,
  • what the requirements and guidelines are for this,
  • and how Telekom supports customers in designing their applications accordingly.

Preventive medical check-ups are important – but often time-consuming and annoying: getting an appointment with a specialist takes time, and in rural areas there is often no doctor's practice nearby. Digital solutions for the healthcare system are intended to ease the burden here – for example, when all you need for a skin screening is an app and your smartphone: take pictures, answer a few questions online, and send off the data – done. Artificial intelligence (AI) processes and evaluates the information, while medical specialists take care of the subsequent diagnosis and any recommendations for treatment.

It's a simple and straightforward service that can greatly improve medical care – provided patients take up the offer. The prerequisite is that they trust the service: in terms of data privacy and security, but also in the sense of trusting that the artificial intelligence will make at least as good a decision as a doctor. So, in addition to questions such as the hosting location of cloud-based solutions or compliance with the GDPR, it is a matter of how the AI arrives at its judgments and whether it delivers reliable and robust results.

A precise and diverse data basis is crucial

For this, the intelligent algorithms need suitable foundations and must be able to recognize patterns. They also need to be trained to evaluate without prejudice and in a differentiated manner – true to the motto: "You reap what you sow." The starting point is the data with which the digital system trains its skills. One important factor here is the quality of the data used for machine learning: the information must be precise, of high quality, and – depending on the area of application – diverse and wide-ranging.
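What such a data-quality requirement can mean in practice is illustrated by the following minimal sketch: a basic check that a labeled training set is not dominated by a single class. The labels and the threshold are illustrative assumptions, not part of any specific Telekom tooling.

```python
from collections import Counter

def check_class_balance(labels, max_share=0.5):
    """Return (is_balanced, shares): flag a dataset in which a single
    class makes up more than max_share of all examples."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {label: n / total for label, n in counts.items()}
    return max(shares.values()) <= max_share, shares

# A skewed toy dataset: 3 of 4 samples carry the same label.
ok, shares = check_class_balance(["melanoma", "benign", "benign", "benign"])
print(ok)  # False – "benign" makes up 75% of the data
```

Real pipelines would check far more than class balance – completeness, label accuracy, and coverage of subpopulations – but the principle is the same: measure the data before trusting the model trained on it.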

Equally important are high and precise standards in data labeling – the identification and marking of patterns during data preparation and processing for building an AI – because only then can the algorithm recognize those patterns. Take facial recognition: here, AI can only deliver the desired results if it has been trained with sufficiently comprehensive data sets. A system initially designed to recognize adult faces, for example, must be retrained if it is to be extended to children.

Standards, norms, and security for a trustworthy AI

What's more, solutions using artificial intelligence make decisions, some of which have enormous implications. It is therefore all the more important that they are based on standards that comply with ethical and moral principles. Accordingly, we need standards, guidelines, norms, and security for the entire lifecycle of an AI.

The European Commission, among others, has addressed this in its "Ethics Guidelines for Trustworthy AI." These also form the basis for the Artificial Intelligence Act, the draft of which the Commission presented in April 2021. In its guidelines, the Commission assumes three components that must be fulfilled throughout the entire lifecycle of an AI system.

Thus, AI should be ...

a) ... lawful, complying with all applicable laws and regulations.

b) ... ethical, ensuring adherence to ethical principles and values.

c) ... robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintended harm.

Only the interaction of these three components makes the systems trustworthy.

According to AI experts, there are additional factors: An AI must be transparent and explainable. It must be possible to understand how it arrives at its decisions. A good example is lending, which is based on these kinds of algorithms. If a loan application is rejected, it has a massive impact on the lives of the people affected. They need to be able to understand why. And what they can do to get a loan in the future.
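What "explainable" can mean in the lending example above is sketched below: a toy linear scoring model whose per-feature contributions show the applicant why the score fell below the approval threshold. The features, weights, and threshold are purely illustrative assumptions, not a real credit-scoring system.

```python
# Illustrative weights and applicant values (all hypothetical, normalized).
weights = {"income": 0.6, "existing_debt": -0.8, "years_employed": 0.3}
applicant = {"income": 0.2, "existing_debt": 0.9, "years_employed": 0.1}

# For a linear model, each feature's contribution to the score is
# simply weight * value – which makes the decision directly explainable.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approved" if score >= 0.0 else "rejected"

# Sort by how strongly each feature pulled the score down, so the
# applicant can see what to improve for a future application.
reasons = sorted(contributions.items(), key=lambda kv: kv[1])
print(decision)
for feature, c in reasons:
    print(f"{feature}: {c:+.2f}")
```

For a simple linear model this kind of explanation falls out for free; for complex models, post-hoc techniques such as feature-attribution methods are needed to provide comparable transparency.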

Regulatory framework assesses risks

With the AI Act, the EU is also creating the world's first legally binding regulatory framework of its kind, setting out what artificial intelligence is and is not allowed to do. To this end, it divides AI applications into different risk classes using a tiered system.

  • According to the AI Act, an AI solution cannot be used if the risk is unacceptable – this includes, for example, social scoring by governments.
  • If the risk is high, the AI must meet certain requirements. This category includes, among others, all solutions related to autonomous driving – because here it may be a matter of protecting human lives.
  • The AI Act views as less critical those applications where there is a limited risk – this includes, for example, chatbots.
  • The fourth tier covers AI solutions with minimal risk (for example, video games).
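The four tiers above can be summarized in a minimal sketch – a simplified mapping from example applications to risk tiers. The categories and examples are taken from this article for illustration only; this is not a legal classification under the AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"            # e.g. social scoring by governments
    HIGH = "strict requirements apply"     # e.g. autonomous driving
    LIMITED = "transparency obligations"   # e.g. chatbots
    MINIMAL = "no additional obligations"  # e.g. video games

# Illustrative examples from the article, not an official register.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "autonomous driving": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "video game": RiskTier.MINIMAL,
}

def classify(application: str) -> RiskTier:
    """Look up the risk tier for a known example application."""
    return EXAMPLE_TIERS[application]

print(classify("chatbot").value)  # transparency obligations
```

In practice, classifying a real system requires a case-by-case legal assessment; the point of the sketch is only that the AI Act ties obligations to the tier, not to the technology itself.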

Infrastructure for cloud-based AI solutions

Companies like Telekom also set themselves binding ethical AI guidelines. Telekom, for example, has been following its own rules and guidelines on digital ethics since 2018, and takes the General Data Protection Regulation, the EU guidelines, and the AI Act into account when developing its own compliance guidelines. In addition, data protection and addressing possible bias trained into an AI through its data are crucial. Equally important is security: a trustworthy AI must be protected against hacker attacks and misuse, and must be able to comply with the required standards on a permanent basis.

With the comprehensive Infrastructure-as-a-Service offering of the Open Telekom Cloud, Telekom also provides its customers with a secure and data-protection-compliant basis for trustworthy cloud-based AI applications. Among other things, the Open Telekom Cloud meets the requirements of the German Federal Office for Information Security (BSI) for certification in accordance with the "AI Cloud Service Compliance Criteria Catalogue (AIC4)." There are also plans to integrate procedures for developing trustworthy AI into MLOps tools in the future. In addition, Telekom offers its customers AI solutions tested according to AIC4 – for example, the smart voice bots and chatbots of the "Conversational AI Suite."

But what exactly can the different cloud infrastructures and platforms achieve when it comes to the development, training, testing, and operation of artificial intelligence? Answers to this question are provided by the Cloud Mercato benchmark study. The analysts compared the offerings of AWS, Microsoft Azure, Google Cloud, and the Open Telekom Cloud. They measured the GPU and CPU performance as well as AI capacities of the cloud providers. They also provide a price-performance overview – to make it easier to understand how much needs to be invested in an AI installation.

