Responsible AI: What is it?
Artificial Intelligence (AI) makes life easier for all of us. But where must we draw the line that is not to be crossed? The responsible use of AI, also called responsible AI, entails, as the term suggests, responsibility. However, it appears that many companies have not yet recognized this.
- The responsible use of Artificial Intelligence is gaining importance. The relevant terms are responsible AI and AI governance.
- While consumers already expect efforts along those lines, companies still lag behind this development.
- The combination of ethics and AI is gaining momentum: The EU is taking action, and there are labels and other guidelines that organizations can follow.
AI governance: The critical importance of digital ethics
The Handelsblatt newspaper calls it a “moral code for machines”: Although AI governance, i.e., digital ethics, is becoming increasingly important in the corporate world, the responsible handling of data and its protection is still in its infancy. Only about one-fifth of companies monitor their AI models with regard to ethical guidelines (FICO, 2021).
Balance is the key to data fairness: Did you know that only 40% of consumers feel they have control over the data they give to companies (Zerotap, 2022)? By contrast, 90% of respondents are confident that the combination of human and artificial intelligence will be the future. At BSI, we are convinced of this, too.
Companies are already recognizing today that they have to give more consideration to consumers’ concerns and that technology has ethical limits. On the “Digital Ethics Mood Barometer,” the pendulum points to 83 out of 100 points when it comes to the question of how important ethical aspects will be for companies in five years. Yet companies are only partially committed today: asked about their current commitment, the barometer reaches only 50 points (Institute for Digital Business, HWZ 2022).
Artificial intelligence, ethical issues – are guidelines in place already?
Despite the perceived importance of AI governance, this question arises immediately: Can companies go by the existing rules, guidelines, or laws? Are there authorities or institutions that define what is and what is not ethically correct?
In 2018, for example, the European Union established an Expert Group on Artificial Intelligence, which adopted the EU Ethics Guidelines for Trustworthy Artificial Intelligence (AI) in 2019. The Guidelines highlight four ethical principles: Respect for human autonomy, prevention of harm, fairness and explicability (European Commission, 2019).
In addition, the EU’s Artificial Intelligence Act (EU AI Act) is in the planning stage. It aims to regulate AI according to the risk it poses to people, thus focusing on people’s fundamental rights (Future of Life, 2022). For example, it proposes a ban on the social scoring of citizens by governments and proposes that software for scanning job applicants be subject to clear legal requirements.
In addition, there are other guidelines and labels available that companies can go by. One of them is the Data Fairness Label launched by SWISS INSIGHTS, whose Vice President is our own Christoph Bräunlich, Head of BSI AI. With this label, companies pledge to independently go through a review process in the form of a digital questionnaire for all their data science projects.
What do we need guidelines for? Christoph Bräunlich answers this question by using an example connected to the Data Fairness Label: “I like to use the example of bicycle couriers who deliver food by bike. It is easy to imagine an AI algorithm that calculates these couriers’ pay based on speed and reliability, for instance. But what if a courier’s bike has a flat tire? Should this courier receive less pay then?” (Horizont, 2022)
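The courier example can be made concrete with a short sketch. The code below is purely illustrative and not part of the Data Fairness Label review process: it contrasts a naive pay algorithm that rewards only speed with a variant that excludes delays caused by events outside the courier's control, such as a flat tire. All function and field names are hypothetical.

```python
# Hypothetical sketch of the courier-pay example: a naive scoring rule
# versus one that ignores delays caused by external incidents.

def naive_pay(deliveries, base_rate=5.0, speed_bonus=2.0):
    """Pay per delivery, with a bonus only for on-time arrival."""
    pay = 0.0
    for d in deliveries:
        pay += base_rate
        if d["minutes_late"] <= 0:
            pay += speed_bonus
    return pay

def fair_pay(deliveries, base_rate=5.0, speed_bonus=2.0):
    """Same rule, but deliveries delayed by an external incident
    (e.g. a flat tire) still earn the bonus."""
    pay = 0.0
    for d in deliveries:
        pay += base_rate
        if d["minutes_late"] <= 0 or d.get("external_incident"):
            pay += speed_bonus
    return pay

deliveries = [
    {"minutes_late": 0},                              # on time
    {"minutes_late": 25, "external_incident": True},  # flat tire
]

print(naive_pay(deliveries))  # 12.0: the courier loses the bonus
print(fair_pay(deliveries))   # 14.0: the incident does not cut pay
```

The point of the sketch is that "fairness" is a deliberate design decision in the scoring rule, which is exactly the kind of question a review process like the Data Fairness Label asks teams to answer explicitly.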
There is also the Digital Trust Label launched by the Swiss Digital Initiative (SDI) foundation. It is the first label in the world to certify a company’s high transparency, clear responsibility and data security. The certification of data centers as “Trusted Data Centers” is a similar label awarded by TÜV TRUST IT for security management. As a result, we can respond to the question posed at the beginning of this section with a clear “yes.”
Digital responsibility and data fairness as the cornerstones of ethical AI
In this context, the acronym CDR is used more and more frequently. Modeled on Corporate Social Responsibility (CSR), it stands for Corporate Digital Responsibility and refers to sustainable, digitization-inclusive and socially minded operations.
At the same time, the fair handling of data, also called data fairness, is also a key issue. And it concerns consumers, too: Around 80% of them are concerned about the potential misuse of their personal data (Ipsos and Google, 2021).
There is still a lot more to be done: Only about 30% of decision-makers consider ethical, social and sustainability-related impacts at best occasionally when selecting, developing and using AI systems in their companies (Kienbaum, 2021).
Three ideas for making AI more ethical
Companies can adjust several levers to reduce the high number of consumers concerned about their personal data and to keep from losing their customers’ trust.
- Guidelines become regulations: One conceivable way to drive more consistency regarding responsible AI is to turn mere guidelines into obligations. The EU AI Act demonstrates the direction this might take.
- Teamwork: The combination of human and artificial intelligence is the way forward – there is no proper ethical behavior without human supervision.
- Transparency: Consumers and customers want to know what happens to their data. Manipulative consent declarations for AI-controlled data processing are no longer up to date.
Taking these aspects into account, AI can indeed become more ethical – but without proactive action, consumer mistrust will increase. It is up to companies to act.
Conclusion: AI needs an ethical framework
Responsible AI, and awareness of the consequences every time Artificial Intelligence is applied, are absolutely essential. Without ethics and responsible use, there is no future-proof, comprehensive and sustainable use of AI.
With the guidelines it presented, the EU has taken the first step towards a uniform ethics standard. It is not inconceivable that the guidance and recommendations for action will soon become laws and regulations, as the EU AI Act demonstrates.
Are you interested in responsible AI? Whether it’s in banking, retail or insurance: At BSI, we believe that ethics and artificial intelligence belong together. Would you like to know how that works?
We look forward to hearing from you.