Oliver Hechler
29.08.2025
Transparency is now key: what the EU AI Act means for the responsible use of AI in banks and insurance companies


The risk-based approach of the EU AI Act places new demands on the financial and insurance industries. A key concept here is transparency. The AI Act obliges providers and operators of AI systems to disclose information on data processing in a comprehensible manner. Among other things, users must be able to recognize that they are interacting with an AI system and must be informed about the decision-making logic. The aim is to build trust with customers and partners through traceable decisions.
Transparency requirements and risk categories
The EU AI Act sets out different transparency requirements and therefore divides AI systems into different risk categories: prohibited practices, high-risk applications, and applications with limited or minimal risk. High-risk applications in particular, such as those used for credit checks or fraud detection, are subject to comprehensive transparency, documentation and control requirements. For example, the system must come with precise, complete and clearly understandable technical documentation, log all functions, and provide opportunities for human oversight.
For banks and insurers, this means that their AI applications are not just measured in terms of technological benefits. Factors such as explainability and fairness, as well as the effect on customers and society, are also important. Here, transparency is a complex concept. It has different meanings with specific requirements at different technological levels of an AI system. It makes sense to differentiate between model, data, process and result transparency.
Model transparency: the path to decisions
Model transparency refers to the question of how an AI system makes its decisions. It reveals which parameters and variables the AI takes into account in its assessment and how these are weighted. This form of transparency is important, among other things, for lending. Scoring models for creditworthiness evaluate income, length of employment, existing loans and payment history.
Model transparency shows how this assessment is arrived at and how heavily each factor is weighted in it. This rather technical information is important for IT and data science teams in order to optimize and maintain models and to recognize potential biases. Regulatory authorities, such as FINMA in Switzerland or BaFin in Germany, require such insights in order to assess the fairness and legality of AI.
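The idea can be illustrated with a deliberately simple sketch: a linear scoring model that reports each factor's weighted contribution alongside the total score. The factor names and weights below are illustrative assumptions, not a real creditworthiness formula.

```python
# Illustrative linear credit-scoring sketch: reporting per-factor
# contributions makes the weighting visible (model transparency).
# Weights and factor names are hypothetical.

WEIGHTS = {
    "income": 0.40,
    "employment_years": 0.20,
    "existing_loans": -0.25,   # more existing loans lower the score
    "payment_history": 0.15,
}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the total score and each factor's weighted contribution."""
    contributions = {
        factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS
    }
    return sum(contributions.values()), contributions

# Example with normalized inputs in [0, 1]:
total, parts = score_with_explanation(
    {"income": 0.8, "employment_years": 0.5,
     "existing_loans": 0.3, "payment_history": 0.9}
)
```

Because every contribution is explicit, a data science team can trace exactly how a score came about, and an auditor can check whether any single factor dominates the decision.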
Data transparency: using data without bias
Data transparency refers to the complete documentation of all information used for training and operating the AI system. This includes, among other things, the origin of the data, its quality, possible biases, and its suitability for the intended purpose. For insurers, data transparency is particularly important.
If an insurer uses historical claims data for risk assessment, it must be clear which time periods and regions these data come from. Checking for systematic biases is just as important in order to avoid unfair assessments. Compliance and data protection officers as well as auditors in particular need this information. They must ensure that all data used has been collected and processed lawfully.
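In practice, this documentation is often captured in a structured "data card" per dataset. The following is a minimal sketch of such a record; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical minimal "data card" recording the provenance details that
# data transparency requires: origin, time period, regions, known biases,
# and intended use. Field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataCard:
    name: str
    source: str
    period: str              # e.g. "2015-2024"
    regions: list[str]
    known_biases: list[str] = field(default_factory=list)
    intended_use: str = ""

claims_data = DataCard(
    name="historical_claims",
    source="internal claims system",
    period="2015-2024",
    regions=["DE", "CH"],
    known_biases=["urban areas over-represented"],
    intended_use="risk assessment for motor insurance",
)
```

A record like this gives compliance officers and auditors a single place to verify where the data came from, what it may be used for, and which biases were already identified.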
Process transparency: integrating AI into all processes
Process transparency shows where AI systems are being used and what human control mechanisms are built in – critical decisions should not be fully automated. A typical example is claims settlement. An AI system can process simple damage claims completely independently – a process known as dark processing.
At the same time, criteria must be defined for when a case needs to be forwarded to a human claims consultant. Escalation rules and the role of human oversight must be documented transparently. This form of transparency is particularly important for claims managers and credit departments: this way, they always know which cases the system approves or rejects independently and which need to be checked manually.
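Such escalation rules can be made explicit and auditable in code. The sketch below assumes a claim-amount threshold and a fraud-risk flag as routing criteria; both the threshold and the limits are hypothetical.

```python
# Hypothetical routing rule for claims settlement: simple, low-risk cases
# are settled automatically ("dark processing"), everything else is
# escalated to a human claims consultant. Threshold values are assumptions.

AUTO_LIMIT_EUR = 1_000      # assumed upper limit for automated settlement
FRAUD_THRESHOLD = 0.2       # assumed fraud-risk score above which a human reviews

def route_claim(amount_eur: float, fraud_score: float) -> str:
    """Return who handles the claim, making the escalation logic explicit."""
    if amount_eur <= AUTO_LIMIT_EUR and fraud_score < FRAUD_THRESHOLD:
        return "automated"      # dark processing
    return "human_review"       # escalate to a claims consultant
```

Keeping the rule this explicit means a claims manager can read off directly which cases the system settles on its own and which it hands over.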
Result transparency: clear communication
With regard to customers, result transparency is a must. For example, a rejection of a claim or a loan application cannot simply be justified with "the system rejected it". Instead, the applicant needs to receive a comprehensible explanation of the relevant criteria. Customers should also be informed at every point of contact where an AI system is being used.
For instance, chatbots must identify themselves as such, and recommendation systems should make it clear that their suggestions are based on algorithms. This labeling requirement is already stipulated in the EU AI Act. However, it is not just about automatic labeling. Service and sales staff should be able to explain AI decisions and to answer follow-up questions competently.
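As a minimal sketch of result transparency, a rejection notice can be assembled from the decisive criteria in plain language rather than a generic refusal. The wording and criteria below are illustrative assumptions.

```python
# Hypothetical sketch: turn the decisive rejection criteria into a
# plain-language explanation instead of "the system rejected it".
def explain_rejection(reasons: list[str]) -> str:
    bullet_list = "\n".join(f"- {reason}" for reason in reasons)
    return (
        "Your application could not be approved. "
        "The decisive criteria were:\n" + bullet_list
    )

message = explain_rejection([
    "insufficient income relative to the requested loan amount",
    "short length of current employment",
])
```

The same structured list of reasons can also be handed to service staff, so they can answer follow-up questions with the identical criteria the customer saw.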
Target-group oriented transparency for sustainable customer relationships
The four dimensions of AI transparency complement each other and together provide a complete picture of AI systems. Each dimension has its own specific requirements and target groups. The challenge for companies is to find the right amount of transparency for each level.
Too little transparency leads to mistrust and regulatory problems. Too much transparency, however, can disclose trade secrets or overwhelm customers with technical details. The goal is to achieve transparency that is tailored to the target group, builds trust and at the same time remains practical. This gives companies the opportunity to use AI efficiently and responsibly. Investing in transparency lays the foundation for sustainable customer relationships in an increasingly data-driven market.

Just talk to us
We are always happy to be there for you.