Transparency

Last reviewed April 2026

A customer is offered an insurance renewal premium 40 per cent higher than last year. No claim was made. No risk factors changed. The insurer's AI pricing model identified the customer as unlikely to switch providers, and priced accordingly. The customer is never told this. Transparency in AI means the organisation is open about when AI is used, how it affects decisions, and what recourse exists. In financial services, the regulatory expectation for transparency is sharpening rapidly.

What is transparency?

Transparency in the AI context encompasses three related obligations. Disclosure: telling customers and stakeholders when AI is involved in decisions that affect them. Explanation: providing understandable reasons for AI-driven outcomes. Openness: making the organisation's AI practices, governance structures, and risk management approaches available for scrutiny. These are distinct from explainability, which is a technical capability. Transparency is an organisational practice that uses explainability as one of its tools.

In financial services, transparency is not optional. Customers have legal rights to information about automated decisions that affect them. Regulators require firms to be open about how AI is used in regulated activities. Auditors need visibility into AI systems to assess controls. And the board needs clarity about the AI estate, its risks, and its governance. Each audience requires different information at different levels of detail, but all require the organisation to be willing and able to provide it.

The commercial tension is real. Firms may view their AI models as competitive advantages and resist disclosure. An insurance pricing model that uses behavioural data to predict switching propensity is a commercial asset. But the decision to charge a customer more because they are unlikely to switch is a conduct issue that the Consumer Duty directly addresses. Transparency does not require revealing proprietary algorithms. It requires disclosing the factors that influence decisions and ensuring those decisions are fair.

The landscape

The EU AI Act establishes transparency obligations at multiple levels. High-risk AI systems must be accompanied by instructions for use that enable deployers to understand the system's capabilities and limitations. Users interacting with AI systems (chatbots, for example) must be informed that they are interacting with an AI. AI-generated content must be labelled as such. These obligations apply regardless of whether the AI system is developed in-house or purchased from a vendor.

The FCA's Consumer Duty requires firms to ensure customers can make informed decisions. If AI influences a product recommendation, a pricing decision, or a claims assessment, the customer must have enough information to understand the outcome and challenge it if appropriate. The FCA has not prescribed specific transparency requirements for AI, but it has made clear that opacity in AI-driven decisions is inconsistent with the Duty's outcomes focus.

GDPR Articles 13 and 14 require firms to inform data subjects about the existence of automated decision-making, the logic involved, and the significance and envisaged consequences. Article 22 gives individuals the right not to be subject to decisions based solely on automated processing, with the right to obtain human intervention. These rights create a floor for transparency that applies to every AI system processing personal data.

How AI changes this

Automated disclosure mechanisms, integrated into the decision workflow, inform customers when AI is involved in a decision. A credit application response can automatically include a statement about AI involvement and the factors considered. An insurance quote can include an explanation of how the premium was calculated. These disclosures are generated programmatically, ensuring consistency and completeness across every customer interaction.
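
To make that concrete, here is a minimal sketch of programmatic disclosure generation in Python. The decision fields, factor names, and wording are illustrative assumptions, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    """Illustrative decision record; the fields here are hypothetical."""
    approved: bool
    factors_considered: list[str]  # plain-language factor names, not raw feature codes
    model_id: str

def build_disclosure(decision: CreditDecision) -> str:
    """Generate the AI-involvement statement attached to every customer response."""
    factors = ", ".join(decision.factors_considered)
    return (
        "This decision was made with the support of an automated system. "
        f"The factors considered included: {factors}. "
        "You can ask for this decision to be reviewed by a member of our team."
    )

decision = CreditDecision(
    approved=False,
    factors_considered=["income", "existing credit commitments", "repayment history"],
    model_id="credit-risk-v3",
)
print(build_disclosure(decision))
```

Because the statement is generated from the same record that produced the decision, it stays consistent across every channel the customer uses.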

Layered transparency provides different levels of detail for different audiences. The customer receives a plain-language explanation. The regulator receives a technical description of the model, its data, and its governance. Internal teams receive detailed model documentation. This layered approach satisfies all audiences without overwhelming any single one, and explainability tools generate the appropriate level of detail for each layer.
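
As an illustration of layering, the sketch below derives two views from one hypothetical decision record: a plain-language summary for the customer and a technical payload for the regulator. The record structure and attribution format are assumptions.

```python
def customer_view(record: dict) -> str:
    """Plain-language explanation: top factors only, no model internals."""
    top = ", ".join(f["name"] for f in record["feature_contributions"][:3])
    return f"Your premium reflects {top}."

def regulator_view(record: dict) -> dict:
    """Technical detail for supervisors and auditors: model, data lineage, full attributions."""
    return {key: record[key] for key in (
        "model_id", "model_version", "training_data_snapshot",
        "feature_contributions", "governance_approvals",
    )}

record = {
    "model_id": "motor-pricing",
    "model_version": "2.4.1",
    "training_data_snapshot": "2025-Q4",
    "governance_approvals": ["model-risk-committee-2026-01"],
    "feature_contributions": [  # e.g. attribution scores, sorted by impact
        {"name": "vehicle type", "impact": 0.31},
        {"name": "annual mileage", "impact": 0.22},
        {"name": "claims history", "impact": 0.18},
    ],
}
print(customer_view(record))
print(sorted(regulator_view(record).keys()))
```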

Public reporting on AI practices is emerging as a governance norm. Annual AI transparency reports, covering the number and types of AI systems deployed, their governance framework, fairness metrics, and incident reports, demonstrate organisational commitment to openness. Several major financial institutions have begun publishing these reports voluntarily, and regulatory requirements for similar disclosure are likely to follow.

Audit trails support transparency by providing verifiable evidence of what happened and why. When a customer challenges a decision, the audit trail provides the facts. When a regulator asks about governance practices, the audit trail provides the evidence. Transparency claims that are not backed by auditable records are assertions, not assurances.
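
One way to back transparency claims with verifiable evidence is an append-only audit record per decision. The sketch below is an assumption about structure, not a specific product; the content hash simply makes later tampering detectable when records are chained or stored immutably.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(decision_id: str, model_id: str, model_version: str,
                 inputs: dict, output: dict, explanation: str) -> dict:
    """Capture what the model saw, what it returned, and the explanation given."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,
    }
    record["content_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

entry = audit_record(
    decision_id="Q-2026-0417",
    model_id="motor-pricing",
    model_version="2.4.1",
    inputs={"vehicle_type": "hatchback", "annual_mileage": 8000},
    output={"premium": 412.50},
    explanation="Premium reflects vehicle type, annual mileage, and claims history.",
)
print(entry["content_hash"][:12])
```

When a customer challenges the decision or a regulator asks for evidence, the record is retrieved by its decision identifier rather than reconstructed after the fact.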

What to know before you start

Transparency is a design choice, not a retrofit. Building customer disclosures, explanation generation, and governance reporting into AI systems from the start is significantly cheaper than adding them after deployment. Include transparency requirements in your AI controls framework as non-functional requirements alongside performance, security, and availability.
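
A hedged sketch of what treating transparency as a non-functional requirement might look like at a deployment gate. The requirement names and evidence format are assumptions; the point is that a system does not ship until each item has evidence attached.

```python
# Hypothetical transparency requirements checked at the deployment gate,
# alongside performance, security, and availability requirements.
TRANSPARENCY_REQUIREMENTS = {
    "customer_disclosure": "Customer-facing decisions include an AI involvement statement",
    "explanation_generation": "A plain-language explanation is produced for adverse decisions",
    "audit_trail": "Inputs, outputs, model version, and explanation are logged immutably",
    "governance_reporting": "The system appears in the AI inventory reported to the board",
}

def deployment_gate(evidence: dict) -> list[str]:
    """Return the transparency requirements still lacking evidence; an empty list passes."""
    return [req for req in TRANSPARENCY_REQUIREMENTS if not evidence.get(req)]

evidence = {"customer_disclosure": True, "audit_trail": True}
print(deployment_gate(evidence))  # ['explanation_generation', 'governance_reporting']
```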

The explanation must be meaningful to the recipient. A customer who receives "your premium was calculated using a gradient-boosted ensemble trained on 47 features" has been given information but not transparency. A customer who receives "your premium reflects your vehicle type, annual mileage, and claims history" has been given meaningful transparency. Design explanations for the audience, not the model.
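
In practice that usually means translating internal feature names into language the customer can act on. A minimal sketch, with hypothetical feature codes and labels:

```python
# Hypothetical mapping from internal feature names to customer-facing language.
FEATURE_LABELS = {
    "veh_cat_enc": "your vehicle type",
    "annual_km_bucket": "your annual mileage",
    "claims_5yr_count": "your claims history",
}

def plain_language_explanation(top_features: list[str]) -> str:
    """Turn the model's top drivers into a sentence the customer can understand and challenge."""
    readable = [FEATURE_LABELS.get(name, name) for name in top_features]
    if len(readable) == 1:
        return f"Your premium reflects {readable[0]}."
    return "Your premium reflects " + ", ".join(readable[:-1]) + f", and {readable[-1]}."

print(plain_language_explanation(["veh_cat_enc", "annual_km_bucket", "claims_5yr_count"]))
# -> Your premium reflects your vehicle type, your annual mileage, and your claims history.
```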

Internal transparency is as important as external. If the board does not understand the AI systems the firm operates, it cannot exercise effective oversight. If the risk function does not have visibility into AI deployments, it cannot assess the firm's risk profile accurately. Internal transparency requires regular, structured reporting from the AI governance function to senior management and the board.

Start with customer-facing decisions where transparency obligations are clearest and regulatory scrutiny is highest: credit, insurance pricing, and claims. Implement automated disclosure and explanation for these use cases first. Extend to internal reporting and public transparency as the infrastructure and organisational practice mature.
