AI Governance
Last reviewed April 2026
Most financial institutions have more AI models in production than they realise. Spreadsheets with embedded regression models, vendor-supplied scoring engines, and departmental machine learning experiments all make decisions that affect customers and capital. Without a governance framework, nobody knows the full inventory, nobody owns the risk, and nobody can answer the regulator's questions. AI governance is the organisational infrastructure that makes responsible AI operational rather than aspirational.
What is AI governance?
AI governance is the set of policies, processes, roles, and oversight structures that ensure AI systems are developed, deployed, and operated in line with the organisation's risk appetite, regulatory obligations, and ethical commitments. It covers the full lifecycle: use case approval, data sourcing, model development, validation, deployment, monitoring, and retirement. The goal is not to slow AI adoption but to ensure that every AI system in production has a known owner, a defined purpose, an assessed risk level, and an active monitoring regime.
In financial services, AI governance sits within the broader model risk management framework but extends beyond it. Model risk management, as defined by the PRA's SS1/23, covers models that produce quantitative outputs informing business decisions. AI governance also covers systems that generate text, classify documents, or route workflows, systems that may not fit the traditional definition of a "model" but still carry risk. The scope is wider, and the governance must reflect that.
The practical challenge is proportionality. A customer-facing credit scoring model and an internal document classification tool do not warrant the same governance intensity. Effective AI governance frameworks tier AI systems by risk level and apply controls proportionate to the potential impact. Over-governing low-risk systems wastes resources and creates bottlenecks. Under-governing high-risk systems creates regulatory and reputational exposure.
The landscape
The EU AI Act introduces a risk-based classification system for AI. High-risk systems, which include credit scoring, insurance pricing, and fraud detection, face mandatory requirements for governance, documentation, and human oversight. The Act requires organisations to maintain an AI use case inventory, conduct conformity assessments, and appoint responsible individuals. For UK firms with EU operations, compliance is mandatory regardless of the UK's regulatory approach.
The UK's pro-innovation approach, set out in the 2023 white paper and reinforced by the FCA's subsequent feedback statement, relies on existing regulators applying cross-cutting principles (safety, transparency, fairness, accountability, contestability) within their sectors. This means the FCA and PRA are the AI regulators for financial services, and they are integrating AI expectations into existing supervisory frameworks rather than creating standalone AI rules.
Board-level accountability is crystallising. The Senior Managers and Certification Regime (SM&CR) means that an individual senior manager is personally accountable for the firm's AI governance. Regulators expect boards to understand the AI systems their firms deploy, the risks those systems carry, and the controls in place. This is not a compliance formality. Supervisors are testing board understanding during routine reviews.
How AI changes this
Automated model inventory management tracks AI systems across the organisation, capturing metadata about each model's purpose, data inputs, outputs, owner, risk tier, and validation status. This addresses the foundational problem that most institutions cannot produce a complete list of their AI systems. The inventory becomes the single source of truth for governance, audit, and regulatory reporting.
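As an illustration, an inventory record can be a simple structured entry in an in-house registry. This is a minimal sketch; the field names and the RiskTier labels are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class InventoryEntry:
    """One record in the AI system inventory: the single source of truth."""
    system_id: str                       # unique identifier, e.g. "scoring-engine-v3"
    purpose: str                         # business purpose in plain language
    owner: str                           # named individual accountable for the system
    risk_tier: RiskTier
    data_inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    vendor_supplied: bool = False        # vendor models belong in the same inventory
    last_validated: date | None = None   # None means no validation on record


def unvalidated_systems(inventory: list[InventoryEntry]) -> list[InventoryEntry]:
    """Flag entries with no validation on record, a common audit finding."""
    return [e for e in inventory if e.last_validated is None]
```

Even a registry this simple answers the questions most firms cannot: how many systems exist, who owns each one, and which have never been validated.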
Continuous monitoring platforms provide real-time dashboards showing model performance, data drift, fairness metrics, and usage patterns across the AI portfolio. When a model's performance degrades or its outputs shift, the platform alerts the model owner and the governance function. This replaces periodic manual reviews with continuous assurance, matching the pace at which models can drift.
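One widely used drift measure is the population stability index (PSI), which compares the distribution of recent production scores against the validation baseline. A minimal NumPy sketch follows; the 0.25 alert threshold is a common rule of thumb, not a regulatory figure:

```python
import numpy as np


def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between baseline (validation-time) scores and recent production
    scores. Roughly: >0.25 suggests material drift, 0.1-0.25 warrants review."""
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    # Clip production scores into the baseline range so every value lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


# Example: synthetic baseline vs. shifted production scores.
baseline = np.random.default_rng(0).normal(0.60, 0.10, 10_000)
recent = np.random.default_rng(1).normal(0.52, 0.13, 5_000)
if population_stability_index(baseline, recent) > 0.25:
    print("Material drift: alert the model owner and the governance function")
```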
Auditability infrastructure automatically captures the decision trail for every AI output: what data went in, what model version was used, what output was produced, and what action was taken. For regulated decisions, this trail is essential for responding to customer complaints, regulatory enquiries, and internal audit reviews. Building this infrastructure into the AI platform from the start is orders of magnitude cheaper than retrofitting it.
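A sketch of what capturing that trail can look like, assuming a simple append-only JSONL log rather than any particular platform. Hashing the inputs lets you later prove what the model saw without storing raw personal data in the log itself:

```python
import hashlib
import json
from datetime import datetime, timezone


def record_decision(model_id: str, model_version: str,
                    inputs: dict, output, action: str,
                    log_path: str = "decisions.jsonl") -> str:
    """Append one decision record to an append-only audit trail.
    `output` must be JSON-serialisable; returns the input hash as a receipt."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,      # which model version was used
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "output": output,                    # what the model produced
        "action": action,                    # what action was taken on it
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["input_hash"]
```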
Risk assessment automation helps governance teams evaluate new AI use cases consistently. A structured questionnaire, scored against defined criteria, produces a risk tier that determines the required governance controls. This ensures that two similar use cases receive similar governance treatment, regardless of which business unit proposes them.
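A scored questionnaire can be as simple as a weighted checklist mapped to tier cut-offs. The questions, weights, and thresholds below are placeholders; a real framework would be agreed with the governance function and calibrated to the firm's risk appetite:

```python
# Illustrative criteria: each "yes" answer adds its weight to the score.
QUESTIONS = {
    "customer_facing": 3,          # does the output reach customers directly?
    "automated_decision": 3,       # is there no human in the loop?
    "uses_personal_data": 2,
    "affects_credit_or_pricing": 4,
    "vendor_black_box": 2,         # is the model a vendor black box?
}


def assess_risk_tier(answers: dict[str, bool]) -> str:
    """Score a proposed use case and map the total to a governance tier."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    if score >= 9:
        return "critical"
    if score >= 6:
        return "high"
    if score >= 3:
        return "limited"
    return "minimal"


# The same answers always produce the same tier, whoever proposes the use case.
tier = assess_risk_tier({"customer_facing": True, "uses_personal_data": True,
                         "affects_credit_or_pricing": True})
print(tier)  # "critical" (3 + 2 + 4 = 9)
```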
What to know before you start
Start with the inventory. You cannot govern what you cannot see. Conduct a thorough AI use case inventory across the organisation, including vendor-supplied models and spreadsheet-based models that may not self-identify as AI. The inventory reveals the actual scope of governance required, which almost always exceeds initial estimates.

Governance is an organisational design problem, not a technology problem. The most common failure is establishing a governance policy that nobody follows because it does not fit the development workflow. Embed governance checkpoints into the AI development lifecycle: use case approval before development begins, risk assessment before data is sourced, validation before deployment, monitoring after deployment. If governance is a separate process that runs in parallel, it will be bypassed.
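One way to make those checkpoints non-optional is to encode them as ordered gates in the deployment pipeline itself, so a later phase cannot start until every earlier gate is on record. A sketch, with stage names taken from the lifecycle above:

```python
from enum import Enum


class Stage(Enum):
    USE_CASE_APPROVED = "use case approved"
    RISK_ASSESSED = "risk assessed"
    VALIDATED = "validated"
    MONITORING_CONFIGURED = "monitoring configured"


# Checkpoints in lifecycle order; each one gates the next phase of work.
LIFECYCLE_GATES = [Stage.USE_CASE_APPROVED, Stage.RISK_ASSESSED,
                   Stage.VALIDATED, Stage.MONITORING_CONFIGURED]


def can_proceed(completed: set[Stage], next_gate: Stage) -> bool:
    """A phase may start only when every earlier gate has been passed,
    so governance cannot be bypassed by skipping ahead."""
    required = LIFECYCLE_GATES[:LIFECYCLE_GATES.index(next_gate)]
    missing = [g for g in required if g not in completed]
    if missing:
        print(f"Blocked: missing {[g.value for g in missing]}")
        return False
    return True


done = {Stage.USE_CASE_APPROVED, Stage.RISK_ASSESSED}
can_proceed(done, Stage.MONITORING_CONFIGURED)  # blocked: not yet validated
```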
Proportionality is the design principle that makes governance sustainable. A four-tier risk classification (minimal, limited, high, critical) with escalating controls for each tier ensures that low-risk internal tools are not subject to the same governance overhead as customer-facing credit models. Define the tiers, define the controls for each tier, and communicate clearly so that development teams know what is expected.
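The tier-to-controls mapping itself can be a lookup table that development teams query directly, which keeps expectations unambiguous. The control names below are illustrative; obligations escalate cumulatively from one tier to the next:

```python
# Illustrative, cumulative control sets per risk tier.
TIER_CONTROLS = {
    "minimal":  {"inventory entry"},
    "limited":  {"inventory entry", "annual owner attestation"},
    "high":     {"inventory entry", "annual owner attestation",
                 "independent validation", "continuous monitoring"},
    "critical": {"inventory entry", "annual owner attestation",
                 "independent validation", "continuous monitoring",
                 "board-level reporting", "documented human oversight"},
}


def required_controls(tier: str) -> set[str]:
    """Look up the controls a system must evidence before deployment."""
    return TIER_CONTROLS[tier]
```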
Engage the regulator proactively. Both the FCA and PRA have signalled openness to dialogue about AI governance approaches. A conversation with your supervisor about your governance framework, before they ask, demonstrates maturity and builds regulatory confidence. It also provides early warning if your approach does not meet supervisory expectations.
Exploring AI for your organisation? Book fifteen minutes on our calendar.
Let’s build AI together