AI Controls Framework
Last reviewed April 2026
A governance policy says "all AI models must be validated before deployment." But who validates? Against what standards? Using what tools? With what authority to block deployment? The gap between an AI policy and an AI practice is where risk lives. An AI controls framework fills this gap with specific, implementable controls that translate governance principles into operational reality.
What is an AI controls framework?
An AI controls framework is a structured set of requirements, procedures, and checkpoints that govern the AI system lifecycle from ideation through retirement. It defines what must happen (the control), who must do it (the role), when it must happen (the lifecycle stage), how compliance is evidenced (the artefact), and what happens when the control fails (the escalation). In financial services, the framework typically covers use case approval, data sourcing and quality, model development and testing, validation, deployment, monitoring, and change management.
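To make the five elements concrete, a control can be captured as a structured record. The sketch below is illustrative only: the `ControlDefinition` class, field names, and example control are assumptions for illustration, not drawn from any regulatory text.

```python
from dataclasses import dataclass
from enum import Enum


class LifecycleStage(Enum):
    USE_CASE_APPROVAL = "use_case_approval"
    DATA_SOURCING = "data_sourcing"
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    CHANGE_MANAGEMENT = "change_management"


@dataclass
class ControlDefinition:
    """One control: what must happen, who does it, when it applies,
    how compliance is evidenced, and what happens when it fails."""
    control_id: str          # e.g. "VAL-01" (illustrative identifier)
    requirement: str         # what must happen
    responsible_role: str    # who must do it
    stage: LifecycleStage    # when in the lifecycle it applies
    evidence_artefact: str   # how compliance is evidenced
    escalation_path: str     # what happens when the control fails
    applies_to_tiers: tuple = ("high", "medium", "low")


# Example: independent validation required for high-risk models only.
independent_validation = ControlDefinition(
    control_id="VAL-01",
    requirement="Independent validation sign-off before first deployment",
    responsible_role="Model validation team",
    stage=LifecycleStage.VALIDATION,
    evidence_artefact="Validation report approved by the head of validation",
    escalation_path="Deployment blocked; escalate to the model risk committee",
    applies_to_tiers=("high",),
)
```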
The framework must be proportionate. A low-risk internal tool that summarises documents does not require the same controls as a high-risk model that determines credit eligibility. Effective frameworks tier controls by risk level, applying lighter requirements to lower-risk systems and more stringent requirements to higher-risk ones. The risk tier is determined by the AI risk assessment, creating a direct link between risk identification and control implementation.
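One way to express that link in practice is a simple lookup from risk tier to applicable controls. The tier names and control sets below are assumptions for the sake of illustration, not a prescribed standard.

```python
# Illustrative tiering rules: which controls apply at each risk tier.
TIER_CONTROLS = {
    "high":   {"inventory", "risk_assessment", "independent_validation",
               "human_oversight", "continuous_monitoring", "change_approval"},
    "medium": {"inventory", "risk_assessment", "peer_review",
               "periodic_monitoring", "change_approval"},
    "low":    {"inventory", "risk_assessment", "owner_signoff"},
}


def required_controls(risk_tier: str) -> set[str]:
    """Return the control set for the tier assigned by the AI risk assessment."""
    try:
        return TIER_CONTROLS[risk_tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}") from None
```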
The practical value of a controls framework is consistency and accountability. Without it, governance depends on individual judgement: one team validates rigorously, another does not. One project documents thoroughly, another ships without documentation. The framework creates a minimum standard that applies across the organisation, enforced through defined checkpoints and visible evidence.
The landscape
The PRA's SS1/23 provides the regulatory baseline for model controls in banks. Its five principles, covering model identification, development standards, validation, model use, and governance, translate directly into control requirements. Firms must demonstrate not just that these principles are documented but that they are operationally effective: that models are actually inventoried, validated, monitored, and governed as the framework prescribes.
The EU AI Act adds specific control requirements for high-risk systems: data quality management, technical documentation, record-keeping, transparency obligations, human oversight provisions, and accuracy and robustness requirements. These map to specific controls in the framework. For UK firms with EU operations, the framework must satisfy both PRA and EU AI Act requirements.
Industry frameworks are converging. The NIST AI Risk Management Framework, ISO/IEC 42001 (AI management systems), and the Singapore MAS FEAT principles all describe similar control categories. The specific controls differ, but the structure is consistent across frameworks: risk-based, lifecycle-oriented, and proportionate. Institutions operating across multiple jurisdictions can build a single controls framework that maps to multiple regulatory requirements.
How AI changes this
Automated control enforcement embeds governance checkpoints into the AI development pipeline. A model cannot be deployed to production without evidence that it has been registered in the inventory, risk-assessed, validated (for high-risk systems), and approved by the designated owner. Pipeline-integrated controls are harder to bypass than procedural ones, because the technology enforces what the policy requires.
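A minimal sketch of what such a pipeline gate might look like, assuming a model inventory exposed as a dictionary with hypothetical field names such as `risk_assessment_ref` and `validation_report_ref`; the real integration would depend on the institution's inventory system.

```python
def deployment_gate(model_id: str, inventory: dict) -> None:
    """Raise if governance evidence is missing; intended to run as a
    blocking step in the deployment pipeline."""
    record = inventory.get(model_id)
    if record is None:
        raise RuntimeError(f"{model_id}: not registered in the model inventory")

    missing = []
    if not record.get("risk_assessment_ref"):
        missing.append("risk assessment")
    if record.get("risk_tier") == "high" and not record.get("validation_report_ref"):
        missing.append("independent validation report")
    if not record.get("owner_approval_ref"):
        missing.append("owner approval")

    if missing:
        raise RuntimeError(
            f"{model_id}: deployment blocked; missing evidence: {', '.join(missing)}"
        )
```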
Continuous control monitoring tracks compliance across the AI portfolio in real time. A dashboard showing which models are overdue for validation, which have missing documentation, and which have triggered performance alerts gives the governance function visibility into the control environment. This replaces periodic audit-based assurance with continuous monitoring, catching control failures earlier and reducing the accumulation of unaddressed findings.
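As an illustration, the dashboard view described above could be fed by a portfolio-wide exception check along these lines; the field names are assumptions about how the inventory records such dates and alerts.

```python
from datetime import date


def control_exceptions(inventory: list[dict], today: date) -> dict[str, list[str]]:
    """Summarise open control exceptions across the AI portfolio."""
    exceptions = {
        "overdue_validation": [],
        "missing_documentation": [],
        "performance_alerts": [],
    }
    for model in inventory:
        due = model.get("next_validation_due")
        if due is not None and due < today:
            exceptions["overdue_validation"].append(model["model_id"])
        if not model.get("technical_documentation_ref"):
            exceptions["missing_documentation"].append(model["model_id"])
        if model.get("open_performance_alerts", 0) > 0:
            exceptions["performance_alerts"].append(model["model_id"])
    return exceptions
```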
Evidence management automates the collection and storage of governance artefacts: validation reports, approval records, monitoring logs, and change documentation. When the regulator or internal audit requests evidence of compliance for a specific model, the artefacts are immediately available in a structured, searchable format. This reduces the cost of regulatory engagement and audit response.
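A sketch of the kind of metadata an evidence store might capture so that artefacts stay searchable and tamper-evident; the `store_evidence` function and its fields are illustrative assumptions, with the real store likely being a document database or object store.

```python
import hashlib
from datetime import datetime, timezone


def store_evidence(store: dict, model_id: str, artefact_type: str, content: bytes) -> str:
    """Store a governance artefact with the metadata needed to retrieve it later."""
    digest = hashlib.sha256(content).hexdigest()
    record = {
        "model_id": model_id,
        "artefact_type": artefact_type,  # e.g. "validation_report", "approval_record"
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,  # integrity check for audit and regulatory requests
    }
    store[digest] = {"metadata": record, "content": content}
    return digest
```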
Testing of controls themselves, not just the models they govern, ensures the framework is effective. Automated tests that verify controls are in place (Does every production model have a registered owner? Is every high-risk model validated at the required frequency?) provide assurance that the framework is working as designed. This meta-monitoring distinguishes frameworks that exist in policy from frameworks that exist in practice.
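A sketch of how such control tests might look with pytest, assuming hypothetical fixtures named `inventory` and `today` that load the model inventory and the current date; the field names follow the earlier examples and are assumptions.

```python
def test_every_production_model_has_registered_owner(inventory):
    production = [m for m in inventory if m["status"] == "production"]
    unowned = [m["model_id"] for m in production if not m.get("owner")]
    assert not unowned, f"Production models without a registered owner: {unowned}"


def test_high_risk_models_validated_at_required_frequency(inventory, today):
    high_risk = [m for m in inventory if m["risk_tier"] == "high"]
    overdue = [m["model_id"] for m in high_risk if m["next_validation_due"] < today]
    assert not overdue, f"High-risk models overdue for validation: {overdue}"
```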
What to know before you start
Map your controls to regulatory requirements explicitly. For each control, document which regulatory requirement it addresses (SS1/23 principle, EU AI Act article, Consumer Duty outcome). This mapping serves two purposes: it ensures completeness (no regulatory requirement is unaddressed) and it provides a clear justification for each control's existence, which prevents controls from being removed during "simplification" exercises.
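One lightweight way to hold this mapping is alongside the control definitions themselves, with a completeness check that flags any requirement no control addresses. The control IDs and regulatory references below are partial examples for illustration and would need to be verified against the source texts.

```python
# Illustrative mapping from controls to the regulatory requirements they address.
CONTROL_REGULATION_MAP = {
    "INV-01": ["SS1/23 Principle 1 (model identification)"],
    "VAL-01": ["SS1/23 Principle 4 (independent validation)",
               "EU AI Act Art. 15 (accuracy and robustness)"],
    "DOC-01": ["EU AI Act Art. 11 (technical documentation)"],
}

# The full set of requirements the framework must cover (abridged here).
REGULATORY_REQUIREMENTS = {
    "SS1/23 Principle 1 (model identification)",
    "SS1/23 Principle 4 (independent validation)",
    "EU AI Act Art. 11 (technical documentation)",
    "EU AI Act Art. 15 (accuracy and robustness)",
}


def unaddressed_requirements() -> set[str]:
    """Requirements not covered by any control; should be empty."""
    covered = {req for reqs in CONTROL_REGULATION_MAP.values() for req in reqs}
    return REGULATORY_REQUIREMENTS - covered
```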
Embed controls into existing workflows rather than creating parallel processes. If the development team uses a CI/CD pipeline, integrate governance checkpoints into that pipeline. If the business uses a project approval process, add the AI risk assessment to that process. Controls that require developers to use a separate governance system will be bypassed. Controls that are part of the existing workflow will be followed.
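One way to keep the checkpoint inside the existing workflow is to wrap the team's current deployment function rather than routing it through a separate governance system. The sketch below reuses the hypothetical `deployment_gate` from the earlier example and is illustrative only.

```python
import functools


def governed(gate, inventory):
    """Wrap an existing deployment function so the governance gate runs first."""
    def decorator(deploy_fn):
        @functools.wraps(deploy_fn)
        def wrapper(model_id, *args, **kwargs):
            gate(model_id, inventory)  # raises and blocks if evidence is missing
            return deploy_fn(model_id, *args, **kwargs)
        return wrapper
    return decorator
```

Applied as `@governed(deployment_gate, inventory)` on the team's existing deploy function, the control runs on every deployment with no parallel process for developers to remember.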
Define the minimum viable framework and iterate. A perfect framework that takes two years to implement is less useful than an adequate framework that is operational in three months. Start with the controls that address the highest risks: model inventory, risk assessment, validation for high-risk models, and ongoing monitoring. Add refinements as the organisation's maturity increases.
Test the framework with a pilot before organisation-wide rollout. Select two to three AI projects at different risk levels and apply the full controls framework. The pilot will reveal where controls are unclear, where evidence requirements are unrealistic, and where the framework creates bottlenecks that need resolution. Revise based on the pilot findings, then roll out.