AI Use Case Inventory
Last reviewed April 2026
The regulator asks a simple question: how many AI systems does your firm operate? The CTO says twelve. The CDO says twenty-three. The business units, when surveyed, identify forty-seven, including vendor models, spreadsheet-based scoring tools, and a chatbot that procurement purchased independently. The real number is unknown, and that is the problem. An AI use case inventory is the organisational register that closes this gap, and without it, AI governance is fiction.
What is an AI use case inventory?
An AI use case inventory is a comprehensive register of all AI and machine learning systems deployed across an organisation, including those provided by third-party vendors. For each system, the inventory records its purpose, the data it uses, the decisions it informs, its risk classification, its owner, its validation status, and its monitoring arrangements. It is the single source of truth for the AI estate and the foundation on which governance, risk management, and regulatory reporting are built.
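As a concrete illustration, the sketch below represents one inventory entry as a structured record. The field names, tiers, and example values are illustrative assumptions, not a prescribed schema; real inventory platforms will extend this considerably.

```python
# A sketch of one inventory record. Field names, tiers, and defaults are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


@dataclass
class InventoryEntry:
    system_id: str                 # unique identifier in the register
    name: str
    purpose: str                   # what the system is for
    data_sources: list[str]        # data the system consumes
    decisions_informed: list[str]  # business decisions the outputs feed
    risk_tier: RiskTier
    owner: str                     # accountable individual or function
    vendor: str | None = None      # populated for third-party models
    last_validated: date | None = None
    monitoring_in_place: bool = False


entry = InventoryEntry(
    system_id="AI-0042",
    name="Retail credit scoring engine",
    purpose="Scores loan applications for affordability",
    data_sources=["bureau data", "transaction history"],
    decisions_informed=["credit approval", "risk-based pricing"],
    risk_tier=RiskTier.HIGH,
    owner="Head of Retail Credit Risk",
    vendor="Example Analytics Ltd",
)
```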
The scope must be broad enough to capture systems that do not self-identify as AI. A regression model embedded in a spreadsheet used for pricing decisions is a model. A vendor's credit scoring engine integrated via API is a model. A rule-based system that was originally deterministic but has been augmented with machine learning is a model. The PRA's definition of a model is deliberately wide: any quantitative method that processes input data into quantitative estimates. The inventory must reflect this breadth.
Most institutions that conduct their first comprehensive inventory discover significantly more AI systems than expected. The gap arises from departmental initiatives that bypassed central oversight, vendor-supplied models embedded in purchased platforms, and legacy systems that have been incrementally enhanced with ML components without formal re-classification. Closing this visibility gap is the first step toward meaningful governance.
The landscape
The EU AI Act requires high-risk AI systems to be registered in an EU database: the obligation falls primarily on providers, with certain public-sector deployers also required to register their use. Either way, it creates a legal imperative for inventory management: firms must know which of their AI systems are high-risk and maintain records that support registration. For UK firms with EU operations, this applies to systems placed on the EU market or deployed there.
The PRA's SS1/23 requires firms to maintain a comprehensive inventory of all models in use, tiered by materiality. The inventory must be actively maintained, not compiled as a one-time exercise, and it must support the scheduling of validation, monitoring, and governance activities. Gaps in the inventory are a supervisory finding that attracts regulatory attention.
The FCA has signalled interest in understanding the extent of AI deployment across the firms it supervises. While the FCA has not yet mandated a specific inventory format, supervisory conversations increasingly include questions about the firm's AI systems, their purposes, and their governance. An institution that cannot produce a comprehensive inventory during a supervisory visit is demonstrating a governance gap.
How AI changes this
Automated discovery tools scan the organisation's IT estate to identify systems that use machine learning libraries, call ML inference endpoints, or contain model artefacts. This supplements the manual survey approach, which relies on business units to self-report, with technical detection that catches systems the business may not recognise as AI. The combination of manual survey and automated discovery produces a more complete inventory than either approach alone.
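A simplified sketch of the static-scan side of discovery: walk a source tree and flag files that import well-known ML libraries. The library list is an illustrative starting point; production discovery tools also inspect dependency manifests, model artefacts, and calls to inference endpoints.

```python
# Simplified static scan: flag source files that import common ML libraries.
# The library list is an illustrative assumption, not an exhaustive catalogue.
import re
from pathlib import Path

ML_LIBRARIES = {"sklearn", "tensorflow", "torch", "xgboost", "lightgbm"}
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)


def scan_for_ml_usage(root: Path) -> dict[Path, set[str]]:
    """Return source files that import known ML libraries."""
    findings: dict[Path, set[str]] = {}
    for source in root.rglob("*.py"):
        try:
            text = source.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than abort the scan
        hits = {m for m in IMPORT_RE.findall(text) if m in ML_LIBRARIES}
        if hits:
            findings[source] = hits
    return findings
```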
Metadata management platforms maintain the inventory as a living register. Each entry includes structured metadata: model type, data sources, deployment date, last validation date, risk tier, owner, and monitoring status. Automated alerts flag entries that are overdue for validation, that have missing metadata, or that have changed without governance approval. The platform turns the inventory from a static document into an active governance tool.
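Reusing the InventoryEntry sketch above, a minimal version of the overdue-validation alert might look like the following. The review cycle lengths are illustrative assumptions, not regulatory requirements.

```python
# The alerting rule from the paragraph above, reusing the InventoryEntry
# and RiskTier sketch. Cycle lengths are illustrative assumptions.
from datetime import date, timedelta

REVIEW_CYCLE = {
    RiskTier.HIGH: timedelta(days=365),    # e.g. annual revalidation
    RiskTier.MEDIUM: timedelta(days=730),
    RiskTier.LOW: timedelta(days=1095),
}


def overdue_for_validation(entry: InventoryEntry, today: date) -> bool:
    if entry.last_validated is None:
        return True  # never validated: always flag
    return today - entry.last_validated > REVIEW_CYCLE[entry.risk_tier]


def validation_alerts(register: list[InventoryEntry]) -> list[InventoryEntry]:
    """Entries that should surface on the governance dashboard."""
    return [e for e in register if overdue_for_validation(e, date.today())]
```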
Integration with the risk assessment workflow ensures that every new AI system is inventoried and risk-assessed before deployment. A development pipeline that requires inventory registration as a gate before production deployment prevents the accumulation of ungoverned systems. This "governance by design" approach is more effective than periodic inventory sweeps that discover ungoverned systems after the fact.
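A minimal sketch of such a gate, again reusing the record type above. REGISTER stands in for the inventory platform's lookup API, which is assumed here; a real pipeline would call the platform over its actual interface.

```python
# Sketch of an inventory-registration gate for a deployment pipeline.
# REGISTER is a stand-in for the inventory platform's lookup API.
import sys

REGISTER: dict[str, InventoryEntry] = {}  # populated from the inventory


def deployment_gate(system_id: str) -> None:
    """Fail the pipeline for unregistered or ungoverned systems."""
    entry = REGISTER.get(system_id)
    if entry is None:
        sys.exit(f"BLOCKED: {system_id} is not registered in the AI inventory")
    if not entry.monitoring_in_place:
        sys.exit(f"BLOCKED: {system_id} has no monitoring arrangements")
    print(f"OK: {system_id} cleared for production deployment")
```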
Reporting from the inventory supports board-level oversight and regulatory engagement. Dashboards showing the number of AI systems by risk tier, validation status, and business function give senior management visibility into the AI estate. Trend data shows whether the estate is growing, whether governance coverage is keeping pace, and where resource constraints are creating backlogs.
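Once the register is structured, the underlying aggregations are straightforward. A sketch, assuming the same record type as above:

```python
# Board-level counts computed straight from the register, reusing the
# InventoryEntry sketch above.
from collections import Counter


def tier_breakdown(register: list[InventoryEntry]) -> Counter:
    """Number of systems per risk tier."""
    return Counter(e.risk_tier.value for e in register)


def validation_coverage(register: list[InventoryEntry]) -> float:
    """Share of systems with a recorded validation date."""
    if not register:
        return 0.0
    return sum(e.last_validated is not None for e in register) / len(register)
```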
What to know before you start
Define "AI system" broadly. If your definition is too narrow, systems will fall outside scope and create governance gaps. If it is too broad, the inventory becomes unmanageable. A practical approach is to include any system that uses statistical or machine learning methods to produce outputs that inform business decisions, plus any system that uses generative AI to produce content or recommendations. Refine the boundary based on your initial survey findings.
The initial inventory exercise is resource-intensive. Surveying every business unit, reviewing vendor contracts for embedded models, and scanning the IT estate takes weeks of effort across multiple functions. Budget for this upfront investment and treat it as a foundational governance exercise, not a one-off project. The inventory must be maintained continuously once established.
Vendor models require special attention. Many firms use vendor-supplied models for credit scoring, fraud detection, and AML screening. These models are in scope for the inventory and for validation. Negotiate access to model documentation and performance data in vendor contracts. Without this access, the firm cannot meet its regulatory obligation to understand and validate the models it uses.
Start with a survey of the three to five business functions most likely to use AI: risk, compliance, operations, customer analytics, and pricing. This targeted approach builds the inventory methodology and governance infrastructure on a manageable scope before extending to the full organisation. The early inventory will also identify shadow AI deployments that need immediate governance attention.