Cyber Threat Detection
Last reviewed April 2026
The average time from initial compromise to detection in financial services is still measured in weeks, not minutes. Attackers use this dwell time to map the network, escalate privileges, and position for maximum impact. Cyber threat detection is the discipline of shrinking that window, and AI is the capability that makes real-time detection feasible against adversaries who design their attacks specifically to avoid triggering rules.
What is cyber threat detection?
Cyber threat detection is the identification of malicious activity within an organisation's technology environment. It relies on the same predictive analytics principles that underpin financial crime detection, applied to a different adversary. It encompasses the tools, techniques, and processes used to discover threats that have bypassed preventive controls (firewalls, access controls, endpoint protection) and are active within the network. Detection operates at multiple layers: network (unusual traffic patterns), endpoint (suspicious process behaviour), identity (anomalous authentication), and application (unexpected data access).
In financial services, detection is a regulatory obligation, not just a security practice. The PRA's operational resilience framework and the FCA's cyber resilience expectations both require firms to demonstrate the ability to detect threats and respond within defined timeframes. The PRA's supervisory statement on operational resilience (SS1/21) requires firms to identify and monitor threats to their important business services. Detection capabilities are tested during supervisory assessments and, under DORA, through formal threat-led penetration testing.
The detection challenge in financial services is compounded by architectural complexity. A typical bank operates thousands of applications, hybrid cloud infrastructure, legacy mainframes, and third-party integrations. Each component generates telemetry in different formats, at different volumes, and with different retention characteristics. Achieving visibility across this estate is an engineering problem before it is an AI problem.
The landscape
The MITRE ATT&CK framework has become the standard taxonomy for describing adversary techniques, and financial services firms increasingly map their detection capabilities against it. The framework catalogues over 200 techniques across 14 tactics, from initial access through exfiltration and impact. Mapping detection rules and AI models against ATT&CK reveals coverage gaps: techniques where the firm has no detection capability at all. Most firms discover that their rule-based detection covers fewer than 40 per cent of relevant techniques.
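The coverage-gap analysis described above can be sketched as a simple set comparison. The technique IDs below are real ATT&CK identifiers, but the rule-to-technique mapping and the "relevant" set are hypothetical and deliberately abbreviated; a real exercise would cover the full Enterprise matrix.

```python
# Sketch: measure detection coverage against a (hypothetical, abbreviated)
# set of MITRE ATT&CK techniques relevant to the firm.
relevant_techniques = {
    "T1078",  # Valid Accounts
    "T1059",  # Command and Scripting Interpreter
    "T1021",  # Remote Services
    "T1041",  # Exfiltration Over C2 Channel
    "T1486",  # Data Encrypted for Impact
}

# Which techniques each deployed detection rule claims to cover
# (rule names are illustrative)
rule_coverage = {
    "rule_suspicious_login": {"T1078"},
    "rule_powershell_exec": {"T1059"},
}

covered = set().union(*rule_coverage.values())
gaps = relevant_techniques - covered
coverage_pct = 100 * len(covered & relevant_techniques) / len(relevant_techniques)

print(f"Coverage: {coverage_pct:.0f}%")
print(f"Uncovered techniques: {sorted(gaps)}")
```

In this toy inventory the rules cover 40 per cent of the relevant techniques, leaving three tactics with no detection at all, which mirrors the kind of gap most firms find when they first run the mapping.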
The convergence of IT and OT (operational technology) security is an emerging concern for financial services firms that operate data centres, trading floors with specialised hardware, and physical branch infrastructure. Attacks that cross from IT to OT or vice versa require detection capabilities that span both domains. Most cybersecurity AI platforms are designed for IT environments and have limited visibility into OT protocols and systems.
Threat actor sophistication continues to increase. State-sponsored groups targeting financial services infrastructure use custom malware, living-off-the-land techniques (using legitimate system tools for malicious purposes), and supply chain compromises that are invisible to signature-based detection. The only viable detection approach for these adversaries is behavioural: identifying the anomaly in how systems and users behave, rather than matching a known malicious signature.
How AI changes this
Anomaly detection across multiple data streams is the core AI capability for threat detection. Rather than writing rules for specific attack patterns, ML models learn what normal looks like for each user, device, application, and network segment, and then flag deviations. This approach detects novel attacks that no rule anticipated, which is precisely the category of attacks that causes the most damage in financial services.
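A minimal sketch of the per-entity baseline idea, using hypothetical data: the same outbound volume is routine for one user and a strong outlier for another. A production system would use richer features and a trained model rather than a z-score, but the principle of learning each entity's own normal is the same.

```python
# Per-entity baseline sketch (all data hypothetical): learn what "normal"
# outbound volume looks like per user, then flag deviations from it.
from statistics import mean, stdev

baseline_mb = {  # daily outbound MB observed during the learning window
    "alice": [12, 15, 11, 14, 13, 12, 16],
    "bob":   [200, 210, 190, 205, 195, 208, 199],
}

def is_anomalous(user: str, today_mb: float, threshold: float = 3.0) -> bool:
    history = baseline_mb[user]
    mu, sigma = mean(history), stdev(history)
    return abs(today_mb - mu) / sigma > threshold

# 180 MB sits inside bob's baseline (centred near 200 MB) but is a
# massive deviation for alice (centred near 13 MB)
print(is_anomalous("bob", 180))
print(is_anomalous("alice", 180))
```

No single rule of the form "alert above N megabytes" could separate these two cases, which is why the behavioural approach catches what static thresholds miss.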
Correlation across kill chain stages connects individual anomalies into a coherent attack narrative. An anomalous login (initial access), followed by unusual administrative tool usage (discovery and lateral movement), followed by atypical data access patterns (collection), may each individually score below the alerting threshold. Correlated across the kill chain, they constitute strong evidence of a compromise. AI models that reason across stages detect attacks that stage-specific rules miss.
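The arithmetic of that example can be made concrete. In the sketch below (scores and thresholds are hypothetical), no stage-level signal crosses the per-alert threshold on its own, but the accumulated evidence across stages for the same host does.

```python
# Sketch: each stage-level detector emits a score in [0, 1]; none fires
# alone, but the chain of related low-confidence signals on one host does.
ALERT_THRESHOLD = 0.8   # per-signal alerting threshold (illustrative)
CHAIN_THRESHOLD = 1.8   # combined-evidence threshold (illustrative)

events = [  # (kill-chain stage, score) for the same host, in time order
    ("initial_access", 0.6),    # anomalous login
    ("lateral_movement", 0.7),  # unusual admin tool usage
    ("collection", 0.65),       # atypical data access
]

fires_alone = [s for _, s in events if s > ALERT_THRESHOLD]
chain_score = sum(score for _, score in events)

print(f"Signals above per-alert threshold: {len(fires_alone)}")
print(f"Chain score: {chain_score:.2f} -> alert: {chain_score > CHAIN_THRESHOLD}")
```

Simple summation is only one way to combine evidence; real platforms weight scores by stage, recency, and asset criticality, but the structural point is the same: the correlation layer sees an attack that the stage-specific detectors each miss.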
Automated threat hunting generates hypotheses about potential compromises based on threat intelligence and internal telemetry. Rather than waiting for an alert, the system proactively searches for indicators: unusual DNS resolution patterns, beaconing behaviour, or the presence of tools commonly used by specific threat groups. This shifts detection from reactive (waiting for an alert) to proactive (looking for evidence of techniques that the adversary is known to use).
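One of the hunt indicators named above, beaconing, can be illustrated with a small statistic: malware calling home at a near-fixed interval produces unusually regular gaps between outbound connections, whereas human browsing is bursty. The timestamps below are hypothetical.

```python
# Hunting sketch: low variance in inter-arrival times of outbound
# connections to a single destination is a classic beaconing indicator.
from statistics import mean, pstdev

def beacon_score(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-arrival times; near 0 = regular."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

human = [0, 40, 45, 300, 310, 900]    # bursty, irregular browsing
beacon = [0, 60, 121, 180, 241, 300]  # ~60-second heartbeat with jitter

print(f"human  CV: {beacon_score(human):.2f}")
print(f"beacon CV: {beacon_score(beacon):.2f}")  # low CV -> flag for review
```

A hunt query built on this idea would scan proxy or NetFlow logs per source-destination pair and surface the lowest-variance candidates for an analyst, rather than waiting for a signature match.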
Deception technology enhanced by AI creates realistic honeypots, fake credentials, and decoy data that attract attackers and generate high-fidelity alerts. An alert from a decoy system has near-zero false positive rate because no legitimate user has reason to access it. AI makes the deception more convincing and the analysis of attacker behaviour within the deception environment more automated, producing intelligence about the adversary's techniques and objectives. The same secure AI principles that protect the detection platform itself must be applied to ensure adversaries cannot tamper with the monitoring infrastructure.
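The near-zero false positive property follows directly from how decoys are constructed, as this sketch shows. All account names and fields are hypothetical: the point is that any use of a planted credential is malicious by definition, so no statistical scoring is needed.

```python
# Honeytoken trigger sketch: a decoy credential is planted where only an
# intruder browsing the host would find it. Any authentication attempt
# with it is, by construction, malicious.
from typing import Optional

DECOY_ACCOUNTS = {"svc_backup_legacy", "admin_dr_test"}  # illustrative

def check_auth_event(username: str, source_ip: str) -> Optional[dict]:
    if username in DECOY_ACCOUNTS:
        return {
            "severity": "critical",
            "reason": f"decoy credential '{username}' used",
            "source_ip": source_ip,
            "false_positive_likelihood": "near zero",
        }
    return None  # not a decoy; leave to the normal detection pipeline

print(check_auth_event("alice", "10.0.0.5"))             # legitimate user
print(check_auth_event("svc_backup_legacy", "10.0.9.3"))  # intruder
```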
What to know before you start
Data quality and coverage determine detection quality. An AI model cannot detect threats in data it does not receive. Audit your logging coverage: are all critical systems sending telemetry to your detection platform? Are the logs complete and timely? A gap in logging is a gap in detection, and attackers will find and exploit it. The data governance discipline of ensuring log completeness and integrity is a prerequisite for AI-powered detection.
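The logging audit described above reduces to two checks: which critical assets never appear in the telemetry at all, and which have gone silent. A minimal sketch, with a hypothetical asset register and last-seen timestamps:

```python
# Coverage-audit sketch (inventory and timestamps are hypothetical):
# compare the asset register against sources actually seen by the
# detection platform, and flag sources that have gone quiet.
from datetime import datetime, timedelta

critical_assets = {"core-banking-db", "payments-gw", "ad-dc-01", "swift-bridge"}

last_log_seen = {  # most recent event per source, per the SIEM
    "core-banking-db": datetime(2026, 4, 1, 9, 0),
    "ad-dc-01": datetime(2026, 3, 20, 14, 0),  # stale
}

now = datetime(2026, 4, 1, 10, 0)
max_silence = timedelta(hours=24)

never_logged = critical_assets - last_log_seen.keys()
gone_quiet = {a for a, t in last_log_seen.items() if now - t > max_silence}

print(f"No telemetry at all: {sorted(never_logged)}")
print(f"Silent > 24h:        {sorted(gone_quiet)}")
```

The "gone quiet" check matters as much as the "never logged" one: an attacker who disables a log forwarder looks exactly like a source that silently stopped reporting.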
Tune for your environment, not the vendor's demo. AI detection models trained on generic enterprise data will generate excessive false positives in a financial services environment, with its distinctive traffic patterns, application behaviours, and user workflows. Plan for a tuning period of three to six months during which the security team validates alerts, provides feedback, and adjusts thresholds. The model improves with this feedback; deploying without it creates alert fatigue that undermines the entire investment.
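The tuning loop is easiest to run when analyst verdicts are captured structurally. A sketch with hypothetical feedback data: tracking precision per detection over the tuning window shows exactly which thresholds need raising.

```python
# Tuning sketch: analysts label each alert true or false positive;
# low-precision detections are candidates for threshold adjustment.
from collections import defaultdict

feedback = [  # (detection name, analyst verdict) - illustrative data
    ("impossible_travel", True), ("impossible_travel", True),
    ("impossible_travel", False),
    ("rare_process", False), ("rare_process", False),
    ("rare_process", False), ("rare_process", True),
]

stats = defaultdict(lambda: [0, 0])  # name -> [true positives, total]
for name, is_true_positive in feedback:
    stats[name][0] += is_true_positive
    stats[name][1] += 1

for name, (tp, total) in sorted(stats.items()):
    precision = tp / total
    flag = "  <- raise threshold" if precision < 0.5 else ""
    print(f"{name}: precision {precision:.2f}{flag}")
```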
Detection without response is observation, not security. Every detection capability must have a corresponding response playbook: what happens when this alert fires? Who investigates? What containment actions are pre-authorised? How is the response documented? AI observability tools can monitor the detection system's own health, ensuring it continues to function correctly under attack. Investing in detection AI without investing in response processes and automation means you will detect threats faster but respond to them no faster than before.
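The "every detection has a playbook" rule can be enforced mechanically. In the sketch below (all alert names, owners, and actions are hypothetical), a detection with no playbook entry fails loudly rather than firing into a void.

```python
# Sketch: every detection maps to a playbook with an owner and
# pre-authorised containment actions; a missing mapping is an error.
PLAYBOOKS = {
    "decoy_credential_used": {
        "owner": "SOC tier 2",
        "contain": ["isolate_host", "disable_source_account"],
        "pre_authorised": True,
    },
    "beaconing_detected": {
        "owner": "threat hunt team",
        "contain": ["block_destination"],
        "pre_authorised": False,  # needs duty-manager sign-off
    },
}

def dispatch(alert_name: str) -> dict:
    playbook = PLAYBOOKS.get(alert_name)
    if playbook is None:
        raise LookupError(f"No response playbook for '{alert_name}' "
                          "- detection without response is observation")
    return playbook

print(dispatch("decoy_credential_used")["owner"])
```

Running this check at detection deploy time, not at incident time, is the point: the gap surfaces before an attacker finds it.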
Start with detection for your most critical business services. Map the technology components that support each important business service (as defined in your operational resilience framework) and ensure detection coverage for those components first. Expanding detection across the full estate is a multi-year programme. Covering the systems that matter most is achievable in months and directly supports your regulatory obligations.