Algorithmic Trading

Last reviewed April 2026

Over 70 per cent of equity trading volume in developed markets is now executed by algorithms. The human trader who telephones a broker to place an order is a historical artefact in most asset classes. Algorithmic trading has moved from competitive advantage to table stakes, and the frontier has shifted from execution speed to execution intelligence: algorithms that adapt to market conditions, optimise across multiple objectives, and manage their own risk in real time.

What is algorithmic trading?

Algorithmic trading is the use of computer programmes to execute trading decisions based on predefined rules, statistical models, or machine learning. It shares a common reliance on robust data infrastructure with every other AI application in financial services. The scope ranges from execution algorithms that break a large order into smaller pieces to minimise market impact, through systematic strategies that generate buy and sell signals from quantitative models, to high-frequency trading systems that exploit microsecond price discrepancies across venues.

In financial services, the operational context matters as much as the technology. A bank's trading desk uses execution algorithms to achieve best execution for client orders, a regulatory obligation under MiFID II. An asset manager uses systematic strategies to implement investment decisions at scale. A market maker uses algorithms to quote continuous prices across thousands of instruments, managing inventory risk in real time. Each use case has different performance requirements, risk profiles, and regulatory obligations.

The infrastructure requirements are significant. Algorithmic trading demands low-latency market data feeds, fast order routing, reliable execution venues, real-time position management, and comprehensive risk controls. A failure in any component can result in erroneous trades that move markets and crystallise losses in seconds. The Knight Capital incident, where a software deployment error caused 440 million dollars in losses in 45 minutes, remains the canonical cautionary tale.

The landscape

MiFID II's algorithmic trading provisions, Articles 17 and 48, require firms to have effective systems and risk controls, to test algorithms before deployment and after material changes, and to provide annual self-assessments to regulators. The FCA has emphasised that these obligations apply to the full lifecycle: development, testing, deployment, monitoring, and decommissioning. Firms that cannot demonstrate governance across the lifecycle face enforcement risk.

The extension of algorithmic trading into new asset classes is accelerating. Fixed income, foreign exchange, and commodities markets that were historically voice-traded are increasingly electronic. This creates opportunities but also complexity: bond markets are fragmented, with thousands of instruments that trade infrequently, making the statistical models used in equities less directly applicable. The data infrastructure for fixed income algo trading is less mature than for equities, and the regulatory framework is still evolving.

Machine learning models in trading face a unique validation challenge: the market is adversarial. A model that identifies a profitable pattern will, once deployed at scale, change the market dynamics that created the pattern. This reflexivity means that backtesting, the standard model validation approach, overstates future performance. The model monitoring requirements for trading algorithms are more demanding than for most AI applications because degradation translates directly into financial loss. Firms that deploy ML trading models without accounting for market impact and strategy decay will see performance degrade faster than their backtests predicted.

How AI changes this

Adaptive execution algorithms adjust their behaviour based on real-time market conditions. A traditional VWAP algorithm slices an order according to a historical volume profile. An AI-enhanced version detects that today's volume pattern differs from the historical average (perhaps due to an economic data release) and adjusts its schedule accordingly. The improvement is marginal per trade but compounds across thousands of daily executions.
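The rescheduling idea can be sketched in a few lines. This is an illustrative toy, not a production scheduler: the function name, the participation cap, and the use of a single volume run-rate multiplier are all assumptions made for the example.

```python
# Illustrative sketch: sizing the remaining child orders of a VWAP parent
# order, reacting to whether today's volume is running hot or light versus
# the historical profile. All parameters are hypothetical.

def adaptive_vwap_slices(remaining_qty, hist_profile, adv, run_rate,
                         next_bucket, max_part=0.1):
    """Return (child_order_sizes, unfilled_qty) for the remaining buckets.

    remaining_qty -- shares still to execute
    hist_profile  -- historical fraction of daily volume per time bucket
    adv           -- assumed average daily volume
    run_rate      -- today's volume so far / profile-implied volume so far
    next_bucket   -- index of the next bucket to trade
    max_part      -- cap on our share of forecast bucket volume
    """
    slices = []
    left = remaining_qty
    buckets = hist_profile[next_bucket:]
    for i, frac in enumerate(buckets):
        forecast_vol = frac * adv * run_rate        # reactive volume forecast
        target = left * (frac / sum(buckets[i:]))   # pro-rata over what remains
        qty = min(target, max_part * forecast_vol)  # participation cap binds
        slices.append(qty)
        left -= qty
    return slices, left  # left > 0 means the cap forced under-completion
```

When `run_rate` falls, each forecast bucket volume shrinks, the participation cap bites, and the schedule automatically pushes quantity toward later buckets rather than demanding liquidity the market is not providing.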

Reinforcement learning is being applied to execution optimisation, where the algorithm learns optimal strategies through simulated interaction with market environments. The approach is promising but not yet mainstream in production. The challenge is sim-to-real transfer: the simulated market environment does not perfectly replicate real market dynamics, and strategies that perform well in simulation may fail in live markets. Firms at the frontier use RL as one input to execution decisions, not the sole decision-maker.
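The core of the learning problem can be seen in a stylised cost model. The quadratic temporary-impact assumption below is a textbook simplification, not a claim about real markets; the gap between it and real market dynamics is precisely the sim-to-real problem described above.

```python
# Toy execution cost model of the kind an RL agent's reward would be built
# on (reward = -cost). Linear temporary impact per child order is an
# assumption for illustration only.

def episode_cost(schedule, impact=0.01):
    """Cost of executing a parent order via the given child-order schedule.

    Each child order of size q executes impact * q worse than mid, so its
    cost is q * (impact * q). Returns total cost in price units.
    """
    return sum(impact * q * q for q in schedule)

# Executing 1,000 shares in one shot vs four equal slices:
one_shot = episode_cost([1000])      # ~10,000 under this toy model
sliced = episode_cost([250] * 4)     # ~2,500: slicing reduces impact cost
```

Under this model the agent trivially learns to slice; the research challenge is that real impact is stochastic, state-dependent, and reacts to the agent itself.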

Natural language processing for news and social media sentiment is production-ready for event-driven trading. LLMs parse earnings announcements, central bank communications, and geopolitical developments, extracting structured signals that feed into trading models. The speed advantage is clear: an algorithm that can parse a central bank statement and extract the policy implication in milliseconds acts before a human has finished reading the first paragraph. The risk is misinterpretation, which requires careful model validation and human oversight for high-impact events.
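The shape of the structured signal can be illustrated with a deliberately crude stand-in. The lexicon and scoring below are hypothetical: a production system would use a validated LLM-based parser, not keyword counts, but the output contract (text in, signed signal out) is the same.

```python
# Stylised sketch of turning a central bank statement into a structured
# trading signal. HAWKISH/DOVISH word lists are invented for illustration.

HAWKISH = {"inflation", "tighten", "restrictive", "raise"}
DOVISH = {"accommodative", "cut", "easing", "slack"}

def policy_signal(statement: str) -> int:
    """Return +1 (hawkish), -1 (dovish) or 0 (neutral) from keyword counts."""
    words = [w.strip(".,") for w in statement.lower().split()]
    score = sum(w in HAWKISH for w in words) - sum(w in DOVISH for w in words)
    return (score > 0) - (score < 0)

policy_signal("The committee will tighten policy as inflation persists")  # +1
```

The misinterpretation risk noted above lives entirely inside this function: a wrong sign on a high-impact event is worse than no signal, which is why human oversight sits downstream.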

Portfolio construction and risk management benefit from AI models that capture non-linear relationships between assets. Traditional mean-variance optimisation assumes linear correlations that break down during market stress. ML models that learn from historical stress episodes can construct portfolios that are more robust to tail events, though the limited number of true tail events in the training data constrains this approach.
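The fragility of correlation assumptions shows up even in the closed-form minimum-variance case. The volatilities and correlations below are illustrative numbers, not estimates.

```python
import numpy as np

# Minimum-variance weights for two assets under calm vs stressed
# correlations. Inputs are illustrative, not calibrated.

def min_var_weights(vols, corr):
    """Closed-form fully-invested minimum-variance weights."""
    cov = np.outer(vols, vols) * corr   # covariance from vols and correlation
    inv = np.linalg.inv(cov)
    ones = np.ones(len(vols))
    w = inv @ ones
    return w / w.sum()                  # normalise to sum to 1

vols = np.array([0.15, 0.20])
calm = np.array([[1.0, 0.2], [0.2, 1.0]])
stress = np.array([[1.0, 0.9], [0.9, 1.0]])

w_calm = min_var_weights(vols, calm)      # both weights positive
w_stress = min_var_weights(vols, stress)  # optimiser shorts the riskier asset
```

A portfolio optimised on the calm correlation holds both assets long; re-run with the stress correlation and the "optimal" answer flips to a short position. A portfolio built on one regime is badly positioned for the other, which is the breakdown the paragraph above describes.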

What to know before you start

Governance first, technology second. The regulatory framework for algorithmic trading is well-established and demanding. Before deploying any AI-enhanced trading system, ensure your algorithmic trading governance framework covers the full lifecycle: model development, validation, testing, deployment, monitoring, and retirement. Sound data governance underpins every stage, from training data quality through to production monitoring. The FCA will ask to see this framework, and "we use machine learning" is not a governance answer.

Backtesting is necessary but not sufficient. Any quantitative trading strategy can be overfit to historical data. Require out-of-sample testing, walk-forward analysis, and stress testing against scenarios not present in the training data. For ML models, require explainability of the signals the model has learned. A model that has discovered a genuine market inefficiency should be able to articulate what that inefficiency is. If it cannot, it may have learned noise.
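Walk-forward analysis can be reduced to a simple splitting discipline. The window lengths below are illustrative; the invariant that matters is that no model is ever scored on data it has seen.

```python
# Minimal walk-forward split: fit on a rolling in-sample window, score on
# the next out-of-sample window, then roll forward by one test window.

def walk_forward_splits(n_obs, train_len, test_len):
    """Yield (train_indices, test_indices) pairs, oldest first."""
    start = 0
    while start + train_len + test_len <= n_obs:
        train = range(start, start + train_len)
        test = range(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len  # roll forward; train and test never overlap

splits = list(walk_forward_splits(n_obs=1000, train_len=500, test_len=100))
```

Each out-of-sample window is evaluated exactly once, by a model fitted strictly on earlier data, which is the property a single in-sample backtest cannot give you.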

Kill switches and risk limits are mandatory, not optional. Every algorithmic trading system must have hard limits on position size, loss thresholds, and order rates that cannot be overridden by the algorithm. These must be tested regularly, including in conditions where the algorithm is behaving unexpectedly. The cost of a malfunctioning algorithm without kill switches is existential for the firm. Your predictive analytics and risk models should inform these limits, not replace them.
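The key design property of a kill switch is that it latches: once tripped, the algorithm cannot talk itself back on. The threshold values below are placeholders; in practice they come from the risk framework, not the strategy.

```python
# Sketch of pre-trade hard limits the algorithm cannot override.
# Limit values are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class HardLimits:
    max_position: int = 100_000       # shares, absolute
    max_loss: float = 250_000.0       # currency units, realised
    max_orders_per_sec: int = 50

class KillSwitch:
    def __init__(self, limits: HardLimits):
        self.limits = limits
        self.halted = False

    def check(self, position, realised_loss, order_rate):
        """Return True if trading may proceed. Halts latch permanently:
        only a human with separate authority should be able to reset."""
        if (abs(position) > self.limits.max_position
                or realised_loss > self.limits.max_loss
                or order_rate > self.limits.max_orders_per_sec):
            self.halted = True
        return not self.halted
```

Because `halted` never resets inside the class, a breach stops all subsequent orders even if the inputs return to normal levels on the next check.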

Start with execution optimisation for existing strategies rather than AI-generated alpha signals. The regulatory bar is lower, the risk is more contained, and the performance improvement is measurable against a clear benchmark. Proving that AI can reduce execution costs by five basis points is achievable, defensible, and financially meaningful. Building an AI that generates profitable trading signals is a different problem with a different risk profile.
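Measuring those basis points against a clear benchmark is straightforward. The sketch below uses implementation shortfall against the arrival price; the prices are illustrative.

```python
# Implementation shortfall in basis points versus the arrival price:
# the standard yardstick for execution-cost claims.

def shortfall_bps(side, arrival_price, fills):
    """Signed execution cost in bps (positive = worse than arrival).

    side    -- +1 for a buy, -1 for a sell
    fills   -- list of (price, quantity) child-order executions
    """
    qty = sum(q for _, q in fills)
    avg = sum(p * q for p, q in fills) / qty  # volume-weighted fill price
    return side * (avg - arrival_price) / arrival_price * 10_000

# Buying 1,000 shares at an average of 100.05 against a 100.00 arrival
# costs roughly 5 bps:
cost = shortfall_bps(+1, 100.0, [(100.05, 1000)])
```

An algorithm change that moves this number from, say, eight to three basis points is exactly the kind of contained, measurable improvement the paragraph above recommends starting with.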
