Foundations

Andrey Markov

1856 – 1922

Markov chains

[Figure: diagram of a four-state Markov chain over states A, B, C and D, with transition probabilities labelling the arrows]

Chains of dependence

In 1906 Markov introduced a new class of stochastic processes: sequences where the probability of each future state depends only on the current state, not on the entire history. This 'memoryless' property — the Markov property — made it possible to analyse complex dependent systems with tractable mathematics. His first application was to the alternation of vowels and consonants in Pushkin's Eugene Onegin.

Transition matrices

Markov chains are fully characterised by their transition matrix: a table of probabilities governing movement between states. Given the current state, the matrix tells you the probability of every possible next state. This framework enables computation of steady-state distributions, expected first passage times, and the long-run behaviour of dynamic systems — all from a single matrix.
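The steady-state computation above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the two-state transition matrix below is invented for the example, and the stationary distribution is approximated by repeatedly pushing a distribution through the matrix (power iteration).

```python
def steady_state(P, iters=1000):
    """Approximate the stationary distribution pi satisfying pi = pi P."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        # One step of the chain: new_pi[j] = sum_i pi[i] * P[i][j]
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Illustrative two-state chain; each row sums to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = steady_state(P)
# pi converges to [5/6, 1/6]: the chain spends five sixths
# of its time in state 0 in the long run.
```

The same matrix also yields expected first-passage times and n-step transition probabilities (the latter are simply the entries of the matrix raised to the nth power).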

Hidden Markov models

The extension to hidden Markov models — where the states are not directly observable but must be inferred from noisy data — is one of the most powerful tools in signal processing. In finance, hidden Markov models detect regime changes: shifts between bull and bear markets, between high and low volatility environments, between risk-on and risk-off states. The states are hidden; the data reveals them.
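The inference step can be sketched with the standard HMM forward algorithm, which computes the filtered probability of each hidden state given the observations so far. The two regimes ("calm" and "volatile") and all probabilities below are illustrative assumptions, not estimates from real market data.

```python
def forward(obs, trans, emit, init):
    """Return filtered probabilities P(state_t | obs_1..t) at each step."""
    states = range(len(init))
    # Initialise with the prior weighted by the first observation's likelihood.
    alpha = [init[s] * emit[s][obs[0]] for s in states]
    norm = sum(alpha)
    alpha = [a / norm for a in alpha]
    filtered = [alpha]
    for o in obs[1:]:
        # Propagate through the transition matrix, then weight by emission.
        alpha = [emit[s][o] * sum(alpha[r] * trans[r][s] for r in states)
                 for s in states]
        norm = sum(alpha)
        alpha = [a / norm for a in alpha]
        filtered.append(alpha)
    return filtered

# Hidden regimes: 0 = calm, 1 = volatile. Observations: 0 = small move, 1 = large move.
trans = [[0.95, 0.05],   # regimes are persistent
         [0.10, 0.90]]
emit  = [[0.8, 0.2],     # calm regime mostly emits small moves
         [0.3, 0.7]]     # volatile regime mostly emits large moves
init  = [0.5, 0.5]

obs = [0, 0, 1, 1, 1]    # a run of large moves
probs = forward(obs, trans, emit, init)
# probs[-1][1] rises above 0.8: the run of large moves makes the
# hidden "volatile" regime the most probable explanation.
```

Normalising at each step keeps the recursion numerically stable; the normalising constants themselves multiply to give the likelihood of the observation sequence, which is what model-fitting procedures maximise.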

Why this matters

Our regime detection systems use Markov models to identify shifts in market state from disclosure patterns. When filing language across a sector transitions from expansion to caution, the Markov framework quantifies the probability that the regime has changed — and what comes next.