Systematic approaches to public equity markets, grounded in data, driven by conviction.
We apply computational methods to process large-scale financial datasets systematically, cross-reference structured and unstructured sources, and surface probabilistic signals that compound at scale.
We use quantitative tools to accelerate fundamental research, not replace it. Models surface candidates. Humans build theses. Conviction comes from evidence, not correlation.
We focus on moments where the information landscape shifts — before the broader market has processed the implications. Catalysts matter. We identify them early.
We don't compete on latency. We compete on depth of analysis, breadth of coverage, and the ability to synthesize data across multiple sources in ways no human analyst can match.
Every thesis has a kill condition. We update beliefs as new data arrives, re-score signals against fresh evidence, and revise when the data contradicts the narrative.
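As a minimal sketch of that discipline, assuming a simple Bayesian framing (the threshold and likelihood ratios below are illustrative, not production logic):

```python
def update_thesis_prob(prior: float, likelihood_ratio: float) -> float:
    """One Bayes update in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

KILL_THRESHOLD = 0.25        # illustrative: retire the thesis below this probability

p = 0.60                     # initial conviction in the thesis
for lr in (1.8, 0.4, 0.3):   # likelihood ratios of successive pieces of evidence
    p = update_thesis_prob(p, lr)
    if p < KILL_THRESHOLD:
        print(f"kill condition hit at p={p:.2f}")
        break
```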
Our research spans multiple systematic and discretionary strategies across the public equity landscape, each leveraging our proprietary data infrastructure.
01. Fundamental analysis driven by systematic data. Our systems detect improving fundamentals on one side and deteriorating quality hidden behind headline numbers on the other.
Data analysis · Structured data divergence · Quality scoring
02. Systematic identification of catalysts from public data sources and news flow. M&A announcements, proxy fights, activist campaigns, and material corporate events all create information dislocations that our pipeline detects and maps against historical base rates.
Catalyst detection · Base rate analysis · News pipeline
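A toy version of the base-rate mapping, where a detected event inherits a prior from historical outcomes of the same catalyst type (all data and names here are hypothetical):

```python
import pandas as pd

# Hypothetical event history: outcome = 1 if the catalyst resolved favourably.
history = pd.DataFrame({
    "event_type": ["activist_campaign", "activist_campaign", "proxy_fight",
                   "ma_announcement", "ma_announcement", "ma_announcement"],
    "outcome":    [1, 0, 1, 1, 1, 0],
})

# Historical base rate per catalyst type, with the sample size kept as a caveat.
base_rates = history.groupby("event_type")["outcome"].agg(rate="mean", n="count")

def prior_for(event_type: str) -> float:
    """Prior probability assigned to a freshly detected catalyst."""
    return float(base_rates.loc[event_type, "rate"])

print(prior_for("ma_announcement"))  # 0.67 on this toy sample
```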
03. Cross-sectional and time-series analysis that identifies temporary divergences between fundamentally related companies. Our embedding space surfaces firms that are deeply similar across business model, data profile, and financial structure.
Embedding similarity · Pair analysis · Mean reversion
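In sketch form, pair candidates fall out of a cosine-similarity scan over firm embeddings (the vectors below are toys; the real embedding space is proprietary):

```python
import numpy as np

# Toy firm embeddings: rows are firms, columns are embedding dimensions.
firms = ["AAA", "BBB", "CCC", "DDD"]
E = np.array([[0.90, 0.10, 0.30],
              [0.88, 0.12, 0.28],
              [0.10, 0.95, 0.40],
              [0.20, 0.90, 0.50]])

# Cosine similarity between every pair of firms.
U = E / np.linalg.norm(E, axis=1, keepdims=True)
S = U @ U.T

# Candidate pairs: deeply similar firms whose price divergence may mean-revert.
pairs = [(firms[i], firms[j], round(S[i, j], 3))
         for i in range(len(firms)) for j in range(i + 1, len(firms))
         if S[i, j] > 0.98]
print(pairs)  # [('AAA', 'BBB', 1.0), ('CCC', 'DDD', 0.99)]
```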
04. Sector and factor insights informed by aggregate data signals. When our systems detect clusters of companies simultaneously revising guidance or shifting risk profiles, that is a macro signal extracted from micro data.
Sector rotation · Factor analysis · Aggregate signals
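A minimal sketch of the aggregation step: micro events (guidance revisions) rolled up to a sector-level flag (data and threshold are illustrative):

```python
import pandas as pd

# Toy panel of guidance revisions: +1 raised, -1 cut, 0 unchanged.
revisions = pd.DataFrame({
    "sector":   ["industrials"] * 4 + ["software"] * 4,
    "revision": [-1, -1, -1, 0, 1, 0, 1, -1],
})

# Share of companies cutting guidance in each sector.
cut_share = (revisions["revision"] < 0).groupby(revisions["sector"]).mean()

# A cluster of simultaneous cuts reads as a macro signal extracted from micro data.
print(cut_share[cut_share > 0.5])  # industrials: 0.75 -> sector-level caution flag
```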
05. Custom NLP models applied to detect sentiment shifts, quantify information density, and measure linguistic divergence across corporate communications and financial data sources.
Sentiment analysis · Linguistic divergence · Information density
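One way to make linguistic divergence concrete is a distance between the word distributions of successive communications; a Jensen-Shannon sketch with toy inputs, not our production models:

```python
from collections import Counter
import math

def js_divergence(text_a: str, text_b: str) -> float:
    """Jensen-Shannon divergence between the word distributions of two texts."""
    ca, cb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    vocab = set(ca) | set(cb)
    P = {w: ca[w] / sum(ca.values()) for w in vocab}
    Q = {w: cb[w] / sum(cb.values()) for w in vocab}
    M = {w: 0.5 * (P[w] + Q[w]) for w in vocab}
    kl = lambda p, q: sum(p[w] * math.log2(p[w] / q[w]) for w in vocab if p[w] > 0)
    return 0.5 * kl(P, M) + 0.5 * kl(Q, M)

q1 = "demand remains strong and margins are expanding"
q2 = "demand is softening and we are managing costs"
print(f"{js_divergence(q1, q2):.3f}")  # higher = larger shift in language
```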
06. Systematic analysis of announced M&A transactions. We model completion probabilities from corporate data, regulatory precedent, and structural deal terms.
Deal spread · Completion probability · Regulatory risk
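The market-implied completion probability falls out of the deal spread directly; comparing it with a model estimate is where the signal lives (all prices below are invented):

```python
def implied_completion_prob(price: float, offer: float, downside: float) -> float:
    """Solve price = p * offer + (1 - p) * downside for p, ignoring time value."""
    return (price - downside) / (offer - downside)

# Invented deal: $50 cash offer, stock at $47, estimated $38 on a deal break.
p_market = implied_completion_prob(price=47.0, offer=50.0, downside=38.0)
p_model = 0.88   # placeholder for a model built on precedent and deal terms

print(f"market-implied: {p_market:.0%}, model: {p_model:.0%}")  # 75% vs 88%
print(f"edge: {p_model - p_market:+.2f}")                       # the gap is the signal
```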
07. Identifying mispricings between convertible bonds and their underlying equity. We decompose the embedded option and isolate pure volatility and credit exposure.
Embedded optionality · Credit-equity basis · Decomposition
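In stylised form, the decomposition prices the convertible as a straight-bond floor plus an equity call; a Black-Scholes proxy for the embedded option, with every input below a made-up toy:

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call, a rough proxy for the embedded conversion option."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

bond_floor = 84.0        # PV of coupons and principal at the issuer's credit spread
conversion_ratio = 0.8   # shares per bond
option_value = conversion_ratio * bs_call(S=110.0, K=125.0, T=3.0, r=0.03, sigma=0.35)

fair_value = bond_floor + option_value
print(f"model {fair_value:.2f} vs market 101.00")  # the gap is the candidate mispricing
```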
08. Systematic analysis of the spread between implied and realised volatility. Our data signals provide forward-looking volatility estimates that complement options market pricing.
Implied vs realised · Term structure · Skew analysis
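The core comparison in sketch form, with a simulated price path standing in for real data and a placeholder implied vol:

```python
import numpy as np

def realised_vol(prices: np.ndarray, periods_per_year: int = 252) -> float:
    """Annualised realised volatility from close-to-close log returns."""
    log_returns = np.diff(np.log(prices))
    return float(np.std(log_returns, ddof=1) * np.sqrt(periods_per_year))

rng = np.random.default_rng(0)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.012, 60)))  # toy 60-day path

rv = realised_vol(prices)
iv = 0.26  # placeholder implied vol read from the options market
print(f"vol risk premium: {iv - rv:+.3f}")  # positive = implied rich vs realised
```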
09. Systematic value analysis enhanced by data signals. We combine traditional valuation metrics with quality signals to separate genuine value from value traps.
Factor construction · Quality screens · Systematic scoring
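A minimal version of the composite: rank-based scores on cheapness and quality, so a cheap but low-quality name (the classic value trap) falls down the list (toy data throughout):

```python
import pandas as pd

# Toy cross-section: cheapness alone would top-rank BBB, a likely value trap.
df = pd.DataFrame({
    "ticker":         ["AAA", "BBB", "CCC", "DDD"],
    "earnings_yield": [0.12, 0.11, 0.04, 0.09],
    "quality":        [0.80, 0.20, 0.70, 0.75],  # e.g. margins/accruals/leverage composite
})

# Rank-based z-scores keep outliers from dominating either leg.
z = lambda s: (s.rank() - s.rank().mean()) / s.rank().std()
df["score"] = 0.5 * z(df["earnings_yield"]) + 0.5 * z(df["quality"])

print(df.sort_values("score", ascending=False))  # AAA leads; BBB drops on quality
```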
10. Systematic analysis of momentum and trend persistence. Our data signals identify when momentum is fundamentally supported versus likely to reverse.
Cross-sectional momentum · Time-series trend · Regime filters
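A compressed sketch of momentum with a regime filter; here trailing volatility stands in for the fundamental-support checks, and the price path is simulated:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100.0 * np.exp(np.cumsum(rng.normal(0.0004, 0.01, 400))))

# 12-1 style momentum: trailing one-year return, skipping the latest month.
momentum = prices.shift(21) / prices.shift(252) - 1

# Regime filter: only trust momentum when trailing volatility is subdued
# (a simple stand-in for the fundamental-support checks described above).
vol = prices.pct_change().rolling(63).std() * np.sqrt(252)
signal = momentum.where(vol < vol.rolling(252).median(), 0.0)

print(signal.dropna().tail())
```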
11. Systematic analysis of factor risk premia. We build proprietary factors from alternative data that are orthogonal to traditional price-based factors.
Multi-factor models · Dynamic allocation · Risk premia analysis
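Orthogonality here is literal: regress the raw alternative-data factor on the traditional set and keep the residual. A sketch with simulated exposures:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Toy traditional price-based exposures (e.g. market beta, momentum).
traditional = rng.normal(size=(n, 2))
# Raw alternative-data factor, partly contaminated by the traditional ones.
raw_alt = traditional @ np.array([0.6, -0.3]) + rng.normal(size=n)

# Regress out the traditional factors; the residual is the orthogonal factor.
X = np.column_stack([np.ones(n), traditional])
beta, *_ = np.linalg.lstsq(X, raw_alt, rcond=None)
orthogonal_alt = raw_alt - X @ beta

print(round(np.corrcoef(orthogonal_alt, traditional[:, 0])[0, 1], 6))  # ~0
```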
12. Identifying mispricings across the fixed income universe. Our data analysis of corporate issuers provides credit signals that complement traditional bond market analysis.
Yield curve · Basis analysis · Credit spread decomposition
13. Directional credit analysis driven by data signals. We identify credits where signals indicate improving fundamentals and those where signals indicate deterioration.
Fundamental credit · CDS basis · Distressed situations
14. Extracting signals from the regulatory process itself. Regulatory communications, enforcement actions, and policy changes create information asymmetries that systematic analysis can detect.
Regulatory signals · Enforcement actions · Policy shifts
15. Framework-level optimisation across all analytical approaches. We treat each methodology as a signal source, weighted by expected reliability and cross-strategy correlations.
Correlation management · Signal budgeting · Dynamic allocation
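In miniature, correlation-aware weighting looks like a mean-variance solve: highly correlated methodologies share one risk budget, so the diversifying one keeps more than its raw score implies (all numbers invented):

```python
import numpy as np

# Toy expected reliability of three strategies and their cross-correlations.
reliability = np.array([0.08, 0.05, 0.04])
corr = np.array([[1.0, 0.6, 0.1],
                 [0.6, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])

# Mean-variance style weights: solve corr @ w = reliability, then normalise.
weights = np.linalg.solve(corr, reliability)
weights /= np.abs(weights).sum()

print(weights.round(3))  # the redundant, highly correlated strategy is squeezed toward zero
```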
Our competitive advantage is not a single model or signal. It's the infrastructure that processes vast quantities of financial data continuously, with layers of verification and scoring that compound over time.
Continuous ingestion across structured and unstructured financial sources. Every data point chunked, embedded, and cross-referenced against multiple independent datasets. Broad coverage across public markets.
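A toy rendering of the chunk-and-embed step, with a hash-based stand-in for the embedding model (chunk sizes and helper names are illustrative, not the production pipeline):

```python
import hashlib
import numpy as np

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split a document into overlapping character windows (toy chunker)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(piece: str, dim: int = 64) -> np.ndarray:
    """Deterministic stand-in for a learned embedding: hash words into a vector."""
    v = np.zeros(dim)
    for word in piece.lower().split():
        v[int(hashlib.md5(word.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

doc = "Revenue grew 12% year over year while gross margin compressed on input costs."
vectors = np.stack([embed(c) for c in chunk(doc)])
print(vectors.shape)  # one vector per chunk, ready to cross-reference by similarity
```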
A proprietary scoring engine ranks every data fragment across multiple dimensions. Semantic relevance, financial density, temporal alignment, source authority, and additional proprietary signals — each layer reduces noise.
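The layered scoring reads naturally as a weighted composite over per-dimension scores; a sketch with invented weights and dimensions standing in for the proprietary ones:

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    semantic_relevance: float  # every dimension pre-normalised to [0, 1]
    financial_density: float
    temporal_alignment: float
    source_authority: float

# Invented layer weights; the real dimensions and weights are proprietary.
WEIGHTS = {"semantic_relevance": 0.40, "financial_density": 0.25,
           "temporal_alignment": 0.20, "source_authority": 0.15}

def score(f: Fragment) -> float:
    """Composite rank: weighted sum across layers; low scores fall away as noise."""
    return sum(w * getattr(f, dim) for dim, w in WEIGHTS.items())

fragments = [Fragment(0.9, 0.7, 0.8, 0.6), Fragment(0.4, 0.9, 0.2, 0.9)]
print(sorted((round(score(f), 3) for f in fragments), reverse=True))
```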
Progressive narrowing from broad field to precise evidence. Sub-second queries across the entire research corpus — designed to surface the needle, not the haystack.
Continuous acquisition of financial developments, regulatory activity, and sector events. Scored for significance and mapped to our active research universe.
Insight without rigor is indistinguishable from noise. Every claim we make is bounded by evidence, traceable to source, and subject to systematic review.
Every output carries an uncertainty estimate. We surface what the data supports, not what confirms a narrative. Strength of signal is explicit, not implied.
Hidden correlations are the silent failure mode of quantitative research. We track structural similarity across datasets — companies that look unrelated by sector can be deeply correlated in disclosure language and business model.
Every result can be traced back to its inputs. No black boxes. If a system produces a finding, the evidence chain is visible and reviewable at every step.
Every output is logged with the evidence that supported it. Every number links to its original source. Full reproducibility is not optional — it's the foundation of credible research.