Condorcet proved in 1785 that if each member of a jury independently has a probability greater than 0.5 of making the correct decision, then the probability that the majority is correct approaches 1 as the jury grows. More voters, better outcomes — provided their errors are independent.
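The theorem is easy to see numerically. Here is a minimal Monte Carlo sketch (names and parameters are illustrative, not from the source): each of n voters is independently correct with probability p, and we estimate how often the majority is right.

```python
import random

def majority_accuracy(n_voters, p_correct, trials=20_000, seed=0):
    """Estimate P(majority is correct) when each of n_voters independently
    votes correctly with probability p_correct."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        wins += correct_votes > n_voters // 2
    return wins / trials

# With p = 0.6, majority accuracy climbs toward 1 as the jury grows.
for n in (1, 11, 101):
    print(n, round(majority_accuracy(n, 0.6), 3))
```

A single voter sits near 0.6; a jury of 101 is right the large majority of the time, exactly the compounding the theorem predicts.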
Condorcet's theorem is the mathematical foundation of ensemble methods: combining many weak classifiers produces a strong classifier. Random forests, boosting, and bagging all exploit the same principle — aggregate independent noisy signals and the noise cancels while the signal accumulates.
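The same effect, phrased as classification: a sketch (with made-up accuracy numbers, not results from any real model) comparing one weak classifier against a majority vote over many independent copies of it.

```python
import random

def weak_predict(truth, p, rng):
    """A weak classifier: returns the true binary label with probability p."""
    return truth if rng.random() < p else 1 - truth

def ensemble_predict(truth, n_models, p, rng):
    """Majority vote over n_models independent weak classifiers."""
    votes = sum(weak_predict(truth, p, rng) for _ in range(n_models))
    return int(votes > n_models / 2)

rng = random.Random(1)
labels = [rng.randint(0, 1) for _ in range(5_000)]
single = sum(weak_predict(y, 0.55, rng) == y for y in labels) / len(labels)
ensemble = sum(ensemble_predict(y, 25, 0.55, rng) == y for y in labels) / len(labels)
print(f"single: {single:.3f}, 25-model ensemble: {ensemble:.3f}")
```

A barely-better-than-chance classifier (55%) becomes markedly more accurate once twenty-five independent copies vote; bagging and random forests engineer that independence deliberately, via resampling and feature subsetting.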
The theorem's crucial assumption is independence. When jurors are correlated — when they share biases or information sources — aggregation can amplify error rather than reduce it. Understanding and managing correlation is as important as collecting more signals.
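The failure mode is also easy to simulate. In this toy correlation model (an assumption for illustration, not a claim about any real jury), each voter copies one shared noisy signal with probability rho instead of judging independently; as rho grows, adding voters stops helping.

```python
import random

def correlated_majority(n_voters, p, rho, trials=20_000, seed=2):
    """Majority accuracy when each voter, with probability rho, copies a
    single shared signal (itself correct with probability p) rather than
    drawing an independent judgment."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        shared = rng.random() < p  # one common source, right with prob p
        correct_votes = sum(
            (shared if rng.random() < rho else rng.random() < p)
            for _ in range(n_voters)
        )
        wins += correct_votes > n_voters // 2
    return wins / trials

# Independent voters (rho = 0) approach certainty; heavily correlated
# voters plateau near the accuracy of the one source they all echo.
for rho in (0.0, 0.6):
    print(rho, round(correlated_majority(101, 0.7, rho), 3))
```

With rho = 0.6, 101 voters are barely better than the single shared source: when that source is wrong, the correlated bloc drags the majority down with it. The extra "signals" were confirmation, not information.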
Our multi-signal scoring engine aggregates nine scoring dimensions designed to be independent. Condorcet's theorem tells us why this works: independent signals compound toward truth. It also tells us what to watch for: correlated signals that masquerade as independent confirmation.