De Moivre's 1733 paper showed that as the number of coin flips grows, the binomial distribution approaches a smooth bell-shaped curve. This was the first derivation of the normal distribution, arguably the most important probability distribution in applied mathematics.
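A minimal sketch of that approximation, using only the Python standard library (the function names here are our own, not de Moivre's notation): the exact binomial probability of k heads in n flips sits right next to the bell-curve formula that replaces it.

```python
import math

def binomial_pmf(n: int, k: int, p: float = 0.5) -> float:
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(n: int, k: int, p: float = 0.5) -> float:
    """De Moivre's bell-curve approximation to the same probability."""
    mu, var = n * p, n * p * (1 - p)
    return math.exp(-((k - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

n, k = 100, 55  # 55 heads in 100 fair-coin flips
exact = binomial_pmf(n, k)
approx = normal_approx(n, k)
print(exact, approx)  # the two values agree to about four decimal places
```

Even at n = 100 the two numbers are nearly indistinguishable, and the approximation only improves as n grows.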
De Moivre's result was the precursor to the central limit theorem: the sum of many independent random variables tends toward a normal distribution, almost regardless of the underlying distribution (it needs only a finite variance). This explains why the bell curve appears everywhere: in measurement errors, in stock returns, in test scores.
Before computers, de Moivre's normal approximation was a practical breakthrough. It replaced combinatorial calculations requiring factorials of large numbers with a simple formula involving the exponential function. This made probability calculations feasible for real-world problems.
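To see the computational saving concretely, here is a tail probability done both ways: the exact answer requires summing dozens of binomial terms built from large factorials, while de Moivre's route is a single evaluation of the normal curve (with the standard continuity correction). This is a stdlib sketch with our own function names.

```python
import math

def exact_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) by brute-force summation of binomial terms."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def normal_tail(n: int, k: int, p: float = 0.5) -> float:
    """Normal approximation to P(X >= k), with a continuity correction."""
    mu, sd = n * p, math.sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper tail of the normal CDF

# Probability of 60 or more heads in 100 flips.
print(exact_tail(100, 60), normal_tail(100, 60))  # agree to ~3 decimals
```

The exact sum is trivial for a machine but was a serious undertaking by hand; the normal formula needed only a table of the exponential or of the normal curve, which is what made de Moivre's shortcut so valuable.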
The normal distribution is the default model for return distributions, risk estimation, and confidence intervals. When our scoring engine estimates the probability that a signal is genuine rather than noise, it relies on distributional assumptions that trace directly to de Moivre.
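As a hypothetical illustration of that kind of calculation (this is not the actual scoring engine, and all names and numbers here are assumptions for the sketch): if noise scores are modeled as normally distributed, an observed score converts to a z-score and then, via the normal CDF, to the probability that noise alone would produce a score that large.

```python
import math

def noise_p_value(score: float, noise_mean: float, noise_sd: float) -> float:
    """P(noise alone produces a score this large), assuming normal noise."""
    z = (score - noise_mean) / noise_sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # one-sided upper tail

# Illustrative numbers only: a score of 3.2 against standard-normal noise.
p = noise_p_value(score=3.2, noise_mean=0.0, noise_sd=1.0)
print(p)  # a small tail probability: the signal is unlikely to be pure noise
```

The entire chain, z-score, tail area, significance threshold, is the machinery de Moivre's curve made possible.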