III - Probability & Statistics

Jump to Topic

To truly understand machine learning, a strong grasp of probability theory and statistics is essential, because machine learning is an elegant combination of statistics and algorithms. In Section I: Linear Algebra, we intentionally avoided statistical applications and focused on foundational concepts. Now it's time to build up the probabilistic basis of machine learning. This section introduces the essential concepts of probability, providing the tools and insights necessary to understand and apply machine learning techniques. At its core, statistics inverts probability theory: probability reasons forward from known parameters to outcomes, while statistics infers unknown parameters from observed outcomes. Two main approaches dominate statistical inference: frequentist statistics, which treats parameters as fixed and the data as random, and Bayesian statistics, which treats the data as fixed and the parameters as random. Bayesian statistics in particular forms the foundation of many machine learning algorithms.
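
To make that contrast concrete before diving in, here is a minimal sketch (Python with NumPy, using a made-up coin-flip experiment) of the two views of a coin's unknown bias \(\theta\):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical experiment: 100 flips of a coin with unknown bias theta.
flips = rng.random(100) < 0.7              # simulate with true theta = 0.7
heads = int(flips.sum())

# Frequentist view: theta is a fixed unknown; the data are random.
theta_mle = heads / 100                    # maximum likelihood estimate

# Bayesian view: theta is a random variable; condition on the fixed data.
# With a uniform Beta(1, 1) prior, the posterior is Beta(1 + heads, 1 + tails).
a, b = 1 + heads, 1 + (100 - heads)
theta_posterior_mean = a / (a + b)

print(theta_mle, theta_posterior_mean)     # two answers to "what is theta?"
```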

Part 1: Basic Probability Ideas

Probability, Sample Space, Events, Mutually Exclusive Events, Permutations, Combinations, Conditional Probability, Independent Events, Law of Total Probability, Bayes' Theorem
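
As a quick preview, the Law of Total Probability and Bayes' Theorem in action on the classic diagnostic-test example (the numbers below are purely illustrative):

```python
# Hypothetical numbers for illustration.
p_d = 0.01            # P(disease), the prior
p_pos_d = 0.95        # P(positive | disease), sensitivity
p_pos_nd = 0.05       # P(positive | no disease), false positive rate

# Law of total probability: P(+) = P(+|D)P(D) + P(+|~D)P(~D)
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)

# Bayes' theorem: P(D | +) = P(+|D)P(D) / P(+)
p_d_pos = p_pos_d * p_d / p_pos
print(f"P(disease | positive) = {p_d_pos:.3f}")   # ~0.161, despite the 95% sensitivity
```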

Part 2: Random Variables

Discrete Random Variables, Continuous Random Variables, Probability Mass Function (p.m.f.), Probability Density Function (p.d.f.), Cumulative Distribution Function (c.d.f.), Expected Value, Variance, Standard Deviation
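
A minimal sketch of the p.m.f., expected value, variance, and c.d.f. for the simplest discrete random variable, a fair die:

```python
import numpy as np

# p.m.f. of a fair six-sided die: P(X = k) = 1/6 for k = 1..6
values = np.arange(1, 7)
pmf = np.full(6, 1 / 6)

# E[X] = sum_k k * P(X = k)
mean = np.sum(values * pmf)                     # 3.5

# Var(X) = E[X^2] - (E[X])^2
variance = np.sum(values**2 * pmf) - mean**2    # ~2.9167
std_dev = np.sqrt(variance)

# c.d.f.: F(x) = P(X <= x), a running sum of the p.m.f.
cdf = np.cumsum(pmf)
print(mean, variance, std_dev, cdf)
```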

Part 3: Gamma & Beta Distributions

(Interactive Demo) Gamma Distribution, Gamma Function, Exponential Distribution, Beta Function, Beta Distribution, Uniform Distribution
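
By way of preview, the Gamma and Beta functions are linked by \(B(a, b) = \Gamma(a)\Gamma(b) / \Gamma(a + b)\); a quick numeric check using only the Python standard library:

```python
from math import gamma

def beta(a: float, b: float) -> float:
    """Beta function via the identity B(a, b) = Γ(a)Γ(b) / Γ(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

# Γ(n) = (n - 1)! for positive integers
assert gamma(5) == 24.0

# Beta(1, 1) is the uniform distribution on [0, 1]: its density is 1/B(1, 1) = 1
print(beta(1, 1))       # 1.0
print(beta(2.0, 3.0))   # Γ(2)Γ(3)/Γ(5) = 2/24 ≈ 0.0833
```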

Part 4: Normal (Gaussian) Distribution

Gaussian Function, Error Function, Gaussian Integral, Normal (Gaussian) Distribution, Standard Normal Distribution, Independent and Identically Distributed (i.i.d.), Random Sample, Sample Mean, Sample Variance, Central Limit Theorem
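
A tiny simulation previewing the Central Limit Theorem (made-up sample sizes): sample means of i.i.d. Uniform(0, 1) draws concentrate around a normal with variance \(\sigma^2 / n\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw i.i.d. samples from a decidedly non-normal distribution (Uniform(0, 1)),
# compute many sample means, and watch them cluster around a normal shape.
n, trials = 50, 10_000
sample_means = rng.random((trials, n)).mean(axis=1)

# CLT: the sample mean is approximately N(mu, sigma^2 / n).
mu, sigma2 = 0.5, 1 / 12                 # mean and variance of Uniform(0, 1)
print(sample_means.mean())               # ≈ 0.5
print(sample_means.var())                # ≈ sigma2 / n ≈ 0.00167
```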

Part 5: Student's \(t\)-Distribution

Student's \(t\)-Distribution, Degrees of Freedom, Cauchy Distribution, Half-Cauchy Distribution, Laplace Distribution, Double-Sided Exponential Distribution

Part 6: Covariance

(Code Included) Covariance, Covariance Matrix, Total Variance, Principal Component, Principal Component Analysis (PCA)
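
This part ships its own code; independently of that, here is a minimal sketch of the covariance matrix and PCA via an eigendecomposition, on synthetic correlated data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data (hypothetical): y is roughly 2x plus noise.
x = rng.normal(size=500)
X = np.column_stack([x, 2 * x + rng.normal(scale=0.5, size=500)])

# Covariance matrix of the (column-wise) variables.
C = np.cov(X, rowvar=False)

# PCA: eigenvectors of C are the principal components;
# eigenvalues are the variances captured along each component.
eigvals, eigvecs = np.linalg.eigh(C)       # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print(C)
print(eigvals / eigvals.sum())             # fraction of total variance per component
```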

Part 7: Correlation

Cross-Covariance Matrix, Auto-Covariance Matrix, Correlation Coefficient, Correlation Matrix

Part 8: Multivariate Distributions

Multivariate Normal Distribution (MVN), Mahalanobis Distance, Bivariate Normal Distribution, Dirichlet Distribution, Probability Simplex, Wishart Distribution, Inverse Wishart Distribution
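
A small illustration (hypothetical parameters) of why the Mahalanobis distance \(\sqrt{(x - \mu)^\top \Sigma^{-1} (x - \mu)}\) differs from the Euclidean one: two points equally far from \(\mu\) in the Euclidean sense can be very unequal once the covariance is accounted for:

```python
import numpy as np

# A 2-D multivariate normal with correlated components (hypothetical parameters).
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 1.2],
                  [1.2, 1.0]])

def mahalanobis(x, mu, Sigma):
    """d(x, mu) = sqrt((x - mu)^T Sigma^{-1} (x - mu))."""
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(Sigma, diff)))

a = np.array([1.0, 0.6])     # roughly along the direction of high variance
b = np.array([-0.6, 1.0])    # roughly against it
print(np.linalg.norm(a - mu), np.linalg.norm(b - mu))   # identical Euclidean distances
print(mahalanobis(a, mu, Sigma), mahalanobis(b, mu, Sigma))  # ≈ 0.71 vs ≈ 2.61
```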

Part 9: Maximum Likelihood Estimation

Point Estimator, Mean Square Error (MSE), Standard Error (SE), Likelihood Function, Log-Likelihood Function, Maximum Likelihood Estimation (MLE), Binomial Distribution, Sample Proportion
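
A quick numeric preview of MLE for a binomial proportion (the counts are made up): the grid maximizer of the log-likelihood matches the closed-form answer, the sample proportion \(k/n\):

```python
import numpy as np

# Observed data: k successes in n Bernoulli trials (hypothetical numbers).
n, k = 100, 62

# Log-likelihood of the binomial model, up to a constant in p:
#   log L(p) = k log p + (n - k) log(1 - p)
p_grid = np.linspace(0.001, 0.999, 999)
log_lik = k * np.log(p_grid) + (n - k) * np.log(1 - p_grid)

# The numerical maximizer matches the closed-form MLE, the sample proportion k/n.
p_hat_numeric = p_grid[np.argmax(log_lik)]
print(p_hat_numeric, k / n)                # both ≈ 0.62
```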

Part 10: Statistical Inference & Hypothesis Testing

Null Hypothesis, Alternative Hypothesis, Type I Error (False Positive), Type II Error (False Negative), Significance Level, Test Statistic, Null Hypothesis Significance Test (NHST), One-Sample t-Tests, Confidence Intervals, Critical Values, z-Scores, Credible Intervals, Bootstrap
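
A minimal one-sample \(t\)-test sketch on synthetic data (SciPy assumed available, used only for the p-value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# H0: the population mean is 5.0; H1: it is not (two-sided test).
sample = rng.normal(loc=5.3, scale=1.0, size=30)

# Test statistic: t = (sample mean - mu0) / (s / sqrt(n))
n = len(sample)
t_stat = (sample.mean() - 5.0) / (sample.std(ddof=1) / np.sqrt(n))

# Two-sided p-value from the t-distribution with n - 1 degrees of freedom.
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)
print(t_stat, p_value)   # reject H0 at significance level 0.05 if p_value < 0.05
```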

Part 11: Linear Regression

(Interactive Demo) Linear Regression, Least-Squares Estimation
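
This part has an interactive demo; as a plain-code preview (a sketch on synthetic data, not the demo's own code), least-squares estimation via the normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 + noise.
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(scale=1.0, size=100)

# Design matrix with an intercept column; least squares solves
#   min_w ||Xw - y||^2   via the normal equations   (X^T X) w = X^T y.
X = np.column_stack([np.ones_like(x), x])
w = np.linalg.solve(X.T @ X, X.T @ y)

print(w)   # ≈ [1.0, 2.0] (intercept, slope)
```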

Part 12: Entropy

Information Content, Entropy, Joint Entropy, Conditional Entropy, Cross Entropy, KL Divergence (Relative Entropy, Information Gain), Gibbs' Inequality, Log Sum Inequality, Jensen's Inequality, Mutual Information (MI)
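
A small sketch of entropy and KL divergence in bits, on toy distributions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log2 p_i, in bits."""
    p = np.asarray(p)
    p = p[p > 0]                      # convention: 0 log 0 = 0
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """KL(p || q) = sum p_i log2(p_i / q_i); >= 0 by Gibbs' inequality."""
    p, q = np.asarray(p), np.asarray(q)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

p = [0.5, 0.25, 0.25]
q = [1 / 3, 1 / 3, 1 / 3]
print(entropy(p))                     # 1.5 bits
print(entropy(q))                     # log2(3) ≈ 1.585 bits (maximal for 3 outcomes)
print(kl_divergence(p, q))            # > 0, and 0 iff p == q
# Cross entropy ties them together: H(p, q) = H(p) + KL(p || q).
```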

Part 13: Convergence

Convergence in Probability, Convergence in Distribution, Asymptotic (Limiting) Distribution, Moment Generating Function (m.g.f.), Central Limit Theorem (CLT)

Part 14: Intro to Bayesian Statistics

Bayesian Inference, Prior Distribution, Posterior Distribution, Marginal Likelihood, Conjugate Prior, Posterior Predictive Distribution, Beta-Binomial Model, Normal Distribution Model with Known Variance \(\sigma^2\), Normal Distribution Model with Known Mean \(\mu\)
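
A preview of conjugacy with the Beta-Binomial model (made-up prior and counts): the posterior is available in closed form, with no integration needed:

```python
# Beta-Binomial model: Beta(a, b) prior on theta, binomial likelihood.
# Conjugacy makes the posterior another Beta: Beta(a + k, b + n - k).

a, b = 2.0, 2.0          # prior pseudo-counts (hypothetical choice)
n, k = 20, 14            # observed: 14 successes in 20 trials

a_post, b_post = a + k, b + (n - k)

posterior_mean = a_post / (a_post + b_post)   # E[theta | D]
mle = k / n                                   # for comparison
print(posterior_mean, mle)    # ≈ 0.667 vs 0.7 — the prior shrinks the estimate
```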

Part 15: The Exponential Family

Exponential Family, Natural Parameters (Canonical Parameters), Base Measure, Sufficient Statistics, Partition Function, Minimal Representation, Natural Exponential Family (NEF), Moment Parameters, Precision Matrix, Information Form, Moment Matching, Cumulants

Part 16: Fisher Information Matrix

Fisher Information Matrix (FIM), Score Function, Covariance, Negative Log-Likelihood, Log Partition Function, Approximated KL Divergence, Natural Gradient, Jeffreys Prior, Uninformative Prior, Reference Prior, Mutual Information
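
A numeric sanity check (Bernoulli model with a hypothetical \(p\)) that the Fisher information equals the variance of the score function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fisher information of a Bernoulli(p) model: I(p) = 1 / (p (1 - p)).
p = 0.3
analytic = 1 / (p * (1 - p))                     # ≈ 4.762

# The Fisher information is also the variance of the score,
#   score(x) = d/dp log p(x | p) = x/p - (1 - x)/(1 - p).
x = rng.random(1_000_000) < p                    # Bernoulli(p) draws
score = x / p - (~x) / (1 - p)
print(analytic, score.var())                     # the two closely agree
```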

Part 17: Bayesian Decision Theory

Decision Theory, Optimal Policy (Bayes Estimator), Zero-One Loss, Maximum A Posteriori (MAP) Estimate, Reject Option, Confusion Matrix, False Positive (FP, Type I Error), False Negative (FN, Type II Error), Receiver Operating Characteristic (ROC) Curve, Equal Error Rate (EER), Precision-Recall (PR) Curve, Interpolated Precision, Average Precision (AP)
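
A minimal sketch (hypothetical labels and predictions) of the confusion-matrix cells and the precision and recall quantities built from them:

```python
import numpy as np

# Hypothetical labels and predictions for a binary classifier.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])

# Confusion-matrix cells.
tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives (Type I errors)
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives (Type II errors)
tn = np.sum((y_pred == 0) & (y_true == 0))   # true negatives

precision = tp / (tp + fp)      # of the predicted positives, how many are real
recall = tp / (tp + fn)         # of the real positives, how many were found
print(tp, fp, fn, tn, precision, recall)     # 3 1 2 4 0.75 0.6
```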

Part 18: Markov Chains

(Code Included) Probabilistic Graphical Models (PGMs), Bayesian Networks, Markov Chains, Language Modeling, n-gram, Transition Function (Kernel), Stochastic Matrix (Transition Matrix), Maximum Likelihood Estimation (MLE) in Markov Models, Sparse Data Problem, Add-One Smoothing, Dirichlet Prior
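
This part also includes its own code; independently of that, here is a minimal bigram Markov-chain sketch on a toy corpus, showing the sparse-data problem and add-one smoothing (equivalently, a uniform Dirichlet prior):

```python
import numpy as np

# Toy corpus for a bigram (first-order Markov) language model.
corpus = "the cat sat on the mat the cat ran".split()
states = sorted(set(corpus))
idx = {w: i for i, w in enumerate(states)}

# Count word-to-word transitions.
counts = np.zeros((len(states), len(states)))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[idx[prev], idx[nxt]] += 1

# The MLE transition matrix has zeros for unseen bigrams (the sparse data
# problem). Add-one (Laplace) smoothing — a Dirichlet(1, ..., 1) prior — fixes it:
smoothed = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

assert np.allclose(smoothed.sum(axis=1), 1.0)   # each row is a distribution
print(states)
print(smoothed.round(2))
```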