Philosophy of statistics
Statistical analysis is central to addressing the problem of induction. Can inductive inference be formalized? What are the caveats? Can inductive inference be automated? How does machine learning work?
All knowledge is, in final analysis, history. All sciences are, in the abstract, mathematics. All judgements are, in their rationale, statistics.
– C. R. Rao1
Contents
- Introduction to the foundations of statistics
- Probability and its related concepts
- Statistical models
- Point estimation and confidence intervals
- Statistical hypothesis testing
- Uncertainty quantification
- Statistical classification
- Causal inference
- Exploratory data analysis
- “Statistics Wars”
- Replication crisis
- Classical machine learning
- Deep learning
- Theoretical machine learning
- Information geometry
- Automation
- Implications for the realism debate
- My thoughts
- Annotated bibliography
- Mayo, D.G. (1996). Error and the Growth of Experimental Knowledge.
- Cowan, G. (1998). Statistical Data Analysis.
- James, F. (2006). Statistical Methods in Experimental Physics.
- Cowan, G. et al. (2011). Asymptotic formulae for likelihood-based tests of new physics.
- ATLAS Collaboration. (2012). Combined search for the Standard Model Higgs boson.
- Cranmer, K. (2015). Practical statistics for the LHC.
- More articles to do
- Links and encyclopedia articles
- References
Introduction to the foundations of statistics
Problem of induction
A key issue for the scientific method, as discussed in the previous outline, is the problem of induction. Inductive inferences are used in the scientific method to make generalizations from finite data. This introduces avenues of error not found in purely deductive inferences, like those in logic and mathematics. A valid deductive argument is truth preserving: if all of its premises obtain, its conclusion necessarily follows. An inductive argument, by contrast, supports its conclusion only probabilistically, so even a well-made inductive inference can yield a false conclusion in some cases.
A skeptic may further ask whether we are even justified in using the probabilities we use in inductive arguments. What is the probability that the Sun will rise tomorrow? What kinds of probabilities are reasonable?
In this outline, we sketch and explore how the mathematical theory of statistics has arisen to wrestle with the problem of induction, and how it equips us with careful ways of framing inductive arguments and notions of confidence in them.
See also:
Early investigators
- Ibn al-Haytham (c. 965-1040)
- “Ibn al-Haytham was an early proponent of the concept that a hypothesis must be supported by experiments based on confirmable procedures or mathematical evidence—an early pioneer in the scientific method five centuries before Renaissance scientists.” - Wikipedia
- Gerolamo Cardano (1501-1576)
- Book on Games of Chance (1564)
- John Graunt (1620-1674)
- Jacob Bernoulli (1655-1705)
The art of measuring, as precisely as possible, probabilities of things, with the goal that we would be able always to choose or follow in our judgments and actions that course, which will have been determined to be better, more satisfactory, safer or more advantageous.4
- Thomas Bayes (1701-1761)
- Pierre-Simon Laplace (1749-1827)
- The rule of succession; early Bayesian inference
- Carl Friedrich Gauss (1777-1855)
- John Stuart Mill (1806-1873)
- Francis Galton (1822-1911)
- Regression towards the mean in phenotypes
- John Venn (1834-1923)
- The Logic of Chance (1866)5
- William Stanley Jevons (1835-1882)
Foundations of modern statistics
- Central limit theorem
- De Moivre-Laplace theorem (1738)
- Glivenko-Cantelli theorem (1933)
- Charles Sanders Peirce (1839-1914)
- Formulated modern statistics in “Illustrations of the Logic of Science,” a series published in Popular Science Monthly (1877-1878), and also “A Theory of Probable Inference” in Studies in Logic (1883).8
- Introduced blinded, randomized controlled experiments with a repeated-measures design (before Fisher).
- Karl Pearson (1857-1936)
- The Grammar of Science (1892)
- “On the criterion that a given system of deviations…” (1900)9
- Proposed testing the validity of hypothesized values by evaluating the chi-squared distance between the hypothesized and the empirically observed values via the \(p\)-value.
- With Walter Frank Raphael Weldon, he established the journal Biometrika in 1901.
- Founded the world’s first university statistics department at University College, London in 1911.
- John Maynard Keynes (1883-1946)
- Keynes, J. M. (1921). A Treatise on Probability.10
- Ronald Fisher (1890-1972)
- Fisher significance of the null hypothesis (\(p\)-values)
- “On the ‘probable error’ of a coefficient of correlation deduced from a small sample”13
- Definition of likelihood
- ANOVA
- Statistical Methods for Research Workers (1925)
- The Design of Experiments (1935)
- “Statistical methods and scientific induction”14
- The Lady Tasting Tea15
- Jerzy Neyman (1894-1981)
- biography by Reid16
- Neyman, J. (1955). The problem of inductive inference.17
- Shows that Neyman read Carnap, but did Carnap read Neyman?
- Discussion: Mayo, D.G. (2014). Power taboos: Statue of Liberty, Senn, Neyman, Carnap, Severity.
- Egon Pearson (1895-1980)
- Neyman-Pearson confidence intervals with fixed error probabilities (also \(p\)-values but considering two hypotheses involves two types of errors)
- Harold Jeffreys (1891-1989)
- objective (non-informative) Jeffreys priors
- Andrey Kolmogorov (1903-1987)
- C.R. Rao (1920-2023)
- Ray Solomonoff (1926-2009)
- Shun’ichi Amari (b. 1936)
- Judea Pearl (b. 1936)
Pedagogy
- Kendall18
- James19
- Cowan20
- Cranmer21
- Jaynes, E.T. (2003). Probability Theory: The Logic of Science.22
- Lista: book,23 notes24
- Cox25
- Behnke, O., Kröninger, K., Schott, G., & Schörner-Sadenius, T. (2013). Data Analysis in High Energy Physics: A Practical Guide to Statistical Methods.26
- Cousins27
- Weisberg28
- Cranmer, K. (2020). Statistics and Data Science.
- Cosma Shalizi’s notes on
- Gelman, A. & Vehtari, A. (2021). What are the most important statistical ideas of the past 50 years?29
- Taboga, M. (2022). statlect.com.
- Otsuka, J. (2023). Thinking About Statistics: The Philosophical Foundations.30
- Otsuka, J. (2023). Talk: What machine learning tells us about the mathematical structures of concepts.
Probability and its related concepts
Probability
Probability is of epistemic interest, being in some sense a measure of inductive confidence.
TODO:
- Kolmogorov axioms
- Probability vs odds: \(p/(p+q)\) vs \(p/q\)
- Carnap, R. (1947). Probability as a guide in life.31
- Carnap, R. (1953). What is probability?.32
Expectation and variance
Expectation:
\[ \mathbb{E}(y) \equiv \int dx \: p(x) \: y(x) \label{eq:expectation} \]
Expectation values can be approximated with a partial sum over some data or Monte Carlo sample:
\[ \mathbb{E}(y) \approx \frac{1}{n} \sum_{s=1}^{n} y(x_s) \label{eq:expectation_sum} \]
The variance of a random variable, \(y\), is defined as
\[\begin{align} \mathrm{Var}(y) &\equiv \mathbb{E}((y - \mathbb{E}(y))^2) \nonumber \\ &= \mathbb{E}(y^2 - 2 \: y \: \mathbb{E}(y) + \mathbb{E}(y)^2) \nonumber \\ &= \mathbb{E}(y^2) - 2 \: \mathbb{E}(y) \: \mathbb{E}(y) + \mathbb{E}(y)^2 \nonumber \\ &= \mathbb{E}(y^2) - \mathbb{E}(y)^2 \label{eq:variance} \end{align}\]
The covariance matrix, \(\boldsymbol{V}\), of random variables \(x_i\) is
\[\begin{align} V_{ij} &= \mathrm{Cov}(x_i, x_j) \equiv \mathbb{E}[(x_i - \mathbb{E}(x_i)) \: (x_j - \mathbb{E}(x_j))] \nonumber \\ &= \mathbb{E}(x_i \: x_{j} - \mu_i \: x_j - x_i \: \mu_j + \mu_i \: \mu_j ) \nonumber \\ &= \mathbb{E}(x_i \: x_{j}) - \mu_i \: \mu_j \label{eq:covariance_matrix_indexed} \end{align}\]
\[\begin{equation} \boldsymbol{V} = \begin{pmatrix} \mathrm{Var}(x_1) & \mathrm{Cov}(x_1, x_2) & \cdots & \mathrm{Cov}(x_1, x_n) \\ \mathrm{Cov}(x_2, x_1) & \mathrm{Var}(x_2) & \cdots & \mathrm{Cov}(x_2, x_n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{Cov}(x_n, x_1) & \mathrm{Cov}(x_n, x_2) & \cdots & \mathrm{Var}(x_n) \end{pmatrix} \label{eq:covariance_matrix_array} \end{equation}\]
Diagonal elements of the covariance matrix are the variances of each variable.
\[ \mathrm{Cov}(x_i, x_i) = \mathrm{Var}(x_i) \]
Off-diagonal elements of a covariance matrix measure the degree of linear relationship between two variables. Covariance can be normalized to give the correlation coefficient between variables:
\[ \mathrm{Cor}(x_i, x_j) \equiv \frac{ \mathrm{Cov}(x_i, x_j) }{ \sqrt{ \mathrm{Var}(x_i) \: \mathrm{Var}(x_j) } } \label{eq:correlation_matrix} \]
which is bounded: \(-1 \leq \mathrm{Cor}(x_i, x_j) \leq 1\).
The covariance of two random vectors is given by
\[ \boldsymbol{V} = \mathrm{Cov}(\vec{x}, \vec{y}) = \mathbb{E}(\vec{x} \: \vec{y}^{\mathsf{T}}) - \vec{\mu}_x \: \vec{\mu}_{y}^{\mathsf{T}}\label{eq:covariance_matrix_vectors} \]
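The definitions above are easy to check numerically. Here is a minimal sketch (assuming numpy is available; the means, covariance, and sample size are illustrative) that estimates expectations, variances, and the covariance and correlation matrices from Monte Carlo samples:

```python
# Check E(y), Var(y) = E(y^2) - E(y)^2, and the covariance/correlation
# matrices on Monte Carlo samples of two correlated Gaussian variables.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

mean = [1.0, -2.0]                     # illustrative means
cov = [[2.0, 0.8],
       [0.8, 1.0]]                     # illustrative covariance matrix
x = rng.multivariate_normal(mean, cov, size=n)   # shape (n, 2)

# Partial-sum approximation of the expectation: E(y) ~ (1/n) sum y(x_s)
e_x = x.mean(axis=0)

# Var(y) = E(y^2) - E(y)^2
var_x1 = (x[:, 0] ** 2).mean() - x[:, 0].mean() ** 2

# Covariance and correlation matrices (rowvar=False: columns are variables)
V = np.cov(x, rowvar=False)
C = np.corrcoef(x, rowvar=False)

print(e_x)        # ~ [1, -2]
print(var_x1)     # ~ 2
print(V)          # ~ [[2, 0.8], [0.8, 1]]
print(C[0, 1])    # ~ 0.8 / sqrt(2 * 1) ~ 0.566
```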
Cross entropy
TODO: discuss the Shannon entropy and Kullback-Leibler (KL) divergence.33
Shannon entropy:
\[ H(p) = - \underset{x\sim{}p}{\mathbb{E}}\big[ \log p(x) \big] \label{eq:shannon_entropy} \]
Cross entropy:
\[ H(p, q) = - \underset{x\sim{}p}{\mathbb{E}}\big[ \log q(x) \big] = - \sum_{x} p(x) \: \log q(x) \label{eq:cross_entropy} \]
Kullback-Leibler (KL) divergence:
\[\begin{align} D_\mathrm{KL}(p, q) &= \underset{x\sim{}p}{\mathbb{E}}\left[ \log \left(\frac{p(x)}{q(x)}\right) \right] = \underset{x\sim{}p}{\mathbb{E}}\big[ \log p(x) - \log q(x) \big] \label{eq:kl_divergence} \\ &= - H(p) + H(p, q) \\ \end{align}\]
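A minimal numerical sketch (assuming numpy; the two discrete distributions are illustrative) of these three quantities, checking that \(D_\mathrm{KL}(p, q) = H(p, q) - H(p)\):

```python
# Shannon entropy, cross entropy, and KL divergence for two
# discrete distributions p and q over the same three outcomes.
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

H_p = -np.sum(p * np.log(p))        # Shannon entropy of p
H_pq = -np.sum(p * np.log(q))       # cross entropy of q relative to p
D_kl = np.sum(p * np.log(p / q))    # KL divergence D_KL(p, q)

assert np.isclose(D_kl, H_pq - H_p)
print(H_p, H_pq, D_kl)
```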
See also the section on logistic regression.
Uncertainty
Quantiles and standard error
TODO:
- Quantiles
- Practice of standard error for uncertainty quantification.
Propagation of error
Given some vector of random variables, \(\vec{x}\), with estimated means, \(\vec{\mu}\), and estimated covariance matrix, \(\boldsymbol{V}\), suppose we are concerned with estimating the variance of some variable, \(y\), that is a function of \(\vec{x}\). The variance of \(y\) is given by
\[ \sigma^2_y = \mathbb{E}(y^2) - \mathbb{E}(y)^2 \,. \]
Taylor expanding \(y(\vec{x})\) about \(\vec{x}=\vec{\mu}\) (with summation over repeated indices implied) gives
\[ y(\vec{x}) \approx y(\vec{\mu}) + \left.\frac{\partial y}{\partial x_i}\right|_{\vec{x}=\vec{\mu}} (x_i - \mu_i) \,. \]
Therefore, to first order,
\[ \mathbb{E}(y) \approx y(\vec{\mu}) \]
and
\[\begin{align} \mathbb{E}(y^2) &\approx y^2(\vec{\mu}) + 2 \, y(\vec{\mu}) \, \left.\frac{\partial y}{\partial x_i}\right|_{\vec{x}=\vec{\mu}} \mathbb{E}(x_i - \mu_i) \nonumber \\ &+ \mathbb{E}\left[ \left(\left.\frac{\partial y}{\partial x_i}\right|_{\vec{x}=\vec{\mu}}(x_i - \mu_i)\right) \left(\left.\frac{\partial y}{\partial x_j}\right|_{\vec{x}=\vec{\mu}}(x_j - \mu_j)\right) \right] \\ &= y^2(\vec{\mu}) + \, \left.\frac{\partial y}{\partial x_i}\frac{\partial y}{\partial x_j}\right|_{\vec{x}=\vec{\mu}} V_{ij} \\ \end{align}\]
where the middle term vanishes because \(\mathbb{E}(x_i - \mu_i) = 0\). Combining these gives the first-order error propagation formula:
\[ \sigma^2_y \approx \left.\frac{\partial y}{\partial x_i}\frac{\partial y}{\partial x_j}\right|_{\vec{x}=\vec{\mu}} V_{ij} \]
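As a numerical check, here is a minimal sketch (assuming numpy; the function \(y = x_1 x_2\) and all numbers are illustrative) comparing the linearized variance, \(J V J^{\mathsf{T}}\), to a Monte Carlo estimate:

```python
# Linear error propagation for y = x1 * x2, checked against Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([3.0, 5.0])              # illustrative means
V = np.array([[0.1, 0.02],
              [0.02, 0.2]])            # illustrative covariance matrix

# y = x1 * x2  =>  Jacobian J = (dy/dx1, dy/dx2) = (x2, x1) at x = mu
J = np.array([mu[1], mu[0]])
var_y_linear = J @ V @ J               # sigma_y^2 ~ J V J^T

# Monte Carlo check
x = rng.multivariate_normal(mu, V, size=1_000_000)
y = x[:, 0] * x[:, 1]
print(var_y_linear, y.var())           # should agree to a good approximation
```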
See:
- Cowan.34
- Arras, K.O. (1998). An introduction to error propagation: Derivation, meaning and examples of \(C_y= F_x C_x F_{x}^{\top}\).35
Bayes’ theorem
- Bayes, Thomas (1701-1761)
- Bayes’ theorem
\[ P(A|B) = P(B|A) \: P(A) \: / \: P(B) \label{eq:bayes_theorem} \]
- Extended version of Bayes theorem
- Example of conditioning with medical diagnostics (see the sketch after this list)
- Cook, J.D. (2008). Canonical example of Bayes’ theorem in detail.
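A minimal sketch of the medical-diagnostics example mentioned above, with hypothetical numbers (1% prevalence, 95% sensitivity, 5% false-positive rate); the point is that the posterior, \(P(\mathrm{disease}|+)\), is far smaller than the test's sensitivity:

```python
# Bayes' theorem for medical screening with illustrative numbers.
p_d = 0.01          # prior: P(disease), the prevalence
p_pos_d = 0.95      # likelihood: P(positive | disease), the sensitivity
p_pos_nod = 0.05    # false-positive rate: P(positive | no disease)

# Law of total probability: P(positive)
p_pos = p_pos_d * p_d + p_pos_nod * (1 - p_d)

# Posterior: P(disease | positive) = P(positive | disease) P(disease) / P(positive)
p_d_pos = p_pos_d * p_d / p_pos
print(f"P(disease | positive) = {p_d_pos:.3f}")  # ~ 0.161, despite 95% sensitivity
```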
Likelihood and frequentist vs bayesian probability
- Frequentist vs bayesian probability
- Frequentism grew out of theories of statistical sampling error.
- Bayesianism grew out of what used to be called “inverse probability.”
- Fienberg, S.E. (2006). When did Bayesian inference become “Bayesian?”36
- Weisberg: “Two Schools”37
\[ P(H|D) = P(D|H) \: P(H) \: / \: P(D) \label{eq:bayes_theorem_hd} \]
- Likelihood
\[ L(\theta) = P(D|\theta) \label{eq:likelihood_def_x} \]
- We will return to the frequentist vs bayesian debate in the section on the “Statistics Wars”.
Fisher:
To appeal to such a result is absurd. Bayes’ theorem ought only to be used where we have in past experience, as for example in the case of probabilities and other statistical ratios, met with every admissible value with roughly equal frequency. There is no such experience in this case.38
Curse of dimensionality
- Curse of dimensionality
- The volume of the space increases so fast that the available data become sparse (demonstrated in the sketch after this list).
- Stein’s paradox
- The ordinary decision rule for estimating the mean of a multivariate Gaussian distribution (with dimension \(n \geq 3\)) is inadmissible under mean squared error risk.
- Stein, C. (1956). Inadmissibility of the usual estimator for the mean of a multivariate normal distribution.39
- James, W. & Stein, C. (1961). Estimation with quadratic loss.40
- Proof of Stein’s example
- van Handel, R. (2016). Probability in high dimensions.41
- Vershynin, R. (2018). High-Dimensional Probability: An Introduction with Applications in Data Science.42
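A minimal sketch (assuming numpy; the dimensions and sample size are illustrative) of the sparsity claim above: the fraction of points drawn uniformly in \([-1, 1]^d\) that land inside the inscribed unit ball collapses as \(d\) grows.

```python
# Fraction of uniform samples in the hypercube [-1, 1]^d that fall
# inside the inscribed unit ball, as a function of dimension d.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
for d in (1, 2, 3, 5, 10, 20):
    x = rng.uniform(-1.0, 1.0, size=(n, d))
    inside = (np.linalg.norm(x, axis=1) <= 1.0).mean()
    print(f"d = {d:2d}: fraction inside unit ball = {inside:.4f}")
# d=2 -> ~0.785 (pi/4); d=10 -> ~0.0025; d=20 -> ~2.5e-8 (likely zero hits)
```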
Statistical models
Parametric models
- Data: \(x_i\)
- Parameters: \(\theta_j\)
- Model: \(f(\vec{x} ; \vec{\theta})\)
Canonical distributions
Bernoulli distribution
\[ \mathrm{Ber}(k; p) = \begin{cases} p & \mathrm{if}\ k = 1 \\ 1-p & \mathrm{if}\ k = 0 \end{cases} \label{eq:bernoulli} \]
which can also be written as
\[ \mathrm{Ber}(k; p) = p^k \: (1-p)^{(1-k)} \quad \mathrm{for}\ k \in \{0, 1\} \]
or
\[ \mathrm{Ber}(k; p) = p k + (1-p)(1-k) \quad \mathrm{for}\ k \in \{0, 1\} \]
- Binomial distribution
- Poisson distribution
TODO: explain, another important relationship is
Normal/Gaussian distribution
\[ N(x \,|\, \mu, \sigma^2) = \frac{1}{\sqrt{2\,\pi\:\sigma^2}} \: \exp\left(\frac{-(x-\mu)^2}{2\,\sigma^2}\right) \label{eq:gaussian} \]
and in \(k\) dimensions:
\[ N(\vec{x} \,|\, \vec{\mu}, \boldsymbol{\Sigma}) = (2 \pi)^{-k/2}\:\left|\boldsymbol{\Sigma}\right|^{-1/2} \: \exp\left(\frac{-1}{2}\:(\vec{x}-\vec{\mu})^{\mathsf{T}}\:\boldsymbol{\Sigma}^{-1}\:(\vec{x}-\vec{\mu})\right) \label{eq:gaussian_k_dim} \]
where \(\boldsymbol{\Sigma}\) is the covariance matrix (defined in eq. \(\eqref{eq:covariance_matrix_indexed}\)) of the distribution.
- Central limit theorem
- \(\chi^2\) distribution
- Univariate distribution relationships
- The distributions in the exponential family are maximum entropy distributions.
Mixture models
- Gaussian mixture models (GMM)
- Marked Poisson
- pyhf model description
- HistFactory44
Point estimation and confidence intervals
Inverse problems
Recall the context of parametric models: data, \(x_i\), whose pdf is modeled by a function, \(f(x_i; \theta_j)\), with parameters, \(\theta_j\). In a statistical inverse problem, the goal is to infer values of the model parameters, \(\theta_j\), given some finite set of data, \(\{x_i\}\), sampled from a probability density, \(f(x_i; \theta_j)\), that models the data reasonably well.45
- Inverse problem
- Inverse probability (Fisher)
- Statistical inference
- See also: Structural realism
- Estimators
- Regression
- Accuracy vs precision46
Bias and variance
The bias of an estimator, \(\hat\theta\), is defined as
\[ \mathrm{Bias}(\hat{\theta}) \equiv \mathbb{E}(\hat{\theta} - \theta) = \int dx \: P(x|\theta) \: (\hat{\theta} - \theta) \label{eq:bias} \]
The mean squared error (MSE) of an estimator has a similar formula to variance (eq. \(\eqref{eq:variance}\)) except that instead of quantifying the square of the difference of the estimator and its expected value, the MSE uses the square of the difference of the estimator and the true parameter:
\[ \mathrm{MSE}(\hat{\theta}) \equiv \mathbb{E}((\hat{\theta} - \theta)^2) \label{eq:mse} \]
The MSE of an estimator can be related to its bias and its variance by the following derivation:
\[\begin{align} \mathrm{MSE}(\hat{\theta}) &= \mathbb{E}(\hat{\theta}^2 - 2 \: \hat{\theta} \: \theta + \theta^2) \nonumber \\ &= \mathbb{E}(\hat{\theta}^2) - 2 \: \mathbb{E}(\hat{\theta}) \: \theta + \theta^2 \end{align}\]
noting that
\[ \mathrm{Var}(\hat{\theta}) = \mathbb{E}(\hat{\theta}^2) - \mathbb{E}(\hat{\theta})^2 \]
and
\[\begin{align} \mathrm{Bias}(\hat{\theta})^2 &= \mathbb{E}(\hat{\theta} - \theta)^2 \nonumber \\ &= \mathbb{E}(\hat{\theta})^2 - 2 \: \mathbb{E}(\hat{\theta}) \: \theta + \theta^2 \end{align}\]
we see that MSE is equivalent to
\[ \mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \mathrm{Bias}(\hat{\theta})^2 \label{eq:mse_variance_bias} \]
For an unbiased estimator, the MSE is the variance of the estimator.
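A minimal Monte Carlo sketch (assuming numpy; the shrinkage estimator \(\hat{\theta} = 0.9\,\bar{x}\) is an arbitrary illustrative choice of a biased estimator) verifying the decomposition in eq. \(\eqref{eq:mse_variance_bias}\):

```python
# Check MSE = Var + Bias^2 by Monte Carlo for a deliberately biased
# estimator of a Gaussian mean: theta_hat = 0.9 * sample_mean.
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, n, trials = 2.0, 1.0, 25, 200_000

samples = rng.normal(theta, sigma, size=(trials, n))
theta_hat = 0.9 * samples.mean(axis=1)   # shrinkage estimator (biased)

mse = np.mean((theta_hat - theta) ** 2)
var = theta_hat.var()
bias = theta_hat.mean() - theta

print(mse, var + bias**2)  # the two should agree
# Analytically: Var = 0.81 * sigma^2 / n = 0.0324, Bias = -0.2, MSE ~ 0.0724
```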
TODO:
- Note the discussion of the bias-variance tradeoff by Cranmer.
- Note the new deep learning view. See Deep learning.
See also:
Maximum likelihood estimation
A maximum likelihood estimator (MLE) was first used by Fisher.47
\[\hat{\theta} \equiv \underset{\theta}{\mathrm{argmax}} \: \mathrm{log} \: L(\theta) \label{eq:mle} \]
Maximizing \(\mathrm{log} \: L(\theta)\) is equivalent to maximizing \(L(\theta)\), and the former is more convenient because for data that are independent and identically distributed (i.i.d.) the joint likelihood can be factored into a product of individual measurements:
\[ L(\theta) = \prod_i L(\theta|x_i) = \prod_i P(x_i|\theta) \]
and taking the log of the product makes it a sum:
\[ \mathrm{log} \: L(\theta) = \sum_i \mathrm{log} \: L(\theta|x_i) = \sum_i \mathrm{log} \: P(x_i|\theta) \]
Maximizing \(\mathrm{log} \: L(\theta)\) is also equivalent to minimizing \(-\mathrm{log} \: L(\theta)\), the negative log-likelihood (NLL). For distributions that are i.i.d.,
\[ \mathrm{NLL} \equiv - \log L = - \log \prod_i L_i = - \sum_i \log L_i = \sum_i \mathrm{NLL}_i \]
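A minimal sketch (assuming numpy and scipy; the Gaussian model and synthetic data are illustrative) of MLE by direct numerical minimization of the NLL:

```python
# Maximum likelihood estimation: minimize the NLL of i.i.d. Gaussian
# data over (mu, sigma), and compare to the sample mean and std.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
data = rng.normal(loc=3.0, scale=2.0, size=1_000)

def nll(params):
    mu, log_sigma = params             # fit log(sigma) to keep sigma > 0
    sigma = np.exp(log_sigma)
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

result = minimize(nll, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)               # ~ (3.0, 2.0); matches data.mean(), data.std()
```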
Invariance of likelihoods under reparametrization
- Likelihoods are invariant under reparametrization.48
- Bayesian posteriors are not invariant in general.
See also:
Ordinary least squares
- Least squares from MLE of Gaussian models: \(\chi^2\)
- Ordinary Least Squares (OLS)
- Geometric interpretation
Variance of MLEs
- Taylor expansion of a likelihood near its maximum
- Cramér-Rao bound51
- Define efficiency of an estimator.
- Common formula for variance of unbiased and efficient estimators
- Proof in Rice52
- Cranmer: Cramér-Rao bound
- Nielsen, F. (2013). Cramer-Rao lower bound and information geometry.53
- Under some reasonable conditions, one can show that MLEs are asymptotically efficient and unbiased. TODO: find ref.
- Fisher information matrix
- “is the key part of the proof of Wilks’ theorem, which allows confidence region estimates for maximum likelihood estimation (for those conditions for which it applies) without needing the Likelihood Principle.”
- Variance of MLEs
- Wilks’s theorem
- Method of \(\Delta\chi^2\) or \(\Delta{}L\)
- Frequentist confidence intervals (e.g. at 95% CL)
- Cowan54
- Likelihood need not be Gaussian55
- Minos method in particle physics in MINUIT56
- See slides for my talk: Primer on statistics: MLE, Confidence Intervals, and Hypothesis Testing
- Asymptotics
- Cowan, G., Cranmer, K., Gross, E., & Vitells, O. (2012). Asymptotic distribution for two-sided tests with lower and upper boundaries on the parameter of interest.57
- Common error bars
- Poisson error bars
- Gaussian approximation: \(\sqrt{n}\)
- Wilson-Hilferty approximation
- Binomial error bars
- Error on efficiency or proportion
- See: Statistical classification
- Poisson error bars
- Discussion
- Wainer, H. (2007). The most dangerous equation. (de Moivre’s equation for variance of means)58
- Misc
- Karhunen-Loève eigenvalue problems in cosmology: How should we tackle large data sets?59
Bayesian credibility intervals
- Inverse problem to find a posterior probability distribution.
- Maximum a posteriori estimation (MAP)
- Prior sensitivity
- Betancourt, M. (2018). Towards a principled Bayesian workflow - ipynb
- Not invariant to reparametrization in general
- Jeffreys priors are
- TODO: James
Uncertainty on measuring an efficiency
- Binomial proportion confidence interval
- Normal/Gaussian/Wald interval
- Wilson score interval (see the sketch after this list)
- Clopper-Pearson interval (1934)60
- Agresti-Coull interval (1998)61
- Rule of three (1983)62
- Review by Brown, Cai, & DasGupta (2001)63
- Casadei, D. (2012). Estimating the selection efficiency.64
- Precision vs recall for classification, again
- Classification and logistic regression
- See also:
- Logistic regression in the section on Classical machine learning.
- Clustering in the section on Classical machine learning.
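A minimal sketch (assuming numpy and scipy; the counts \(k\) and \(n\) are illustrative) of two of the intervals listed above, the Wilson score interval and the Clopper-Pearson interval, for \(k\) successes in \(n\) trials at 95% confidence:

```python
# Binomial proportion intervals for an efficiency estimate k/n.
import numpy as np
from scipy.stats import beta, norm

k, n = 7, 100                 # e.g., 7 events pass a selection out of 100
alpha = 0.05
z = norm.ppf(1 - alpha / 2)   # ~ 1.96
p = k / n

# Wilson score interval
center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
halfwidth = (z / (1 + z**2 / n)) * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
print("Wilson:", center - halfwidth, center + halfwidth)

# Clopper-Pearson ("exact") interval from beta-distribution quantiles
lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
print("Clopper-Pearson:", lo, hi)
```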
Examples
- Some sample mean
- Bayesian lighthouse
- Measuring an efficiency
- Some HEP fit
Statistical hypothesis testing
Null hypothesis significance testing
- Karl Pearson observing how rare sequences of roulette spins are
- Null hypothesis significance testing (NHST)
- goodness of fit
- Fisher
Fisher:
[T]he null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation.65
Neyman-Pearson theory
Introduction
- probes an alternative hypothesis66
- Type-1 and type-2 errors
- Power and confidence
- Cohen, J. (1992). A power primer.67
- Cranmer, K. (2020). Thumbnail of LHC statistical procedures.
- ATLAS and CMS Collaborations. (2011). Procedure for the LHC Higgs boson search combination in Summer 2011.68
- Cowan, G., Cranmer, K., Gross, E., & Vitells, O. (2011). Asymptotic formulae for likelihood-based tests of new physics.69
See also:
Neyman-Pearson lemma
Neyman-Pearson lemma:70
For a fixed signal efficiency, \(1-\alpha\), the selection that corresponds to the lowest possible misidentification probability, \(\beta\), is given by
\[ \frac{L(H_1)}{L(H_0)} > k_{\alpha} \,, \label{eq:np-lemma} \]
where \(k_{\alpha}\) is the cut value required to achieve a type-1 error rate of \(\alpha\).
Neyman-Pearson test statistic:
\[ q_\mathrm{NP} = - 2 \ln \frac{L(H_1)}{L(H_0)} \label{eq:qnp-test-stat} \]
Profile likelihood ratio:
\[ \lambda(\mu) = \frac{ L(\mu, \hat{\theta}_\mu) }{ L(\hat{\mu}, \hat{\theta}) } \label{eq:profile-llh-ratio} \]
where \(\hat{\mu}\) and \(\hat{\theta}\) are the (unconditional) maximum-likelihood estimators that jointly maximize \(L\), while \(\hat{\theta}_\mu\) is the conditional maximum-likelihood estimator that maximizes \(L\) for a specified signal strength, \(\mu\), and \(\theta\) is the vector of all other parameters, including the nuisance parameters.
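A minimal sketch (assuming numpy and scipy) of such a likelihood-ratio test statistic for a single-bin counting experiment, \(n \sim \mathrm{Poisson}(\mu s + b)\), with known signal and background expectations and no nuisance parameters, so the profiling step is trivial; the yields are illustrative:

```python
# -2 ln(lambda(mu)) for a single-bin Poisson counting experiment.
import numpy as np
from scipy.stats import poisson

s, b = 10.0, 50.0        # expected signal and background yields (illustrative)
n_obs = 65               # observed count

def neg2_log_lambda(mu):
    mu_hat = max((n_obs - b) / s, 0.0)          # unconditional MLE of mu
    log_l_mu = poisson.logpmf(n_obs, mu * s + b)
    log_l_hat = poisson.logpmf(n_obs, mu_hat * s + b)
    return -2.0 * (log_l_mu - log_l_hat)

print(neg2_log_lambda(0.0))   # test statistic for the background-only hypothesis
print(neg2_log_lambda(1.5))   # zero here, since mu_hat = (65 - 50) / 10 = 1.5
```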
Neyman construction
Cranmer: Neyman construction.
TODO: fix
\[ q = - 2 \ln \frac{L(\mu\,s + b)}{L(b)} \label{eq:q0-test-stat} \]
Flip-flopping
- Flip-flopping and Feldman-Cousins confidence intervals71
p-values and significance
- \(p\)-values and significance72
- Coverage
- Fisherian vs Neyman-Pearson \(p\)-values
Cowan et al. define a \(p\)-value as
a probability, under assumption of \(H\), of finding data of equal or greater incompatibility with the predictions of \(H\).73
Also:
It should be emphasized that in an actual scientific context, rejecting the background-only hypothesis in a statistical sense is only part of discovering a new phenomenon. One’s degree of belief that a new process is present will depend in general on other factors as well, such as the plausibility of the new signal hypothesis and the degree to which it can describe the data. Here, however, we only consider the task of determining the \(p\)-value of the background-only hypothesis; if it is found below a specified threshold, we regard this as “discovery.”74
Upper limits
- Cousins, R.D. & Highland, V.L. (1992). Incorporating systematic uncertainties into an upper limit.75
CLs method
Asymptotics
- Analytic variance of the likelihood-ratio of gaussians: \(\chi^2\)
- Wilks79
- Under the null hypothesis, \(-2 \ln(\lambda) \sim \chi^{2}_{k}\), where the number of degrees of freedom, \(k\), of the \(\chi^{2}\) distribution is the number of parameters of interest (including the signal strength) in the signal model but not in the null-hypothesis background model (see the sketch after this list).
- Wald80
- Wald generalized the work of Wilks for the case of testing some nonzero signal for exclusion, showing \(-2 \ln(\lambda) \approx (\hat{\theta} - \theta)^{\mathsf{T}}V^{-1} (\hat{\theta} - \theta) \sim \mathrm{noncentral}\:\chi^{2}_{k}\).
- In the simplest case where there is only one parameter of interest (the signal strength, \(\mu\)), then \(-2 \ln(\lambda) \approx \frac{ (\hat{\mu} - \mu)^{2} }{ \sigma^2 } \sim \mathrm{noncentral}\:\chi^{2}_{1}\).
- Pearson \(\chi^2\)-test
- Wilks79
- Cowan et al.81
- Wald approximation
- Asimov dataset
- Talk by Armbruster: Asymptotic formulae (2013).
- Criteria for projected discovery and exclusion sensitivities of counting experiments82
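A minimal sketch (assuming numpy and scipy; the value of the test statistic is illustrative) of Wilks' theorem in use: converting an observed \(-2\ln\lambda\) into a \(p\)-value, and, for one parameter of interest, into the one-sided discovery significance \(Z = \sqrt{q_0}\):

```python
# Asymptotic p-values from Wilks' theorem for one parameter of interest.
import numpy as np
from scipy.stats import chi2, norm

q0 = 25.0                            # observed -2 ln(lambda), illustrative
p_two_sided = chi2.sf(q0, df=1)      # chi-square tail probability
Z = np.sqrt(q0)                      # one-sided significance; Z = 5 here
p_one_sided = norm.sf(Z)             # ~ 2.87e-7, the "5 sigma" p-value
print(p_two_sided, Z, p_one_sided)
```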
Student’s t-test
- Student’s t-test
- ANOVA
- A/B-testing
Frequentist vs bayesian decision theory
- Frequentist vs bayesian decision theory83
- Goodman, S.N. (1999). Toward evidence-based medical statistics 2: The Bayes factor.84
Support for using Bayes factors:
which properly separates issues of long-run behavior from evidential strength and allows the integration of background knowledge with statistical findings.85
See also:
Examples
- Difference of two means: \(t\)-test
- A/B-testing
- New physics
Uncertainty quantification
Sinervo classification of systematic uncertainties
- Class-1, class-2, and class-3 systematic uncertainties (the good, the bad, and the ugly): a classification by Pekka Sinervo (PhyStat2003)86
- Not to be confused with type-1 and type-2 errors in Neyman-Pearson theory
- Heinrich, J. & Lyons, L. (2007). Systematic errors.87
- Caldeira & Nord88
Lyons:
In analyses involving enough data to achieve reasonable statistical accuracy, considerably more effort is devoted to assessing the systematic error than to determining the parameter of interest and its statistical error.89
- Poincaré’s three levels of ignorance
Profile likelihoods
- Profiling and the profile likelihood
- Importance of Wald and Cowan et al.
- hybrid Bayesian-frequentist method
Examples of poor estimates of systematic uncertainties
- Unaccounted-for effects
- CDF \(Wjj\) bump
- Phys.Rev.Lett.106:171801 (2011) / arxiv:1104.0699
- Invariant mass distribution of jet pairs produced in association with a \(W\) boson in \(p\bar{p}\) collisions at \(\sqrt{s}\) = 1.96 TeV
- Dorigo, T. (2011). The jet energy scale as an explanation of the CDF signal.
- OPERA. (2011). Faster-than-light neutrinos.
- BICEP2 claimed evidence of B-modes in the CMB as evidence of cosmic inflation without accounting for cosmic dust.
Statistical classification
Introduction
- Precision vs recall
- Recall is sensitivity
- Sensitivity vs specificity
- Accuracy
Examples
- TODO
See also:
Causal inference
Introduction
- Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies.90
- Lewis, D. (1981). Causal decision theory.91
- Pearl, J. (2018). The Book of Why: The new science of cause and effect.92
See also:
Causal models
- Structural Causal Model (SCM)
- Pearl, J. (2009). Causal inference in statistics: An overview.93
- Robins, J.M. & Wasserman, L. (1999). On the impossibility of inferring causation from association without background knowledge.94
- Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of Causal Inference.95
- Lundberg, I., Johnson, R., & Stewart, B.M. (2021). What is your estimand? Defining the target quantity connects statistical evidence to theory.96
Counterfactuals
- Counterfactuals
- Regret
- Interventionist conception of causation
- Ismael, J. (2023). Reflections on the asymmetry of causation.97
- Chevalley, M., Schwab, P., & Mehrjou, A. (2024). Deriving causal order from single-variable interventions: Guarantees & algorithm.98
Exploratory data analysis
Introduction
- William Playfair (1759-1823)
- Father of statistical graphics
- John Tukey (1915-2000)
- Exploratory data analysis
- Exploratory Data Analysis (1977)99
Look-elsewhere effect
- Look-elsewhere effect (LEE)
- Related: the file-drawer effect (publication bias)
- Stopping rules
- validation dataset
- Statistical issues: dependence on the stopping rule violates the likelihood principle
Archiving and data science
- “Data science”
- Data collection, quality, analysis, archival, and reinterpretation
- Scientific research and big data
- Reproducible and reinterpretable
- RECAST
- Chen, X. et al. (2018). Open is not enough.100
“Statistics Wars”
Introduction
Cranmer:
Bayes’s theorem is a theorem, so there’s no debating it. It is not the case that Frequentists dispute whether Bayes’s theorem is true. The debate is whether the necessary probabilities exist in the first place. If one can define the joint probability \(P (A, B)\) in a frequentist way, then a Frequentist is perfectly happy using Bayes theorem. Thus, the debate starts at the very definition of probability.103
Neyman:
Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong.104
Likelihood principle
- Likelihood principle
- The likelihood principle is the proposition that, given a statistical model and a data sample, all the evidence relevant to model parameters is contained in the likelihood function.
- The history of likelihood106
- Allan Birnbaum proved that the likelihood principle follows from two more primitive and seemingly reasonable principles, the conditionality principle and the sufficiency principle.107
- Hacking identified the “law of likelihood.”108
- Berger & Wolpert. (1988). The Likelihood Principle.109
O’Hagan:
The first key argument in favour of the Bayesian approach can be called the axiomatic argument. We can formulate systems of axioms of good inference, and under some persuasive axiom systems it can be proved that Bayesian inference is a consequence of adopting any of these systems… If one adopts two principles known as ancillarity and sufficiency principles, then under some statement of these principles it follows that one must adopt another known as the likelihood principle. Bayesian inference conforms to the likelihood principle whereas classical inference does not. Classical procedures regularly violate the likelihood principle or one or more of the other axioms of good inference. There are no such arguments in favour of classical inference.110
- Gandenberger
- “A new proof of the likelihood principle”111
- Thesis: Two Principles of Evidence and Their Implications for the Philosophy of Scientific Method (2015)
- gandenberger.org/research
- Do frequentist methods violate the likelihood principle?
- Criticisms:
- Likelihoodist statistics
Mayo:
Likelihoods are vital to all statistical accounts, but they are often misunderstood because the data are fixed and the hypothesis varies. Likelihoods of hypotheses should not be confused with their probabilities. … [T]he same phenomenon may be perfectly predicted or explained by two rival theories; so both theories are equally likely on the data, even though they cannot both be true.116
Discussion
Lyons:
Particle Physicists tend to favor a frequentist method. This is because we really do consider that our data are representative as samples drawn according to the model we are using (decay time distributions often are exponential; the counts in repeated time intervals do follow a Poisson distribution, etc.), and hence we want to use a statistical approach that allows the data “to speak for themselves,” rather than our analysis being dominated by our assumptions and beliefs, as embodied in Bayesian priors.117
- Carnap
- Sznajder on the alleged evolution of Carnap’s views of inductive logic118
- Carnap, R. (1952). The Continuum of Inductive Methods.119
- David Cox
- Ian Hacking
- Logic of Statistical Inference120
- Neyman
- “Frequentist probability and frequentist statistics”121
- Rozeboom
- Rozeboom, W.W. (1960). The fallacy of the null-hypothesis significance test.122
- Meehl
- Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology.123
- Zech
- “Comparing statistical data to Monte Carlo simulation”124
- Richard Royall
- Statistical Evidence: A likelihood paradigm125
- Jim Berger
- “Could Fisher, Jeffreys, and Neyman have agreed on testing?”126
- Deborah Mayo
- “In defense of the Neyman-Pearson theory of confidence intervals”127
- Concept of “Learning from error” in Error and the Growth of Experimental Knowledge128
- “Severe testing as a basic concept in a Neyman-Pearson philosophy of induction”129
- “Error statistics”130
- Statistical Inference as Severe Testing131
- Statistics Wars: Interview with Deborah Mayo - APA blog
- Review of SIST by Prasanta S. Bandyopadhyay
- LSE Research Seminar: Current Controversies in Phil Stat (May 21, 2020)
- Meeting 5 (June 18, 2020)
- Slides: The Statistics Wars and Their Casualties
- Andrew Gelman
- Confirmationist and falsificationist paradigms of science - Sept. 5, 2014
- Beyond subjective and objective in statistics132
- Retire Statistical Significance: The discussion
- Exchange with Deborah Mayo on abandoning statistical significance
- Several reviews of SIST
- Larry Wasserman
- Kevin Murphy
- Greg Gandenberger
- An introduction to likelihoodist, bayesian, and frequentist methods (1/3)
- As Neyman and Pearson put it in their original presentation of the frequentist approach, “without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong” (1933, 291).
- An introduction to likelihoodist, bayesian, and frequentist methods (2/3)
- An introduction to likelihoodist, bayesian, and frequentist methods (3/3)
- An argument against likelihoodist methods as genuine alternatives to bayesian and frequentist methods
- “Why I am not a likelihoodist”135
- Jon Wakefield
- Bayesian and Frequentist Regression Methods136
- Efron & Hastie
- “Flaws in Frequentist Inference”137
- Kruschke & Liddel138
- Steinhardt, J. (2012). Beyond Bayesians and frequentists.139
- VanderPlas, J. (2014). Frequentism and Bayesianism III: Confidence, credibility, and why frequentism and science do not mix.
- Kent, B. (2021). No, your confidence interval is not a worst-case analysis.
Goodman:
The idea that the \(P\) value can play both of these roles is based on a fallacy: that an event can be viewed simultaneously both from a long-run and a short-run perspective. In the long-run perspective, which is error-based and deductive, we group the observed result together with other outcomes that might have occurred in hypothetical repetitions of the experiment. In the “short run” perspective, which is evidential and inductive, we try to evaluate the meaning of the observed result from a single experiment. If we could combine these perspectives, it would mean that inductive ends (drawing scientific conclusions) could be served with purely deductive methods (objective probability calculations).140
Replication crisis
Introduction
- Ioannidis, J.P. (2005). Why most published research findings are false.141
p-value controversy
- Wasserstein, R.L. & Lazar, N.A. (2016). The ASA’s statement on \(p\)-values: Context, process, and purpose.142
- Wasserstein, R.L., Allen, L.S., & Lazar, N.A. (2019). Moving to a World Beyond “p<0.05.”143
- Big names in statistics want to shake up much-maligned P value144
- Hi-Phi Nation, episode 7
- Fisher:
[N]o isolated experiment, however significant in itself, can suffice for the experimental demonstration of any natural phenomenon; for the “one chance in a million” will undoubtedly occur, with no less and no more than its appropriate frequency, however surprised we may be that it should occur to us. In order to assert that a natural phenomenon is experimentally demonstrable we need, not an isolated record, but a reliable method of procedure. In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result.145
- Relationship to the LEE
- Tukey, John (1915-2000)
- Wasserman
- Rao & Lovric
- Rao, C.R. & Lovric, M.M. (2016). Testing point null hypothesis of a normal mean and the truth: 21st century perspective.146
- Mayo
- “Les stats, c’est moi: We take that step here!”
- “Significance tests: Vitiated or vindicated by the replication crisis in psychology?”147
- At long last! The ASA President’s Task Force Statement on Statistical Significance and Replicability
- Gorard & Gorard. (2016). What to do instead of significance testing.148
- Vox: What a nerdy debate about p-values shows about science–and how to fix it
- Karen Kafadar: The Year in Review … And More to Come
- The JASA Reproducibility Guide
From “The ASA president’s task force statement on statistical significance and replicability”:
P-values are valid statistical measures that provide convenient conventions for communicating the uncertainty inherent in quantitative results. Indeed, P-values and significance tests are among the most studied and best understood statistical procedures in the statistics literature. They are important tools that have advanced science through their proper application.149
Classical machine learning
Introduction
- Classification vs regression
- Supervised and unsupervised learning
- Classification = supervised; clustering = unsupervised
- Hastie, Tibshirani, & Friedman150
- Information Theory, Inference, and Learning151
- Murphy, K.P. (2012). Machine Learning: A probabilistic perspective. MIT Press.152
- Murphy, K.P. (2022). Probabilistic Machine Learning: An introduction. MIT Press.153
- Shalev-Shwarz, S. & Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms.154
- VC-dimension
History
- Arthur Samuel (1901-1990)
- Dartmouth workshop (1956)
- McCarthy, J., Minsky, M.L., Rochester, N., & Shannon, C.E. (1955). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence.157
- Solomonoff, G. (2016). Ray Solomonoff and the Dartmouth Summer Research Project in Artificial Intelligence, 1956.158
- Rudolf Carnap (1891-1970)
- Kardum, M. (2020). Rudolf Carnap–The grandfather of artificial neural networks: The influence of Carnap’s philosophy on Walter Pitts.159
- Bright, L.K. (2022). Carnap’s contributions.
- Perceptron
- Sompolinsky, H. (2013). Introduction: The Perceptron.
- Rosenblatt, F. (1961). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms.160
- Connectionist vs symbolic AI
- Minsky, M. & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry.161
- AI Winter
See also:
- Honorific reinterpretation of scientism
Logistic regression
From a probabilistic point of view,162 logistic regression can be derived by maximum likelihood estimation of a vector of model parameters, \(\vec{w}\), taken in a dot product with the input features, \(\vec{x}\), and squashed with a logistic function that yields the probability, \(\mu\), of a Bernoulli random variable, \(y \in \{0, 1\}\).
\[ p(y | \vec{x}, \vec{w}) = \mathrm{Ber}(y | \mu(\vec{x}, \vec{w})) = \mu(\vec{x}, \vec{w})^y \: (1-\mu(\vec{x}, \vec{w}))^{(1-y)} \]
The negative log-likelihood of multiple trials is
\[\begin{align} \mathrm{NLL} &= - \sum_i \log p(y_i | \vec{x}_i, \vec{w}) \nonumber \\ &= - \sum_i \log\left( \mu(\vec{x}_i, \vec{w})^{y_i} \: (1-\mu(\vec{x}_i, \vec{w}))^{(1-y_i)} \right) \nonumber \\ &= - \sum_i \log\left( \mu_i^{y_i} \: (1-\mu_i)^{(1-y_i)} \right) \nonumber \\ &= - \sum_i \big( y_i \, \log \mu_i + (1-y_i) \log(1-\mu_i) \big) \label{eq:cross_entropy_loss0} \end{align}\]
which is the cross entropy loss. Note that the first term is non-zero only when the true target is \(y_i=1\), and similarly the second term is non-zero only when \(y_i=0\).163 Therefore, we can reparametrize the target \(y_i\) in favor of \(t_{ki}\) that is one-hot in an index \(k\) over classes.
\[ \mathrm{CEL} = \mathrm{NLL} = - \sum_i \sum_k \big( t_{ki} \, \log \mu_{ki} \big) \label{eq:cross_entropy_loss1} \]
where
\[ t_{ki} = \begin{cases} 1 & \mathrm{if}\ (k = y_i = 0)\ \mathrm{or}\ (k = y_i = 1) \\ 0 & \mathrm{otherwise} \end{cases} \]
and
\[ \mu_{ki} = \begin{cases} 1-\mu_i & \mathrm{if}\ k = 0 \\ \mu_i & \mathrm{if}\ k =1 \end{cases} \]
This readily generalizes from binary classification to classification over many classes as we will discuss more below. Note that in the sum over classes, \(k\), only one term for the true class contributes.
\[ \mathrm{CEL} = - \left. \sum_i \log \mu_{ki} \right|_{k\ \mathrm{is\ such\ that}\ t_{ki}=1} \label{eq:cross_entropy_loss2} \]
Logistic regression uses the logit function,164 which is the logarithm of the odds—the ratio of the chance of success to failure. Let \(\mu\) be the probability of success in a Bernoulli trial, then the logit function is defined as
\[ \mathrm{logit}(\mu) \equiv \log\left(\frac{\mu}{1-\mu}\right) \label{eq:logit} \]
Logistic regression assumes that the logit function is a linear function of the explanatory variable, \(x\).
\[ \log\left(\frac{\mu}{1-\mu}\right) = \beta_0 + \beta_1 x \]
where \(\beta_0\) and \(\beta_1\) are trainable parameters. (TODO: Why would we assume this?) This can be generalized to a vector of multiple input variables, \(\vec{x}\), where the input vector has a 1 prepended to be its zeroth component in order to conveniently include the bias, \(\beta_0\), in a dot product.
\[ \vec{x} = (1, x_1, x_2, \ldots, x_n)^{\mathsf{T}}\]
\[ \vec{w} = (\beta_0, \beta_1, \beta_2, \ldots, \beta_n)^{\mathsf{T}}\]
\[ \log\left(\frac{\mu}{1-\mu}\right) = \vec{w}^{\mathsf{T}}\vec{x} \]
For the moment, let \(z \equiv \vec{w}^{\mathsf{T}}\vec{x}\). Exponentiating and solving for \(\mu\) gives
\[ \mu = \frac{ e^z }{ 1 + e^z } = \frac{ 1 }{ 1 + e^{-z} } \]
This function is called the logistic or sigmoid function.
\[ \mathrm{logistic}(z) \equiv \mathrm{sigm}(z) \equiv \frac{ 1 }{ 1 + e^{-z} } \label{eq:logistic} \]
Since we inverted the logit function by solving for \(\mu\), the inverse of the logit function is the logistic or sigmoid.
\[ \mathrm{logit}^{-1}(z) = \mathrm{logistic}(z) = \mathrm{sigm}(z) \]
And therefore,
\[ \mu = \mathrm{sigm}(z) = \mathrm{sigm}(\vec{w}^{\mathsf{T}}\vec{x}) \]
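A minimal end-to-end sketch (assuming numpy; the synthetic data, learning rate, and step count are illustrative) of the derivation above: fit \(\vec{w}\) by gradient descent on the NLL/cross-entropy loss, whose gradient works out to \(X^{\mathsf{T}}(\vec{\mu} - \vec{y})\):

```python
# Logistic regression trained by gradient descent on the cross-entropy.
import numpy as np

rng = np.random.default_rng(6)
n = 1_000
x = rng.normal(size=(n, 2))
X = np.hstack([np.ones((n, 1)), x])            # prepend 1 for the bias term
w_true = np.array([-0.5, 2.0, -1.0])           # (beta_0, beta_1, beta_2)

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sample Bernoulli labels with success probability mu = sigm(X w_true)
y = (rng.uniform(size=n) < sigm(X @ w_true)).astype(float)

w = np.zeros(3)
for step in range(5_000):
    mu = sigm(X @ w)
    grad = X.T @ (mu - y) / n                  # gradient of the mean NLL
    w -= 0.5 * grad

print(w)  # ~ w_true, up to sampling noise
```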
See also:
- Logistic regression
- Harlan, W.S. (2007). Bounded geometric growth: motivation for the logistic function.
- Heesch, D. A short intro to logistic regression.
- Roelants, P. (2019). Logistic classification with cross-entropy.
Softmax regression
Again, from a probabilistic point of view, we can derive the use of multi-class cross entropy loss by starting with the Bernoulli distribution, generalizing it to multiple classes (indexed by \(k\)) as
\[ p(y_k | \mu) = \mathrm{Cat}(y_k | \mu_k) = \prod_k {\mu_k}^{y_k} \label{eq:categorical_distribution} \]
which is the categorical or multinoulli distribution. The negative-log likelihood of multiple independent trials is
\[ \mathrm{NLL} = - \sum_i \log \left(\prod_k {\mu_{ki}}^{y_{ki}}\right) = - \sum_i \sum_k y_{ki} \: \log \mu_{ki} \label{eq:nll_multinomial} \]
Noting again that \(y_{ki} = 1\) only when \(k\) is the true class, and is 0 otherwise, this simplifies to eq. \(\eqref{eq:cross_entropy_loss2}\).
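A minimal sketch (assuming numpy; the logits and targets are illustrative) of softmax classification and the multi-class cross entropy above, where only the true-class term contributes per sample:

```python
# Softmax over logits and multi-class cross-entropy against one-hot targets.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 3.0]])        # two samples, three classes
targets = np.array([0, 2])                  # true class indices

mu = softmax(logits)
# Only the true-class term contributes to each sample's loss:
cel = -np.log(mu[np.arange(len(targets)), targets]).sum()
print(mu, cel)
```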
See also:
- Multinomial logistic regression
- McFadden165
- Softmax is really a soft argmax. TODO: find ref.
- Softmax is not unique. There are other squashing functions.166
- Roelants, P. (2019). Softmax classification with cross-entropy.
- Gradients from backprop through a softmax
- Goodfellow et al. point out that any negative log-likelihood is a cross entropy between the training data and the probability distribution predicted by the model.167
Decision trees
- Freund, Y. & Schapire, R.E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. (AdaBoost)168
- Chen, T. & Guestrin, C. (2016). Xgboost: A scalable tree boosting system.169
- Aytekin, C. (2022). Neural networks are decision trees.170
- Grinsztajn, L., Oyallon, E., & Varoquaux, G. (2022). Why do tree-based models still outperform deep learning on tabular data?171
- Coadou, Y. (2022). Boosted decision trees.172
Clustering
- unsupervised
- Gaussian Mixture Models (GMMs)
- Gaussian discriminant analysis
- \(\chi^2\)
- Generalized Linear Models (GLMs)
- Exponential family of PDFs
- Multinoulli \(\mathrm{Cat}(x|\mu)\)
- GLMs
- EM algorithm
- \(k\)-means
- Clustering high-dimensional data
- t-distributed stochastic neighbor embedding (t-SNE)
- Slonim, N., Atwal, G.S., Tkacik, G. & Bialek, W. (2005). Information-based clustering.173
- Topological data analysis
- Dindin, M. (2018). TDA To Rule Them All: ToMATo Clustering.
- Relationship of clustering and autoencoding
- Olah, C. (2014). Neural networks, manifolds, and topology.
- Batson et al. (2021). Topological obstructions to autoencoding.174
- “What are the true clusters?”175
- Lauc, D. (2020). Machine learning and the philosophical problems of induction.176
- Ronen, M., Finder, S.E., & Freifeld, O. (2022). DeepDPM: Deep clustering with an unknown number of clusters.177
- Fang, Z. et al. (2022). Is out-of-distribution detection learnable?.178
See also:
Deep learning
Introduction
- Conceptual reviews of deep learning
- Lower to higher level representations179
- LeCun, Y., Bengio, Y., & Hinton, G. (2015). Review: Deep learning.180
- Sutskever, I. (2015). A brief overview of deep learning.181
- Deep Learning182
- Kaplan, J. (2019). Notes on contemporary machine learning.183
- Raissi, M. (2017). Deep learning tutorial.
- Backpropagation
- Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). Learning representations by back-propagating errors.184
- Schmidhuber’s Critique of Honda Prize for Dr. Hinton.
- Schmidhuber: Who invented backpropagation?
- Scmidhuber: The most cited neural networks all build on work done in my labs.
- LeCun, Y. & Bottou, L. (1998). Efficient BackProp.185
- Pedagogy
- Bekman, S. (2023). Machine Learning Engineering Open Book.
- Labonne, M. (2023). Large Language Model Course.
- Microsoft. (2023). Generative AI for Beginners.
- Scardapane, S. (2024). Alice’s Adventures in a Differentiable Wonderland, Vol. I: A Tour of the Land.186
- Grover, A. (2018). Notes on deep generative models.
- Practical guides
- Bottou, L. (1998). Stochastic gradient descent tricks.187
- Bengio, Y. (2012). Practical recommendations for gradient-based training of deep architectures.
- Hao, L. et al. (2017). Visualizing the loss landscape of neural nets.
- Discussion
- Norvig, P. (2011). On Chomsky and the Two Cultures of Statistical Learning.188
- Sutton, R. (2019). The bitter lesson.189
- Frankle, J. & Carbin, M. (2018). The lottery ticket hypothesis: Finding sparse, trainable neural networks.190
- AIMyths.com
Gradient descent
Training can be framed as searching a space of functions, \(\mathcal{F}\), for the one that minimizes the expected loss over the data distribution:
\[ \hat{f} = \underset{f \in \mathcal{F}}{\mathrm{argmin}} \underset{x \sim \mathcal{X}}{\mathbb{E}} L(f, x) \]
The workhorse algorithm for optimizing (training) model parameters is gradient descent:
\[ \vec{w}[t+1] = \vec{w}[t] - \eta \frac{\partial L}{\partial \vec{w}}[t] \]
In Stochastic Gradient Descent (SGD), you chunk the training data into minibatches (AKA batches), \(\vec{x}_{bt}\), and take a gradient descent step with each minibatch:
\[ \vec{w}[t+1] = \vec{w}[t] - \frac{\eta}{m} \sum_{\vec{x}_i \in \vec{x}_{bt}} \frac{\partial L}{\partial \vec{w}}[\vec{x}_{i}] \]
where
- \(t \in \mathbf{N}\) is the learning step number
- \(\eta\) is the learning rate
- \(m\) is the number of samples in a minibatch, called the batch size
- \(L\) is the loss function
- \(\frac{\partial L}{\partial \vec{w}}\) is the gradient
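A minimal sketch (assuming numpy; the least-squares problem, learning rate \(\eta = 0.1\), and batch size \(m = 32\) are illustrative) of this minibatch update rule:

```python
# Minibatch SGD for least-squares linear regression:
# w[t+1] = w[t] - (eta/m) * sum_i dL/dw evaluated on each batch.
import numpy as np

rng = np.random.default_rng(5)
n, dim = 1_000, 3
X = rng.normal(size=(n, dim))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(dim)
eta, m = 0.1, 32                       # learning rate and batch size
for epoch in range(20):
    perm = rng.permutation(n)          # reshuffle the data each epoch
    for start in range(0, n, m):
        batch = perm[start:start + m]
        residual = X[batch] @ w - y[batch]
        grad = X[batch].T @ residual / len(batch)   # dL/dw for MSE loss
        w -= eta * grad

print(w)  # ~ w_true
```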
Deep double descent
- Bias and variance trade-off. See Bias and variance.
- MSE vs model capacity
Papers:
- Belkin, M., Hsu, D., Ma, S., & Mandal, S. (2019). Reconciling modern machine-learning practice and the classical bias-variance trade-off.192
- Muthukumar, V., Vodrahalli, K., Subramanian, V., & Sahai, A. (2019). Harmless interpolation of noisy data in regression.193
- Nakkiran, P., Kaplun, G., Bansal, Y., Yang, T., Barak, B., & Sutskever, I. (2019). Deep double descent: Where bigger models and more data hurt.194
- Chang, X., Li, Y., Oymak, S., & Thrampoulidis, C. (2020). Provable benefits of overparameterization in model compression: From double descent to pruning neural networks.195
- Holzmüller, D. (2020). On the universality of the double descent peak in ridgeless regression.196
- Dar, Y., Muthukumar, V., & Baraniuk, R.G. (2021). A farewell to the bias-variance tradeoff? An overview of the theory of overparameterized machine learning.197
- Balestriero, R., Pesenti, J., & LeCun, Y. (2021). Learning in high dimension always amounts to extrapolation.198
- Belkin, M. (2021). Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation.199
- Nagarajan, V. (2021). Explaining generalization in deep learning: progress and fundamental limits.200
- Bach, F. (2022). Learning Theory from First Principles.201
- Barak, B. (2022). The uneasy relationship between deep learning and (classical) statistics.
- Ghosh, N. & Belkin, M. (2022). A universal trade-off between the model size, test loss, and training loss of linear predictors.202
- Singh, S.P., Lucchi, A., Hofmann, T., & Schölkopf, B. (2022). Phenomenology of double descent in finite-width neural networks.203
- Hastie, T., Montanari, A., Rosset, S., & Tibshirani, R. J. (2022). Surprises in high-dimensional ridgeless least squares interpolation.204
- Bubeck, S. & Sellke, M. (2023). A universal law of robustness via isoperimetry.205
- Gamba, M., Englesson, E., Björkman, M., & Azizpour, H. (2022). Deep double descent via smooth interpolation.206
- Schaeffer, R. et al. (2023). Double descent demystified: Identifying, interpreting & ablating the sources of a deep learning puzzle.207
- Yang, T. & Suzuki, J. (2023). Dropout drops double descent.208
- Maddox, W.J., Benton, G., & Wilson, A.G. (2023). Rethinking parameter counting in deep models: Effective dimensionality revisited.209
Blogs:
- Hubinger, E. (2019). Understanding deep double descent. LessWrong.
- OpenAI. (2019). Deep double descent.
- Steinhardt, J. (2022). More is different for AI.210
- Henighan, T. et al. (2023). Superposition, memorization, and double descent.211
Twitter threads:
- Daniela Witten. (2020). Twitter thread: The bias-variance trade-off & double descent.
- François Fleuret. (2020). Twitter thread: The double descent with polynomial regression.
- adad8m. (2022). Twitter thread: The double descent with polynomial regression.
- Peyman Milanfar. (2022). Twitter thread: The perpetually undervalued least-squares.
- Pierre Ablin. (2023). Twitter thread: The double descent with polynomial regression.
Regularization
Regularization = any change we make to the training algorithm in order to reduce the generalization error but not the training error.212
Most common regularizations:
- L2 Regularization (see the sketch after this list)
- L1 Regularization
- Data Augmentation
- Dropout
- Early Stopping
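A minimal sketch (assuming numpy) of the first of these, L2 regularization: the penalty \(\frac{\lambda}{2}\|\vec{w}\|^2\) contributes \(\lambda \vec{w}\) to the gradient, giving the familiar weight-decay update:

```python
# One SGD step with an L2 penalty added to the loss ("weight decay").
import numpy as np

def sgd_step_l2(w, grad_loss, lr=0.1, weight_decay=1e-4):
    """One SGD step on loss + (weight_decay / 2) * ||w||^2."""
    return w - lr * (grad_loss + weight_decay * w)

w = np.array([1.0, -2.0, 3.0])
grad = np.array([0.5, 0.0, -0.5])      # gradient of the unregularized loss
print(sgd_step_l2(w, grad))
```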
Papers:
- Loshchilov, I. & Hutter, F. (2019). Decoupled weight decay regularization.
- Chen, S., Dobriban, E., & Lee, J. H. (2020). A group theoretic framework for data augmentation.213
Batch size vs learning rate
Papers:
- Keskar, N.S. et al. (2016). On large-batch training for deep learning: Generalization gap and sharp minima.
[L]arge-batch methods tend to converge to sharp minimizers of the training and testing functions—and as is well known—sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation.
- Hoffer, E. et al. (2017). Train longer, generalize better: closing the generalization gap in large batch training of neural networks.
- \(\eta \propto \sqrt{m}\)
- Goyal, P. et al. (2017). Accurate large minibatch SGD: Training ImageNet in 1 hour.
- \(\eta \propto m\)
- You, Y. et al. (2017). Large batch training of convolutional networks.
- Layer-wise Adaptive Rate Scaling (LARS)
- You, Y. et al. (2017). ImageNet training in minutes.
- Layer-wise Adaptive Rate Scaling (LARS)
- Jastrzebski, S. (2018). Three factors influencing minima in SGD.
- \(\eta \propto m\)
- Smith, S.L. & Le, Q.V. (2018). A Bayesian perspective on generalization and stochastic gradient descent.
- Smith, S.L. et al. (2018). Don't decay the learning rate, increase the batch size.
- \(m \propto \eta\)
- Masters, D. & Luschi, C. (2018). Revisiting small batch training for deep neural networks.
This linear scaling rule has been widely adopted, e.g., in Krizhevsky (2014), Chen et al. (2016), Bottou et al. (2016), Smith et al. (2017) and Jastrzebski et al. (2017).
On the other hand, as shown in Hoffer et al. (2017), when \(m \ll M\), the covariance matrix of the weight update \(\mathrm{Cov}(\eta \, \Delta\theta)\) scales linearly with the quantity \(\eta^2/m\).
This implies that, adopting the linear scaling rule, an increase in the batch size would also result in a linear increase in the covariance matrix of the weight update \(\eta \Delta\theta\). Conversely, to keep the scaling of the covariance of the weight update vector \(\eta \Delta\theta\) constant would require scaling \(\eta\) with the square root of the batch size \(m\) (Krizhevsky, 2014; Hoffer et al., 2017).
- Lin, T. et al. (2020). Don't use large mini-batches, use local SGD.
- Post-local SGD.
- Golmant, N. et al. (2018). On the computational inefficiency of large batch sizes for stochastic gradient descent.
Scaling the learning rate as \(\eta \propto \sqrt{m}\) attempts to keep the weight increment length statistics constant, but the distance between SGD iterates is governed more by properties of the objective function than the ratio of learning rate to batch size. This rule has also been found to be empirically sub-optimal in various problem domains. … There does not seem to be a simple training heuristic to improve large batch performance in general.
- McCandlish, S. et al. (2018). An empirical model of large-batch training.
- Critical batch size
- Shallue, C.J. et al. (2018). Measuring the effects of data parallelism on neural network training.
In all cases, as the batch size grows, there is an initial period of perfect scaling (\(b\)-fold benefit, indicated with a dashed line on the plots) where the steps needed to achieve the error goal halves for each doubling of the batch size. However, for all problems, this is followed by a region of diminishing returns that eventually leads to a regime of maximal data parallelism where additional parallelism provides no benefit whatsoever.
- Jastrzebski, S. et al. (2018). Width of minima reached by stochastic gradient descent is influenced by learning rate to batch size ratio.
- \(\eta \propto m\)
We show this experimentally in Fig. 5, where similar learning dynamics and final performance can be observed when simultaneously multiplying the learning rate and batch size by a factor up to a certain limit.
- You, Y. et al. (2019). Large-batch training for LSTM and beyond.
- Warmup and use \(\eta \propto m\)
[W]e propose linear-epoch gradual-warmup approach in this paper. We call this approach Leg-Warmup (LEGW). LEGW enables a Sqrt Scaling scheme in practice and as a result we achieve much better performance than the previous Linear Scaling learning rate scheme. For the GNMT application (Seq2Seq) with LSTM, we are able to scale the batch size by a factor of 16 without losing accuracy and without tuning the hyper-parameters mentioned above.
- You, Y. et al. (2019). Large batch optimization for deep learning: Training BERT in 76 minutes.
- LARS and LAMB
- Zhang, G. et al. (2019). Which algorithmic choices matter at which batch sizes? Insights from a Noisy Quadratic Model.
Consistent with the empirical results of Shallue et al. (2018), each optimizer shows two distinct regimes: a small-batch (stochastic) regime with perfect linear scaling, and a large-batch (deterministic) regime insensitive to batch size. We call the phase transition between these regimes the critical batch size.
- Li, Y., Wei, C., & Ma, T. (2019). Towards explaining the regularization effect of initial large learning rate in training neural networks.
Our analysis reveals that more SGD noise, or larger learning rate, biases the model towards learning “generalizing” kernels rather than “memorizing” kernels.
- Kaplan, J. et al. (2020). Scaling laws for neural language models.
- Jastrzebski, S. et al. (2020). The break-even point on optimization trajectories of deep neural networks.
Blogs:
- Shen, K. (2018). Effect of batch size on training dynamics.
- Chang, D. (2020). Effect of batch size on neural net training.
Normalization
- BatchNorm
- LayerNorm, GroupNorm
- Online Normalization: Chiley, V. et al. (2019). Online normalization for training neural networks.214
- Kiani, B., Balestriero, R., LeCun, Y., & Lloyd, S. (2022). projUNN: efficient method for training deep networks with unitary matrices.215
- Huang, L. et al. (2020). Normalization techniques in training DNNs: Methodology, analysis and application.216
Finetuning
- Hu, E.J. et al (2021). LoRA: Low-rank adaptation of large language models.217
- Dettmers, T., Pagnoni, A., Holtzman, A., & Zettlemoyer, L. (2023). QLoRA: Efficient finetuning of quantized LLMs.218
- Zhao, J. et al (2024). GaLore: Memory-efficient LLM training by gradient low-rank projection.219
- Huh, M. et al. (2024). Training neural networks from scratch with parallel low-rank adapters.220
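A minimal sketch of the LoRA idea with toy dimensions: the pretrained weight \(W\) is frozen, and only the low-rank update \(BA\), scaled by \(\alpha/r\), is trained; \(B\) is initialized to zero so the adapted model starts out identical to the base model:

```python
import numpy as np

d_out, d_in, r, alpha = 64, 64, 4, 8      # toy sizes, with r much smaller than d

W = np.random.randn(d_out, d_in) * 0.02   # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.02       # trainable low-rank factor
B = np.zeros((d_out, r))                  # trainable; zero init => no change at start

def lora_forward(x):
    # y = W x + (alpha / r) B A x ; only A and B would receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = np.random.randn(d_in)
print(np.allclose(lora_forward(x), W @ x))  # True at initialization, since B = 0
```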
Computer vision
- Computer Vision (CV)
- Fukushima: neocognitron221
- LeNet-5
- LeCun, Y. (1989). Generalization and network design strategies.
- LeCun: OCR with backpropagation222
- LeCun: LeNet-5223
- Ciresan: MCDNN224
- Krizhevsky, Sutskever, and Hinton: AlexNet225
- VGG226
- ResNet227
- ResNet is performing a forward Euler discretisation of the ODE: \(\dot{x} = \sigma(F(x))\).228
- MobileNet229
- Neural ODEs230
- EfficientNet231
- VisionTransformer232
- EfficientNetV2233
- gMLP234
- Dhariwal, P. & Nichol, A. (2021). Diffusion models beat GANs on image synthesis.235
- Liu, Y. et al. (2021). A survey of visual transformers.236
- Ingrosso, A. & Goldt, S. (2022). Data-driven emergence of convolutional structure in neural networks.237
- Park, N. & Kim, S. (2022). How do vision transformers work?238
- Zhao, Y. et al. (2023). DETRs beat YOLOs on real-time object detection.239
- Nakkiran, P., Bradley, A., Zhou, H. & Advani, M. (2024). Step-by-step diffusion: An elementary tutorial.240
Resources:
- Neptune.ai. (2021). Object detection algorithms and libraries.
- facebookresearch/vissl
- PyTorch Geometric (PyG)
Natural language processing
Introduction
- Natural Language Processing (NLP)
- History
- Firth, J.R. (1957): “You shall know a word by the company it keeps.”241
- Nirenburg, S. (1996). Bar Hillel and Machine Translation: Then and Now.242
- Hutchins, J. (2000). Yehoshua Bar-Hillel: A philosophers’ contribution to machine translation.243
- Textbooks
- Jurafsky, D. & Martin, J.H. (2022). Speech and Language Processing: An introduction to natural language processing, computational linguistics, and speech recognition.244
- Liu, Z., Lin, Y., & Sun, M. (2023). Representation Learning for Natural Language Processing.245
word2vec
- Mikolov246
- Julia Bazińska
- Olah, C. (2014). Deep learning, NLP, and representations.
- Alammar, J. (2019). The illustrated word2vec.
- Migdal, P. (2017). king - man + woman is queen; but why?
- Kun, J. (2018). A Programmer’s Introduction to Mathematics.
- Word vectors have semantic linear structure247 (see the analogy sketch after this list)
- Ethayarajh, K. (2019). Word embedding analogies: Understanding King - Man + Woman = Queen.
- Allen, C. (2019). “Analogies Explained” … Explained.
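A minimal sketch of the analogy arithmetic with hypothetical toy vectors (real word2vec embeddings are learned and have hundreds of dimensions). Note that, as Nissim et al. point out above, the input words must be excluded from the candidate set:

```python
import numpy as np

vecs = {  # hypothetical 3-d "embeddings", chosen so the analogy works exactly
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.3]),
    "woman": np.array([0.1, 0.2, 0.3]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    """Return the word closest to vec(b) - vec(a) + vec(c), excluding the inputs."""
    target = vecs[b] - vecs[a] + vecs[c]
    scores = {w: cosine(target, v) for w, v in vecs.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

print(analogy("man", "king", "woman"))  # "queen" with these toy vectors
```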
RNNs
- RNNs and LSTMs
- Hochreiter, S. & Schmidhuber, J. (1997). Long short-term memory.248
- Graves, A. (2013). Generating sequences with recurrent neural networks.249
- Auto-regressive language modeling
- Olah, C. (2015). Understanding LSTM networks.
- Karpathy, A. (2015). The unreasonable effectiveness of recurrent neural networks.
Chain rule of language modeling (chain rule of probability):
\[ P(x_1, \ldots, x_T) = P(x_1, \ldots, x_{n-1}) \prod_{t=n}^{T} P(x_t | x_1 \ldots x_{t-1}) \label{eq:chain_rule_of_lm} \]
or for the whole sequence:
\[ P(x_1, \ldots, x_T) = \prod_{t=1}^{T} P(x_t | x_1 \ldots x_{t-1}) \label{eq:chain_rule_of_lm_2} \]
\[ = P(x_1) \: P(x_2 | x_1) \: P(x_3 | x_1 x_2) \: P(x_4 | x_1 x_2 x_3) \ldots \]
A language model (LM) predicts the next token given the previous context. The model outputs a vector of logits, which a softmax converts into a probability distribution over the next token.
\[ P(x_t | x_1 \ldots x_{t-1}) = \mathrm{softmax}\left( \mathrm{model}(x_1 \ldots x_{t-1}) \right) \]
Auto-regressive inference follows this chain rule. If done with greedy search:
\[ \hat{x}_t = \underset{x_t \in V}{\mathrm{argmax}} \: P(x_t | x_1 \ldots x_{t-1}) \label{eq:greedy_search} \]
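A minimal sketch of greedy auto-regressive decoding. The "model" here is a hypothetical stand-in that computes logits from the last token only; a real LM conditions on the entire prefix:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 10                                    # vocabulary size
W = rng.normal(size=(V, V))               # toy "model": logits from the last token

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def greedy_decode(prefix, steps=5):
    tokens = list(prefix)
    for _ in range(steps):
        p = softmax(W[tokens[-1]])        # P(x_t | x_1 ... x_{t-1})
        tokens.append(int(np.argmax(p)))  # the greedy argmax rule above
    return tokens

print(greedy_decode([3]))
```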
Beam search (a minimal sketch follows the references below):
- Beam search as used in NLP is described in Sutskever.250
- Zhang, W. (1998). Complete anytime beam search.251
- Zhou, R. & Hansen, E. A. (2005). Beam-stack search: Integrating backtracking with beam search.252
- Collobert, R., Hannun, A., & Synnaeve, G. (2019). A fully differentiable beam search decoder.253
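A minimal beam-search sketch under the same kind of toy next-token model, keeping the \(k\) highest cumulative log-probability hypotheses at each step:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 10
W = rng.normal(size=(V, V))         # toy next-token "model", as in the greedy sketch

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def beam_search(prefix, steps=5, k=3):
    beams = [(list(prefix), 0.0)]   # hypotheses: (tokens, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for tokens, score in beams:
            logp = log_softmax(W[tokens[-1]])
            candidates += [(tokens + [v], score + logp[v]) for v in range(V)]
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0]

print(beam_search([3]))
```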
Backpropagation through time (BPTT):
- Werbos, P.J. (1990). Backpropagation through time: what it does and how to do it.254
Neural Machine Translation (NMT):
Transformers
\[ \mathrm{attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q\, K^\intercal}{\sqrt{d_k}}\right) V \label{eq:attention} \]
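A minimal numpy implementation of eq. \(\eqref{eq:attention}\), for a single head with no masking or learned projections:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarities, scaled by sqrt(d_k)
    return softmax(scores, axis=-1) @ V  # attention-weighted average of the values

n, d_k, d_v = 4, 8, 16
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d_k)), rng.normal(size=(n, d_k)),
           rng.normal(size=(n, d_v)))
print(attention(Q, K, V).shape)          # (4, 16)
```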
Attention and Transformers
- Transformer259
- BERT260
- Alammar, J. (2018). The illustrated BERT.
- Horev, R. (2018). BERT Explained: State of the art language model for NLP.
- Liu, Y. et al. (2019). RoBERTa: A robustly optimized BERT pretraining approach.261
- Raffel, C. et al. (2019). Exploring the limits of transfer learning with a unified text-to-text transformer. (T5 model)262
- Horan, C. (2021). 10 things you need to know about BERT and the transformer architecture that are reshaping the AI landscape.
- Video: What are transformer neural networks?
- Video: How to get meaning from text with language model BERT.
- ALBERT263
- BART264
- GPT-1,265 2,266 3267
- Alammar, J. (2019). The illustrated GPT-2.
- Yang, Z. et al. (2019). XLNet: Generalized autoregressive pretraining for language understanding.268
- Daily Nous: Philosophers On GPT-3.
- DeepMind’s blog posts for more details: AlphaFold1, AlphaFold2 (2020). Slides from the CASP14 conference are publicly available here.
- Joshi, C. (2020). Transformers are GNNs.
- Lakshmanamoorthy, R. (2020). A complete learning path to transformers (with guide to 23 architectures).
- Zaheer, M. et al. (2020). Big Bird: Transformers for longer sequences.269
- Edelman, B.L., Goel, S., Kakade, S., & Zhang, C. (2021). Inductive biases and variable creation in self-attention mechanisms.270
- Tay, Y., Dehghani, M., Bahri, D., & Metzler, D. (2022). Efficient transformers: A survey.271
- Phuong, M. & Hutter, M. (2022). Formal algorithms for transformers.272
- Chowdhery, A. et al. (2022). PaLM: Scaling language modeling with pathways.273
- Ouyang, L. et al. (2022). Training language models to follow instructions with human feedback. (InstructGPT)274
- OpenAI. (2022). Blog: ChatGPT: Optimizing Language Models for Dialogue.
- Wolfram, S. (2023). What is ChatGPT doing—and why does it work?275
- GPT-4276
- Mohamadi, S., Mujtaba, G., Le, N., Doretto, G., & Adjeroh, D.A. (2023). ChatGPT in the age of generative AI and large language models: A concise survey.277
- Zhao, W.X. et al. (2023). A survey of large language models.278
- 3Blue1Brown. (2024). Video: But what is a GPT? Visual intro to Transformers.
- Golovneva, O., Wang, T., Weston, J., & Sukhbaatar, S. (2024). Contextual position encoding: Learning to count what’s important.279
- Apple. (2024). Apple Intelligence Foundation Language Models.
Computational complexity of transformers
- kipply (2022). Transformer inference arithmetic.
- Bahdanau, D. (2022). The FLOPs calculus of language model training. (The \(C \approx 6ND\) rule is sketched after this list.)
- Sanger, A. (2023). Inference characteristics of Llama-2.
- Shenoy, V. & Kiely, P. (2023). A guide to LLM inference and performance.
- Anthony, Q., Biderman, S., & Schoelkopf, H. (2023). Transformer math 101.
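As a concrete reference, the \(C \approx 6ND\) rule of thumb from the FLOPs-calculus post above: roughly 2 FLOPs per parameter per token for the forward pass and 4 for the backward pass, ignoring attention FLOPs. The model and data sizes below are hypothetical:

```python
def training_flops(n_params, n_tokens):
    """C ~ 6 N D: ~2 FLOPs/param/token forward, ~4 backward."""
    return 6 * n_params * n_tokens

c = training_flops(7e9, 1e12)              # e.g., a 7B-parameter model on 1T tokens
print(f"{c:.1e} FLOPs")                    # 4.2e+22
# Assuming ~312 TFLOP/s peak (A100 bf16) and perfect utilization:
print(f"{c / (312e12 * 86400):.0f} A100-days")
```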
Efficient transformers
- Dao, T., Fu, D.Y., Ermon, S., Rudra, A., & Ré, C. (2022). FlashAttention: Fast and memory-efficient exact attention with IO-awareness.280
- Pope, R. et al. (2022). Efficiently scaling transformer inference.
- Dao, T. (2023). FlashAttention-2: Faster attention with better parallelism and work partitioning.
- Kim, S. et al. (2023). Full stack optimization of transformer inference: A survey.
- PyTorch. (2023). Accelerating generative AI with PyTorch II: GPT, Fast.
- Nvidia. (2023). Mastering LLM techniques: Inference optimization.
- Weng, L. (2023). Large transformer model inference optimization.
- Kwon, W. et al. (2023). Efficient memory management for large language model serving with PagedAttention.
- Zhang, L. (2023). Dissecting the runtime performance of the training, fine-tuning, and inference of large language models.
- Dettmers, T. (2023). Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning.
- Fu, Y. (2023). Towards 100x speedup: Full stack transformer inference optimization.
- Fu, Y. (2024). Challenges in deploying long-context transformers: A theoretical peak performance analysis.
- Fu, Y. et al. (2024). Data engineering for scaling language models to 128K context.
- Kwon, W. et al. (2023). Efficient memory management for large language model serving with PagedAttention. (vLLM)
- Shah, J. et al. (2024). FlashAttention-3: Fast and accurate attention with asynchrony and low-precision.
What comes after Transformers?
- Gu, A., Goel, K., & Ré, C. (2021). Efficiently modeling long sequences with structured state spaces.281
- Merrill, W. & Sabharwal, A. (2022). The parallelism tradeoff: Limitations of log-precision transformers.282
- Bulatov, A., Kuratov, Y., & Burtsev, M.S. (2022). Recurrent memory transformer.283
- Raffel, C. (2023). A new alchemy: Language model development as a subfield?
- Bulatov, A., Kuratov, Y., & Burtsev, M.S. (2023). Scaling transformer to 1M tokens and beyond with RMT.284
- Bertsch, A., Alon, U., Neubig, G., & Gormley, M.R. (2023). Unlimiformer: Long-range transformers with unlimited length input.285
- Mialon, G. et al. (2023). Augmented Language Models: a Survey.286
- Peng, B. et al. (2023). RWKV: Reinventing RNNs for the Transformer Era.287
- Sun, Y. et al. (2023). Retentive network: A successor to transformer for large language models.288
- Gu, A. & Dao, T. (2023). Mamba: Linear-time sequence modeling with selective state spaces.289
- Wang, H. et al. (2023). BitNet: Scaling 1-bit transformers for large language models.290
- Ma, S. et al. (2024). The era of 1-bit LLMs: All large language models are in 1.58 bits.291
- Ma, X. et al. (2024). Megalodon: Efficient LLM pretraining and inference with unlimited context length.292
- Bhargava, A., Witkowski, C., Shah, M., & Thomson, M. (2023). What’s the magic word? A control theory of LLM prompting.293
- Sun, Y. et al. (2024). Learning to (learn at test time): RNNs with expressive hidden states.
- Dao, T. & Gu, A. (2024). Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality.294
- Banerjee, S., Agarwal, A., & Singla, S. (2024). LLMs will always hallucinate, and we need to live with this.295
Evaluation methods
- Hendrycks, D. et al. (2020). Measuring Massive Multitask Language Understanding. (MMLU)
- Yue, X. et al. (2023). MMMU: A Massive Multi-discipline Multimodal Understanding and reasoning benchmark for expert AGI.
- Kim, J. et al. (2024). Evalverse: Unified and accessible library for large language model evaluation.
- Biderman, S. (2024). Lessons from the trenches on reproducible evaluation of language models.
- Stanford’s HELM Leaderboards
- Liang, P. et al. (2022). Holistic Evaluation of Language Models (HELM)
- Lee, T. et al. (2023). Holistic Evaluation of Text-To-Image Models (HEIM)
- github.com/stanford-crfm/helm
- EleutherAI/lm-evaluation-harness
- Srivastava, A. (2022). Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models.
- Suzgun, M. (2022). Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them.
- Wang, Y. (2024). MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark.
Scaling laws in NLP
- Hestness, J. et al. (2017). Deep learning scaling is predictable, empirically.296
- Church, K.W. & Hestness, J. (2019). Rationalism and empiricism in artificial intelligence: A survey of 25 years of evaluation [in NLP].297
- Kaplan, J. et al. (2020). Scaling laws for neural language models.298 (A power-law fit is sketched after this list.)
- Rae, J.W. et al. (2022). Scaling language models: Methods, analysis & insights from training Gopher.299
- Hoffmann, J. et al. (2022). Training compute-optimal large language models (Chinchilla).300
- Caballero, E., Gupta, K., Rish, I., & Krueger, D. (2022). Broken neural scaling laws.301
- Constantin, S. (2023). “Scaling Laws” for AI and some implications.
- Muennighoff, N. et al. (2023). Scaling data-constrained language models.302
- Pandey, R. (2024). gzip predicts data-dependent scaling laws.303
- Bach, F. (2024). Scaling laws of optimization.304
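Scaling-law fits like these are typically linear regressions in log-log space. A minimal sketch on synthetic data (the constants are made up; the exponent is of the order reported by Kaplan et al.):

```python
import numpy as np

rng = np.random.default_rng(0)
N = np.logspace(6, 10, 9)                              # model sizes
a, b = 400.0, 0.076                                    # made-up power-law constants
L = a * N**(-b) * np.exp(rng.normal(0, 0.01, N.size))  # noisy L(N) = a N^(-b)

slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
print(f"fitted exponent: {-slope:.3f}")                # ~0.076
```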
Language understanding
- NLU
- Mahowald, K. et al. (2023). Dissociating language and thought in large language models: a cognitive perspective.305
- Kosinski, M. (2023). Theory of mind may have spontaneously emerged in large language models.306
- Chitra, T. & Prior, H. (2023). Do language models possess knowledge (soundness)?
See also:
Interpretability
- Grandmother cell
- Watson, D. & Floridi, L. (2019). The explanation game: A formal framework for interpretable machine learning.307
- Anthropic. (2021). A mathematical framework for transformer circuits.
- Anthropic. (2022). In-context learning and induction heads.
- Gurnee, W. et al. (2023). Finding neurons in a haystack: Case studies with sparse probing.308
- Meng, K., Bau, D., Andonian, A., & Belinkov, Y. (2023). Locating and editing factual associations in GPT.309
- McDougall, C., Conmy, A., Rushing, C., McGrath, T., & Nanda, N. (2023). Copy suppression: Comprehensively understanding an attention head.310
Linear probes:
- Alain, G. & Bengio, Y. (2016). Understanding intermediate layers using linear classifier probes.311 (A minimal probe sketch follows this list.)
- Belinkov, Y. (2022). Probing classifiers: Promises, shortcomings, and advances.312
- Gurnee, W. & Tegmark, M. (2023). Language models represent space and time.313
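A minimal probe sketch on synthetic "activations" (in practice \(H\) would be hidden states extracted from a frozen network; here the concept is linearly encoded by construction, so the probe succeeds):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 64
H = rng.normal(size=(n, d))          # stand-in for a layer's activations
w_true = rng.normal(size=d)
y = (H @ w_true > 0).astype(int)     # binary "concept" label

probe = LogisticRegression(max_iter=1000).fit(H[:1500], y[:1500])
print(f"probe accuracy: {probe.score(H[1500:], y[1500:]):.3f}")  # high iff linearly decodable
```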
Reinforcement learning
- Reinforcement Learning (RL)
- Dynamic programming
- Bellman equation
- Backward induction
- John von Neumann & Oskar Morgenstern. (1944). Theory of Games and Economic Behavior.
Pedagogy:
- Sutton & Barto314
- Deep Reinforcement Learning: A Brief Survey315
- Cesa-Bianchi, N. & Lugosi, G. (2006). Prediction, Learning, and Games.316
- OpenAI. (2018). Spinning Up.
Tutorials:
- RL course by David Silver
- RL course by Emma Brunskill
- DeepMind Reinforcement Learning Lecture Series 2021
More:
- List by OpenAI of key RL papers
- List of game AI codes by DATA Lab
- Chen, L. et al. (2021). Decision Transformer: Reinforcement learning via sequence modeling.
- Silver, D., Singh, S., Precup, D., & Sutton, R.S. (2024). Reward is enough.317
Q-learning
- Q-learning and DQN
- Uses the Markov Decision Process (MDP) framework
- The Bellman equation318
- Q-learning is a value-based learning algorithm: value-based algorithms update the value function using an equation (in particular, the Bellman equation), whereas policy-based algorithms estimate the value function with a greedy policy obtained from the last policy improvement (source: towardsdatascience.com). A minimal tabular sketch follows this list.
- DQN masters Atari319
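A minimal tabular Q-learning sketch; the environment dynamics in `step` are hypothetical, purely for illustration:

```python
import numpy as np

n_states, n_actions = 5, 2
alpha, gamma, eps = 0.1, 0.99, 0.1      # step size, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    """Hypothetical toy dynamics; replace with a real MDP."""
    s_next = (s + a + 1) % n_states
    return s_next, float(s_next == 0)   # reward 1 for reaching state 0

s = 0
for _ in range(10_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Temporal-difference update toward the Bellman target:
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q.round(2))
```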
AlphaZero
- AlphaGo Lee320 → AlphaGo Zero321 → AlphaZero322
- OpenAI Five masters Dota2
- AlphaStar masters StarCraftII
- AlphaZero
- \(\pi(a|s)\) and \(V(s)\)
- Monte Carlo Tree Search (MCTS)
Regret minimization
Regret matching (RM)
- Hart, S. & Mas‐Colell, A. (2000). A simple adaptive procedure leading to correlated equilibrium.323
Consider a game like rock-paper-scissors, where there is only one action per round. Let \(v^{t}(a)\) be the value observed when playing action \(a\) on iteration \(t\).
TODO: explain that the entire rewards vector, \(v^{t}(a)\), over \(a\) is observable after the chosen action is played.
Let a strategy, \(\sigma^t\), be a probability distribution over actions, \(a \in A\). Then the value of a strategy, \(v^{t}(\sigma^{t})\), is the expectation of its value over actions.
\[ v^{t}(\sigma^{t}) = \sum_{a \in A} \sigma^{t}(a) \: v^{t}(a) \label{eq:value_of_strategy} \]
Regret, \(R^{T}\), measures how much better some sequence of strategies, \(\sigma'\), would do compared to the chosen sequence of strategies, \(\sigma = \{\sigma^1, \sigma^2, \ldots \sigma^T\}\).
\[ R^{T} \equiv \sum_{t=1}^{T} \left( v^{t}({\sigma'}^{t}) - v^{t}(\sigma^{t}) \right) \label{eq:regret} \]
External regret, \(R^{T}(a)\), measures the regret of the chosen sequence of strategies versus a hypothetical strategy where action \(a\) is always chosen.
\[ R^{T}(a) \equiv \sum_{t=1}^{T} \left( v^{t}(a) - v^{t}(\sigma^{t}) \right) \label{eq:external_regret} \]
Regret Matching (RM) is a rule to determine the strategy for the next iteration:
\[ \sigma^{t+1}(a) \equiv \frac{ R^{t}_{+}(a) }{ \sum_{b \in A} R^{t}_{+}(b) } \label{eq:regret_matching} \]
where \(R_{+} \equiv \mathrm{max}(R, 0)\).
At the end of training, the recommended strategy, for which the convergence bounds hold, is not the final strategy used in training, \(\sigma^{T}\), but the average strategy over all time steps:
\[ \bar{\sigma}^{T}(a) = \frac{1}{T} \sum_{t=1}^{T} \sigma^{t}(a) \]
TODO: explain the convergence of \(\bar{\sigma}^{t}\) to an \(\varepsilon\)-Nash equilibrium.
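A minimal self-play sketch for rock-paper-scissors that illustrates this: the regret-matching iterates cycle, but the average strategies approach the uniform equilibrium:

```python
import numpy as np

A = 3                               # actions: rock, paper, scissors
payoff = np.array([[ 0, -1,  1],    # payoff[a0, a1]: row player's reward
                   [ 1,  0, -1],
                   [-1,  1,  0]])
R = np.zeros((2, A))                # cumulative external regrets R^t(a)
S = np.zeros((2, A))                # summed strategies, for the average strategy
rng = np.random.default_rng(0)

def rm_strategy(r):
    """Eq. (regret_matching): play in proportion to positive regret."""
    r_plus = np.maximum(r, 0)
    return r_plus / r_plus.sum() if r_plus.sum() > 0 else np.full(A, 1 / A)

for _ in range(100_000):
    sig0, sig1 = rm_strategy(R[0]), rm_strategy(R[1])
    a0, a1 = rng.choice(A, p=sig0), rng.choice(A, p=sig1)
    v0, v1 = payoff[:, a1], -payoff[a0, :]  # full reward vectors are observable
    R[0] += v0 - v0 @ sig0                  # external regret, eq. (external_regret)
    R[1] += v1 - v1 @ sig1
    S[0] += sig0
    S[1] += sig1

print((S / S.sum(axis=1, keepdims=True)).round(3))  # both rows ~ [0.333 0.333 0.333]
```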
- Roughgarden, T. (2013). Video: Twenty Lectures on Algorithmic Game Theory.
- Roughgarden, T. (2016). Twenty Lectures on Algorithmic Game Theory.324
- TODO: Coarse correlated equilibria
Counterfactual regret minimization (CFR)
- CFR
- Zinkevich, M., Johanson, M., Bowling, M., & Piccione, C. (2007). Regret minimization in games with incomplete information.325
- Counterfactual regret minimization (CFR) is an algorithm for extensive-form games that independently minimizes regret in each information set.326
- “In other words, actions are selected in proportion to the amount of positive counterfactual regret for not playing that action.”327
- CFR differs from traditional RL algorithms in that it does not try to maximize expected return. Instead, it minimizes exploitability. CFR does not use the MDP framework; instead, it uses extensive-form games (source: Quora).
- Johanson’s explanation on Quora.
- CFR+
- Tammelin, O. (2014). Solving large imperfect information games using CFR+.328
- Tammelin, O., Burch, N., Johanson, M., & Bowling, M. (2015). Solving heads-up limit texas hold’em329
- Burch, N., Moravcik, M., & Schmid, M. (2019). Revisiting CFR+ and alternating updates.330
- Brown, N. & Sandholm, T. (2019). Solving imperfect-information games via discounted regret minimization.331
- LCFR+ is worse than CFR+ or LCFR.
- Examples
TODO: explain extensive-form games.
A finite extensive game with imperfect information has the following components:332
- A finite set \(N\) of players. A finite set \(H\) of sequences, the possible histories of actions, such that the empty sequence is in \(H\) and every prefix of a sequence in \(H\) is also in \(H\). Define \(h \sqsubseteq h'\) to mean \(h\) is a prefix of \(h'\). \(Z \subseteq H\) are the terminal histories (those which are not a prefix of any other sequences). \(A(h) = \{a : ha \in H\}\) are the actions available after a non-terminal history, \(h \in H \backslash Z\).
- A function \(P\) that assigns to each non-terminal history a member of \(N \cup \{c\}\). \(P\) is the player function. \(P(h)\) is the player who takes an action after the history \(h\). If \(P(h) = c\) then chance determines the action taken after history \(h\).
- For each player \(i \in N \cup \{c\}\) a partition \(\mathcal{I}_i\) of \(\{h \in H : P (h) = i\}\) with the property that \(A(h) = A(h')\) whenever \(h\) and \(h'\) are in the same member of the partition. For \(I \in \mathcal{I}_i\) we denote by \(A(I_i)\) the set \(A(h)\) and by \(P(I_i)\) the player \(P(h)\) for any \(h \in I_i\) . \(\mathcal{I}_i\) is the information partition of player \(i\); a set \(I_i \in \mathcal{I}_i\) is an information set of player \(i\).
- A function \(f_c\) that associates with every information set \(I\) where \(P(I) = c\), a probability measure \(f_c(a|I)\) on \(A(h)\); \(f_c(a|I)\) is the probability that \(a\) occurs given some \(h \in I\), where each such probability measure is independent of every other such measure.
- For each player \(i \in N\) there is a utility function \(u_i\) from the terminal states \(Z\) to the reals \(\mathbb{R}\). If \(N = \{1, 2\}\) and \(u_1 = -u_2\), it is a zero-sum extensive game. Define \(\Delta_{u,i} \equiv \mathrm{max}_z \: u_i(z) - \mathrm{min}_z \: u_i(z)\) to be the range of utilities to player \(i\).
The player reach, \(\pi^{\sigma}_{i}(h)\), of a history \(h\) is the product of the probabilities for all agent \(i\) actions leading to \(h\). Formally,333
\[ \pi^{\sigma}_{i}(h) \equiv \prod_{h' \cdot a' \sqsubseteq h | P(h') = i} \sigma_{i}(h', a') \label{eq:player_reach} \]
Due to perfect recall, any two histories in infoset \(I_i\) have the same player reach for player \(i\). Thus, we similarly define the player reach \(\pi^{\sigma}_{i}(I_i)\) of infoset \(I_i\) as
\[ \pi^{\sigma}_{i}(I_i) \equiv \prod_{ {I'}_{i} \cdot a' \sqsubseteq I_i | P(I_i) = i } \sigma_{i}({I'}_{i}, a') = \left.\pi^{\sigma}_{i}(h)\right|_{h \in I_i} \label{eq:player_reach_from_infoset} \]
The external reach AKA opponent reach, \(\pi^{\sigma}_{-i}(h)\), of a history \(h\) is the contribution of chance and all other players than \(i\). Formally,
\[ \pi^{\sigma}_{-i}(h) \equiv \prod_{h' \cdot a' \sqsubseteq h | P(h') \neq i} \sigma_{P(h')}(h', a') \label{eq:external_reach} \]
We also define the external reach of an infoset as
\[ \pi^{\sigma}_{-i}(I_i) \equiv \sum_{h \in I_{i}} \pi^{\sigma}_{-i}(h) \label{eq:external_reach_from_infoset} \]
The counterfactual value of an infoset \(I\) is the expected utility to player \(i\) given that \(I\) has been reached, weighted by the external reach of \(I\) for player \(i\). Formally,334
\[ v(I) = \sum_{h \in I} \pi^{\sigma}_{-i}(h) \sum_{z \in Z} \pi^{\sigma}(h, z) \: u_{i}(z) \label{eq:counter_factual_value} \]
The counterfactual value of an action, \(a\), is
\[ v(I, a) = \sum_{h \in I} \pi^{\sigma}_{-i}(h) \sum_{z \in Z} \pi^{\sigma}(h \cdot a, z) \: u_{i}(z) \label{eq:counter_factual_value_of_a} \]
Let’s consider the case where, as in NLHE, our two private hole cards each make a single unique history, \(h\), and we form infosets with a single hand, so \(I=h\). Then
\[ v(h) = \pi^{\sigma}_{-i}(h) \sum_{z \in Z} \pi^{\sigma}(h, z) \: u_{i}(z) \]
making explicit the player reach and the external reach,
\[ v(h) = \pi^{\sigma}_{-i}(h) \sum_{z \in Z} \pi_{i}^{\sigma}(h, z) \: \pi_{-i}^{\sigma}(h, z) \: u_{i}(z) \]
At a leaf node where we finally calculate the rewards,
\[ v(z) = \pi^{\sigma}_{-i}(z) \: u_{i}(z) \]
TODO: explain CFR.
The instantaneous regret is
\[ r^{t}(I, a) = v^{\sigma^t}(I, a) - v^{\sigma^t}(I) \]
The (cumulative) counterfactual regret is
\[ R^{T}(I, a) = \sum_{t=1}^{T} r^{t}(I, a) \]
Similar to the single-node game discussed above, eq. \(\eqref{eq:regret_matching}\), applying regret matching during training means updating strategies according to the following rule.
\[ \sigma^{t+1}(I, a) \equiv \frac{ R^{t}_{+}(I, a) }{ \sum_{b \in A} R^{t}_{+}(I, b) } \label{eq:regret_matching_cfr} \]
The average strategy is
\[ \bar{\sigma}^{T}(I, a) = \frac{\sum_{t=1}^{T} \pi^{t}_{i}(I) \: \sigma^{t}(I, a)}{\sum_{t=1}^{T} \pi^{t}_{i}(I)} \]
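A minimal sketch of the per-infoset bookkeeping implied by these equations; the game-tree walk that produces the counterfactual values \(v(I, a)\) is omitted:

```python
import numpy as np

def regret_matching(r):
    r_plus = np.maximum(r, 0)
    return r_plus / r_plus.sum() if r_plus.sum() > 0 else np.full(r.size, 1 / r.size)

def cfr_update(regret, strategy_sum, cf_values, reach_i):
    """One CFR iteration's bookkeeping at a single infoset I.

    cf_values[a] is v(I, a); reach_i is the player reach pi_i(I) this iteration.
    Both arrays are updated in place.
    """
    sigma = regret_matching(regret)
    regret += cf_values - cf_values @ sigma  # accumulate r^t(I, a) into R^T(I, a)
    strategy_sum += reach_i * sigma          # reach-weighted numerator of the average
    return sigma

def average_strategy(strategy_sum):
    return strategy_sum / strategy_sum.sum()
```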
Monte Carlo Counterfactual Regret Minimization (MCCFR)
- Lanctot, M. (2009). Monte Carlo sampling for regret minimization.335
- Neller, T.W. & Lanctot, M. (2013). An introduction to counterfactual regret minimization.336
- Vectorized and sampling variants
- Burch, N., Lanctot, M., Szafron, D., & Gibson, R. (2012). Efficient Monte Carlo counterfactual regret minimization in games with many player actions.337
- Johanson, M., Bard, N., Lanctot, M., Gibson, R.G., & Bowling, M. (2012). Efficient Nash equilibrium approximation through Monte Carlo counterfactual regret minimization.338
- Variants of chance sampling with single or vector opponent actions.
- Schmid, M. et al. (2019). Variance reduction in Monte Carlo counterfactual regret minimization (VR-MCCFR) for extensive form games using baselines.339
- Li, H. et al. (2020). Regret minimization via novel vectorized sampling policies and exploration.340
- Habara, K., Fukuda, E.H., & Yamashita, N. (2023). Convergence analysis and acceleration of the smoothing methods for solving extensive-form games.341
- MCCFR Ph.D. theses
- Lanctot, M. (2013). Monte Carlo Sample and Regret Minimization for Equilibrium Computation and Decision-Making in Large Extensive Form Games.342
- Gibson, R. (2014). Regret minimization in games and the development of champion multiplayer computer poker-playing agents.343
- Johanson, M. (2016). Robust Strategies and Counter-Strategies: From Superhuman to Optimal Play.344
- Burch, N. (2018). Time and Space: Why imperfect information games are hard.345
- Horacek, M. (2022). Risk-Aversion in Algorithms for Poker.346
TODO: explain MCCFR.
External sampling MCCFR:
\[ \tilde{v}^{\sigma}_{i}(I) = \sum_{z \in Q} u_{i}(z) \: \pi^{\sigma}_{i}(z[I] \rightarrow z) \label{eq:external_sample_mccfr} \]
Best response and exploitability
Best response:
\[ \mathrm{BR}(\sigma_{-i}) = \underset{\sigma_{i}^{\prime}}{\mathrm{argmax}} \: u_{i}(\sigma_{i}^{\prime}, \sigma_{-i}) \label{eq:best_response} \]
TODO: Local Best Response (LBR).347
Exploitability:
\[ \varepsilon_{i}(\sigma) = u_{i}(\mathrm{BR}(\sigma_{-i}), \sigma_{-i}) - u_{i}(\sigma_{i}, \mathrm{BR}(\sigma_{i})) \label{eq:exploitability} \]
NashConv348 exploitability uses the convention:
\[ \varepsilon_{i}(\sigma) = u_{i}(\mathrm{BR}(\sigma_{-i}), \sigma_{-i}) - u_{i}(\sigma_{i}, \sigma_{-i}) \label{eq:nc_exploitability} \]
The average exploitability per player is
\[ \varepsilon(\sigma) = \frac{1}{n} \sum_{i}^{n} \varepsilon_{i}(\sigma) \]
Note that in zero-sum games, when summing over players, the second terms in NashConv sum to zero.349
\[ \varepsilon(\sigma) = \frac{1}{n} \sum_{i}^{n} u_{i}(\mathrm{BR}(\sigma_{-i}), \sigma_{-i}) \label{eq:average_exploitability} \]
In two-player games:
\[ \varepsilon(\sigma) = \frac{1}{2} \Big( u_{1}(\mathrm{BR}(\sigma_{2}), \sigma_{2}) + u_{2}(\sigma_{1}, \mathrm{BR}(\sigma_{1})) \Big) \label{eq:average_exploitability_two_player} \]
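A minimal sketch of eq. \(\eqref{eq:average_exploitability_two_player}\) for a zero-sum normal-form game (rock-paper-scissors), where a best response is just a maximum over pure actions:

```python
import numpy as np

payoff = np.array([[ 0, -1,  1],    # u_1(a1, a2); u_2 = -u_1
                   [ 1,  0, -1],
                   [-1,  1,  0]])

def exploitability(sig1, sig2):
    br1 = (payoff @ sig2).max()     # u_1(BR(sig2), sig2)
    br2 = (-sig1 @ payoff).max()    # u_2(sig1, BR(sig1))
    return 0.5 * (br1 + br2)

uniform = np.ones(3) / 3
print(exploitability(uniform, uniform))                 # 0.0 at the equilibrium
print(exploitability(np.array([1., 0., 0.]), uniform))  # 0.5 for always-rock
```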
- Johanson, M., Waugh, K., Bowling, M., & Zinkevich, M. (2011). Accelerating best response calculation in large extensive games.350
- Evaluates range vs range rewards in \(O(n \log n)\) + \(O(n)\) instead of \(O(n^2)\).
- Ponsen, M., De Jong, S., & Lanctot, M. (2011). Computing approximate Nash equilibria and robust best-responses using sampling.351
- Lisy, V. & Bowling, M. (2016). Equilibrium approximation quality of current no-limit poker bots.352
- Timbers, F. (2020). Approximate exploitability: Learning a best response in large games.353
Solving poker
- Simplified toy pokers
- Earlier poker work
- Billings, D., Davidson, A., Schaeffer, J., & Szafron, D. (2002). The challenge of poker.356
- Billings, D. et al. (2003). Approximating game-theoretic optimal strategies for full-scale poker.357
- Johanson, M. (2013). Measuring the size of large no-limit poker games.358
- Claudico (2015) - Sandholm, T. et al. (CMU)
- Bowling, M., Burch, N., Johanson, M., & Tammelin, O. (2015). Heads-up limit hold’em poker is solved.359
- CFR+
- Heinrich & Silver. (2016). Deep reinforcement learning from self play in imperfect-information games.360
- Q-learning
- Moravcik, M. et al. (2017). DeepStack: Expert-level artificial intelligence in heads-up no-limit poker.361
- Libratus
- Brown, N. & Sandholm, T. (2017). Safe and nested subgame solving for imperfect-information games.362
- Adds test-time search/planning to just using precomputed policies.
- Brown, N. & Sandholm, T. (2018). Superhuman AI for heads-up no-limit poker: Libratus beats top professionals.363
- bet and card abstraction
- MCCFR used to find a solution of the abstracted game: blueprint
- Brown, N. & Sandholm, T. (2019). Solving imperfect-information games via discounted regret minimization.364
- Brown, N., Lerer, A., Gross, S., & Sandholm, T. (2019). Deep counterfactual regret minimization.365
- Brown, N. & Sandholm, T. (2017). Safe and nested subgame solving for imperfect-information games.362
- Pluribus
- Brown, N. & Sandholm, T. (2019). Superhuman AI for multiplayer poker.366
- Brown, N. (2019). Facebook, Carnegie Mellon build first AI that beats pros in 6-player poker.
- No limit: AI poker bot is first to beat professionals at multiplayer game
- ReBeL
- Brown, N. et al. (2020). Combining deep reinforcement learning and search.367
- ReBeL: A general game-playing AI bot that excels at poker and more
- YouTube by Brown: Combining deep reinforcement learning and search for imperfect-information games.
- Brown, N. (2020). Equilibrium finding for large adversarial imperfect-information games.368
- Player of Games
- Schmid, M. et al. (2021). Player of games.369
- More
- Kovarik, V. et al. (2022). Rethinking formal models of partially observable multiagent decision making.370
- Brown, N. (2024). Talk: Parables on the power of planning in AI: From poker to diplomacy.
Applications in physics
- Denby, B. (1988). Neural networks and cellular automata in experimental high energy physics.371
- Denby, B. (1993). The use of neural networks in high-energy physics.372
- HEPML-LivingReview: A Living Review of Machine Learning for Particle Physics
- Spears, B.K. et al. (2018). Deep learning: A guide for practitioners in the physical sciences.373
- Cranmer, K., Seljak, U., & Terao, K. (2021). Machine learning (Review in the PDG).374
- Liu, Z. et al. (2024). KAN: Kolmogorov-Arnold Networks.375
- Jiao, L. et al. (2024). AI meets physics: A comprehensive survey.376
See also:
Theoretical machine learning
Algorithmic information theory
- Ray Solomonoff (1926-2009)
- Solomonoff induction
- Naturally formalizes Occam’s razor
- Incomputable
- Cilibrasi, R. & Vitanyi, P.M.B. (2005). Clustering by compression.377
- Hutter, M. (2007). Universal Algorithmic Intelligence: A mathematical top-down approach.378
- Rathmanner, S. & Hutter, M. (2011). A philosophical treatise of universal induction.379
- Hutter, M. (2022). Talk: Introduction to Algorithmic Information Theory and Universal Learning.
No free lunch theorems
- David Wolpert and William G. Macready
- No free lunch theorems for search (1995)380
- The lack of a priori distinctions between learning algorithms (1996)381
- No free lunch theorems for optimization (1997)382
- Shalev-Shwarz, S. & Ben-David, S. (2014).383
- McDermott, J. (2019). When and why metaheuristics researchers can ignore “no free lunch” theorems.384
- Wolpert, D.H. (2007). Physical limits of inference.385
- Wolpert, D.H. & Kinney, D. (2020). Noisy deductive reasoning: How humans construct math, and how math constructs universes.386
- Blogs:
- Fedden, L. (2017). The no free lunch theorem.
- Lokesh, M. (2020). The intuition behind the no free lunch theorem.
- Mueller, A. (2019). Don’t cite the no free lunch theorem.
- Quora answer by Luis Argerich
- Inductive bias
- Yudkowsky, E. (2007). Inductive bias. LessWrong.
- Ugly duckling theorem
- Hamilton, L.D. (2014). The inductive biases of various machine learning algorithms.
- Mitchell, T.M. (1980). The need for biases in learning generalizations.387
- Gerhard Schurz
- Dan A. Roberts. (2021). Why is AI hard and physics simple?388
- See also: Unreasonable effectiveness
- More
- Goldreich, O. & Ron, D. (1997). On universal learning algorithms.389
- Joyce, T. & Herrmann, J.M. (2017). A review of no free lunch theorems, and their implications for metaheuristic optimisation.390
- Lauc, D. (2020). Machine learning and the philosophical problems of induction.391
- Nakkiran, P. (2021). Turing-universal learners with optimal scaling laws.392
- Bousquet, O., Hanneke, S., Moran, S., Van Handel, R., & Yehudayoff, A. (2021). A theory of universal learning.393
- Andrews, M. (2023). The devil in the data: Machine learning & the theory-free ideal.394
Raissi et al.:
encoding such structured information into a learning algorithm results in amplifying the information content of the data that the algorithm sees, enabling it to quickly steer itself towards the right solution and generalize well even when only a few training examples are available.395
Roberts:
From an algorithmic complexity standpoint it is somewhat miraculous that we can compress our huge look-up table of experiment/outcome into such an efficient description. In many senses, this type of compression is precisely what we mean when we say that physics enables us to understand a given phenomenon.396
- TODO: Note Dennett’s discussion of compression.397
Connectionists vs symbolists
- Chollet, F. (2024). Talk: It’s not about scale, it’s about abstraction.
Graphical tensor notation
- Penrose graphical notation
- Predrag Cvitanovic
- Matrices as Tensor Network Diagrams
- Multi-layer perceptrons
Universal approximation theorem
- Minsky, M. & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry.398
- Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators.399
- Lu, Z., Pu, H., Wang, F., Hu, Z., & Wang, L. (2017). The expressive power of neural networks: A view from the width.400
- Lin, H. & Jegelka, S. (2018). ResNet with one-neuron hidden layers is a universal approximator.401
- Ismailov, V. (2020). A three layer neural network can represent any multivariate function.402
- Multi-layer perceptrons with two or more layers are universal approximators.403 (A random-features sketch follows this list.)
- Did this result seem to slow interest in deeper networks?
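A minimal width-based illustration of the approximation claim in the list above: random frozen tanh features with a least-squares readout, in the style of an extreme learning machine rather than a trained MLP; the target function and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, width = 200, 100
x = np.linspace(0, 1, n)[:, None]
y = np.sin(2 * np.pi * x).ravel()                # target function on [0, 1]

W = rng.normal(scale=10.0, size=(1, width))      # random hidden weights (frozen)
b = rng.uniform(-10.0, 10.0, size=width)
H = np.tanh(x @ W + b)                           # one hidden layer of tanh units

coef, *_ = np.linalg.lstsq(H, y, rcond=None)     # fit only the output layer
print(f"max |error| at width {width}: {np.abs(H @ coef - y).max():.4f}")
```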
Relationship to statistical mechanics
- Logistic/softmax and Boltzmann factors (see the sketch after this list)
- Opper, M. & Kinzel, W. (1996). Statistical mechanics of generalization.404
- Opper, M. (2001). Learning to generalize.405
- Wang, L. (2018). Generative models for physicists.
- Bahri406
- Halverson407
- Canatar, A., Bordelon, B., & Pehlevan, C. (2020). Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks.408
- Roberts, Yaida, & Hanin. (2021). The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks.409
- Introduced in Facebook AI’s blog
- Cantwell, G.T. (2022). Approximate sampling and estimation of partition functions using neural networks.410
- Wang, L. (2022). Generative AI for science.
- Dinan, E., Yaida, S., & Zhang, S. (2023). Effective theory of transformers at initialization.411
- Sohl-Dickstein, J. (2020). Two equalities expressing the determinant of a matrix in terms of expectations over matrix-vector products.412
- Aifer, M. et al. (2023). Thermodynamic linear algebra.413
- Geshkovski, B., Letrouit, C., Polyanskiy, Y., & Rigollet, P. (2023). A mathematical perspective on Transformers414
- Sompolinsky, H. (2023). Lecture video: Statistical mechanics of deep learning.
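On the first item above: a softmax over logits is exactly a Boltzmann distribution over negative energies, with a temperature that interpolates between one-hot and uniform. A minimal sketch:

```python
import numpy as np

def boltzmann(energies, T=1.0):
    z = -np.asarray(energies) / T   # logits play the role of negative energies
    z = z - z.max()                 # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

E = np.array([0.0, 1.0, 2.0])
print(boltzmann(E, T=1.0).round(3))   # [0.665 0.245 0.09 ]
print(boltzmann(E, T=10.0).round(3))  # nearly uniform at high temperature
print(boltzmann(E, T=0.1).round(3))   # nearly one-hot at low temperature
```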
Relationship to gauge theory
Invariant:
\[ f(g x) = f(x) \]
Equivariant:
\[ f(g x) = g' f(x) \]
The same-equivariant case is when \(g' = g\).
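A minimal numeric check of same-equivariance for circular convolution, where the group action \(g\) is a cyclic shift:

```python
import numpy as np

rng = np.random.default_rng(0)
x, k = rng.normal(size=16), rng.normal(size=16)

def circ_conv(x, k):
    # Circular convolution via the convolution theorem.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

shift = lambda v, s: np.roll(v, s)
print(np.allclose(circ_conv(shift(x, 3), k),    # f(g x)
                  shift(circ_conv(x, k), 3)))   # g f(x) -> True
```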
- Dieleman, S., Fauw, J.D., & Kavukcuoglu, K. (2016). Exploiting cyclic symmetry in convolutional neural networks.415
- Cohen, T.S. & Welling, M. (2016). Group equivariant convolutional networks.416
- Cohen & Welling. (2016). Group equivariant convolutional networks.
- Cohen, T.S., Weiler, M., Kicanaoglu, B., & Welling, M. (2019). Gauge equivariant convolutional networks and the icosahedral CNN.417
- Pavlus, J. (2020). An idea from physics helps AI see in higher dimensions.
- SE(3)-Transformers418 and blog post.
- Bogatskiy, A., Hoffman, T., Miller, D.W., Offermann, J.T., & Liu, X. (2023). Explainable equivariant neural networks for particle physics: PELICAN.419
- e3nn: a modular PyTorch framework for Euclidean neural networks
- List of papers: Chen-Cai-OSU/awesome-equivariant-network
- Marchetti, G.L., Hillar, C., Kragic, D., & Sanborn. S. (2023). Harmonics of learning: Universal fourier features emerge in invariant networks.420
- Battiloro, C. et al. (2024). E(n) equivariant topological neural networks.421
Thermodynamics of computation
- Bérut, A., Arakelyan, A., Petrosyan, A., Ciliberto, S., Dillenschneider, R., & Lutz, E. (2012). Experimental verification of Landauer’s principle linking information and thermodynamics.422
- Bérut, A., Petrosyan, A., & Ciliberto, S. (2015). Information and thermodynamics: Experimental verification of Landauer’s erasure principle.423 (The \(k_B T \ln 2\) bound is computed in the sketch after this list.)
- Wolpert, D. (2018). Why do computers use so much energy?
- Santa Fe Institute: Thermodynamics of Computation
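The Landauer bound referenced above is a one-line computation; at room temperature it comes to roughly \(3 \times 10^{-21}\) J per erased bit:

```python
import numpy as np

k_B = 1.380649e-23                         # Boltzmann constant, J/K (exact in SI)
T = 300.0                                  # room temperature, K

E_bit = k_B * T * np.log(2)                # Landauer's minimum erasure energy
print(f"{E_bit:.2e} J per bit")            # ~2.87e-21 J
print(f"{E_bit / 1.602176634e-19:.3f} eV") # ~0.018 eV
```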
Information geometry
Introduction
- Smith, L. (2019). A gentle introduction to information geometry.424
- Nielsen, F. (2020). An elementary introduction to information geometry.425
- Amari, S. (1998). Natural gradient works efficiently in learning.426 (The Fisher metric as KL curvature is sketched after this list.)
- Amari, S. (2016). Information Geometry and Its Applications.427
- Geomstats tutorial: Information geometry
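One way to see where the geometry comes from (sketched here, per the note on Amari above): the Fisher information is the local curvature of the KL divergence, \(\mathrm{KL}(p_\theta \| p_{\theta+\varepsilon}) \approx \frac{1}{2} I(\theta)\, \varepsilon^2\). A minimal numeric check for a Bernoulli family, where \(I(p) = 1/(p(1-p))\):

```python
import numpy as np

def kl_bernoulli(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

p, eps = 0.3, 1e-3
I_exact = 1 / (p * (1 - p))
I_from_kl = 2 * kl_bernoulli(p, p + eps) / eps**2
print(I_exact, I_from_kl)   # both ~4.76
```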
Geometric understanding of classical statistics
- Balasubramanian, V. (1996). A geometric formulation of Occam’s razor for inference of parametric distributions.428
- Balasubramanian, V. (1996). Statistical inference, Occam’s razor and statistical mechanics on the space of probability distributions.429
- Calin, O. & Udriste, C. (2014). Geometric Modeling in Probability and Statistics.430
- de Carvalho, M., Page, G.L., & Barney, B.J. (2019). On the geometry of Bayesian inference.431
- Cranmer: Information geometry (coming soon?)
Geometric understanding of deep learning
- Lei, N. et al. (2018). Geometric understanding of deep learning.432
- Gao, Y. & Chaudhari, P. (2020). An information-geometric distance on the space of tasks.433
- Bronstein, M.M. et al. (2021). Geometric deep learning: Grids, groups, graphs, geodesics, and gauges.434
Automation
AutoML
- Neural Architecture Search (NAS)
- AutoML frameworks
- RL-driven NAS
- learned sparsity
Surrogate models
- Autoencoders, latent variables
- The manifold hypothesis
- Olah, C. (2014). Neural networks, manifolds, and topology.
- Fefferman, C., Mitter, S., & Narayanan, H. (2016). Testing the manifold hypothesis.435
- Physical constraints in loss functions
- Raissi, M., Perdikaris, P., & Karniadakis, G.E. (2017). Physics informed deep learning (Part I) and (Part II).436
- Karniadakis, G.E. et al. (2021). Physics-informed machine learning.437
- Howard, J.N. et al. (2021). Foundations of a fast, data-driven, machine-learned simulator.438
- Thuerey, N. et al. (2021). Physics-based deep learning.439
- physicsbaseddeeplearning.org
- Simulation-based inference
- Cranmer, K., Pavez, J., & Louppe, G. (2015). Approximating likelihood ratios with calibrated discriminative classifiers.440 (The classifier-based ratio trick is sketched after this list.)
- Cranmer, K., Brehmer, J., & Louppe, G. (2019). The frontier of simulation-based inference.441
- Baydin, A.G. et al. (2019). Etalumis: Bringing probabilistic programming to scientific simulators at scale.442
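A minimal sketch of the classifier-based likelihood-ratio trick from Cranmer, Pavez, & Louppe, noted in the list above: with balanced classes, a calibrated classifier’s odds \(s(x)/(1-s(x))\) estimate the density ratio \(p_1(x)/p_0(x)\). The two Gaussians here are hypothetical stand-ins for simulator outputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
x0 = rng.normal(0.0, 1.0, n)                 # samples from p_0 = N(0, 1)
x1 = rng.normal(0.5, 1.0, n)                 # samples from p_1 = N(0.5, 1)

X = np.concatenate([x0, x1])[:, None]
y = np.concatenate([np.zeros(n), np.ones(n)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = np.array([[1.0]])
s = clf.predict_proba(x)[0, 1]
ratio_hat = s / (1 - s)
ratio_true = np.exp(-0.5 * ((x - 0.5)**2 - x**2)).item()  # p_1(x) / p_0(x)
print(ratio_hat, ratio_true)                 # both ~1.45
```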
Lectures:
- Paul Hand. (2020). Invertible neural networks and inverse problems.
AutoScience
- Automated discovery
- Anderson, C. (2008). The End of Theory: The data deluge makes the scientific method obsolete.443
- Cranmer, K. (2017). Active sciencing.
- Asch, M. et al. (2018). Big data and extreme-scale computing: Pathways to Convergence-Toward a shaping strategy for a future software and data ecosystem for scientific inquiry.444
- D’Agnolo, R.T. & Wulzer, A. (2019). Learning New Physics from a Machine.445
- Note that this description of abduction misses that abduction is normative (i.e., “best-fit”).
- Krenn, M. et al. (2022). On scientific understanding with artificial intelligence.446
- Symbolic regression
- Udrescu, S. & Tegmark, M. (2020). Symbolic pregression: Discovering physical laws from raw distorted video.447
- Cranmer, M. et al. (2020). Discovering symbolic models from deep learning with inductive biases.448
- Video: Discussion by Yannic Kilcher
- Liu, Z., Madhavan, V., & Tegmark, M. (2022). AI Poincare 2.0: Machine learning conservation laws from differential equations.449
- Cranmer, M. (2024). Video of seminar: The next great scientific theory is hiding inside a neural network.
- Lu, C. et al. (2024). The AI Scientist: Towards fully automated open-ended scientific discovery.450
See also:
Implications for the realism debate
Introduction
See also:
Real clusters
- Nope: Hennig
See also:
Word meanings
- Note that NLP has implications to the philosophy of language and realism
- Olah, C. (2014). Deep learning, NLP, and representations.
- Perone, C.S. (2018). NLP word representations and the Wittgenstein philosophy of language.455
- Belloni, M. (2019). Neural networks and philosophy of language: Why Wittgenstein’s theories are the basis of all modern NLP.
- Goldhill, O. (2019). Google Translate is a manifestation of Wittgenstein’s theory of language.
- Tenney, I. et al. (2019). What do you learn from context? Probing for sentence structure in contextualized word representations.456
- Nissim, M., van Noord, R., & van der Goot, R. (2019). Fair is better than sensational: Man is to doctor as woman is to doctor.457
- Skelac, I. & Jandric, A. (2020). Meaning as use: From Wittgenstein to Google’s Word2vec.458
- Bender, E.M. & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data.459
- Boccelli, D. (2022). Word embeddings align with Kandinsky’s theory of color.
- Elhage, N. et al. (2022). Toy models of superposition.460
- Patel, R. & Pavlick, E. (2022). Mapping language models to grounded conceptual spaces.461
- Lovering, C. & Pavlick, E. (2022). Unit testing for concepts in neural networks.462
- Tweet by Joscha Bach, Mar 25, 2023
- Debate: Do language models need sensory grounding for meaning and understanding? (NYU).
- Piantadosi, S. (2023). Talk: Meaning in the age of large language models.
- Huh, M., Cheung, B., Wang, T., & Isola, P. (2024). The platonic representation hypothesis.463
- Musser, G. (2024). Can an emerging field called neural systems understanding explain the brain?
- Jamali, M. et al. (2024). Semantic encoding during language comprehension at single-cell resolution.464
Wittgenstein in PI:
The meaning of a word is its use in the language.465
and
One cannot guess how a word functions. One has to look at its use, and learn from that.466
Piantadosi:
Modern large language models integrate syntax and semantics in the underlying representations: encoding words as vectors in a high-dimensional space, without an effort to separate out e.g. part of speech categories from semantic representations, or even predict at any level of analysis other than the literal word. Part of making these models work well was in determining how to encode semantic properties into vectors, and in fact initializing word vectors via encodings of distribution semantics from e.g. Mikolov et al. 2013 (Radford et al. 2019). Thus, an assumption of the autonomy of syntax is not required to make models that predict syntactic material and may well hinder it.467
See also:
My thoughts
My docs:
My talks:
- Likelihood functions for supersymmetric observables
- Machine learning and realism
- Primer on statistics: MLE, confidence intervals, and hypothesis testing
Annotated bibliography
Mayo, D.G. (1996). Error and the Growth of Experimental Knowledge.
- Mayo (1996)
My thoughts
- TODO
Cowan, G. (1998). Statistical Data Analysis.
- Cowan (1998) and Cowan (2016)
My thoughts
- TODO
James, F. (2006). Statistical Methods in Experimental Physics.
- F. James (2006)
My thoughts
- TODO
Cowan, G. et al. (2011). Asymptotic formulae for likelihood-based tests of new physics.
- Cowan et al. (2011)
- Glen Cowan, Kyle Cranmer, Eilam Gross, Ofer Vitells
My thoughts
- TODO
ATLAS Collaboration. (2012). Combined search for the Standard Model Higgs boson.
- ATLAS Collaboration (2012)
- arxiv:1207.0319
My thoughts
- TODO
Cranmer, K. (2015). Practical statistics for the LHC.
- Cranmer (2015)
My thoughts
- TODO
More articles to do
Links and encyclopedia articles
SEP
- Abduction
- Analysis of knowledge
- Bayes’ theorem
- Bayesian epistemology
- Carnap, Rudolf (1891-1970)
- Causal models
- Causal processes
- Confirmation
- Dutch book arguments
- Epistemology
- Foundationalist Theories of Epistemic Justification
- Hume, David (1711-1776)
- Identity of indiscernibles
- Induction, The problem of
- Logic and Probability
- Naturalized epistemology
- Peirce, Charles Sanders (1839-1914)
- Popper, Karl (1902-1994)
- Principle of sufficient reason
- Probability, Interpretations of
- Probabilistic causation
- Reichenbach, Hans (1891-1953)
- Scientific explanation
- Scientific research and big data
- Statistics, Philosophy of
IEP
- Carnap, Rudolf (1891-1970)
- Epistemology
- Hempel, Carl Gustav (1905-1997)
- Hume, David (1711-1776)
- Naturalism
- Naturalistic Epistemology
- Peirce, Charles Sanders (1839-1914)
- Reductionism
- Safety Condition for Knowledge, The
- Simplicity in the philosophy of science
- William of Ockham (1280-1349)
Scholarpedia
Wikipedia
- Abductive reasoning
- Akaike_information_criterion
- Algorithmic information theory
- Algorithmic probability
- Analysis of variance
- Aumann’s agreement theorem
- Bayes, Thomas (1701-1761)
- Bayesian inference
- Bernoulli, Jacob (1655-1705)
- Birnbaum, Allan (1923-1976)
- Bootstrapping
- Carnap, Rudolf (1891-1970)
- Confidence interval
- Cosmic variance
- Cramér-Rao bound
- Cramér, Harald (1893-1985)
- Data science
- Decision theory
- Deductive-nomological model
- Empiricism
- Epistemology
- Exploratory data analysis
- Fisher, Ronald (1890-1962)
- Frequentist inference
- Foundations of statistics
- Gauss, Carl Friedrich (1777-1855)
- German tank problem
- Gosset, William Sealy (1876-1937)
- Graunt, John (1620-1674)
- History of probability
- History of statistics
- Hume, David (1711-1776)
- Induction, The problem of
- Inductive reasoning
- Interval estimation
- Inverse probability
- Inverse problem
- Ivakhnenko, Alexey (1913-2007)
- Jeffrey, Richard (1926-2002)
- Jeffreys, Harold (1891-1989)
- Jeffreys prior
- Kolmogorov, Andrey (1903-1987)
- Kolmogorov complexity
- Lady tasting tea
- Laplace, Pierre-Simon (1749-1827)
- Likelihood principle
- Likelihoodist statistics
- List of important publications in statistics
- Machine learning
- Maximum likelihood estimation
- Mill, John Stuart (1806-1873)
- Misuse of p-values
- Neyman, Jerzy (1894-1981)
- Neyman construction
- Neyman-Pearson lemma
- Ockham, William of (1287-1347)
- Pearson, Egon (1895-1980)
- Pearson, Karl (1857-1936)
- Peirce, Charles Sanders (1839-1914)
- Poisson, Siméon Denis (1781-1840)
- Popper, Karl (1902-1994)
- Principle of sufficient reason
- Precision and recall
- Proteus phenomenon
- P-value
- Rao, C.R. (b. 1920)
- Replication crisis
- Rule of three
- Savage, Leonard Jimmie (1917-1971)
- Solomonoff, Ray (1926-2009)
- Solomonoff’s theory of inductive inference
- Statistical classification
- Statistical hypothesis testing
- Statistical inference
- Statistical sensitivity and specificity
- Statistical significance
- Statistics
- Statistics, Founders of
- Statistics, History of
- Statistics, Mathematical
- Statistics, Outline of
- Statistics, Philosophy of
- Student’s t-test
- Systematic error
- Thorp, Edward O. (b. 1932)
- Trial and error
- Tukey, John (1915-2000)
- Type-I and type-II errors
- Uncomfortable science
- Uniformitarianism
- Unsolved problems in statistics, List of
- Venn, John (1834-1923)
- Wilks, S.S. (1906-1964)
- Wilks’s theorem
Others
- Deep Learning: Our Miraculous Year 1990-1991 - Schmidhuber
- errorstatistics.com - Deborah Mayo’s blog
- Graunt, John (1620-1674) - statprob.com
- Peng, R. (2016). A Simple Explanation for the Replication Crisis in Science. - simplystatistics.org
- Why is binary classification not a hypothesis test? - stackexchange.com
- If the likelihood principle clashes with frequentist probability then do we discard one of them? - stackexchange.com
- Wilks’s theorem - fiveMinuteStats
- Dallal, G.E. (2012). The Little Handbook of Statistical Practice.
References
Rao (1997), p. x.↩︎
Edwards (1974), p. 9.↩︎
Hacking (1971).↩︎
Bernoulli, J. (1713). Ars Conjectandi, Chapter II, Part IV, defining the art of conjecture [wikiquote].↩︎
Venn (1888).↩︎
Jevons (1873a).↩︎
Jevons (1873b).↩︎
Peirce (1883), p. 126–181.↩︎
Pearson (1900).↩︎
Keynes (1921).↩︎
Fisher (1912).↩︎
Fisher (1915).↩︎
Fisher (1921).↩︎
Fisher (1955).↩︎
Salsburg (2001).↩︎
Reid (1998).↩︎
Neyman (1955).↩︎
Stuart, Ord, & Arnold (2010).↩︎
F. James (2006).↩︎
Cowan (1998) and Cowan (2016).↩︎
Cranmer (2015).↩︎
Jaynes (2003).↩︎
Lista (2016b).↩︎
Lista (2016a).↩︎
Cox (2006).↩︎
Behnke, Kröninger, Schott, & Schörner-Sadenius (2013).↩︎
Cousins (2018).↩︎
Weisberg (2019).↩︎
Gelman & Vehtari (2021).↩︎
Otsuka (2023).↩︎
Carnap (1947).↩︎
Carnap (1953).↩︎
Goodfellow, Bengio, & Courville (2016), p. 72-73.↩︎
Cowan (1998), p. 20-22.↩︎
Arras (1998).↩︎
Fienberg (2006).↩︎
Weisberg (2019), ch. 15.↩︎
Fisher (1921), p. 15.↩︎
Stein (1956).↩︎
W. James & Stein (1961).↩︎
van Handel (2016).↩︎
Vershynin (2018).↩︎
Leemis & McQueston (2008).↩︎
Cranmer, K. et al. (2012).↩︎
This assumption that the model models the data “reasonably” well reflects that to the degree required by your analysis, the important features of the data match well within the systematic uncertainties parametrized within the model. If the model is incomplete because it is missing an important feature of the data, then this is the “ugly” (class-3) error in the Sinervo classification of systematic uncertainties.↩︎
Cowan (1998) and Cowan (2016), p. TODO.↩︎
Aldrich (1997).↩︎
F. James (2006), p. 234.↩︎
Cox (2006), p. 11.↩︎
Murphy (2012), p. 222.↩︎
Fréchet (1943), Cramér (1946), Rao (1945), and Rao (1947).↩︎
Rice (2007), p. 300–2.↩︎
Nielsen (2013).↩︎
Cowan (1998), p. 130-5.↩︎
F. James (2006), p. 234.↩︎
F. James & Roos (1975).↩︎
Cowan, Cranmer, Gross, & Vitells (2012).↩︎
Wainer (2007).↩︎
Tegmark, Taylor, & Heavens (1997).↩︎
Clopper & Pearson (1934).↩︎
Agresti & Coull (1998).↩︎
Hanley & Lippman-Hand (1983).↩︎
L. D. Brown, Cai, & DasGupta (2001).↩︎
Casadei (2012).↩︎
Fisher (1935), p. 16.↩︎
Goodman (1999a). p. 998.↩︎
J. Cohen (1992).↩︎
ATLAS and CMS Collaborations (2011).↩︎
Cowan, Cranmer, Gross, & Vitells (2011).↩︎
Neyman & Pearson (1933).↩︎
Feldman & Cousins (1998).↩︎
Sinervo (2002) and Cowan (2012).↩︎
Cowan et al. (2011), p. 2–3.↩︎
Cowan et al. (2011), p. 3.↩︎
Cousins & Highland (1992).↩︎
Junk (1999).↩︎
Read (2002).↩︎
ATLAS Statistics Forum (2011).↩︎
Wilks (1938).↩︎
Wald (1943).↩︎
Cowan et al. (2011).↩︎
Bhattiprolu, Martin, & Wells (2020).↩︎
Murphy (2012), p. 197.↩︎
Goodman (1999b).↩︎
Goodman (1999a). p. 995.↩︎
Sinervo (2003).↩︎
Heinrich & Lyons (2007).↩︎
Caldeira & Nord (2020).↩︎
Lyons (2008), p. 890.↩︎
Rubin (1974).↩︎
Lewis (1981).↩︎
Pearl (2018).↩︎
Pearl (2009).↩︎
Robins & Wasserman (1999).↩︎
Peters, Janzing, & Scholkopf (2017).↩︎
Lundberg, Johnson, & Stewart (2021).↩︎
Ismael (2023).↩︎
Chevalley, Schwab, & Mehrjou (2024).↩︎
Tukey (1977).↩︎
Chen, X. et al. (2018).↩︎
Carnap (1945).↩︎
Royall (1997), p. 171–2.↩︎
Cranmer (2015), p. 6.↩︎
Neyman & Pearson (1933).↩︎
Kruschke & Liddell (2018).↩︎
Edwards (1974).↩︎
Birnbaum (1962).↩︎
Hacking (1965).↩︎
Berger & Wolpert (1988).↩︎
O’Hagan (2010), p. 17–18.↩︎
Gandenberger (2015).↩︎
Evans (2013).↩︎
Mayo (2014).↩︎
Mayo (2019).↩︎
Dawid (2014).↩︎
Mayo (2019).↩︎
Lyons (2008), p. 891.↩︎
Sznajder (2018).↩︎
Carnap (1952).↩︎
Hacking (1965).↩︎
Neyman (1977).↩︎
Rozeboom (1960).↩︎
Meehl (1978).↩︎
Zech (1995).↩︎
Royall (1997).↩︎
Berger (2003).↩︎
Mayo (1981).↩︎
Mayo (1996).↩︎
Mayo & Spanos (2006).↩︎
Mayo & Spanos (2011).↩︎
Mayo (2018).↩︎
Gelman & Hennig (2017).↩︎
Murphy (2012), ch. 6.6.↩︎
Murphy (2022), p. 195–198.↩︎
Gandenberger (2016).↩︎
Wakefield (2013), ch. 4.↩︎
Efron & Hastie (2016), p. 30–36.↩︎
Kruschke & Liddell (2018).↩︎
Steinhardt (2012).↩︎
Goodman (1999a). p. 999.↩︎
Ioannidis (2005).↩︎
Wasserstein & Lazar (2016).↩︎
Wasserstein, Allen, & Lazar (2019).↩︎
Benjamin, D.J. et al. (2017).↩︎
Fisher (1935), p. 13–14.↩︎
Rao & Lovric (2016).↩︎
Mayo (2021).↩︎
Gorard & Gorard (2016).↩︎
Benjamini, Y. et al. (2021), p. 1.↩︎
Hastie, Tibshirani, & Friedman (2009).↩︎
MacKay (2003).↩︎
Murphy (2012).↩︎
Murphy (2022), p. 195–198.↩︎
Shalev-Shwarz & Ben-David (2014).↩︎
Vapnik, Levin, & LeCun (1994).↩︎
Shalev-Shwarz & Ben-David (2014), p. 67–82.↩︎
McCarthy, Minsky, Rochester, & Shannon (1955).↩︎
Solomonoff (2016).↩︎
Kardum (2020).↩︎
Rosenblatt (1961).↩︎
Minsky & Papert (1969).↩︎
Murphy (2012), p. 21.↩︎
Note: Label smoothing is a regularization technique that smears the activation over other labels, but we don’t do that here.↩︎
“Logit” was coined by Joseph Berkson (1899-1982).↩︎
McFadden & Zarembka (1973).↩︎
Blondel, Martins, & Niculae (2020).↩︎
Goodfellow et al. (2016), p. 129.↩︎
Freund & Schapire (1997).↩︎
T. Chen & Guestrin (2016).↩︎
Aytekin (2022).↩︎
Grinsztajn, Oyallon, & Varoquaux (2022).↩︎
Coadou (2022).↩︎
Slonim, Atwal, Tkacik, & Bialek (2005).↩︎
Batson, Haaf, Kahn, & Roberts (2021).↩︎
Hennig (2015).↩︎
Lauc (2020), p. 103–4.↩︎
Ronen, Finder, & Freifeld (2022).↩︎
Fang, Z. et al. (2022).↩︎
Bengio (2009).↩︎
LeCun, Bengio, & Hinton (2015).↩︎
Sutskever (2015).↩︎
Goodfellow et al. (2016).↩︎
Kaplan, J. et al. (2019).↩︎
Rumelhart, Hinton, & Williams (1986).↩︎
LeCun & Bottou (1998).↩︎
Scardapane (2024).↩︎
Bottou (1998).↩︎
Norvig (2011).↩︎
Sutton (2019).↩︎
Frankle & Carbin (2018).↩︎
Bengio (2009).↩︎
Belkin, Hsu, Ma, & Mandal (2019).↩︎
Muthukumar, Vodrahalli, Subramanian, & Sahai (2019).↩︎
Nakkiran, P. et al. (2019).↩︎
Chang, Li, Oymak, & Thrampoulidis (2020).↩︎
Holzmüller (2020).↩︎
Dar, Muthukumar, & Baraniuk (2021).↩︎
Balestriero, Pesenti, & LeCun (2021).↩︎
Belkin (2021).↩︎
Nagarajan (2021).↩︎
Bach (2022), p. 225–230.↩︎
Ghosh & Belkin (2022).↩︎
Singh, Lucchi, Hofmann, & Schölkopf (2022).↩︎
Hastie, Montanari, Rosset, & Tibshirani (2022).↩︎
Bubeck & Sellke (2023).↩︎
Gamba, Englesson, Björkman, & Azizpour (2022).↩︎
Schaeffer, R. et al. (2023).↩︎
Yang & Suzuki (2023).↩︎
Maddox, Benton, & Wilson (2023).↩︎
Steinhardt (2022).↩︎
Henighan, T. et al. (2023).↩︎
Mishra, D. (2020). Weight Decay == L2 Regularization?↩︎
S. Chen, Dobriban, & Lee (2020).↩︎
Chiley, V. et al. (2019).↩︎
Kiani, Balestriero, Lecun, & Lloyd (2022).↩︎
Huang, L. et al. (2020).↩︎
Hu, E.J. et al. (2021).↩︎
Dettmers, Pagnoni, Holtzman, & Zettlemoyer (2023).↩︎
Zhao, J. et al. (2024).↩︎
Huh, Cheung, Wang, & Isola (2024).↩︎
Fukushima & Miyake (1982).↩︎
LeCun, Y. et al. (1989).↩︎
LeCun, Bottou, Bengio, & Haffner (1998).↩︎
Ciresan, Meier, Masci, & Schmidhuber (2012).↩︎
Krizhevsky, Sutskever, & Hinton (2012).↩︎
Simonyan & Zisserman (2014).↩︎
He, Zhang, Ren, & Sun (2015).↩︎
Haber & Ruthotto (2017) and Haber, E. et al. (2018).↩︎
Howard, A.G. et al. (2017).↩︎
R. T. Q. Chen, Rubanova, Bettencourt, & Duvenaud (2018).↩︎
Tan & Le (2019).↩︎
Dosovitskiy, A. et al. (2020).↩︎
Tan & Le (2021).↩︎
H. Liu, Dai, So, & Le (2021).↩︎
Dhariwal & Nichol (2021).↩︎
Liu, Y. et al. (2021).↩︎
Ingrosso & Goldt (2022).↩︎
Park & Kim (2022).↩︎
Zhao, W.X. et al. (2023).↩︎
Nakkiran, Bradley, Zhou, & Advani (2024).↩︎
Firth (1957).↩︎
Nirenburg (1996).↩︎
Hutchins (2000).↩︎
Jurafsky & Martin (2022).↩︎
Z. Liu, Lin, & Sun (2023).↩︎
Mikolov, Chen, Corrado, & Dean (2013), Mikolov, Yih, & Zweig (2013), and Mikolov, T. et al. (2013).↩︎
Kun (2018), p. 176–8.↩︎
Hochreiter & Schmidhuber (1997).↩︎
Graves (2013).↩︎
Sutskever, Vinyals, & Le (2014), p. 4.↩︎
Zhang (1998).↩︎
Zhou & Hansen (2005).↩︎
Collobert, Hannun, & Synnaeve (2019).↩︎
Werbos (1990).↩︎
Sutskever et al. (2014).↩︎
Bahdanau, Cho, & Bengio (2015).↩︎
Wu, Y. et al. (2016).↩︎
Stahlberg (2019).↩︎
Vaswani, A. et al. (2017).↩︎
Devlin, Chang, Lee, & Toutanova (2018).↩︎
Liu, Y. et al. (2019).↩︎
Raffel, C. et al. (2019).↩︎
Lan, Z. et al. (2019).↩︎
Lewis, M. et al. (2019).↩︎
Radford, Narasimhan, Salimans, & Sutskever (2018).↩︎
Radford, A. et al. (2019).↩︎
Brown, T.B. et al. (2020).↩︎
Yang, Z. et al. (2019).↩︎
Zaheer, M. et al. (2020).↩︎
Edelman, Goel, Kakade, & Zhang (2021).↩︎
Tay, Dehghani, Bahri, & Metzler (2022).↩︎
Phuong & Hutter (2022).↩︎
Chowdhery, A. et al. (2022).↩︎
Ouyang, L. et al. (2022).↩︎
Wolfram (2023).↩︎
OpenAI (2023).↩︎
Mohamadi, S. et al. (2023).↩︎
Zhao, W.X. et al. (2023).↩︎
Golovneva, Wang, Weston, & Sukhbaatar (2024).↩︎
Dao, T. et al. (2022).↩︎
Gu, Goel, & Ré (2021).↩︎
Merrill & Sabharwal (2022).↩︎
Bulatov, Kuratov, & Burtsev (2022).↩︎
Bulatov, Kuratov, & Burtsev (2023).↩︎
Bertsch, Alon, Neubig, & Gormley (2023).↩︎
Mialon, G. et al. (2023).↩︎
Peng, B. et al. (2023).↩︎
Sun, Y. et al. (2023).↩︎
Gu & Dao (2023).↩︎
Wang, H. et al. (2023).↩︎
Ma, S. et al. (2024).↩︎
Ma, X. et al. (2024).↩︎
Bhargava, Witkowski, Shah, & Thomson (2023).↩︎
Dao & Gu (2024).↩︎
Banerjee, Agarwal, & Singla (2024).↩︎
Hestness, J. et al. (2017).↩︎
Church & Hestness (2019).↩︎
Kaplan, J. et al. (2020).↩︎
Rae, J.W. et al. (2022).↩︎
Hoffmann, J. et al. (2022).↩︎
Caballero, Gupta, Rish, & Krueger (2022).↩︎
Muennighoff, N. et al. (2023).↩︎
Pandey (2024).↩︎
Bach (2024).↩︎
Mahowald, K. et al. (2023).↩︎
Kosinski (2023).↩︎
Watson & Floridi (2019).↩︎
Gurnee, W. et al. (2023).↩︎
Meng, Bau, Andonian, & Belinkov (2023).↩︎
McDougall, C. et al. (2023).↩︎
Alain & Bengio (2016).↩︎
Belinkov (2022).↩︎
Gurnee & Tegmark (2023).↩︎
Sutton & Barto (2018).↩︎
Arulkumaran, Deisenroth, Brundage, & Bharath (2017).↩︎
Cesa-Bianchi & Lugosi (2006).↩︎
Silver, Singh, Precup, & Sutton (2024).↩︎
Bellman (1952).↩︎
Mnih, V. et al. (2013) and Mnih, V. et al. (2015).↩︎
Silver, D. et al. (2016).↩︎
Silver, D. et al. (2017b).↩︎
Silver, D. et al. (2017a).↩︎
Hart & Mas‐Colell (2000).↩︎
Roughgarden (2016).↩︎
Zinkevich, Johanson, Bowling, & Piccione (2007).↩︎
N. Brown (2020), p. 12.↩︎
Zinkevich et al. (2007), p. 4.↩︎
Tammelin (2014).↩︎
Tammelin, Burch, Johanson, & Bowling (2015).↩︎
Burch, Moravcik, & Schmid (2019).↩︎
N. Brown & Sandholm (2019a).↩︎
Zinkevich et al. (2007) and Lanctot, Waugh, Zinkevich, & Bowling (2009).↩︎
N. Brown (2020), p. 6.↩︎
N. Brown (2020), p. 12.↩︎
Lanctot et al. (2009).↩︎
Neller & Lanctot (2013).↩︎
Burch, Lanctot, Szafron, & Gibson (2012).↩︎
Johanson, M. et al. (2012).↩︎
Schmid, M. et al. (2019).↩︎
Li, H. et al. (2020).↩︎
Habara, Fukuda, & Yamashita (2023).↩︎
Lanctot (2013).↩︎
Gibson (2014).↩︎
Johanson (2016).↩︎
Burch (2018).↩︎
Horacek (2022).↩︎
Lisy & Bowling (2016), p. 2.↩︎
See NashConv exploitability defined in Lanctot, M. et al. (2017).↩︎
Timbers (2020), p. 3.↩︎
Johanson, Waugh, Bowling, & Zinkevich (2011).↩︎
Ponsen, De Jong, & Lanctot (2011).↩︎
Lisy & Bowling (2016).↩︎
Timbers (2020).↩︎
Kuhn (1950).↩︎
Southey, F. et al. (2012).↩︎
Billings, Davidson, Schaeffer, & Szafron (2002).↩︎
Billings, D. et al. (2003).↩︎
Johanson (2013).↩︎
Bowling, Burch, Johanson, & Tammelin (2015).↩︎
Heinrich & Silver (2016).↩︎
Moravcik, M. et al. (2017).↩︎
N. Brown & Sandholm (2017).↩︎
N. Brown & Sandholm (2018).↩︎
N. Brown & Sandholm (2019a).↩︎
N. Brown, Lerer, Gross, & Sandholm (2019).↩︎
N. Brown & Sandholm (2019b).↩︎
N. Brown, Bakhtin, Lerer, & Gong (2020).↩︎
N. Brown (2020).↩︎
Schmid, M. et al. (2021).↩︎
Kovarik, V. et al. (2022).↩︎
Denby (1988).↩︎
Denby (1993).↩︎
Spears, B.K. et al. (2018).↩︎
Cranmer, Seljak, & Terao (2021).↩︎
Liu, Z. et al. (2024).↩︎
Jiao, L. et al. (2024).↩︎
Cilibrasi & Vitanyi (2005).↩︎
Hutter (2007).↩︎
Rathmanner & Hutter (2011).↩︎
Wolpert & Macready (1995).↩︎
Wolpert (1996).↩︎
Wolpert & Macready (1997).↩︎
Shalev-Shwartz & Ben-David (2014), p. 60–66.↩︎
McDermott (2019).↩︎
Wolpert (2007).↩︎
Wolpert & Kinney (2020).↩︎
Mitchell (1980).↩︎
Roberts (2021).↩︎
Goldreich & Ron (1997).↩︎
Joyce & Herrmann (2017).↩︎
Lauc (2020).↩︎
Nakkiran (2021).↩︎
Bousquet, O. et al. (2021).↩︎
Andrews (2023).↩︎
Raissi, Perdikaris, & Karniadakis (2017a), p. 2.↩︎
Roberts (2021), p. 7.↩︎
Dennett (1991), p. TODO.↩︎
Minsky & Papert (1969), p. TODO.↩︎
Hornik, Stinchcombe, & White (1989).↩︎
Lu, Z. et al. (2017).↩︎
Lin & Jegelka (2018).↩︎
Ismailov (2020).↩︎
Bishop (2006), p. 230.↩︎
Opper & Kinzel (1996).↩︎
Opper (2001).↩︎
Bahri, Y. et al. (2020).↩︎
Halverson, Maiti, & Stoner (2020).↩︎
Canatar, Bordelon, & Pehlevan (2020).↩︎
Roberts, Yaida, & Hanin (2021).↩︎
Cantwell (2022).↩︎
Dinan, Yaida, & Zhang (2023).↩︎
Sohl-Dickstein (2020).↩︎
Aifer, M. et al. (2023).↩︎
Geshkovski, Letrouit, Polyanskiy, & Rigollet (2023).↩︎
Dieleman, Fauw, & Kavukcuoglu (2016).↩︎
T. S. Cohen & Welling (2016).↩︎
T. S. Cohen, Weiler, Kicanaoglu, & Welling (2019).↩︎
Fuchs, Worrall, Fischer, & Welling (2020).↩︎
Bogatskiy, A. et al. (2023).↩︎
Marchetti, Hillar, Kragic, & Sanborn (2023).↩︎
Battiloro, C. et al. (2024).↩︎
Bérut, A. et al. (2012).↩︎
Bérut, Petrosyan, & Ciliberto (2015).↩︎
Smith (2019).↩︎
Nielsen (2020).↩︎
Amari (1998).↩︎
Amari (2016).↩︎
Balasubramanian (1996a).↩︎
Balasubramanian (1996b).↩︎
Calin & Udriste (2014).↩︎
de Carvalho, Page, & Barney (2019).↩︎
Lei, Luo, Yau, & Gu (2018).↩︎
Gao & Chaudhari (2020).↩︎
Bronstein, Bruna, Cohen, & Velickovic (2021).↩︎
Fefferman, Mitter, & Narayanan (2016).↩︎
Raissi et al. (2017a) and Raissi, Perdikaris, & Karniadakis (2017b).↩︎
Karniadakis, G.E. et al. (2021).↩︎
Howard, Mandt, Whiteson, & Yang (2021).↩︎
Thuerey, N. et al. (2021).↩︎
Cranmer, Pavez, & Louppe (2015).↩︎
Cranmer, Brehmer, & Louppe (2019).↩︎
Baydin, A.G. et al. (2019).↩︎
Anderson (2008).↩︎
Asch, M. et al. (2018).↩︎
D’Agnolo & Wulzer (2019).↩︎
Krenn, M. et al. (2022).↩︎
Udrescu & Tegmark (2020).↩︎
Cranmer, M. et al. (2020).↩︎
Z. Liu, Madhavan, & Tegmark (2022).↩︎
Lu, C. et al. (2024).↩︎
Asch, M. et al. (2018).↩︎
Korb (2001).↩︎
Williamson (2009).↩︎
Bensusan (2000).↩︎
Perone (2018).↩︎
Tenney, I. et al. (2019).↩︎
Nissim, Noord, & Goot (2019).↩︎
Skelac & Jandric (2020).↩︎
Bender & Koller (2020).↩︎
Elhage, N. et al. (2022).↩︎
Patel & Pavlick (2022).↩︎
Lovering & Pavlick (2022).↩︎
Huh et al. (2024).↩︎
Jamali, M. et al. (2024).↩︎
Wittgenstein (2009), §43.↩︎
Wittgenstein (2009), §340.↩︎
Piantadosi (2023), p. 15.↩︎
Wasserman (2003).↩︎
Savage (1954).↩︎