
Feuding schools of inference

Status: Needs revision
Confidence: Likely

Every so often some comparison of Bayesian and frequentist statistics comes to my attention. Today it was on a blog called Pythonic Perambulations. It’s the work of amateurs. Their description of noninformative priors is simplified to the point of distortion. They insist on kludging their tools instead of fixing their model when it is clearly misspecified. They use a naive construction for 95% confidence intervals, are surprised when it fails miserably, and even use this as an argument against 95% confidence intervals.[1] Normally I would shrug and move on, but it happened to catch me in a particularly grumpy mood, so here we are.

Essays discussing frequentist versus Bayesian statistics follow a fairly standard form. The author lays out both positions, then argues for the one he (it seems invariably to be a he) likes. Both positions are quite subtle, but each tries to make the concept of a probability correspond to something in the real world. Frequentists operationalize probability as the fraction of an ensemble of hypothetical trial outcomes that have a certain property. Bayesians operationalize probability as degree of belief. Both have mathematical models that justify their interpretation, and all of those models have limitations that are rarely acknowledged in practice. Which one is right?

The answer, as usual when faced with a dichotomy, is neither. van Kampen wrote a paper[2] about quantum mechanics that has some dicta which can be translated almost directly to statistics, notably:

The quantum mechanical probability is not observed but merely serves as an intermediate stage in the computation of an observable phenomenon.

and

Whoever endows ψ with more meaning than is needed for computing observable phenomena is responsible for the consequences.

Probability, as a mathematical theory, has no need of an interpretation. Mathematicians studying combinatorics use it quite happily with nothing in sight that a frequentist or Bayesian would recognize. The real battleground is statistics, and the real purpose is to choose an action based on data. The formulation that everyone uses for this, from machine learning to the foundations of Bayesian statistics, is decision theory. A decision theoretic formulation of a situation has the following components:[3]

  - a set Ω of possible states of nature;
  - a space X of possible observed data;
  - a set M of possible decisions;
  - a loss function L giving the cost of making a particular decision when a particular state of nature holds.

Given these components, the task is to find a function t from X to M which minimizes the loss. The loss is a function, though, not a single value, and there are many ways to make “minimize” well defined. Each of those ways has different uses.
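
To make these pieces concrete, here is a minimal sketch in Python of an invented toy problem: deciding from ten coin flips whether a coin is fair or biased toward heads. The 0-1 loss, the particular cutoff, and all the names are my own choices for illustration, not anything from Kiefer.

```python
# States of nature: is the coin fair or biased toward heads?
OMEGA = ("fair", "biased")

# Possible data: the number of heads seen in ten flips.
N_FLIPS = 10
X = range(N_FLIPS + 1)

# Possible decisions.
M = ("declare fair", "declare biased")

def loss(omega, decision):
    """0-1 loss: zero for the correct call, one for the wrong one."""
    correct = "declare fair" if omega == "fair" else "declare biased"
    return 0 if decision == correct else 1

# A procedure t maps each possible observation to a decision.  One candidate:
# declare the coin biased if more than six heads show up.
def t(x):
    return "declare biased" if x > 6 else "declare fair"

# Even with t fixed, the loss is still a function of (omega, x), not one number.
for omega in OMEGA:
    print(omega, [loss(omega, t(x)) for x in X])
```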

For example, if we are engaged in a contest against an opponent, we may want to minimize the maximum loss we can have. Thus we choose t to minimize the maximum value L achieves over any combination of (ω, x) ∈ Ω × X which can occur.
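
Sticking with the invented coin example above (restated so the snippet stands alone), one common way to make the worst case precise is to average the loss over the data first, giving each candidate procedure a risk under each state of nature, and then take the largest of those risks. The sketch below picks the threshold rule whose worst-case risk is smallest; the heads probabilities 0.5 and 0.8 are assumptions made up for illustration.

```python
from math import comb

N = 10
OMEGA = {"fair": 0.5, "biased": 0.8}  # assumed heads probabilities

def binom(x, p):
    """Probability of x heads in N flips with heads probability p."""
    return comb(N, x) * p**x * (1 - p)**(N - x)

def risk(cutoff, omega):
    """Expected 0-1 loss of 'declare biased when x > cutoff' under state omega."""
    p = OMEGA[omega]
    wrong = (lambda x: x > cutoff) if omega == "fair" else (lambda x: x <= cutoff)
    return sum(binom(x, p) for x in range(N + 1) if wrong(x))

# Minimax: choose the cutoff whose worst-case risk over the states is smallest.
minimax_cutoff = min(range(N + 1), key=lambda c: max(risk(c, w) for w in OMEGA))
print(minimax_cutoff, max(risk(minimax_cutoff, w) for w in OMEGA))
```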

Alternatively, we can choose to integrate L against some measure μ. Usually we decompose the measure into a measure on X given Ω (the probability of getting a particular value from X given that some element of Ω is the true state of nature) and a measure on Ω. This is a Bayes procedure, with the measure on Ω as the prior. We could also integrate over X but not Ω and use some other technique to eliminate that variable.
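
For the same invented example, a Bayes procedure integrates the loss against a measure on both X and Ω: here the measure on X given Ω is the binomial sampling distribution, and the prior on Ω is an assumed 50/50 weighting chosen purely for illustration.

```python
from math import comb

N = 10
OMEGA = {"fair": 0.5, "biased": 0.8}   # assumed heads probabilities
PRIOR = {"fair": 0.5, "biased": 0.5}   # assumed prior on the states of nature

def binom(x, p):
    """Measure on X given Omega: probability of x heads when heads probability is p."""
    return comb(N, x) * p**x * (1 - p)**(N - x)

def risk(cutoff, omega):
    """Expected 0-1 loss of 'declare biased when x > cutoff' under state omega."""
    p = OMEGA[omega]
    wrong = (lambda x: x > cutoff) if omega == "fair" else (lambda x: x <= cutoff)
    return sum(binom(x, p) for x in range(N + 1) if wrong(x))

def bayes_risk(cutoff):
    """Integrate the loss over X given Omega, then over Omega against the prior."""
    return sum(PRIOR[w] * risk(cutoff, w) for w in OMEGA)

bayes_cutoff = min(range(N + 1), key=bayes_risk)
print(bayes_cutoff, bayes_risk(bayes_cutoff))
```

Changing the assumed prior changes which cutoff wins; that choice of measure is one of the things worth arguing about, as discussed below.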

Almost any of the tricks for defining norms that you can dig out of functional analysis can be used and will have a use, but in the end you have a procedure t. You apply it to the data from your trial, and take the action dictated. Probability does not enter the picture.[4]

We can and should fight over the specification of the states of nature Ω, of the possible decisions M, and of the loss function L.[5] We should discuss the norm we use to choose our optimal procedure t. These are hard questions. There is no reason to make the situation any more difficult by attaching unnecessary ideas to probability, which is a tool for calculation and no more.


  1. Naive constructions typically fail wildly for non-Gaussian distributions. See Brown, Cai, and DasGupta, Interval estimation for a binomial proportion, Statistical Science 16(2) (2001) 101-133, for the binomial case.

  2. N. G. van Kampen, Ten theorems about quantum mechanical measurements, Physica A 153 (1988) 97-113.

  3. I learned this from Kiefer’s Introduction to Statistical Inference.

  4. This was the lesson I took away from Leonard Savage’s Foundations of Statistics; everyone else seems to have read a different book than I did.

  5. In practice, we try to find procedures that are optimal under a range of loss functions to make this decision less subjective.