I’ve been attending Statistical Theory, a course in Part III Maths at Cambridge, taught by Richard Samworth. In the latest lecture, Richard showed that the Stein estimator for the mean of a multivariate Gaussian has lower expected loss (measured as the squared Euclidean distance between the true and estimated values of the mean) than the MLE for any value of $\theta$, provided the dimension $p \ge 3$. From a frequentist perspective this appears completely unintuitive, but from a Bayesian perspective it appears much more reasonable.

Assume we observe a vector $X$ drawn from a multivariate normal of dimension $p$, with mean $\theta$ and identity covariance matrix. The MLE of $\theta$ is then just $X$, but the Stein estimator is

$$\hat{\theta}_{\mathrm{S}} = \left(1 - \frac{p-2}{\|X\|^2}\right) X.$$
The fact that this estimator performs better than the MLE is termed shrinkage, because the estimate is shrunk towards 0.
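The dominance is easy to see in simulation. Here is a minimal sketch (assuming the James–Stein form $\hat{\theta} = (1 - (p-2)/\|X\|^2)X$, with an arbitrarily chosen true mean) comparing the Monte Carlo risk of the two estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_trials = 10, 20000
theta = rng.normal(size=p)  # an arbitrary fixed true mean

# draw n_trials independent observations X ~ N(theta, I_p)
X = theta + rng.normal(size=(n_trials, p))
sq_norms = np.sum(X**2, axis=1)

mle = X                                        # MLE: theta_hat = X
stein = (1 - (p - 2) / sq_norms)[:, None] * X  # James-Stein shrinkage

risk_mle = np.mean(np.sum((mle - theta) ** 2, axis=1))
risk_stein = np.mean(np.sum((stein - theta) ** 2, axis=1))
print(f"MLE risk:   {risk_mle:.2f}")    # close to p = 10
print(f"Stein risk: {risk_stein:.2f}")  # strictly smaller
```

The MLE's risk concentrates around $p$, while the shrunk estimator's risk comes out lower for every choice of true mean tried.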

How would a Bayesian approach this problem? First let’s put a Gaussian prior on $\theta$, so

$$\theta \sim N_p(0, \tau^{-1} I_p),$$
where $\tau$ is a precision (inverse variance). In a fully Bayesian setting we would then put a Gamma prior on $\tau$, but unfortunately we would then have to resort to sampling to infer the posterior over $\theta$. Assuming $\tau$ is known, the posterior of $\theta$ is

$$\theta \mid X \sim N_p\left(\frac{X}{1+\tau}, \frac{1}{1+\tau} I_p\right).$$
Thus the expected value of $\theta$ is

$$\mathbb{E}[\theta \mid X, \tau] = \frac{X}{1+\tau}.$$
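The conjugate posterior-mean formula $X/(1+\tau)$ can be sanity-checked numerically in one dimension by brute-force integration over a grid (the values of $x$ and $\tau$ below are arbitrary):

```python
import numpy as np

tau, x = 2.0, 1.5
theta = np.linspace(-12.0, 12.0, 200001)  # fine grid over theta

# unnormalised posterior: N(x; theta, 1) likelihood times N(theta; 0, 1/tau) prior
w = np.exp(-0.5 * (x - theta) ** 2 - 0.5 * tau * theta**2)
post_mean = np.sum(theta * w) / np.sum(w)  # grid approximation to E[theta | x]

print(post_mean, x / (1 + tau))  # both approximately 0.5
```

The grid estimate agrees with $x/(1+\tau)$ to many decimal places, as it should for a Gaussian posterior.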
Now let’s find the MLE of $\tau$. This is not ideal (it is an empirical Bayes shortcut rather than fully Bayesian), but it is tractable at least. To do this we’ll first integrate out $\theta$:

$$X \mid \tau \sim N_p(0, \sigma^2 I_p),$$

where

$$\sigma^2 = 1 + \tau^{-1}.$$

An unbiased estimate of $\sigma^2$ is given by

$$\hat{\sigma}^2 = \frac{\|X\|^2}{p},$$

since $\mathbb{E}\|X\|^2 = p\sigma^2$ under the marginal distribution above.

Substituting $1 + \hat{\tau}^{-1}$ for $\hat{\sigma}^2$ and rearranging gives

$$\hat{\tau} = \frac{p}{\|X\|^2 - p}.$$
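As a quick sanity check on the algebra, plugging hypothetical numbers ($p = 10$, $\|X\|^2 = 25$) into $\hat{\sigma}^2 = \|X\|^2/p$ and $\sigma^2 = 1 + \tau^{-1}$ recovers $\hat{\tau} = p/(\|X\|^2 - p)$, and the resulting shrinkage factor $1/(1+\hat{\tau})$ equals $1 - p/\|X\|^2$:

```python
p, sq_norm = 10.0, 25.0              # hypothetical values of p and ||X||^2
sigma2_hat = sq_norm / p             # unbiased estimate of sigma^2
tau_hat = 1.0 / (sigma2_hat - 1.0)   # rearranging sigma^2 = 1 + 1/tau

assert abs(tau_hat - p / (sq_norm - p)) < 1e-12
assert abs(1.0 / (1.0 + tau_hat) - (1.0 - p / sq_norm)) < 1e-12
print(tau_hat, 1.0 / (1.0 + tau_hat))
```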
Substituting $\hat{\tau}$ into the expression for $\mathbb{E}[\theta \mid X, \tau]$ above and rearranging gives

$$\hat{\theta} = \left(1 - \frac{p}{\|X\|^2}\right) X,$$

which is very close to the Stein estimate. I suspect that some choice of prior on $\tau$ would result in a MAP estimate which would give the $p-2$ term.
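To see how close the plug-in estimator $(1 - p/\|X\|^2)X$ is to the Stein estimator $(1 - (p-2)/\|X\|^2)X$ in practice, a small simulation (with a hypothetical true mean) can compare their risks against the MLE:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_trials = 10, 20000
theta = np.full(p, 0.5)  # a hypothetical true mean

X = theta + rng.normal(size=(n_trials, p))
sq = np.sum(X**2, axis=1)

eb = (1 - p / sq)[:, None] * X           # empirical Bayes plug-in
stein = (1 - (p - 2) / sq)[:, None] * X  # Stein estimator

def risk(est):
    return np.mean(np.sum((est - theta) ** 2, axis=1))

print(risk(X), risk(eb), risk(stein))  # both shrinkage risks well below p
```

Both shrinkage estimators beat the MLE comfortably here, with the Stein estimator's $p-2$ factor giving a small additional edge over the plug-in's $p$.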

The conclusion is that an estimator with unintuitively desirable properties in a frequentist framework is, intuitively, a sensible estimator under a Bayesian framework.