*TLDR:* Yes, and there are precise results, although they are not as well known
as they perhaps should be.

Over the last few years I have had many conversations in which the claim was made that Bayesian methods are generally immune to overfitting, or at least robust against overfitting, or, at the very least (surely everybody would have to agree, right?), clearly better than maximum a posteriori estimation.

Various loose arguments are offered in support, including the built-in Bayesian version of Occam's razor and the principled treatment of uncertainty throughout the estimation. However, it has always bothered me that this argument is only ever made casually; for many years I was not aware of a formal proof or discussion, except for the well-known result that when the model is well-specified the Bayes posterior predictive distribution is risk-optimal.

Until recently! A colleague pointed me to a book written by Sumio Watanabe (reference and thanks below), and this blog post summarizes what I found in this nice book.

# Overfitting

In machine learning, the concept of *overfitting* is very important in practice.
In fact, it is perhaps the most important concept to understand when learning from data.
Many practices and methods aim squarely at measuring and preventing
overfitting. The following are just a few examples:

- *Regularization* limits the *capacity* of a machine learning model in order to avoid overfitting;
- *Separating data into a training, validation, and test set* is best practice to assess generalization performance and to avoid overfitting;
- *Dropout*, a regularization scheme for deep neural networks, is popularly used to mitigate overfitting.

But what is overfitting? Can we formally define it?

## Defining Overfitting

The most widely used loose definition is the following.

Overfitting is the gap between the performance on the training set and the performance on the test set.

This definition makes a number of assumptions:

- The data is independent and identically distributed and comes separated into a training set and a test set.
- There is a clearly defined performance measure.
- There are no remaining degrees of freedom in the learning procedure or model.
- The test set is of sufficient size so that the performance estimation error is negligible.

For example, in a classification task the performance measure may be the classification error or the softmax-cross-entropy loss (log-loss).
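
As a minimal illustration of this gap (a hypothetical regression example, not from the original text), fitting a high-capacity polynomial to a handful of noisy points drives the training error close to zero while the test error stays large:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Noisy observations of a smooth underlying function.
    x = rng.uniform(-1.0, 1.0, size=n)
    y = np.sin(3.0 * x) + 0.1 * rng.normal(size=n)
    return x, y

x_train, y_train = make_data(10)
x_test, y_test = make_data(10_000)

# A degree-9 polynomial has enough capacity to (nearly) interpolate 10 points.
coeffs = np.polyfit(x_train, y_train, deg=9)

def mse(x, y):
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

print("training MSE:", mse(x_train, y_train))  # close to zero
print("test MSE:    ", mse(x_test, y_test))    # typically much larger: the overfitting gap
```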

However, in practice this definition of overfitting can be too strict: in many cases we care about minimizing the generalization error, not about the difference between generalization error and training error. For deep learning in particular, the training error is often zero for the model that is selected as the one minimizing validation error. The recent paper (Belkin, Ma, Mandal, "To Understand Deep Learning We Need to Understand Kernel Learning", ICML 2018) studies this phenomenon.

Is overfitting relevant for Bayesians as well?

## The Bayesian Case

(This paragraph summarizes Bayesian prediction and contains nothing new or controversial.)

Since *de Finetti*, a subjective Bayesian measures the performance of any model
by the predicted likelihood of future observables.
Given a sample \(D_n=(x_1, \dots, x_n)\), generated from some true
data-generating distribution \(x_i \sim Q\), a Bayesian proceeds by setting up a
model \(P(X|\theta)\), where \(\theta\) are unknown parameters of the model, with
prior \(P(\theta)\).
The data reveals information about \(\theta\) in the form of a posterior distribution \(P(\theta|D_n)\).
The posterior distribution over parameters is then useful in constructing our best guess of what we will see next, in the form of the *posterior predictive distribution*,

\[ P(x|D_n) = \int P(x|\theta) \, P(\theta|D_n) \, d\theta. \]

Note that in particular the only degrees of freedom are in the choice of model \(P(X|\theta)\) and in the prior \(P(\theta)\).
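
When the posterior is available only through samples \(\theta^{(1)}, \dots, \theta^{(S)} \sim P(\theta|D_n)\) (for example from MCMC), the posterior predictive can be approximated by a plain Monte Carlo average over those samples. A minimal sketch, with a hypothetical `log_likelihood(x, theta)` function standing in for \(\log P(x|\theta)\):

```python
import numpy as np

def log_posterior_predictive(x, theta_samples, log_likelihood):
    """Monte Carlo estimate of log P(x | D_n) from posterior samples.

    theta_samples: draws theta^(1), ..., theta^(S) from P(theta | D_n).
    log_likelihood: function (x, theta) -> log P(x | theta).
    """
    log_p = np.array([log_likelihood(x, theta) for theta in theta_samples])
    # log-mean-exp of the per-sample likelihoods, computed stably:
    # log (1/S) * sum_s P(x | theta_s)
    m = log_p.max()
    return m + np.log(np.mean(np.exp(log_p - m)))
```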

How good is \(P(x|D_n)\)?
A Bayesian cares about the predicted likelihood of future observables, which corresponds to the cross-entropy loss and is also called the *Bayesian generalization loss*,

\[ B_g = -\mathbb{E}_{x \sim Q}\left[\log P(x|D_n)\right]. \]

Likewise, given our training sample \(D_n\), we can define the *Bayesian training loss*,

\[ B_t = -\frac{1}{n} \sum_{i=1}^{n} \log P(x_i|D_n). \]

However, the concept of a "Bayesian training loss" is unnatural to a Bayesian because it uses the data twice: first, to construct the posterior predictive \(P(x|D_n)\), and then a second time, to evaluate the likelihood on \(D_n\). Nevertheless, we will see below that the concept, combined with the so-called Gibbs training loss, is a very useful one.

The question of whether Bayesians overfit is then clearly stated: do we have \(B_t < B_g\), that is, does the Bayesian training loss systematically underestimate the Bayesian generalization loss?

# A Simple Experiment

We consider an elementary experiment of sampling data from a Normal distribution with unknown mean.

In this case, exact Bayesian inference is feasible because the posterior and posterior predictive distributions have simple closed-form solutions, each of which is a Normal distribution.

For varying sample size \(n\) we perform 2,000 replicates of generating data according to the above sampling procedure and evaluate the Bayesian generalization loss and the Bayesian training loss. The following plot shows the average errors over all replicates.

Clearly \(B_t < B_g\), and there is overfitting.
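
The exact settings of the experiment (true mean, prior, noise level) are not stated above, so the following sketch uses assumed ones: observations \(x_i \sim \mathcal{N}(\mu, 1)\) with unknown \(\mu\), prior \(\mu \sim \mathcal{N}(0, 1)\), and true mean \(\mu = 1\). It reproduces the qualitative finding that, averaged over replicates, \(B_t < B_g\):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = 1.0          # assumed true mean of Q = N(mu_true, 1)
n_replicates = 2000

def normal_logpdf(x, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

for n in [5, 10, 20, 50, 100]:
    Bg, Bt = 0.0, 0.0
    for _ in range(n_replicates):
        x = rng.normal(mu_true, 1.0, size=n)
        # Conjugate posterior for mu under the N(0, 1) prior and unit noise variance.
        post_var = 1.0 / (n + 1)
        post_mean = x.sum() / (n + 1)
        # Posterior predictive is N(post_mean, 1 + post_var).
        pred_var = 1.0 + post_var
        # Bayesian training loss: negative mean log predictive on the training data.
        Bt += -normal_logpdf(x, post_mean, pred_var).mean()
        # Bayesian generalization loss: exact cross-entropy of N(mu_true, 1)
        # against the Normal posterior predictive.
        Bg += (0.5 * np.log(2 * np.pi * pred_var)
               + ((mu_true - post_mean) ** 2 + 1.0) / (2 * pred_var))
    print(f"n={n:4d}  B_t={Bt / n_replicates:.4f}  B_g={Bg / n_replicates:.4f}")
```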

What about non-Bayesian estimators, such as MAP estimation and maximum likelihood estimation?

## Maximum A Posteriori (MAP) and Maximum Likelihood (MLE)

Two popular point estimators are the maximum a posteriori (MAP) estimator, defined as

\[ \hat{\theta}_{\textrm{MAP}} = \operatorname{argmax}_{\theta} \; P(\theta) \prod_{i=1}^{n} P(x_i|\theta), \]

and the maximum likelihood estimator (MLE), defined as

\[ \hat{\theta}_{\textrm{MLE}} = \operatorname{argmax}_{\theta} \; \prod_{i=1}^{n} P(x_i|\theta). \]

Each of these estimators yields a plug-in predictive distribution \(P(x|\hat{\theta})\) and hence also has a generalization loss and a training loss. In our experiment the MLE is dominated by the MAP estimator, which in turn is dominated by the Bayes posterior predictive, which is optimal in terms of generalization loss.

The gap between the MLE generalization error (top line, dotted) and the MAP generalization error (black dashed line) is due to the use of the informative prior about \(\mu\). The gap between the Bayesian generalization error (black solid line) and the MAP generalization error (black dashed line) is due to the Bayesian handling of estimation uncertainty. In this simple example the information contained in the prior is more important than the Bayesian treatment of estimation uncertainty.
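
For completeness, here is a sketch of the point-estimate baselines under the same assumed settings as before: the MLE of \(\mu\) is the sample mean, the MAP estimate under the \(\mathcal{N}(0,1)\) prior shrinks it towards zero, and each is plugged into \(P(x|\hat{\mu}) = \mathcal{N}(\hat{\mu}, 1)\) to evaluate its generalization loss.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, n, n_replicates = 1.0, 10, 2000

def plugin_gen_loss(mu_hat):
    # Cross-entropy of the true Q = N(mu_true, 1) against the plug-in
    # predictive N(mu_hat, 1): 0.5*log(2*pi) + ((mu_true - mu_hat)^2 + 1) / 2.
    return 0.5 * np.log(2 * np.pi) + ((mu_true - mu_hat) ** 2 + 1.0) / 2.0

losses = {"MLE": 0.0, "MAP": 0.0}
for _ in range(n_replicates):
    x = rng.normal(mu_true, 1.0, size=n)
    losses["MLE"] += plugin_gen_loss(x.mean())           # maximum likelihood: sample mean
    losses["MAP"] += plugin_gen_loss(x.sum() / (n + 1))  # MAP under the N(0, 1) prior
for name, total in losses.items():
    print(f"{name}: average generalization loss {total / n_replicates:.4f}")
```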

Can we estimate \(B_g\) other than via prediction on held-out data?

# WAIC: Widely Applicable Information Criterion

It turns out that we can estimate \(B_g\) to order \(O(n^{-2})\) from just our training set. This is useful because it provides us with an estimate of our generalization performance, and hence can be used for model selection and hyperparameter optimization.

The *Widely Applicable Information Criterion (WAIC)*, invented by Sumio Watanabe, estimates the Bayesian generalization error as

\[ \textrm{WAIC} = B_t + 2 \, (G_t - B_t), \]

where \(G_t\) is the *Gibbs training loss*, defined as the average loss of individual models drawn from the posterior,

\[ G_t = -\frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\theta \sim P(\theta|D_n)}\left[\log P(x_i|\theta)\right]. \]

Due to Jensen's inequality we always have \(G_t > B_t\), so the right-hand summand in \(\textrm{WAIC}\) is always positive. Importantly, given a training set we can actually evaluate \(\textrm{WAIC}\), whereas we cannot evaluate \(B_g\).
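
As an illustration of evaluating \(\textrm{WAIC}\) from the training set alone, here is a sketch for the conjugate Normal model with the same assumed settings as above. It works from posterior samples, so the same recipe carries over to models without a closed-form posterior:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true, n, n_samples = 1.0, 20, 5000

x = rng.normal(mu_true, 1.0, size=n)

def normal_logpdf(x, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

# Draw samples from the conjugate posterior P(mu | D_n) (prior N(0, 1), unit noise).
post_var = 1.0 / (n + 1)
post_mean = x.sum() / (n + 1)
mu_samples = rng.normal(post_mean, np.sqrt(post_var), size=n_samples)

# log P(x_i | mu_s) for every training point i and posterior sample s.
log_lik = normal_logpdf(x[:, None], mu_samples[None, :], 1.0)  # shape (n, n_samples)

# Bayesian training loss: B_t = -(1/n) sum_i log (1/S) sum_s P(x_i | mu_s).
Bt = -np.mean(np.log(np.mean(np.exp(log_lik), axis=1)))
# Gibbs training loss: G_t = -(1/n) sum_i (1/S) sum_s log P(x_i | mu_s).
Gt = -np.mean(log_lik)

waic = Bt + 2.0 * (Gt - Bt)
print(f"B_t={Bt:.4f}  G_t={Gt:.4f}  WAIC={waic:.4f}")
```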

Watanabe showed that

\[ \mathbb{E}_{D_n}\left[\textrm{WAIC}\right] = \mathbb{E}_{D_n}\left[B_g\right] + O(n^{-2}), \]

where the expectation is taken over training sets \(D_n\) of size \(n\).

Evaluating the previous experiment we can see that \(\textrm{WAIC}\) indeed accurately estimates \(B_g\).

Even better, Watanabe also showed that \(\textrm{WAIC}\) continues to estimate the Bayesian generalization error accurately in singular models and when the model is misspecified.
Here, *singular* means that there is not a bijective map between model parameters and distributions.
*Misspecified* means that no parameter exists which matches the true data-generating distribution.

# Conclusion

Clearly, Bayesians do overfit: the Bayesian training loss underestimates the Bayesian generalization loss, just as it does for any other estimation procedure.

- (Sumio Watanabe, "Algebraic Geometry and Statistical Learning Theory", Cambridge University Press, 2009), a monograph summarizing earlier results in detail. The results are particularly relevant for neural networks (which are singular models) and for Bayesian neural networks.
- For WAIC, see also Section 7.1 in (Sumio Watanabe, "A Widely Applicable Bayesian Information Criterion", JMLR, 2013).
- (Gelman, Hwang, Vehtari, "Understanding predictive information criteria for Bayesian models", Statistics and Computing, 2013) have good things to say about WAIC when comparing multiple information criteria (AIC, DIC, WAIC): "*WAIC is fully Bayesian (using the posterior distribution rather than a point estimate), gives reasonable results in the examples we have considered here, and has a more-or-less explicit connection to cross-validation*".

*Acknowledgements*. I thank Ryota
Tomioka for exciting
discussions and for pointing me to Watanabe's book.