In this tutorial, we will look at how model saturation can be used to compute Bayes factors.
We start with a model assuming no preference.
Now look at the standard output. In the last round, take note of the line "logNormalizationContantProgress [ round=11 value=___ ]".
This is an estimate of the natural logarithm of the probability of the data, also known as the evidence or marginal likelihood. The estimate is obtained from the stepping stone method combined with parallel tempering, two methods that we will cover in detail in a few weeks.
Take note of the estimate of the marginal likelihood for the second model.
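Given the two log evidence estimates, the Bayes factor is just the exponential of their difference. A minimal sketch (the numeric values below are placeholders; substitute the values you recorded from your two runs):

```python
import math

# Placeholder log marginal likelihood estimates read off the
# "logNormalizationContantProgress" lines of the two runs.
log_Z_simple = -12.5    # 1-parameter model (substitute your value)
log_Z_complex = -14.8   # 2-parameter model (substitute your value)

# Bayes factor in favour of the simpler model: the ratio of evidences,
# computed on the log scale for numerical stability.
log_BF = log_Z_simple - log_Z_complex
BF = math.exp(log_BF)
print(f"log BF = {log_BF:.2f}, BF = {BF:.2f}")
```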
Consider now an augmented model based on model saturation:
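To make the saturation idea concrete, here is a self-contained sketch in Python under a hypothetical setup (an assumption for illustration; the model in this tutorial may differ): two counts modeled either with one shared Poisson rate (model 0) or with two separate rates (model 1), Exponential(1) priors on all rates, and a uniform prior on a model indicator m. Saturation means the MCMC state always contains all three rates; the ones not used by the current model carry no likelihood term, so they are simply refreshed from their prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration (not the tutorial's actual model):
# model 0: x1, x2 ~ Poisson(lam),               lam ~ Exponential(1)
# model 1: x1 ~ Poisson(l1), x2 ~ Poisson(l2),  l1, l2 ~ Exponential(1)
x1, x2 = 2, 3

def loglik0(lam):    # shared-rate log likelihood, log(x!) terms dropped
    return (x1 + x2) * np.log(lam) - 2.0 * lam

def loglik1(l1, l2): # separate-rate log likelihood, log(x!) terms dropped
    return x1 * np.log(l1) - l1 + x2 * np.log(l2) - l2

n_iter, m, hits = 100_000, 0, 0
for _ in range(n_iter):
    # Saturation: all three rates are always part of the state.  Rates used
    # by the current model get conjugate Gamma updates; the unused rates
    # have no likelihood term, so they are refreshed from the prior.
    if m == 0:
        lam = rng.gamma(x1 + x2 + 1, 1 / 3)            # Gamma(shape, scale)
        l1, l2 = rng.exponential(), rng.exponential()  # prior refresh
    else:
        lam = rng.exponential()                        # prior refresh
        l1, l2 = rng.gamma(x1 + 1, 1 / 2), rng.gamma(x2 + 1, 1 / 2)
    # Gibbs update of m: with a uniform model prior, the parameter priors
    # and the log(x!) constants cancel, leaving the likelihood ratio.
    log_odds = loglik1(l1, l2) - loglik0(lam)
    m = int(rng.random() < 1 / (1 + np.exp(-np.clip(log_odds, -500, 500))))
    hits += m

p1 = hits / n_iter
print(f"P(m = 1 | x) ~ {p1:.3f};  Bayes factor ~ {p1 / (1 - p1):.3f}")
```

Under the uniform prior on m, the posterior odds P(m = 1 | x) / P(m = 0 | x) equal the Bayes factor, so a single saturated run replaces the two separate evidence estimations.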
Compare the Bayes factor approximation based on model saturation with the one obtained from the two separate stepping stone estimates.
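The comparison amounts to checking that the posterior odds of the model indicator from the saturated run roughly match the ratio of the two stepping stone evidence estimates. A sketch with placeholder numbers (substitute your own run output):

```python
import math

# All numbers below are placeholders; substitute your own run output.
log_Z0, log_Z1 = -12.5, -14.8  # stepping stone log evidence, models 0 and 1
p_m1 = 0.085                   # fraction of saturated-run samples with m = 1

bf_stepping_stone = math.exp(log_Z1 - log_Z0)  # evidence ratio, model 1 vs 0
bf_saturation = p_m1 / (1 - p_m1)              # posterior odds, uniform model prior
print(bf_stepping_stone, bf_saturation)        # the two should roughly agree
```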
Suppose now the dataset is $$x_1 = 12, x_2 = 13$$. Clearly, a reasonable model selection procedure should then give more weight to the simpler model, i.e. it should penalize the additional complexity of the 2-parameter model. Do you observe this?
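As a sanity check, the penalization of the extra parameter can be seen analytically in a hypothetical conjugate setup (again an assumption for illustration, not necessarily this tutorial's model): Poisson counts with Exponential(1) rate priors, where both marginal likelihoods are available in closed form.

```python
from math import lgamma, log

def log_marginal_shared(x1, x2):
    # Integral of Poisson(x1 | lam) Poisson(x2 | lam) Exp(lam; 1) d lam
    # = Gamma(x1 + x2 + 1) / (x1! x2! 3^(x1 + x2 + 1))
    n = x1 + x2
    return lgamma(n + 1) - lgamma(x1 + 1) - lgamma(x2 + 1) - (n + 1) * log(3)

def log_marginal_separate(x1, x2):
    # Each count integrates to Gamma(x + 1) / (x! 2^(x + 1)) = 2^-(x + 1)
    return -(x1 + 1) * log(2) - (x2 + 1) * log(2)

x1, x2 = 12, 13
log_bf = log_marginal_shared(x1, x2) - log_marginal_separate(x1, x2)
print(f"log BF (shared vs separate) = {log_bf:.2f}")
```

Since the two counts are nearly equal, the log Bayes factor comes out positive: the evidence favours the 1-parameter model, exactly the complexity penalty described above.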