Blang

Testing correctness: overview

There is considerable emphasis in the MCMC and SMC literatures on efficiency (in the computational sense), but much less on correctness. Here we define correctness as follows: an MCMC procedure producing samples \( X_1, X_2, \dots \) is correct if, for any integrable function \( f \), the ergodic averages obey a law of large numbers converging to the expectation of the function under the target posterior distribution \( \pi \), i.e.

\[\begin{align} \frac{1}{N} \sum_{i=1}^N f(X_i) \to \int f(x) \pi(x) \text{d}x \text{ a.s., } N\to\infty. \end{align}\]

For SMC-based algorithms, we use a similar definition, where instead the number of particles is taken to infinity.
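In symbols (our notation for the weighted particle approximation), writing \( X_i \) for the particles and \( W_i \) for their normalized weights, the requirement becomes

\[\begin{align} \sum_{i=1}^N W_i f(X_i) \to \int f(x) \pi(x) \text{d}x \text{ in probability, } N\to\infty, \end{align}\]

where \( N \) now denotes the number of particles.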

Correctness can fail for two main reasons: the mathematical derivations used to construct the MCMC or SMC algorithm may be incorrect, or the software implementation may be buggy. The tests we describe here can detect both kinds of problems.

Correctness has nothing to do with what the literature vaguely calls "convergence diagnostics". Whether an LLN holds does not depend on "burn-in", low autocorrelation, etc.

In full generality, testing if a deterministic sequence converges to a certain limit is not possible. On top of that, the sequence we have here is composed of random variables, and moreover these random variables are relatively expensive to simulate in problems of practical interest. Hence our definition of correctness appears hopelessly difficult to check.

Surprisingly, there exists a set of effective tools to check MCMC and SMC correctness. Some of these tools are in the literature but not emphasized. We compile and improve them in this document, and describe their implementation in Blang.

Analysis of MCMC and SMC correctness is a great example of an area where fairly theoretical concepts can have a large impact on a practical problem. Crucially, as we show here, Monte Carlo theory allows us to derive useful tests that are not based on asymptotics and that have exact, easy-to-compute false positive probabilities. Moreover, in some cases we can even get non-probabilistic tests, i.e. with zero false positive and zero false negative probabilities. In other cases, the tests do not have formal guarantees to find all bugs, but in practice they are useful for identifying common ones.

Testing strategies

We provide a non-standard replacement implementation of bayonet.distributions.Random which can be used to enumerate all the probability traces of an arbitrary discrete random process. In particular, the code of many inference engines manipulates models through interfaces that are agnostic to whether the model is continuous or discrete, so we can achieve code coverage of the inference engines using discrete models. See bayonet.distributions.ExhaustiveDebugRandom.

We use this for example to test the unbiasedness of the normalization constant estimate provided by our SMC implementation.

package blang.validation

import java.util.function.Supplier
import bayonet.distributions.ExhaustiveDebugRandom

class UnbiasnessTest {
  def static double expectedZEstimate(Supplier<Double> logZEstimator, ExhaustiveDebugRandom exhausiveRand) {
    var expectation = 0.0
    var nProgramTraces = 0
    while (exhausiveRand.hasNext) {
      val logZ = logZEstimator.get
      expectation += Math.exp(logZ) * exhausiveRand.lastProbability
      nProgramTraces++
    }
    println("nProgramTraces = " + nProgramTraces)
    return expectation
  }
}

This can be called with a small finite model, e.g. a short HMM, made just large enough to achieve code coverage (Blang and the BlangIDE are compatible with the EclEmma code coverage tool).

The output of the test has the form:

nProgramTraces = 23868
true normalization constant Z: 0.345
expected Z estimate over all traces: 0.34500000000000164

This shows that our implementation of SMC is indeed unbiased, up to floating point round-off.
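To make the mechanics concrete on a much smaller example, here is a hedged sketch (ours, not SDK code): a toy one-sample importance sampler whose normalization constant estimate is unbiased by construction. We assume ExhaustiveDebugRandom has a no-argument constructor and a nextBernoulli method, in addition to the hasNext and lastProbability members used above.

import bayonet.distributions.ExhaustiveDebugRandom;
import blang.validation.UnbiasnessTest;

public class UnbiasnessSketch {
  public static void main(String[] args) {
    ExhaustiveDebugRandom exhaustive = new ExhaustiveDebugRandom();
    // Toy unnormalized model over one coin flip x:
    //   gamma(true) = 0.2, gamma(false) = 0.6, hence the true Z is 0.8.
    // A one-sample importance sampler with a Uniform{true, false} proposal
    // returns Zhat = gamma(x) / 0.5, an unbiased estimator of Z.
    double expected = UnbiasnessTest.expectedZEstimate(
      () -> {
        boolean x = exhaustive.nextBernoulli(0.5); // assumed method, see above
        double gamma = x ? 0.2 : 0.6;
        return Math.log(gamma / 0.5);
      },
      exhaustive);
    System.out.println("expected Z estimate = " + expected); // prints 0.8
  }
}

Enumerating the two traces by hand confirms the output: 0.5 * 0.4 + 0.5 * 1.2 = 0.8.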

This can also be used to check numerically that the transition probabilities of discrete kernels on small state spaces are indeed invariant with respect to the target (facilities to help automate this will be developed as part of a future release).
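The underlying check is elementary linear algebra: with the kernel written as a transition matrix \( T \) on the small state space, verify \( \pi T = \pi \) entry by entry. A minimal sketch of such a check (our illustration, not the future Blang facility):

public class InvarianceCheck {
  // Returns true if pi T = pi up to the given tolerance, where T[i][j] is the
  // probability of the kernel moving from state i to state j.
  static boolean isInvariant(double[] pi, double[][] T, double tolerance) {
    for (int j = 0; j < pi.length; j++) {
      double mass = 0.0;
      for (int i = 0; i < pi.length; i++)
        mass += pi[i] * T[i][j];
      if (Math.abs(mass - pi[j]) > tolerance)
        return false;
    }
    return true;
  }
}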

Finally, when a model is fully discrete and all generate{..} blocks have the property that each realization is produced by at most one execution trace, we can check that the logf blocks and the randomness used in the generate blocks agree, using the arguments:

--engine Exact --engine.checkLawsGenerateAgreement true

Background

The basis of this test is Geweke (2004) (this is the Geweke paper on the correctness of MCMC, not to be confused with Geweke's convergence diagnostic, which, as mentioned above, is unrelated), but we modify it in an important way.

Given a Blang model, as in the Geweke paper, we assume it supports two methods for simulating the random variables in the model: one via forward simulation (since no data is used in this correctness check, it is reasonable to assume all variables can be filled in this fashion; this will be true as long as appropriate generate blocks are provided for the constituent distributions); the other, via application of MCMC transition kernels.

Let \( X \) denote all the variables in the model. Let \( X \sim \pi \) denote the forward simulation process. Let \( X' | X \sim T(\cdot | X) \) denote a combination of kernels including those we want to test for invariance with respect to \( \pi \). Let \( T_i \) denote the individual kernels which are combined into \( T \) via either mixture or composition. Typically, \( T \) is irreducible but the individual \( T_i \)'s are not.
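For concreteness (standard definitions, stated here in our notation): a mixture selects one of the kernels at random at each step, while a composition applies them in sequence,

\[\begin{align} T_{\text{mixture}}(x' | x) = \sum_i \alpha_i T_i(x' | x), \quad \alpha_i \ge 0, \; \sum_i \alpha_i = 1, \qquad T_{\text{composition}} = T_1 T_2 \cdots T_n, \end{align}\]

and both constructions preserve invariance: if each \( T_i \) leaves \( \pi \) invariant, so does \( T \).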

Both our test and Geweke's depend on one or several real-valued test functions \( f \), and on comparing two sets of simulations. However, the two sets are defined differently in the two approaches.

In Geweke's method, the two sets are:

  1. The marginal-conditional simulator: for \( m \in \{1, \dots, M_1\} \), draw \( X^{(m)} \sim \pi \) independently via forward simulation, and set \( F_m = f(X^{(m)}) \).

  2. The successive-conditional simulator: draw \( \bar{X}^{(1)} \sim \pi \) via forward simulation; for \( m \in \{2, \dots, M_2\} \), draw \( \bar{X}^{(m)} | \bar{X}^{(m-1)} \sim T(\cdot | \bar{X}^{(m-1)}) \); set \( G_m = f(\bar{X}^{(m)}) \).

Then an approximate test comparing \( \{F_1, F_2, \dots, F_{M_1}\} \) and \( \{G_1, G_2, \dots, G_{M_2}\} \) is derived based on an asymptotic result.

The method has several limitations:

  1. The comparison is based on an asymptotic result, so the test is approximate for any finite \( M_2 \).

  2. The asymptotics of the successive-conditional chain require \( T \) to be irreducible, and estimating its asymptotic variance involves conditions such as geometric ergodicity.

  3. Since only the combined, irreducible kernel \( T \) can be tested, an error cannot easily be narrowed down to the individual kernel \( T_i \) responsible for it.

We call our alternative the Exact Invariance Test (EIT), and we use it heavily to establish the correctness of Blang's SDK. EIT has the following properties:

  1. For any finite \( K \) (defined below), the two sets of samples being compared are exactly equal in distribution under the null hypothesis that the kernel under test is \( \pi \)-invariant; no asymptotics in the number of MCMC iterations are involved.

  2. The samples in each set are independent, so off-the-shelf iid tests, including exact tests with exactly computable false positive probabilities, apply directly.

  3. Each kernel \( T_i \) can be tested in isolation, even when \( T_i \) is not irreducible on its own.

EIT compares the following two sets of samples, \( \{F_1, F_2, \dots, F_{M_1}\} \) and \( \{H_1, H_2, \dots, H_{M_3}\} \):

  1. Those coming from the marginal-conditional simulator, as described above, \( \{F_1, F_2, \dots, F_{M_1}\} \).

  2. The exact invariant simulator: for \( l \in \{1, \dots, M_3\} \), draw \( \tilde{X}^{(l,0)} \sim \pi \) independently via forward simulation, apply \( K \) iterations of the kernel \( T_i \) under test, i.e. \( \tilde{X}^{(l,k)} | \tilde{X}^{(l,k-1)} \sim T_i(\cdot | \tilde{X}^{(l,k-1)}) \) for \( k \in \{1, \dots, K\} \), and set \( H_l = f(\tilde{X}^{(l,K)}) \).

By construction, for any \( K \ge 1, j \in \{1, \dots, M_1\}, l \in \{1, \dots, M_3\} \), the random variable \( F_j \) is equal in distribution to the random variable \( H_l \) if and only if the kernel \( T_i \) is \( \pi \)-invariant. This means that an exact test can be trivially constructed (for example, if \( f \) takes on a finite number of values, Fisher's exact test can be used). Alternatively, tests with simple-to-analyze asymptotics such as the Kolmogorov–Smirnov test can be used. Critically, since the \( H_l \)'s are independent, the asymptotics here do not depend on irreducibility or geometric ergodicity, and off-the-shelf iid tests can be used directly. The terminology "Exact" in EIT refers to the random variables being exactly equal in distribution under the null, rather than to the exactness of the frequentist procedure used to assess whether the two sets are indeed equal in distribution. Note that we have here a rare instance where a point null hypothesis is indeed a well-grounded approach, even when using very large values for \( M_1, M_3 \).

Here \( K \ge 1 \) is a parameter controlling the power of the test. Exact tests are available for any finite value of \( K \).
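To illustrate the construction (a self-contained toy of ours, not Blang's implementation): below, the target \( \pi \) is uniform on \( \{0, 1, 2\} \) and the kernel is a Metropolis random walk, which is \( \pi \)-invariant by construction, so the two sets of samples should be indistinguishable.

import java.util.Random;

public class EitSketch {
  static final Random rand = new Random(1);

  // Exact draw from pi = Uniform{0, 1, 2} (the "forward simulator").
  static int forwardSample() { return rand.nextInt(3); }

  // One Metropolis step targeting pi: propose +/-1, reject moves off {0, 1, 2}.
  static int kernelStep(int x) {
    int proposal = x + (rand.nextBoolean() ? 1 : -1);
    return (proposal >= 0 && proposal <= 2) ? proposal : x;
  }

  public static void main(String[] args) {
    int M = 100000, K = 3;
    double meanF = 0.0, meanH = 0.0;
    for (int j = 0; j < M; j++)
      meanF += forwardSample() / (double) M;    // F_j, with f the identity
    for (int l = 0; l < M; l++) {
      int x = forwardSample();                  // independent exact draw from pi
      for (int k = 0; k < K; k++)
        x = kernelStep(x);                      // K applications of the kernel
      meanH += x / (double) M;                  // H_l
    }
    // Under the null, {F_j} and {H_l} are iid from the same distribution; a real
    // test would compare them with e.g. Fisher's exact test, not just the means.
    System.out.println("mean F = " + meanF + ", mean H = " + meanH); // both near 1.0
  }
}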

For completeness, we also review below the test of Cook et al. (Cook, Gelman and Rubin, 2006). In our notation: for \( i \in \{1, \dots, M\} \), draw \( X^{(i)} \sim \pi \) via forward simulation, run \( K_2 \) iterations of \( T \) started at \( X^{(i)} \), and let \( Q_i \) denote the empirical quantile of \( f(X^{(i)}) \) among the \( f \) values of the \( K_2 \) MCMC samples.

Then if the code is correct, the distribution of \( Q_i \) converges to the uniform distribution as \( K_2 \to \infty \), provided the chain is irreducible. Hence this method suffers from the same limitation as Geweke's in terms of not being able to narrow down the error to a single \( T_i \), and being approximate for any finite \( K_2 \).

Implementation of EIT in Blang

Prepare the EITs by assembling objects of type Instance, for which the constructor's first argument is a Blang model, and the following arguments are one or more test functions, playing the role of \( f \) in the previous section. See for example the list of instances used to test Blang's SDK.

Once the instances are available, you can create an automatic test suite by adding a file under the src/test directory based on the following example:

package blang

import blang.validation.ExactInvarianceTest
import org.junit.Test
import blang.validation.DeterminismTest
import blang.distributions.Generators
import blang.validation.internals.fixtures.Examples

class TestSDKDistributions {

  @Test
  def void exactInvarianceTest() {
    val oldThreshold = Generators._poissonSwitchToNormalThreshold
    Generators._poissonSwitchToNormalThreshold = Examples.largeLambda - 10
    test(new ExactInvarianceTest)
    Generators._poissonSwitchToNormalThreshold = oldThreshold
  }

  def static void test(ExactInvarianceTest test) {
    setup(test)
    println("Corrected pValue = " + test.correctedPValue)
    test.check()
  }

  def static void setup(ExactInvarianceTest test) {
    test => [
      nPosteriorSamplesPerIndep = 500 // 1000 creates a travis time out
      for (instance : new Examples().all) {
        test.add(instance)
      }
    ]
  }

  @Test
  def void determinismTest() {
    new DeterminismTest => [
      for (instance : new Examples().all) {
        check(instance)
      }
    ]
  }
}

This takes multiple testing correction into account. The example above also shows a related test, DeterminismTest, which ensures reproducibility, i.e. that using the same random seed leads to exactly the same result.
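As a point of reference (we do not assert this is the exact scheme used), the standard Bonferroni correction with \( m \) (model, test function) pairs rescales each p-value as

\[\begin{align} p_i^{\text{corrected}} = \min(1, m \, p_i), \end{align}\]

so that rejecting when \( p_i^{\text{corrected}} \le \alpha \) controls the family-wise false positive rate at level \( \alpha \).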

By virtue of being standard JUnit tests, it is easy to use continuous integration tools such as Travis so that the tests are run on a remote server each time a commit is made.

For individual univariate distributions, the following test checks, using numerical integration (or direct summation over the support in the discrete case), that the normalization constant is one.

package blang;

import org.junit.Assert;
import org.junit.Test;

import blang.core.IntDistribution;
import blang.distributions.BetaNegativeBinomial;
import blang.distributions.Generators;
import blang.distributions.YuleSimon;
import blang.types.StaticUtils;
import blang.validation.NormalizationTest;
import blang.validation.internals.fixtures.Examples;

public class TestSDKNormalizations extends NormalizationTest {

  private Examples examples = new Examples();

  @Test
  public void normal() {
    // check norm from -infty to +infty (by doubling domain of integration)
    checkNormalization(examples.normal.model);
  }

  @Test
  public void beta() {
    // check norm on a closed interval
    checkNormalization(examples.beta.model, Generators.ZERO_PLUS_EPS, Generators.ONE_MINUS_EPS);
  }

  @Test
  public void testExponential() {
    // approximate the (0, infty) interval
    checkNormalization(examples.exp.model, 0.0, 10.0);
  }

  @Test
  public void testGamma() {
    checkNormalization(examples.gamma.model, 0.0, 15.0);
  }

  @Test
  public void testYuleSimon() {
    IntDistribution distribution = YuleSimon.distribution(StaticUtils.fixedReal(3.5));
    double sum = 0.0;
    for (int i = 0; i < 100; i++)
      sum += Math.exp(distribution.logDensity(i));
    Assert.assertEquals(1.0, sum, 0.01);
  }

  @Test
  public void testBNB() {
    IntDistribution distribution = BetaNegativeBinomial.distribution(
        StaticUtils.fixedReal(3.5), StaticUtils.fixedReal(1.2), StaticUtils.fixedReal(3.0));
    double sum = 0.0;
    for (int i = 0; i < 1000; i++)
      sum += Math.exp(distribution.logDensity(i));
    Assert.assertEquals(1.0, sum, 0.01);
  }
}

Having correct normalization is important when we put priors on the parameters and sample their values. In such cases, the normalization constant plays an often overlooked role in the correctness of the transition kernels.
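To see why (our illustrative example): when the standard deviation \( \sigma \) is itself sampled, the conditional density of \( \sigma \) given an observation \( x \sim \text{Normal}(0, \sigma^2) \) is

\[\begin{align} \pi(\sigma | x) \propto p(\sigma) \, \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{x^2}{2\sigma^2} \right), \end{align}\]

so the \( \sigma \)-dependent factor \( 1/\sqrt{2\pi\sigma^2} \), which could be safely dropped if \( \sigma \) were fixed, must be included for kernels updating \( \sigma \) to target the correct distribution.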