In a previous post I set up the problem of estimating parameters for a state-space model (SSM). That post is required reading, also because it establishes some notation. My final goal is to show how to construct exact Bayesian inference for the vector of parameters $\theta$. To achieve this task, I need to introduce special Monte Carlo methods for multi-dimensional integration. This post quickly reviews some notions of Monte Carlo integration, importance sampling and sequential importance sampling for SSM.
We have already learned that SSMs have an intractable likelihood. This means that the analytic expression of the likelihood function for the vector of parameters $\theta$ is not known. We can also say that a likelihood function is intractable when it is difficult to approximate, though this notion is admittedly vague. What is considered "difficult" is relative: let's say that the integrals involved in said likelihood cannot be solved using standard numerical methods such as quadrature.
As a reminder, for given data $y_{1:T} = (y_1, \dots, y_T)$ we want to approximate the likelihood function
$$p(y_{1:T}\mid\theta),$$
and I have shown that it is possible to write
$$p(y_{1:T}\mid\theta) = \int p(y_{1:T}, x_{0:T}\mid\theta)\,dx_{0:T} = \int p(y_{1:T}\mid x_{0:T},\theta)\,p(x_{0:T}\mid\theta)\,dx_{0:T},$$
where $x_{0:T} = (x_0, x_1, \dots, x_T)$ is the path of the latent (unobserved) state. So the question is how to find an approximation for this $(T+1)$-dimensional integral. We can approach the problem from different angles, all interconnected.
Standard Monte Carlo integration
We can write $p(y_{1:T}\mid\theta)$ as
$$p(y_{1:T}\mid\theta) = \int p(y_{1:T}\mid x_{0:T},\theta)\,p(x_{0:T}\mid\theta)\,dx_{0:T}.$$
That is, the integration problem can be interpreted as taking the expectation of the conditional density $p(y_{1:T}\mid x_{0:T},\theta)$ with respect to the law $p(x_{0:T}\mid\theta)$. This means writing
$$p(y_{1:T}\mid\theta) = \mathbb{E}_{x_{0:T}\mid\theta}\bigl[p(y_{1:T}\mid x_{0:T},\theta)\bigr].$$
The interpretation of an integral as a probabilistic expectation is at the core of Monte Carlo methods.
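As a side illustration of the "integral as expectation" idea, here is a minimal Python sketch on a generic expectation; the integrand and the sampling density below are my own toy choices, not taken from the previous post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: approximate E[g(X)] = ∫ g(x) φ(x) dx with X ~ N(0,1) and g(x) = exp(-x^2).
# The exact value is 1/sqrt(3) ≈ 0.5774, which lets us check the Monte Carlo estimate.
N = 100_000
x = rng.standard_normal(N)          # independent draws from the law of X
estimate = np.mean(np.exp(-x**2))   # empirical mean of g(X)
print(estimate, 1 / np.sqrt(3))
```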
It is natural to approximate an expectation with an empirical mean, as follows: using a computer program, generate $N$ independent draws from $p(x_{0:T}\mid\theta)$, then invoke the law of large numbers.
- Produce $N$ independent draws $x_{0:T}^i \sim p(x_{0:T}\mid\theta)$, $i = 1, \dots, N$.
- For each $x_{0:T}^i$ compute $p(y_{1:T}\mid x_{0:T}^i,\theta)$. Notice the conditioning on the sampled $x_{0:T}^i$.
- By the law of large numbers, we have $\frac{1}{N}\sum_{i=1}^N p(y_{1:T}\mid x_{0:T}^i,\theta) \rightarrow p(y_{1:T}\mid\theta)$ as $N \rightarrow \infty$, and the approximation improves for increasing $N$. In fact we can write $\hat{p}(y_{1:T}\mid\theta) := \frac{1}{N}\sum_{i=1}^N p(y_{1:T}\mid x_{0:T}^i,\theta) = p(y_{1:T}\mid\theta) + O(N^{-1/2})$.
The error of the approximation is $O(N^{-1/2})$ regardless of the dimension of $x_{0:T}$, which is remarkable. Notice that the convergence stated in the third item implies that the Monte Carlo estimate is consistent. Another important property, which will be discussed in a future post, is that the estimate is also unbiased. A minimal numerical sketch of the three steps above follows.
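To make the three steps concrete, here is a minimal sketch for a toy linear Gaussian SSM; the model, its parameter values and all names below are my own illustrative assumptions, not something defined in the previous post. It draws state paths from the prior $p(x_{0:T}\mid\theta)$ and averages the conditional density of the data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical toy SSM: x_t = a*x_{t-1} + sigma_x*eps_t,  y_t = x_t + sigma_y*eta_t,  x_0 = 0
a, sigma_x, sigma_y = 0.8, 0.5, 0.3
T, N = 10, 5000

# synthetic data y_{1:T} (in practice the data are given)
x_true = np.zeros(T + 1)
y = np.zeros(T)
for t in range(1, T + 1):
    x_true[t] = a * x_true[t - 1] + sigma_x * rng.standard_normal()
    y[t - 1] = x_true[t] + sigma_y * rng.standard_normal()

# Step 1: N independent paths x_{0:T}^i from the prior p(x_{0:T} | theta)
x = np.zeros((N, T + 1))
for t in range(1, T + 1):
    x[:, t] = a * x[:, t - 1] + sigma_x * rng.standard_normal(N)

# Step 2: evaluate p(y_{1:T} | x_{0:T}^i, theta) for each path (log scale for stability)
logp = norm.logpdf(y[None, :], loc=x[:, 1:], scale=sigma_y).sum(axis=1)

# Step 3: law of large numbers -> average the N evaluations
# (for longer series most terms become negligibly small, which is exactly the
#  "good particles" issue discussed next)
lik_hat = np.mean(np.exp(logp))
print("naive Monte Carlo likelihood estimate:", lik_hat)
```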
The issue here is how to generate "good" draws (hereafter particles) $x_{0:T}^i$. Here "good" means that we want particles such that the values of $p(y_{1:T}\mid x_{0:T}^i,\theta)$ are not negligible. Since these particles are the points where the integrand function $p(y_{1:T}\mid x_{0:T},\theta)$ gets evaluated, we want those particles that are "important" for the evaluation of the integral, and we wish most of the $x_{0:T}^i$ to end up in a region of high probability for $p(x_{0:T}\mid y_{1:T},\theta)$.
It turns out that for SSMs, sequential Monte Carlo (SMC, or "particle filters") is the winning strategy.
I will not give a thorough introduction to SMC methods. I will only consider a few notions useful to solve our parameter inference problem. I first consider importance sampling as a useful introduction to SMC.
Importance Sampling
For ease of reading I remove the dependence of quantities on $\theta$. This is a static parameter (i.e. it does not vary with time), and we can consider the dependence of all probability densities on $\theta$ as implicit.
Consider the following:
$$\begin{aligned}
p(y_{1:T}) &= \int p(y_{1:T}\mid x_{0:T})\,p(x_{0:T})\,dx_{0:T}\\
&= \int p(y_{1:T}\mid x_{0:T})\,p(x_{0:T})\,\frac{q(x_{0:T})}{q(x_{0:T})}\,dx_{0:T}\\
&= \mathbb{E}_{q}\!\left[\frac{p(y_{1:T}\mid x_{0:T})\,p(x_{0:T})}{q(x_{0:T})}\right]\\
&= \mathbb{E}_{q}\!\left[\frac{\prod_{t=1}^T p(y_t\mid x_t)\cdot p(x_0)\prod_{t=1}^T p(x_t\mid x_{t-1})}{q(x_{0:T})}\right],
\end{aligned}$$
where $q(x_{0:T})$ is an arbitrary density function called "importance density" that is non-zero whenever $p(x_{0:T})$ is non-zero (the support of $q$ needs to contain the support of $p$). The purpose of introducing the importance density is to use it as a sampler of random numbers, assuming we don't know how to sample from the $p(x_{0:T})$ appearing in the penultimate equality. Therefore we should choose a $q$ that is easy to simulate from.
Notice the two crucial simplifications occurring in the fourth equality: we have
$$p(x_{0:T}) = p(x_0)\prod_{t=1}^T p(x_t\mid x_{t-1}) \qquad \text{and} \qquad p(y_{1:T}\mid x_{0:T}) = \prod_{t=1}^T p(y_t\mid x_t).$$
The first result follows from the Markov property defined in the previous post, and the second one states the independence of $y_t$ from the "history of $x$ prior to time $t$" when conditioning on $x_t$ (see the graph in the previous post). Conditional independence was also stated in the previous post.
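In code, these factorizations are what make the joint densities of an SSM computable term by term. As a small illustration (the AR(1)-plus-noise model and its parameter values are my own assumptions, used only for demonstration), the logs of $p(x_{0:T})$ and of $p(y_{1:T}\mid x_{0:T})$ can be accumulated over time:

```python
import numpy as np
from scipy.stats import norm

# assumed toy model: x_t = a*x_{t-1} + sigma_x*eps_t,  y_t = x_t + sigma_y*eta_t,  x_0 = 0
a, sigma_x, sigma_y = 0.8, 0.5, 0.3

def log_p_x(x):
    """log p(x_{0:T}) = sum_t log p(x_t | x_{t-1})  (Markov property; x_0 fixed at 0 here)."""
    return norm.logpdf(x[1:], loc=a * x[:-1], scale=sigma_x).sum()

def log_p_y_given_x(y, x):
    """log p(y_{1:T} | x_{0:T}) = sum_t log p(y_t | x_t)  (conditional independence)."""
    return norm.logpdf(y, loc=x[1:], scale=sigma_y).sum()
```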
In conclusion, the importance sampling approach approximates the integral using the following Monte Carlo procedure (a code sketch follows the list):
- Simulate $N$ samples $x_{0:T}^i \sim q(x_{0:T})$ independently, $i = 1, \dots, N$.
- Construct the "importance weights" $w^i = \frac{p(y_{1:T}\mid x_{0:T}^i)\,p(x_{0:T}^i)}{q(x_{0:T}^i)}$.
- Approximate the likelihood with the average $\hat{p}(y_{1:T}) = \frac{1}{N}\sum_{i=1}^N w^i$.
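Below is a minimal sketch of this procedure, again for a hypothetical toy linear Gaussian model with illustrative parameter values of my own choosing; the importance density $q$ is taken to be the path law of a different AR(1) process, chosen purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical toy SSM: x_t = a*x_{t-1} + sigma_x*eps_t,  y_t = x_t + sigma_y*eta_t,  x_0 = 0
a, sigma_x, sigma_y = 0.8, 0.5, 0.3
T, N = 10, 5000

# synthetic data y_{1:T} (in practice the data are given)
x_true = np.zeros(T + 1)
y = np.zeros(T)
for t in range(1, T + 1):
    x_true[t] = a * x_true[t - 1] + sigma_x * rng.standard_normal()
    y[t - 1] = x_true[t] + sigma_y * rng.standard_normal()

# importance density q: paths from a different AR(1), an arbitrary illustrative choice
a_q, sigma_q = 0.7, 0.8

# step 1: simulate N paths from q, independently
xq = np.zeros((N, T + 1))
for t in range(1, T + 1):
    xq[:, t] = a_q * xq[:, t - 1] + sigma_q * rng.standard_normal(N)

# step 2: importance weights w^i = p(y|x^i) p(x^i) / q(x^i), computed on the log scale
log_p_y_given_x = norm.logpdf(y[None, :], loc=xq[:, 1:], scale=sigma_y).sum(axis=1)
log_p_x = norm.logpdf(xq[:, 1:], loc=a * xq[:, :-1], scale=sigma_x).sum(axis=1)
log_q_x = norm.logpdf(xq[:, 1:], loc=a_q * xq[:, :-1], scale=sigma_q).sum(axis=1)
log_w = log_p_y_given_x + log_p_x - log_q_x

# step 3: average the weights to approximate the likelihood p(y_{1:T} | theta)
lik_hat = np.mean(np.exp(log_w))
print("importance sampling likelihood estimate:", lik_hat)
```

Note that if $q$ is chosen equal to the prior law of the paths, the weights reduce to $p(y_{1:T}\mid x_{0:T}^i)$ and we recover the naive Monte Carlo scheme above.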
Notice that here we wish to simulate entire sequences $x_{0:T}^i = (x_0^i, x_1^i, \dots, x_T^i)$, with $i = 1, \dots, N$. Clearly, generating at each time $t$ a "cloud of particles" spanning the entire interval $[0, T]$ is not really computationally appealing, and it is not even clear how to construct the importance density. We need some strategy enabling a sort of sequential mechanism. Moreover, even if we are able to simulate the particles, are we able to evaluate all the densities appearing in the weights $w^i$? For some complex models we might not know the analytic expression of the transition density $p(x_t\mid x_{t-1})$, and hence of $p(x_{0:T})$. This is addressed in the next post.
Sequential importance sampling
When the importance density is chosen intelligently, an important property is the one that allows a sequential update of the importance weights. That is, for an appropriate $q$ we can write
$$w_t \propto w_{t-1}\,\frac{p(y_t\mid x_t)\,p(x_t\mid x_{t-1})}{q(x_t\mid x_{0:t-1}, y_{1:t})},$$
and this sequential update is a key result to make the computation of the $(T+1)$-dimensional integral bearable. Here I am going to show how to obtain the weights update above.
First of all, consider that it is up to the analyst to design a density $q$ that is appropriate for the model at hand (easy to evaluate, easy to sample from, generating "important" particles). For example, let's write an importance density as the product of two densities $q(x_t\mid x_{0:t-1}, y_{1:t})$ and $q(x_{0:t-1}\mid y_{1:t-1})$:
$$q(x_{0:t}\mid y_{1:t}) = q(x_t\mid x_{0:t-1}, y_{1:t})\,q(x_{0:t-1}\mid y_{1:t-1}).$$
Notice that while $q(x_t\mid x_{0:t-1}, y_{1:t})$ depends on measurements up to time $t$, $q(x_{0:t-1}\mid y_{1:t-1})$ instead depends on measurements up to time $t-1$. This is because we have freedom in designing $q$, which is not subject to the constraints imposed on the state space model (Markovianity, conditional independence). For example, $q(x_t\mid x_{0:t-1}, y_{1:t})$ does not have to reduce to $q(x_t\mid x_{t-1}, y_t)$, though this choice is allowed. For simplicity, in the following I remove the subscripts and just write $q(\cdot)$ for both factors, letting the arguments make the distinction.
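To fix ideas, here are two hypothetical choices for the factor $q(x_t\mid x_{0:t-1}, y_{1:t})$, written as small Python functions for a scalar Gaussian toy model; both the model and the observation-informed proposal below are my own illustrative assumptions. The first choice ignores the current observation and just propagates the state, the second also looks at $y_t$.

```python
import numpy as np

rng = np.random.default_rng(3)
a, sigma_x, sigma_y = 0.8, 0.5, 0.3   # assumed toy-model parameters

def q_prior(x_prev):
    """Proposal that ignores y_t: x_t ~ N(a*x_prev, sigma_x^2)."""
    return a * x_prev + sigma_x * rng.standard_normal(x_prev.shape)

def q_guided(x_prev, y_t):
    """Proposal that also uses y_t: for this linear Gaussian toy model the
    conditional density of x_t given (x_prev, y_t) is available in closed form."""
    prec = 1.0 / sigma_x**2 + 1.0 / sigma_y**2
    mean = (a * x_prev / sigma_x**2 + y_t / sigma_y**2) / prec
    return mean + np.sqrt(1.0 / prec) * rng.standard_normal(x_prev.shape)

# example usage: propagate a cloud of 5 particles given the observation y_t = 0.4
x_prev = np.zeros(5)
print(q_prior(x_prev), q_guided(x_prev, 0.4))
```

Both examples happen to condition only on $x_{t-1}$ (and possibly $y_t$), which, as discussed above, is an allowed but not mandatory simplification.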
I use the decomposition of the importance density to show the sequential update of the importance weights $w_t^i$, though for ease of writing I remove the superscript $i$. We have:
$$w_t \propto \frac{p(x_{0:t}\mid y_{1:t})}{q(x_{0:t}\mid y_{1:t})} = \frac{p(x_{0:t}\mid y_{1:t})}{q(x_t\mid x_{0:t-1}, y_{1:t})\,q(x_{0:t-1}\mid y_{1:t-1})}.$$
Now, note that we can use the Bayes theorem to write the following:
$$p(x_{0:t}\mid y_{1:t}) = \frac{p(y_t\mid x_{0:t}, y_{1:t-1})\,p(x_t\mid x_{0:t-1}, y_{1:t-1})\,p(x_{0:t-1}\mid y_{1:t-1})}{p(y_t\mid y_{1:t-1})} = \frac{p(y_t\mid x_t)\,p(x_t\mid x_{t-1})\,p(x_{0:t-1}\mid y_{1:t-1})}{p(y_t\mid y_{1:t-1})},$$
and in the derivation I have used both the conditional independence of the observations and the Markovianity of the latent state (see the previous post).
We can use this derivation to write $w_t$ in terms of $w_{t-1}$, so we can express the weights as
$$w_t \propto \frac{p(y_t\mid x_t)\,p(x_t\mid x_{t-1})\,p(x_{0:t-1}\mid y_{1:t-1})}{q(x_t\mid x_{0:t-1}, y_{1:t})\,q(x_{0:t-1}\mid y_{1:t-1})} = w_{t-1}\,\frac{p(y_t\mid x_t)\,p(x_t\mid x_{t-1})}{q(x_t\mid x_{0:t-1}, y_{1:t})},$$
which is the sequential update of the importance weights we wanted to prove. The sequential update is convenient: when the time index advances in the simulation we do not need to produce particles always starting at time $0$; instead we can perform computations online, by only retaining the weights computed at time $t-1$.
Notice I used the proportionality symbol as we do not need to carry the constant term $p(y_t\mid y_{1:t-1})$ resulting from the denominator of the Bayes theorem. This term does not depend on the particles (it is the same for every $x_{0:t}^i$), hence it cancels out once the weights are normalized, and neglecting it is not going to be a problem for parameter optimization (maximum likelihood approach) nor for Bayesian sampling from the posterior distribution of $\theta$.
Once the particles are simulated, we have the following approximations (notation-wise, I now recover the parameter $\theta$ I previously removed). Denoting with $\tilde{w}_t^i = w_t^i / \sum_{j=1}^N w_t^j$ the normalized weights, and with $w_t^i$ the unnormalized weights obtained from the sequential update, we have
$$p(x_{0:t}\mid y_{1:t},\theta) \approx \sum_{i=1}^N \tilde{w}_t^i\,\delta_{x_{0:t}^i}(x_{0:t}), \qquad p(y_{1:T}\mid\theta) \approx \frac{1}{N}\sum_{i=1}^N w_T^i,$$
where $\delta$ denotes the Dirac measure.
Adding $\theta$ back in the notation is important not to create confusion between the likelihood function $p(y_{1:T}\mid\theta)$ and the "evidence" $p(y_{1:T})$, where the latter is really independent of $\theta$ (it's the denominator of the Bayes theorem).
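Putting the pieces together, here is a minimal sequential importance sampling sketch for the same hypothetical toy model used in the earlier snippets; the model, the parameter values and the choice $q(x_t\mid x_{t-1}, y_t) = p(x_t\mid x_{t-1},\theta)$ are my own illustrative assumptions. The weights are updated online at each time step instead of being recomputed from whole paths.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Hypothetical toy SSM: x_t = a*x_{t-1} + sigma_x*eps_t,  y_t = x_t + sigma_y*eta_t,  x_0 = 0
a, sigma_x, sigma_y = 0.8, 0.5, 0.3
T, N = 25, 5000

# synthetic data y_{1:T} (in practice the data are given)
x_true = np.zeros(T + 1)
y = np.zeros(T)
for t in range(1, T + 1):
    x_true[t] = a * x_true[t - 1] + sigma_x * rng.standard_normal()
    y[t - 1] = x_true[t] + sigma_y * rng.standard_normal()

# sequential importance sampling with q(x_t | x_{t-1}, y_t) = p(x_t | x_{t-1}, theta),
# so the incremental weight factor reduces to p(y_t | x_t); everything on the log scale
x = np.zeros(N)                    # particles at the current time
log_w = np.zeros(N)                # log of the unnormalized weights w_t^i
filtering_means = np.zeros(T)      # weighted particle approximations of E[x_t | y_{1:t}]

for t in range(1, T + 1):
    x = a * x + sigma_x * rng.standard_normal(N)             # propagate particles from q
    log_w += norm.logpdf(y[t - 1], loc=x, scale=sigma_y)      # sequential weight update
    w_norm = np.exp(log_w - np.max(log_w))                    # normalized weights
    w_norm /= np.sum(w_norm)
    filtering_means[t - 1] = np.sum(w_norm * x)

# likelihood approximation: average of the unnormalized weights at the final time.
# For long series the weights degenerate (a few particles carry nearly all the weight);
# this numerical issue is discussed in the next post.
lik_hat = np.mean(np.exp(log_w))
print("SIS likelihood estimate:", lik_hat)
```

With this particular choice of $q$ the incremental factor $p(y_t\mid x_t)\,p(x_t\mid x_{t-1})/q(\cdot)$ reduces to $p(y_t\mid x_t)$; other choices simply change which densities enter the update.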
In conclusion, I have outlined some approaches to approximate the likelihood function, by assuming the ability to sample from some importance density $q$. However, I have skipped discussing how to construct such a density (indeed, this is a major and problem-dependent issue); a simple possibility is covered in the next post.
It is fair to say that, even though we managed to derive the relations above successfully, in practice the computation of the required quantities does not always result in a good likelihood approximation. This is investigated in the next post, together with strategies to overcome numerical issues.
Summary
I have started a quick excursus into Monte Carlo methods for the approximation of the likelihood function of state space models (SSM). I covered a naïve Monte Carlo approach, then importance sampling and finally sequential importance sampling. Each of these topics has been investigated at great length in the literature (there would be so much more to consider, for example quasi-Monte Carlo). However, my focus is to give an idea of the possibilities, while quickly moving towards the simplest sequential Monte Carlo strategy, which is based on sequential importance sampling. Once sequential Monte Carlo is introduced, I will move to Bayesian parameter estimation for SSM.
Further reading
Possibilities for self study are endless. Here are a couple of excellent and accessible resources:
- Chapter 7 in Simo Särkkä (2013) Bayesian Filtering and Smoothing, Cambridge University Press. Notice a PDF version is freely available at the author’s webpage. Companion software is available at the publisher’s page (see the Resources tab).
- Cappé, O., Godsill, S.J. and Moulines, E., 2007. An overview of existing methods and recent advances in sequential Monte Carlo. Proceedings of the IEEE, 95(5), pp.899-924.
If you have access to the titles below it’s a plus:
- Liu (2004). Monte Carlo Strategies in Scientific Computing, Springer.
- Robert, Casella (2004). Monte Carlo Statistical Methods, Springer.