Abstract
The problem of filtering of unobservable components x(t) of a multidimensional continuous diffusion Markov process z(t) = (x(t), y(t)), given observations of the (multidimensional) process y(t) taken at discrete consecutive times with small time steps, is analytically investigated. On the basis of that investigation, new algorithms for the simulation of the unobservable components, x(t), and new algorithms of nonlinear filtering with the use of sequential Monte Carlo methods, or particle filters, are developed and suggested. An analytical investigation of the observed quadratic variations is also developed. New closed-form analytical formulae are obtained, which characterize the dispersions of deviations of the observed quadratic variations and the accuracy of some estimates for x(t). As an illustrative example, the estimation of volatility (in problems of financial mathematics) is considered. The obtained new algorithms extend the range of applications of sequential Monte Carlo methods, or particle filters, beyond the hidden Markov models and improve their performance.
Public Interest Statement
The problems of filtering of an unobservable process of interest, x(t), or estimation of the value of some function f(x(t)) (or functional), based on observations of another random process y(t) related to x(t), arise in many areas: signal detection and filtering in radiophysical devices; design of control and intelligent systems; financial mathematics; physical chemistry. In the last two decades, a great deal of work has been devoted to the development and investigation of Monte Carlo methods for filtering, or particle filtering, for hidden Markov models, where x(t) is a Markov process in itself. In the present paper, new algorithms for the solution of nonlinear filtering problems (with the use of Monte Carlo calculations) are derived and obtained in explicit and closed analytical form for the general case where (x(t), y(t)) represents a multidimensional continuous diffusion Markov process. New, effective methods for the solution of difficult nonlinear filtering problems are obtained.
1. Introduction
In the last two decades, a great deal of work has been devoted to the development and investigation of particle filters, or sequential Monte Carlo algorithms, for filtering of an unobservable process x(t) given observations of the other process y(t), taken at discrete times tk with small time steps Δtk (see, e.g. the survey (Crisan & Doucet, 2002), the collection (Doucet, Freitas, & Gordon, 2001), the works (Carvalho, Del Moral, Monin, & Salut, 1997; Del Moral, 1998; Doucet, Godsill, & Andrieu, 2000), the recent complete survey (Doucet & Johansen, 2011), the survey (Del Moral & Doucet, 2014), and references therein). The observations y(tk) are obtained consecutively, and the estimate for x(tn) should be updated at each time tn. It is assumed that the whole process z(t) = (x(t), y(t)) is a Markov process.
In order to solve the problem of filtering of x(tn), given the sequence of observations, with the use of Monte Carlo methods, samples of the random sequences xi(t0), …, xi(tn) (from the conditional distributions of x(tk) when the previous sample point and the observations are given) should be simulated numerically for i = 1, …, N.
In the general case, first of all, the following problem arises: how to obtain an explicit and compact analytical expression (exactly or approximately) for the conditional probability density in order to simulate samples of x(tk) when the previous sample point and the observations are given.
Suppose that such sample sequences xi(t0), …, xi(tn) are being simulated for i = 1, …, N. The joint probability density Pi(n) for the given observations y(t0), …, y(tn) and the sequence xi(t0), …, xi(tn) can be computed:
(1)
where the factors under the product sign represent the transition probability densities of the Markov process z(t) = (x(t), y(t)), and P0 is the probability density for the initial joint distribution of the initial value x(t0) and the initial observation y(t0).
In a large number of works devoted to particle filters, various resampling algorithms were introduced in order to obtain the most a posteriori probable samples. For the problem of searching for the maximum a posteriori probable sample, the following resampling algorithm can be suggested: introduce the normalized values Wi(n) = Pi(n) / Σj Pj(n), for i = 1, …, N. Then Wi(n) can be considered as a weight of the sample sequence xi(t0), …, xi(tn) (as well as of the sample point xi(tn)), which characterizes the a posteriori probability (or the importance) of that sample sequence in comparison with the other samples, given the observations y(t0), …, y(tn). Due to the Markov property of the process z(t), the recursive formulae for Pi(n) and Wi(n) can be written in general form; we have
(2)
Then it is possible to calculate recursively the new values Pi(n + 1) and Wi(n + 1) when the new measurement y(tn+1) is obtained and the new sample point xi(tn+1) is augmented to the sequence xi(t0), …, xi(tn). It is easy to see that it is not necessary to keep in memory the whole sequence, but only the last point xi(tn). Then the samples with small weights Wi(n) can be deleted, while the "more important" samples should be continued with a few "offsprings", so that the number of all sample sequences under consideration remains equal to N (exactly or approximately). The point xj(tn) and the sequence xj(t0), …, xj(tn) that have the maximum weight Wj(n) correspond to the sample point and the sample sequence that are maximally a posteriori probable, given the observations, among all the considered sample points and sample sequences. Then the value of xj(tn) can be taken as the sought estimate of x(tn). On the basis of the Laws of Large Numbers, it can be proved that this Monte Carlo estimate converges to the maximum a posteriori probability estimate for x(tn) as N increases (see also Miguez, Crisan, & Djuric, 2013).
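As a small illustration of the weight recursion (2) and the maximum-weight selection described above, the following sketch (in Python) tracks log Pi(n) recursively and picks the maximum a posteriori probable particle; the array of per-step log-density terms is a hypothetical stand-in for the transition density factors in (1).

```python
import numpy as np

def map_particle_estimate(log_p_increments, n_particles):
    """Toy sketch of the recursion (2): accumulate log P_i(n) for each
    sample sequence and return the index of the maximum-weight particle.

    `log_p_increments` has shape (n_steps, n_particles): the log of the
    conditional density factor contributed by each particle at each step
    (a hypothetical stand-in for the transition densities in (1))."""
    log_P = np.zeros(n_particles)        # log P_i(0): equal initial mass
    for step_terms in log_p_increments:
        log_P += step_terms              # recursive update, as in (2)
    # normalized weights W_i(n) = P_i(n) / sum_j P_j(n), computed stably in logs
    W = np.exp(log_P - log_P.max())
    W /= W.sum()
    return int(np.argmax(W)), W
```

Working in logarithms avoids the numerical underflow of the raw products Pi(n), while the normalization leaves the weights Wi(n) unchanged.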
In most works, the additional assumption is accepted that the process x(t) is a Markov process in itself and that the conditional probability density of the observations (for the observable process y(t)) can be presented in an explicit and simple analytical form. Such cases are often referred to as hidden Markov models. Then, in many works, the sample sequences xj(t0), …, xj(tn) are simulated as trajectories of the Markov process x(t) in itself, since such a simulation can easily be done. The joint probability density corresponding to the constructed sample and to the observations is equal to
(3)
Then the weights Wj(n) (or other similar weights) can easily be calculated, and the above procedure of sampling and resampling can be realized in order to obtain the estimate for x(tn). In some cases it would be better to simulate the samples xj(tk) with the use of the conditional probability density which includes the observed value y(tk). But this could require a large amount of computation if an explicit and compact analytical expression for that conditional probability density is not available. Meanwhile, if the process x(t) is simulated just as a Markov process in itself, the sample sequences represent samples from the a priori distribution, which can be far from the a posteriori distribution given the observations. Then resampling with the use of weights, and a significant increase in the number N, are needed in order to find the most a posteriori probable values of x(tn) using the algorithm described above. Nevertheless, for hidden Markov models, particle filters with such a simple simulation of the sample sequences, with resampling based on the weights, but with large N, have been implemented and proved useful in some applications (Carvalho et al., 1997; Thrun, Fox, Burgard, & Dellaert, 2001).
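For concreteness, a minimal bootstrap particle filter of the kind just described can be sketched as follows; the scalar linear-Gaussian model used here (parameters a, sig_x, sig_y) is purely illustrative and is not from the paper.

```python
import numpy as np

def bootstrap_filter(y, n_particles=500, a=0.9, sig_x=0.5, sig_y=0.2, seed=1):
    """Minimal bootstrap particle filter for a hypothetical scalar hidden
    Markov model x_k = a*x_{k-1} + sig_x*eps_k, y_k = x_k + sig_y*eta_k.
    Particles are drawn from the a priori transition density (as in (3)),
    then reweighted by the observation likelihood and resampled."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)      # samples from the prior at t0
    estimates = []
    for yk in y:
        x = a * x + sig_x * rng.normal(size=n_particles)  # a priori propagation
        logw = -0.5 * ((yk - x) / sig_y) ** 2             # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * x)))            # posterior-mean estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling
    return estimates
```

As the text notes, when sig_y is small, most prior-propagated particles receive negligible likelihood weight, which is precisely why large N (or a better proposal) is needed in this scheme.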
In the general case, when x(t) represents only some components of a multidimensional Markov process z(t) = (x(t), y(t)) (and x(t) is not a Markov process in itself), it was not shown in the literature how to simulate sample sequences xi(t0), …, xi(tn) when the values of the other components, y(t0), …, y(tn), are given (i.e. how to simulate those sample sequences without a formidable amount of computation at each time tk).
Note that the difficulties in obtaining samples that correspond to the true conditional distributions have led to the introduction and the use of some "proposal sampling distributions" (which can be simulated more easily) and "auxiliary particle filters" in many works (see, e.g. the collection (Doucet, De Freitas, & Gordon, 2001) and the survey (Doucet & Johansen, 2011)). But the "optimal proposal sampling distribution" would be equal to the true distribution if the latter could be found in closed form. In the present work, precise, explicit and compact analytical formulae and algorithms for the simulation of the sample sequences xi(t0), …, xi(tn), when the observations are given, and for the recursive calculation of Pi(n), the weights, and the estimates, are obtained for the general case of a multidimensional continuous diffusion Markov process z(t) = (x(t), y(t)).
For the problem of estimation of a function f(x(tn)) (or a functional), new estimates in explicit and closed analytical form are obtained in Section 2.2.
Moreover, in the quotients Wi(n) the "scale" of all the Pj(n) is cancelled, and the information that could characterize the smallness of the a posteriori probability of all the generated samples is lost. In the present work, new sequential Monte Carlo algorithms are derived that include some tests (developed in Section 2.4) in order to discard the samples of low a posteriori probability before the calculation of all the weights is done. The implementation of the suggested tests guarantees that the samples that remain under consideration belong to the domain where the a posteriori probability density is localized.
In some important cases of hidden Markov models, in the particle filters that were suggested and studied in the literature, where the samples xi(t0), …, xi(tn) (with i = 1, …, N) are simulated as trajectories of the Markov process x(t) in itself, a large part of those N generated samples may not be localized in the "vicinity" of the "true" realization of the trajectory, because they are simulated with the use of the a priori transition probability density for x(t), and they could be of low a posteriori probability given the observations. For example, if the "level of intensity of noise" in the observations y(t) is small, the chance to randomly generate a sample that belongs to the domain where the a posteriori probability density is concentrated might be as small as the chance to randomly catch a needle in a haystack. Thus, although theoretically in those cases the Monte Carlo estimates converge as N → ∞, the required number N increases if the "level of intensity of noise" decreases. Meanwhile, a large amount of calculation would be wasted on the processing of those samples of low a posteriori probability. In the present paper, in the following Section 2.6, we shall show (for the problem of nonlinear filtering of a signal) that with the use of the new simulation algorithm (17), and with an appropriate change of the mathematical model of the process of observations (which can be justified in applications), the algorithm (17) will generate at once the samples that are localized in the domain that is a posteriori probable given the observations, for all i = 1, …, N.
The special case when the diffusion coefficients of the observable process y(t) depend on the unobservable components x(t) is considered. In that case, "observed quadratic variations" of the process y(t) can be introduced (when the observations y(tk) are taken in discrete time), and they contain a lot of information about x(t). In the present paper, the analytical formulae that characterize the observed quadratic variations are obtained in explicit and closed form. On the basis of these results, the algorithms of filtering and estimation with the use of the observed quadratic variations are developed.
Theoretically, the derivative of the process of quadratic variation could be incorporated into the set of observed, known processes. If that derivative process were known, then with the use of the new simulation algorithm (17), the new estimates (26), and the algorithms developed in the present paper, the filtering problem would be effectively solved, as shown in Section 2.5. But in practice we cannot assume that this derivative process is straightforwardly observed. However, if additional observations of y(t) are available, namely y(t0 + sδt), with δt ≪ Δt, then estimates for the derivative u(tk) can be obtained and used for the filtering, as shown in Section 2.5.
Similar systems arise if some additional precisely known observations are available, for example, measurements of some function of the state. In that case the solution of the filtering problem described in Section 2.5 can be implemented.
The obtained new algorithms improve performance of particle filters, or sequential Monte Carlo methods, and extend the range of their applications beyond the hidden Markov models.
The implementation of particle filters, as is usual for Monte Carlo methods, requires a large number of repeated computations, but it has become accessible and useful in some applications, since the speed of computation and the capacity of computers have increased dramatically in the last two decades. Note that the speed can be increased further with the use of parallel computing. The computational cost of implementing a general scheme of particle filtering on some existing processing units that allow parallel programming was considered, e.g. in the paper by Hendeby, Karlsson, and Gustafsson (2010).
2. Derivation of new recursive algorithms for Monte Carlo simulation and filtering of unobservable components of multidimensional diffusion Markov processes
2.1. Simulation of trajectories of unobservable components. Analytical investigation of a multidimensional diffusion Markov process observed at discrete times
Consider the multidimensional diffusion Markov process z(t) = (x(t), y(t)). The components x(t) = (x1(t), …, xm(t)) are unobservable, but the other components y(t) = (ym+1(t), …, yp(t)) are available for observations, which are taken at discrete times tk.
A diffusion Markov process with continuous trajectories, z(t), can be characterized by its drift and diffusion coefficients:
(4) Ai(z, t) = lim_{Δt → 0} E[Δzi | z(t) = z] / Δt,
(5) bij(z, t) = lim_{Δt → 0} E[Δzi Δzj | z(t) = z] / Δt,
where Δt is a small time step. Denote Δz = z(t + Δt) − z(t).
From the assumption that the trajectories of the process z(t) are continuous functions of t, it follows (Kolmogorov, 1931/1986, 1933/1986) that
(6)
The matrix ∥ bij ∥ is symmetric and nonnegative definite. Therefore, in the general case, that matrix can be represented with the use of its "square root" in the form ∥ bij ∥ = ∥ aij ∥ ∥ aij ∥ᵀ, where ∥ aij ∥ᵀ stands for the transpose of the matrix ∥ aij ∥.
Then the process z(t) can be constructed as a solution of the system of stochastic differential equations:
(7) dzi(t) = Ai(z(t), t) dt + Σj aij(z(t), t) dwj(t), i, j = 1, …, p,
where wj(t) are independent Wiener processes. The initial condition at t = t0 is given as z(t0) = z0, where z0 is a random variable, independent of all wj(t), with given probability density P0(z0). The system (7) should be interpreted as the system of stochastic integral equations:
(8) zi(t) = zi(t0) + ∫_{t0}^{t} Ai(z(s), s) ds + Σj ∫_{t0}^{t} aij(z(s), s) dwj(s),
with stochastic integrals in the sense of Itô (Doob, 1953). It is assumed that the drift coefficients Ai(z, t) satisfy the Lipschitz condition
(9) | Ai(z′, t) − Ai(z″, t) | ≤ L | z′ − z″ |.
It is also assumed that the diffusion coefficients bij(z, t) are continuous and differentiable functions of z and t. The Lipschitz conditions (9) guarantee that the trajectories z(t) do not have finite escapes, i.e. z(t) does not tend to infinity as t tends to some finite moment of time.
The system of integral equations (8), with any given continuous trajectory w(t), can be solved with the use of successive approximations, which converge and define the continuous trajectory z(t). Thus, the trajectories of the process z(t) are continuous with probability 1.
The diffusion Markov process z(t) can also be constructed as the limit solution of the system of finite difference equations:
(10) zi(tk+1) = zi(tk) + Ai(z(tk), tk) Δtk + Σj aij(z(tk), tk) ζj(tk),
where the random impacts ζj(tk) are independent random variables (for all j and k), with zero means and with second moments E[ζj(tk)²] = Δtk; Δtk = tk+1 − tk; k = 0, 1, 2, …. In particular, the increments of the Wiener processes, Δwj(tk) = wj(tk+1) − wj(tk), can be used in the finite difference equations (10) instead of ζj(tk):
(11) zi(tk+1) = zi(tk) + Ai(z(tk), tk) Δtk + Σj aij(z(tk), tk) Δwj(tk).
Here again it is assumed that the Lipschitz conditions (9) are satisfied and that the functions Ai(z, t) and bij(z, t) are differentiable with respect to z. The construction of diffusion Markov processes by passage to the limit in the scheme of finite difference equations, as the time steps Δtk tend to zero, was first introduced and investigated by Academician S.N. Bernstein in 1934 and 1938. The works (Bernstein, 1934/1964, 1938/1964) were first published in French; they are republished in the Collected Works of S.N. Bernstein, vol. 4 (in Russian), Publishing House Nauka (Academy of Sciences of the USSR, Moscow, 1964). The results obtained in these works define analytically all the joint probability distributions for the limit process z(t), i.e. for the random variables z(t1), …, z(tr), for any r and any times t1, …, tr. By the Theorem of A.N. Kolmogorov on the extension (or continuation) of measures in function space (Kolmogorov, 1933/1950), those multidimensional distributions can be extended to the measure on the σ-algebra that corresponds to the process z(t) with continuous time t. The convergence of the linear broken lines, which interpolate the solutions of the finite difference stochastic equations (10) or (11), to the continuous trajectories of diffusion Markov processes was also proved in (Kushner, 1971). Thus, the diffusion Markov process z(t) is well defined by the system of finite difference equations (which could be more general, but similar to (10)) and by the passage to the limit from those Markov processes with discrete time (Bernstein, 1934/1964, 1938/1964). Analytical methods and solutions for some problems of filtering and estimation, based on recursive finite difference stochastic equations, with the passage to the limit (as the time steps tend to zero), similarly to the scheme of S.N. Bernstein, were developed and investigated in (Khazen, 1968, 1971, 1977, Chapter 3; Khazen, 2009).
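The finite difference scheme (11) is what is now commonly called the Euler (Euler–Maruyama) scheme, and it translates directly into code. The sketch below simulates one trajectory zΔ(t); the callable arguments A and a, standing for the drift vector and the "square root" matrix of the diffusion coefficients, are illustrative names.

```python
import numpy as np

def simulate_diffusion(A, a, z0, t0, dt, n_steps, seed=0):
    """Euler scheme (11) for the diffusion with drift A(z, t) and "square
    root" matrix a(z, t): each step adds the drift A*dt and the matrix a
    applied to independent Wiener increments Delta w ~ N(0, dt)."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    path = [z.copy()]
    t = t0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=len(z))  # Wiener increments
        z = z + A(z, t) * dt + a(z, t) @ dw
        t += dt
        path.append(z.copy())
    return np.array(path)
```

With a ≡ 0 the scheme reduces to the explicit Euler method for the ordinary differential equation dz/dt = A(z, t), which gives a quick sanity check of an implementation.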
In the following Section 2.2, the goal is to obtain the estimates of f(x(tn)) with the use of Monte Carlo calculation of some integrals that can be interpreted as mathematical expectations. The following properties are important in order to achieve that goal. Denote by pΔ the joint probability density for the random process zΔ(t) that represents the solution of the finite difference equations (10) or (11) with small time steps Δtk = Δ (and with piecewise constant or piecewise linear interpolation on the small time intervals [tk, tk+1]). Denote by p the joint probability density for the limit diffusion Markov continuous process z(t) obtained as Δ → 0. It was established in the works by Bernstein (1934/1964, 1938/1964) that pΔ converges to p as Δ → 0. Hence, the contribution to the error of the Monte Carlo estimates of the integrals (which are considered below in (23), (24)), caused by the approximation of the limit diffusion process z(t) by its pre-limit finite difference model (11), tends to zero as Δ → 0. For the scheme (11), it was also proved in (Kushner, 1971, Chapter 10) that the trajectories zΔ(t) converge in mean square to the trajectories of the limit diffusion process z(t) as Δ → 0, so that E|zΔ(t) − z(t)|² → 0. Consequently, E|f(zΔ(t)) − f(z(t))|² → 0 if the function f(z) satisfies a Lipschitz condition. Meanwhile, in the problems of filtering, the best estimate of f(x(tn)) (by the criterion of minimum mean square error) is the conditional expectation E[f(x(tn)) | y(t0), …, y(tn)], and, in the general case, the value of the mean square error of filtering remains finite, positive, and bounded from below, even if the error of calculation of that conditional expectation tends to zero. Thus, the accuracy of the estimate of x(tn) or f(x(tn)) could not be noticeably improved even if the accuracy of approximation of the limit diffusion process z(t) by its finite difference model (11) were increased by decreasing Δ. Therefore, it is justified to use the finite difference model (11) for the description and simulation of the considered process z(t), in order to obtain estimates of x(tn) or f(x(tn)) that provide solutions to the considered filtering problems (with feasible precision) in the case where the observations y(tk) are taken at discrete consecutive times tk with small time steps.
In the present work, we are interested in describing analytically the conditional probability density of the increments of the process. Consider the local increment Δz of the Markov process z(t) when the value z(t) = z is given, with small time step Δt; denote Δz = z(t + Δt) − z(t). We have the relations (4)–(6) for the moments of Δz given z(t) = z. Then the characteristic function of the random value Δz can be presented in the form:
(12) E[exp(i uᵀΔz) | z(t) = z] = exp( i uj Aj(z, t) Δt − (1/2) uj bjk(z, t) uk Δt + o(Δt) ),
where u is a real vector (with summation over the repeated indices j, k, 1 ≤ j, k ≤ p). The last equality follows from the known expression for the characteristic function of a random variable in terms of its moments (see, for example, the course on the Theory of Probability (Gnedenko, 1976)). The inverse transformation provides the representation
(13) p(Δz | z, t) = ((2πΔt)^p det ∥ bjk ∥)^{−1/2} exp( − b^{jk} (Δzj − Aj Δt)(Δzk − Ak Δt) / (2Δt) ) (1 + o(1)),
where ∥ b^{jk} ∥ is the inverse matrix of ∥ bjk ∥ (with 1 ≤ j, k ≤ p).
The above expressions (12), (13) show that the local increment Δz (over the small, but finite, time interval Δt) of the multidimensional diffusion Markov process z(t) with continuous trajectories, when z(t) = z is given, can be considered as a multidimensional Gaussian random variable.
Using the Theorem on Normal Correlation, we find the following expressions for the first and second moments of the conditional probability distribution of the increments Δxα (with α = 1, …, m), provided that the increments Δyρ (with ρ = m + 1, …, p) and the value z(t) = z are given (Khazen, 2009, Chapter 3, Sections 3.1.2 and 3.3.1, pages 79–81, 101–106):
(14) E[Δxα | Δy, z] = Aα Δt + bασ cσρ (Δyρ − Aρ Δt),
(15) E[(Δxα − E[Δxα | Δy, z])(Δxβ − E[Δxβ | Δy, z]) | Δy, z] = (bαβ − bασ cσρ bρβ) Δt.
Hereinafter, summation is made over repeated indices, with 1 ≤ α, β ≤ m and m + 1 ≤ σ, ρ ≤ p; the matrix ∥ cσρ ∥ represents the inverse or pseudo-inverse (Moore–Penrose) matrix of the matrix ∥ bσρ ∥ of the diffusion coefficients of the observable components yρ(t).
For the probability density of the increments Δy, we obtain the following expression:
(16) p(Δy | z, t) = C(z, t) exp( − cσρ (Δyσ − Aσ Δt)(Δyρ − Aρ Δt) / (2Δt) ),
where C(z, t) is the normalization factor.
Note that in the theory of filtering of diffusion Markov processes with observations made in continuous time t, i.e. if y(s) is supposed to be known exactly on the time interval t0 ≤ s ≤ t, it is assumed that the diffusion coefficients bσρ, with m + 1 ≤ σ, ρ ≤ p, do not depend on the unobservable components x(t). In the contrary case (as it was pointed out by this author (Khazen, 1977, Chapter 3; Khazen, 2009, Chapter 3)), since the diffusion coefficients bσρ could be (at least theoretically) precisely restored on the basis of a single realization of the observed trajectory y(s) on the small time interval t − δ ≤ s ≤ t + δ (no matter how small the taken value of δ is), the functions bσρ(x(t), y(t), t) could have been incorporated into the set of observable functions. In the problem at hand, when filtering of the process x(t) should be obtained on the basis of observations y(tk) taken at discrete times tk with small, but finite, time steps Δtk, that restriction may be cancelled. We shall consider further (in Sections 2.4, 2.5) how to incorporate some estimates of the values of bσρ, given the observations, in order to improve the filtering of x(t).
The matrix ∥ bαβ − bασcσρbρβ ∥ is symmetric and nonnegative definite, and it can be represented with the use of its "square root" ∥ ãαγ ∥ in the form
∥ bαβ − bασcσρbρβ ∥ = ∥ ãαγ ∥ ∥ ãαγ ∥ᵀ, with 1 ≤ α, β, γ, δ ≤ m.
Then the samples xj(tk), when xj(tk−1) and the observations y(tk−1), y(tk) are given, can be simulated as
(17) xα,j(tk) = xα,j(tk−1) + Aα Δtk + bασ cσρ (Δyρ(tk) − Aρ Δtk) + ãαβ ζβ,j √Δtk,
where the coefficients are evaluated at the point (xj(tk−1), y(tk−1), tk−1), Δyρ(tk) = yρ(tk) − yρ(tk−1), and ζβ,j are independent samples from the standard Gaussian distribution, with E[ζβ,j] = 0 and E[ζβ,j²] = 1 (for all β = 1, …, m, j = 1, …, N, k = 1, 2, …); Δtk = tk − tk−1.
We shall denote shortly Δxj(tk) = xj(tk) − xj(tk−1) and Δy(tk) = y(tk) − y(tk−1).
Note that the above expressions (14)–(17) show that if all the diffusion coefficients bαρ vanish (with 1 ≤ α ≤ m, m + 1 ≤ ρ ≤ p), and if Aα(z, t), bαβ(z, t) do not depend on y(t), then the samples xi(tk) will be simulated without the use of the observations. Only the initial measurement y(t0) will be taken into account: the initial sample point xi(t0) should be simulated as a random variable with the probability density
(18) p(x(t0) | y(t0)) = P0(x(t0), y(t0)) / ∫ P0(x, y(t0)) dx.
This particular case does not correspond to all the hidden Markov models: the process x(t) could be a Markov process in itself, but it is still possible that some coefficients bαρ are not equal to zero, for example, if there are some "white noises" which enter simultaneously into the equations for x(t) and into the "random disturbances or random errors" of the observable process y(t).
In the general case of a multidimensional continuous diffusion Markov process z(t) = (x(t), y(t)), the influence of the observed data y(t0), …, y(tn) is manifested in the expressions (17), which describe the simulation of the random samples xi(tk).
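Under the stated assumptions, one step of the simulation formula (17) can be sketched in code as follows; the block names b_xx, b_xy, b_yy for the blocks of the diffusion matrix are illustrative, not the paper's notation. The conditional mean and covariance are those of (14), (15), and the "square root" of the conditional covariance is obtained by a Cholesky factorization.

```python
import numpy as np

def simulate_x_step(x_prev, dy, A_x, A_y, b_xx, b_xy, b_yy, dt, rng):
    """One step of the simulation formula (17): the increment of the
    unobservable components x, conditioned on the observed increment dy,
    is Gaussian with the moments (14), (15) given by the Theorem on
    Normal Correlation. All coefficients are evaluated at the previous
    point (a sketch; argument names are illustrative)."""
    c = np.linalg.pinv(b_yy)             # inverse / Moore-Penrose pseudo-inverse
    mean = x_prev + A_x * dt + b_xy @ c @ (dy - A_y * dt)      # moments (14)
    cov = (b_xx - b_xy @ c @ b_xy.T) * dt                      # moments (15)
    g = np.linalg.cholesky(cov + 1e-12 * np.eye(len(x_prev)))  # "square root"
    return mean + g @ rng.normal(size=len(x_prev))
```

The small diagonal regularization guards the Cholesky factorization when the conditional covariance is only nonnegative definite, as the text allows.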
The values Pi(n) and Wi(n) can be easily calculated:
(19)
where the conditional probability densities for the increments Δxi(tk), Δy(tk) are Gaussian densities with moments determined by (14), (15), and (16). Then the above procedure of sampling and resampling, which corresponds to a search for the point x(tn) that provides the maximum of the a posteriori probability density, can be implemented. In the following Sections 2.2–2.4, the results of the further analytical investigation and the development of the algorithms are presented.
2.2. Estimation of a function f(x(tn))
Consider the problem of estimation of some function f(x(tn)) (or a functional), given the observations y(t0), …, y(tn). The general expression for the estimate can be written in the following form:
(20) f̂(n) = E[ f(x(tn)) | y(t0), …, y(tn) ].
It is assumed that for the considered functions f(x) this conditional expectation exists. In the case of a Markov process z(t) = (x(t), y(t)), we can write:
(21)
In our case of a multidimensional continuous diffusion Markov process, we have derived the explicit analytical expressions (19), (16). Hence, in this case, we obtain
(22)
where the normalization factor is known.
If N samples xi(t0), …, xi(tn), with i = 1, …, N and N ≫ 1, have been independently simulated by (17), then, in accordance with the basics of Monte Carlo methods, the following integrals can be interpreted as mathematical expectations. Then we obtain
(23)
and similarly
(24)
Due to the Laws of Large Numbers, the accuracy of these approximate estimates increases as N increases.
Denote shortly
(25)
Then the sought estimate can be written in the following form:
(26) f̂(n) ≈ Σ_{i=1}^{N} Yi(n) f(xi(tn)) / Σ_{i=1}^{N} Yi(n).
In most applications, we can assume that there exists the mathematical expectation of the random value Yi(n) f(xi(tn)), where the observations y(t0), …, y(tn) are fixed and xi(t0), …, xi(tn) represent the results of independent random simulations with the use of (17). This assumption is equivalent to the existence of the integrals in (23), (24). Denote ζi = Yi(n) f(xi(tn)). The random values ζi are independent, they have one and the same probability distribution, and their mathematical expectation exists. Then the Strong Law of Large Numbers (the Theorem of Kolmogorov) guarantees that the sums (i.e. the arithmetic means) on the right-hand side of the formulae (23), (24) converge with probability one (i.e. almost surely) to the integrals on the left-hand side as N → ∞.
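Numerically, the self-normalized quotient of the sums (23), (24) is usually computed with the weights Yi(n) kept in logarithmic form, since products of many Gaussian density factors underflow in floating point; the common scale cancels in the quotient, exactly as noted above for the Pj(n). A small sketch (an illustrative helper, not from the paper):

```python
import numpy as np

def weighted_estimate(f_values, log_Y):
    """Self-normalized Monte Carlo estimate (26): the quotient of the
    weighted sums (23), (24), with the accumulated weights Y_i(n) held
    in log form for numerical stability. Subtracting the maximum log
    weight rescales all weights without changing the quotient."""
    w = np.exp(log_Y - np.max(log_Y))
    return float(np.sum(w * f_values) / np.sum(w))
```

With equal weights the estimate reduces to the plain arithmetic mean of the f values, which is a convenient check.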
Note that in the case where the diffusion coefficients bσρ (for m + 1 ≤ σ, ρ ≤ p) of the observable process y(t) do not depend on the unobservable process x(t), the corresponding Gaussian cofactors do not depend on i, and those cofactors cancel in the numerator and the denominator of the formula (26). In that case we can put:
(27)
Note that the calculations of the values Yi(n) (25) or (27) can be implemented recursively, since they are determined by the values of the recursively accumulated sums.
In the case of hidden Markov models, where the unobservable process x(t) is a Markov process in itself and the observable process y(t) is described by a stochastic differential equation (SDE):
(28)
where h(x, t) is a given nonlinear function with respect to x and w(t) is a standard Wiener process, the above estimate takes a form similar to the estimates for f(x(tn)) that were earlier constructed and studied for this particular case in the literature. The novelty of the estimates (23)–(27) and of the simulation (17), obtained in the present paper, is that the simulation of the unobservable components and the estimates are obtained in explicit and closed analytical form for the general case of partially observable multidimensional continuous diffusion Markov processes, and the obtained Monte Carlo estimate (26) converges with probability one to the sought posterior expectation of f(x(tn)) given the observations, as N → ∞.
Now we shall demonstrate that the Monte Carlo estimates (23)–(26), (27) for f(x(tn)) (obtained in the present paper) hold true also in the case where the sample sequences xi(t0), …, xi(tn) are generated with the use of some branching sampling procedures.
Consider the following branching resampling. As was already pointed out in the Introduction, it is purposeful to discard the samples that have negligibly small weights Wi(n), in order to decrease the amount of computation.
Suppose that at the end of each time interval [Tk, Tk+1] (with k = 0, 1, 2, …; T0 = t0; Tk+1 = Tk + mkΔt, where mk and Nk are given numbers), each sample sequence that still exists at the time Tk+1 is continued with Nk+1 "offsprings". At the initial moment of time, T0, there are N0 independently taken initial sample points xi(t0), i = 1, …, N0. When tn = Tk+1, sample points xi(tn+1) are augmented to the sequences in order to construct the "offsprings". These sample points are independently taken from the distribution determined by (17). Then those sample sequences (with "offsprings") are continued until the next time of branching, Tk+2, except for the sample sequences that are discarded before Tk+2 because their weights become "too small" (for example, smaller than some chosen threshold Wcr); and so on. For simplicity of discussion, we can renumber again all the current sample sequences that still exist at the time tn. Consider at first the case when all the sample sequences are continued without discarding. Then their number grows, and at the time Tk+1 it is equal to the product N0 N1 ⋯ Nk+1 (with k > 0).
Consider the estimate (26) under the condition that the above branching sampling procedure is implemented. Denote by [Tk, Tk+1] the time interval (between the moments of branching) that contains tn, so that Tk ≤ tn < Tk+1. From the Markov property of the random process z(t), the factorization (21), (22), and the "tower property" of conditional expectations, it follows that the formulae (23)–(26), (27) hold true for the Monte Carlo estimate of the integral in the case where the sample sequences are generated with the use of the above branching procedure.
We can consider the Monte Carlo estimates for the integrals (23), (24) for each i-th "tree" of branching sample sequences, which begins at the point xi(t0), with i = 1, …, N0. Denote these random values as ξi(n). Then the random values ξi(n) are independent of each other, they have one and the same probability distribution, and their mathematical expectation exists. Then they obey the Strong Law of Large Numbers, so that their arithmetic mean converges with probability one to the sought integral as N0 → ∞.
But many of the exponential weights decrease rapidly as the number n of time steps increases. Therefore, it is possible to discard some highly a posteriori improbable sample sequences, which do not provide a noticeable contribution to the estimates. The numbers mk, Nk, and the threshold Wcr can be adjusted (in a practical implementation for a particular class of applications) in order to decrease the amount of calculation and make it feasible, and at the same time to keep the current sample points xi(tn) in the domain where the a posteriori probability density is localized. In a practical implementation, it can also be purposeful to begin branching not at the fixed moments of time Tk but at the current moment of time tn when the number of all existing sample sequences (or sample points) becomes less than some given fraction of N.
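The threshold-and-offspring step just described can be sketched as follows; the array-based bookkeeping is illustrative, and the threshold plays the role of Wcr.

```python
import numpy as np

def branch_particles(x, log_W, n_offspring, log_w_threshold):
    """Sketch of the branching resampling: discard sample sequences whose
    (log) weight is below a threshold (the role of W_cr), then continue
    each surviving sequence with `n_offspring` copies. Each survivor's
    weight is split equally among its offsprings, so the total retained
    weight is preserved."""
    keep = log_W >= log_w_threshold
    x_kept, logW_kept = x[keep], log_W[keep]
    x_new = np.repeat(x_kept, n_offspring)
    logW_new = np.repeat(logW_kept - np.log(n_offspring), n_offspring)
    return x_new, logW_new
```

Because each sequence is tested against the threshold independently, this step parallelizes across particles, in line with the remark on parallel computing below.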
It is worth noting that in the following Section 2.4 the tests (47), (48) are obtained that allow detection and rejection of samples xi(tn) of low a posteriori probability to be done without the use of the weights Wi(n) or (for
, i.e. independently for each sample sequence
. Thus, in the above algorithm we can use such tests instead of the comparison of the weight
with the threshold Wcr. Then the algorithm described above can be effectively implemented with the use of parallel computing, since the simulation and continuation (the branching resampling) of each sample sequence
can be done independently of the other samples
with k ≠ i. The values Yi(n) determined by (25) or (27) will also be calculated recurrently and independently for each sample sequence
. Thus, this large amount of computation can be performed efficiently with the use of parallel computing. Only at a later stage are all the accumulated values Yi(n) (with
) will be used for the calculation of the weights
, which are needed for the calculation of the sought estimate
.
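The independent per-sequence recursion can be sketched as follows; the running-sum update is a placeholder for the recurrent calculation of Yi(n) by (25) or (27), and the thread-pool arrangement is an illustrative assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def accumulate_sequence(increments):
    """Recurrent accumulation of Y_i(n) for one sample sequence; a plain
    running sum stands in for the recursions (25), (27)."""
    y = 0.0
    for inc in increments:
        y += inc
    return y

def accumulate_all(all_increments, workers=4):
    """Each sequence is processed independently, so the work parallelizes
    with no communication until the final weight computation."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(accumulate_sequence, all_increments))
```

Only after this embarrassingly parallel stage are the accumulated values combined into the weights.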
The new Monte Carlo algorithm of estimation of or
(presented above) is derived straightforwardly from the Bayes formula (20). The estimates are constructed with the use of the samples xi(tn) or
that are simulated by (17), and their weights
are defined by (23)–(26), (27).
For the particular case of hidden Markov models, another algorithm with random branching resampling for “particle filtering” of an unobservable Markov “signal” x(tn) was developed, namely the Sequential Importance Resampling (SIR) algorithm; it is studied in the works (Crisan & Doucet, 2002; Del Moral, 1998) for processes with discrete time. In that SIR algorithm with random branching resampling, the a posteriori probability distribution for x(tn) is approximated by a “cloud of random particles” xi(tn) with weights wi(n) = 1/N. With the use of our new algorithm of simulation of unobservable components x(tn) given
(see Section 2.1, (17)) and the new closed-form analytical expressions (14)–(17), (23)–(27), that SIR algorithm with random branching can be generalized to the general case of filtering of unobservable components x(t) of a multidimensional diffusion Markov process
, in the following way.
In the generalized algorithm that we are proposing, the sample sequence should be chosen for continuation at the time of branching Tm = tr(m) (and kept up to the next time of branching, Tm+1) with probability pi(m) (determined below) in each of N independent attempts to continue their ensemble, so that the expectation of the random number Ki(m) of its “offspring” is equal to
, with i = 1, …, K(m), where K(m) denotes the number of all the sample points xi(tr(m)) still existing at the time tr(m). We suggest taking the times of branching as Tm+1 = Tm + MΔt, with M ≥ 1,
(note that M = 1 in the standard SIR algorithms). The probability pi(m) depends on all the sample points x1(tm), …,
or sample sequences
. Such a procedure of random resampling is similar to the standard SIR algorithm, but we have to determine the probability pi(m) for the general case of multidimensional diffusion Markov processes
.
For the general case, with the use of analytical expressions (16)–(27), we derive the following expression:
where
and Qi(m) denotes the normalization factor of the probability density . Here the notations are the same as earlier in Sections 2.1, 2.2.
If the diffusion coefficients bσρ of the observed process y(t) do not depend on the unobservable process x(t), the above expression can be written in the following more concise form:
where
In the case of hidden Markov models (28), which can be considered as a particular case of a multidimensional diffusion Markov process (x(t), y(t)), the above expression yields:
where
For simplicity of notation, the above formula is written for the case of one-dimensional processes and
, which satisfy (28). The last expression for pi(m) (for the case M = 1) agrees with the probabilities of continuation of the samples xi(tn) presented in (Crisan & Doucet, 2002; Del Moral, 1998) for the case of hidden Markov models.
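The random branching step with E[Ki(m)] = N·pi(m) can be sketched as follows; the function name is an assumption, and the probabilities `probs` stand for the pi(m) derived above (whose explicit formulae are not reproduced here):

```python
import random

def branch_resample(particles, probs, n_attempts, rng=None):
    """n_attempts independent continuation attempts; each attempt selects
    particle i with probability p_i, so the expected offspring count of
    particle i is n_attempts * p_i, matching the branching rule above."""
    rng = rng or random.Random()
    offspring = []
    for _ in range(n_attempts):
        u, acc, chosen = rng.random(), 0.0, len(particles) - 1
        for i, p in enumerate(probs):
            acc += p
            if u < acc:
                chosen = i
                break
        offspring.append(particles[chosen])
    return offspring
```

After resampling, each offspring is continued by the simulation algorithm (17) until the next branching time.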
With the use of the above procedure of random branching and with our algorithm (17) of simulation of given
, we can generalize the SIR algorithm of particle filtering with random branching, which was developed for hidden Markov models, to the general case of multidimensional diffusion Markov processes
.
Thus, the present paper provides several theoretically justified versions of particle-filter algorithms for filtering unobservable components x(tn) given . The new algorithm developed above, with branching sampling and recurrent calculation of the values (25) or (27), appears preferable for implementation with parallel computing. Further practical implementation and comparison for various applications are left for future work.
2.3. Analytical investigation of observed quadratic variations
Consider the case when some diffusion coefficients bσρ of the observed process y(t) depend on the unobservable process x(t). We assume throughout this paper that the diffusion coefficients are continuous and differentiable functions with respect to z. For simplicity of notation, first consider the case when y(t) is one-dimensional. For example, consider the following model, which plays an important role in financial mathematics (Hull, 2000):
(29)
Here are independent Wiener processes, μ and σ2 are known constants, and the functions A(x, t) and σ1(x, t) are continuous and differentiable with respect to x. In the famous Black–Scholes models (Hull, 2000), the observed process y(t) in (29) represents the natural logarithm of a stock price, S(t), so that
, while the process
corresponds to volatility, which is to be estimated given the path y(s), t0 ≤ s ≤ t. If the process y(s) were observable continuously over the time interval t0 ≤ s ≤ t, the value
could be restored precisely, at least theoretically.
It is well known that a diffusion Markov process z(t) is nowhere differentiable in t. It can be proved that its quadratic variations(30)
if time steps Δtk tend to zero, while M → ∞; here , with given δ > 0, and the value of δ can be arbitrarily small. Note that the relation (30) will also be obtained as a consequence of the new analytical estimates developed below.
For the system (29), , and from (30) it follows that the value of x2(t) could be restored with any desirable accuracy. Then
, and the filtering problem would be solved exactly.
But observations are taken at discrete times tk with small but finite time steps
, so that it is impossible to observe precisely the values of
In the general case, we can calculate the “observed quadratic variations”,
(31)
and consider it as an estimate for Similar estimates for volatilities were introduced and considered in (Hull, 2000, Chapter 15). It is more convenient to use recursive averaging instead of the moving averaging (31). Then
(32)
with In the limit, as all Δtk decrease and tend to zero, we would obtain
(33)
We can incorporate the observed quadratic variations into the set of observed data. We shall show further, in Section 2.4, how to use these “observed quadratic variations” in order to reject at once the samples
that are highly improbable given
.
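A minimal sketch of computing the observed quadratic variation (31), together with its recursive accumulation in the spirit of (32); the function names are illustrative:

```python
def observed_qv(y):
    """Observed quadratic variation (31): sum of squared increments of the
    discretely sampled path y(t_0), ..., y(t_M)."""
    return sum((b - a) ** 2 for a, b in zip(y, y[1:]))

def observed_qv_recursive(y):
    """The same quantity accumulated recursively, one increment at a time,
    as each new measurement arrives."""
    v, prev = 0.0, y[0]
    for cur in y[1:]:
        v += (cur - prev) ** 2
        prev = cur
    return v
```

The recursive form requires storing only the running total and the latest observation, which is why it is preferred over the moving average in the text.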
The recursive formula (32) implies that
(34)
We may assume that the considered realization of the process z(t), , is obtained consecutively in accordance with the finite-difference stochastic equations (11). The value of the observed quadratic variation
is being accumulated in parallel, along with that realization of z(t), so that the random increment
will be produced after
is realized. We are interested in describing the properties of the observed quadratic variations
under the condition that the realization of the unobservable components,
, is fixed, although it is unknown, and that the next measurement y(ts) will arrive after the previous measurement
is already given.
The conditional expectation
In the general case, as demonstrated above, the increment Δy = (Δym+1, …, Δyp) in a small time step Δt, given , can be described as a Gaussian multidimensional random variable with the probability density (16). The increments Δy(tk) are independent of the past history before tk-1 if z(tk-1) is given. The following properties of Gaussian distributions will be useful in the sequel: if ξ is a Gaussian random variable with Eξ = 0, then Eξ4 = 3(Eξ2)2; if (ξ1, ξ2, ξ3, ξ4) is a four-dimensional Gaussian random variable with
(for i = 1, …, 4), then
In the general case of a multidimensional process y(t), we obtain
For brevity, denote . Introduce the following value:
(35)
Formula (35) can be written in recursive form, similar to (32):(36)
Consider the deviation
(37)
For simplicity of discussion, suppose that all the time steps ; we obtain:
. This notation means that the small value in question is proportional to a quantity that decreases faster than Δt as Δt decreases.
The increments Δyρ(ts) and Δyρ(tq) with tq < ts are independent when is given. We find that the variance of the deviation ψσρ(tk) (37) (under the condition that
is fixed) is equal to
(38)
If the diffusion coefficients of the observed process are functions of x and t only, so that
(with
), we obtain
(39)
The value (39) can also be calculated recursively, similarly to (36). Note that the value is proportional to Δt. If σ = ρ, the above expression takes the more concise form:
(40)
In the general case, if depend on y(t), we can find the estimates
, where
may be determined either as some functions of
or as some constants
. Then for the variance (38), we obtain the following estimates:
(41)
The process z(t) is considered on the finite time interval t0 ≤ s ≤ T, so that the values of are bounded. Thus, the variance of the deviation (37) is proportional to Δt, and as Δt → 0 the observed quadratic variation tends to the limit (33). We shall demonstrate below that the probability distribution of the deviation ψρρ or
tends to the Gaussian one as Δt → 0. It follows that the deviations converge to zero as Δt → 0.
Moreover, we have obtained the new analytical formulae (38)–(41), which describe the dispersion of the deviation (37) of the observed quadratic variation when observations are taken at discrete times with small time steps.
For the system (29), the general analytical formulae (38)–(40) obtained in the present paper characterize the accuracy of the estimate of the volatility , constructed with the use of the observed quadratic variation. If this accuracy meets the requirements, the estimation problem is solved; in that case the given time step Δt can be considered “small enough”.
In the general case, if the value of x(tk) cannot be uniquely recovered from the observed quadratic variations alone, further filtering may be needed.
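A hedged sketch of the volatility estimate built from the observed quadratic variation, checked on a simulated constant-volatility log-price path; the model constants and function name are illustrative assumptions, and the simplified simulation omits the second noise term of (29):

```python
import math
import random

def volatility_from_qv(log_prices, dt):
    """Volatility estimate from observed quadratic variation: for
    dy = mu dt + sigma dw, the QV over M steps is close to sigma^2 * M * dt,
    so sigma is recovered as sqrt(QV / (M * dt))."""
    qv = sum((b - a) ** 2 for a, b in zip(log_prices, log_prices[1:]))
    return math.sqrt(qv / ((len(log_prices) - 1) * dt))

# Illustrative check on a simulated log-price path with sigma = 0.2.
rng = random.Random(42)
mu, sigma, dt = 0.05, 0.2, 1e-4
y = [0.0]
for _ in range(20000):
    y.append(y[-1] + mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
sigma_hat = volatility_from_qv(y, dt)
```

With 20000 steps the relative error of the estimate is of order 1/√(2M), consistent with the variance formulae (38)–(40) being proportional to Δt.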
Finally, we demonstrate that the probability density of the random value ψρρ(tk) can be approximately described as Gaussian. The deviation ψρρ(tk) contains the sum of squares of the increments , which are independent of the past history before ts-1 given y(ts-1), x(ts-1). The increments
(given y(ts-1), x(ts-1)) can be described as Gaussian random variables with probability density (16). The sum of squares of Gaussian random variables is not itself Gaussian. However, the probability density of the random deviation ψρρ(tk) can be approximately described as Gaussian. Consider the case when
(with
) are functions of x(t) and t only. Then the variance of the deviation ψρρ(tk) is equal to
(40), which is proportional to Δt (here the time step Δt can be small, but finite).
Note that for a Gaussian random variable ξ with Eξ = 0 the following relations hold: , where Q(r) is a constant, and for the odd moments
, with
. These relations are useful in estimating the moments of
when (y(ts-1), x(ts-1)) is given. Consideration of the higher moments of the deviation then proves that
, if q > 2. The characteristic function of the random deviation ψρρ(tk) therefore takes a form similar to (12), which implies a Gaussian expression for its inverse transform, similar to (13). Hence, the probability density of that deviation can be approximately described as Gaussian. The smaller Δt, the higher the precision of this approximation, although the deviation itself tends to zero as Δt → 0. In practice, however, the time step should not be chosen too small, since in our mathematical model y(t) is considered a diffusion random process, which is nowhere differentiable in t. That is not the case in practice, where we have a smooth trajectory
, and the quadratic variation of
is equal to zero. The diffusion approximation can be accepted when the time steps Δts ≥ τcor, where τcor represents the characteristic decay time of correlation, so that the correlation between the random values
and
(given
) is negligibly small (Khazen, 1968; 1971; 1977, Chapters 2, 3; 2009).
2.4. Detection and rejection of highly a posteriori improbable samples
In the process of resampling, a posteriori improbable samples are rejected when their weights become small. Besides, it is possible to discard some highly a posteriori improbable samples before all the weights
or
are calculated. This can be done with the use of the following tests, which are based on analysis of the obtained analytical expressions (19), (21), (22).
The measurements are obtained under the condition that there is an unobservable realization of the process
, which is unknown. (Here the superscript “tr” stands for “true”.) The first and second moments of the random variables Δyρ(ts) given
are equal to
(42)
For simplicity of discussion, we can assume that all the time steps are equal to Δt.
First consider the case when the diffusion coefficients bσρ of the observed process y(t) do not depend on x(t); then .
The value Pi(k) (19) contains the following cofactor:
(43)
where the normalization factor C does not depend on in that case.
Consider the hypothesis Hi that the sample is situated in the vicinity of the true realization
, and the “closeness” means that
. Here the function Dρ(x, y, t) can be determined on the basis of some preliminary analysis of the considered dynamical system; for example, we can put
, where Rα are constants, 1 ≤ α ≤ m. In order to distinguish between two hypotheses, Hi and its negation
, consider the following value:
(44)
Note that Qi(tk) can be calculated recursively. Denote
(45)
The larger the difference , the farther the sample
can be from
Denote
,
.
Consider the variance of the random value Ui(tk) when the realization is given. We obtain
If the matrix ∥ bσρ ∥ is not degenerate, then bσηcητ = δστ, where δστ is the Kronecker symbol: δστ = 1 if σ = τ and δστ = 0 if σ ≠ τ. In that case, the above expression for can be written in the more concise form:
(46)
In order to reject the samples , which are highly improbable given
, we can approximately use the following test. If the considered dynamical system is stable, the following value can be used as an approximate estimate of the above expression (46), provided the hypothesis Hi is true and the measurements
are obtained:
The probability of the following event:(47)
is small under the condition that Hi is true. Hence, if the inequality (47) is satisfied for the considered sample , the hypothesis
should be rejected, and that sample
should be discarded. If all the considered sample sequences are discarded, the new sample points
should be generated as initial points, i.e. as samples from the initial distribution
(18). For the samples
that remain under consideration the values Pi(k) and Wi(k) or
will be computed recursively. The samples that were not discarded can be continued with a few “offspring”. Due to the test (47), the samples
that are highly a posteriori improbable (when the sequence of observations
is given) will be rejected at once, before the weights Wj(k) or
are computed.
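The rejection logic can be sketched generically as follows; the three-standard-deviation threshold `c` and the function names are illustrative assumptions standing in for the explicit test (47):

```python
def reject_sample(u_stat, u_variance, c=3.0):
    """Discard a sample when its accumulated statistic U_i(t_k) deviates by
    more than c standard deviations, an event of small probability when the
    hypothesis H_i (sample is close to the true realization) holds."""
    return abs(u_stat) > c * u_variance ** 0.5

def filter_samples(samples, stats, variances, c=3.0):
    """Keep only the samples that pass the test; the weights need not be
    computed for the rejected ones."""
    return [s for s, u, v in zip(samples, stats, variances)
            if not reject_sample(u, v, c)]
```

Because each test uses only quantities accumulated within its own sequence, the rejection stage parallelizes in the same way as the weight recursion.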
Consider the case when some diffusion coefficients depend on x(t). In some cases, as shown above, the unobservable value of can be restored with the use of the observed quadratic variations. In other cases, if x(t) cannot be uniquely recovered, it is useful to use the observed quadratic variations
in order to reject the samples
that are highly a posteriori improbable. That test is similar to the test (47) above. Assume that if the hypothesis Hi is true, the sample
is “close” to the true realization
, and the “closeness” means that
Here the function Bρ(x, y, t) can be determined as , where Rα are constants. For brevity, denote (similarly to (35), (36), (38), (39)):
Consider the case when (with
). Then the probability of the following event
(48)
is small if the hypothesis Hi is true. Hence, if the inequality (48) is satisfied at least for one number ρ, with (m + 1) ≤ ρ ≤ p, then the hypothesis Hi should be rejected, and the sample should be discarded at once, before the calculation of all the weights Wj(k) or
is done.
If , with
, the following test can be used instead of (48):
(49)
2.5. Incorporation of the derivative of the process of quadratic variation into the set of observed data. Systems with some additional precise observations
In the case of continuous-time observation, the process of quadratic variation can be included in the set of observed processes, as demonstrated in Section 2.3.
For simplicity of notation, consider the system of one-dimensional processes x(t), y(t):
(50)
(51)
where w1(t), w2(t) are independent Wiener processes, the functions m(x, t), h(x, t) satisfy a Lipschitz condition, and the functions , σ2(x, t) are twice differentiable with respect to x. The process
(52)
can be considered theoretically as an observed process, given the trajectory for t0 ≤ s ≤ t. Here
The process V(t) obeys the ordinary differential equation
For continuous-time observation and multidimensional diffusion processes , in some cases the possible values of x(t) are precisely determined by equalities similar to (52) (and, consequently, x(t) can be restored precisely, at least theoretically), while in other cases the trajectory x(s) must be localized in some “layer” where those equalities are satisfied, given the observed path of the process of quadratic variation, V(s), for t0 ≤ s ≤ t.
Theoretically, it is possible to consider the derivative as an observed process, and incorporate it into the set of observed components:
. For the system (50), (51), the process u(t) satisfies the following stochastic differential equation (in the sense of Ito):
(53)
The above SDE (53) could be appended to the system of SDEs (50), (51) for x(t), y(t).
But in practice we cannot assume that the process u(t) (or, equivalently, the process ) is directly observed and known precisely. In practice, the process V(tk) is not given precisely. Even if the deviations of the “observed quadratic variation” V(obs)(tk) from V(tk) are small, the estimates of the derivative,
, would inevitably be corrupted by errors.
However, if additional observations of y(t) are available, namely, y(t0 + sδt), with δt ≪ Δt, then the estimates
for the values of u(tk) can be improved. Suppose that there is a special digital device that takes measurements of
at times
, with
, and use them in order to calculate the estimate
. For the moment, consider the case in which we had
. Then the filtering problem (based on observations y(tk), u(tk) taken at discrete times tk, with small time steps Δtk) could be solved effectively with the use of the new simulation algorithm (17), the estimates (26), and the algorithms suggested in the present paper. For the system (50), (51), and (53), we obtain in (17): b11 - b1σcσρbρ1 ≡ 0. That means that the sample paths
would be deterministically continued given the observations
and the initial values xi(t1), with i = 1, …, N. It is advisable to choose the probability distribution of the initial random samples xi(t1) in such a way that
, with some random scattering. Meanwhile, the weights
are being defined by (25), (26) with the use of observations
. Thus, the a posteriori distribution is being determined not only by
but also by
. Then, as shown in Section 2.2, the estimate (26) converges with probability one (due to the Strong Law of Large Numbers, as N → ∞) to the sought a posteriori expectation, which provides the solution to the filtering problem. That algorithm could be implemented with the use of parallel computing.
The random errors of the obtained estimates can be taken into account in the following way. The random deviations
can be approximately described as Gaussian random variables, independent of each other for different k. Then
can be considered as measurements (taken at discrete times ) of the process
that satisfies the following SDE:
(54)
where is an independent Wiener process, and
is proportional to the variance of
, i.e.
. Thus, σ3 can be determined as some function of x, t. Then the solution of the filtering problem for the system (50), (51), (54) can be obtained in a similar way as above.
Note that the systems that are similar to (50), (51), (53) arise if the measurements of some given function are known without errors. Here the function H(x, t) is supposed to be twice differentiable with respect to x. Then the solution of the filtering problem (given observations of y(t) and
at discrete times tk = t0 + kΔt, with small time step Δt) can be obtained with the use of the algorithm of simulation (17) and the estimates (26), similarly to the solution described above for the system (50), (51), (53).
2.6. The problem of nonlinear filtering of a signal. Change of the mathematical model that describes the process of observations
For simplicity of notation, consider one-dimensional processes x(t), y(t). In most works devoted to nonlinear filtering problems, the following hidden Markov model is accepted for the description of a signal process x(t) and an observed process y(t):(55)
(56)
where w1(t), w2(t) are independent Wiener processes, σ2 ≡ const, the functions m(x, t), σ1(x, t) satisfy a Lipschitz condition, and the functions h(x, t), σ1(x, t) are twice differentiable with respect to x.
In many applications, it would be possible to describe an observed process Y(t) as follows:(57)
where u(t) represents the errors of measurements. The “noise process” u(t) can be considered as a Gaussian Markov process, which can be described by the following stochastic differential equation:(58)
where β and σ3 are constant, and w3(t) is an independent Wiener process. Then , the variance
, and the correlation function
.
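A minimal sketch of simulating the colored-noise process (58) by an Euler scheme, with a check of the stationary variance σ3²/(2β); the step size, horizon, and seed are illustrative assumptions:

```python
import math
import random

def simulate_ou(beta, sigma3, dt, n_steps, u0=0.0, rng=None):
    """Euler scheme for du = -beta*u dt + sigma3 dw3; the stationary
    variance is sigma3**2 / (2*beta), and correlations decay roughly as
    exp(-beta * |t - s|)."""
    rng = rng or random.Random()
    u, path = u0, [u0]
    for _ in range(n_steps):
        u += -beta * u * dt + sigma3 * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(u)
    return path

path = simulate_ou(beta=1.0, sigma3=1.0, dt=0.01, n_steps=100000,
                   rng=random.Random(7))
mean = sum(path) / len(path)
var = sum((u - mean) ** 2 for u in path) / len(path)
```

For β = σ3 = 1 the empirical variance of a long path settles near 0.5, the stationary value quoted above.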
Denote ,
,
.
Then the model (55), (57), (58) of the signal process x(t) and the observed process Y(t) can be described as follows:(59)
Here z3(t) represents the observed process, and z1(t), z2(t) are unobservable components of the process . It is easy to see that the diffusion coefficients
,
, b12 = 0,
,
,
. Consider the matrix ∥ rαβ ∥ = ∥ bαβ - bασcσρbρβ ∥ , which was introduced in (15): This is the variance–covariance matrix for the conditional probability density of the increments
,
, given z(t) and the increment
. For the system (59) we find:
,
The case when σ3 is small corresponds to small errors of measurement. All the entries of the matrix ∥ rαβ ∥ (shown above) are small if σ3 is small. This means that most of the samples (given
) simulated with the use of the algorithm (17) continue to be localized in the domain where the a posteriori probability density is concentrated. Besides, the test (48) can be implemented (unless the function
is linear with respect to
and σ1 ≡ Const) in order to discard a posteriori improbable samples with the use of “observed quadratic variation” (for the observations made in discrete time) of the observed process z3(t).
Thus, although the dimensionality of the unobservable process is higher for the model (59) than for the model (55), (56), the problem of nonlinear filtering (with observations made at discrete times) can be solved effectively with the use of the new Monte Carlo estimates (23)–(27) and the new simulation algorithm (17) derived in the present paper.
Note that the model (59) is a hidden Markov model in which the unobservable process is a Markov process in itself. But if the samples
were simulated as trajectories of the Markov process
in itself (as suggested in the particle filters developed and studied in the literature), then, in the case of a low level of noise intensity (i.e. if σ3 is small), a large part of the generated samples would be localized in the domain of low a posteriori probability (and that part grows as
decreases). Then, in order to obtain feasible filtering accuracy, the number N would have to be increased significantly as the value of σ3 decreases. Meanwhile, a large amount of computation would be wasted on processing samples of low a posteriori probability. Thus, this example (the system (59)) also demonstrates the advantage of the new algorithms developed in the present paper over the known algorithms for hidden Markov models.
3. Conclusion
The new results obtained and presented by this author are briefly summarized below.
(1) It was first analytically proved by this author in the book (Khazen, 2009) that the increment of a multidimensional continuous diffusion Markov process z(t),
(2) The new Monte Carlo estimates of functions
(3) For the first time, tests are developed in order to discard at once the sample sequences
(4) Branching sampling procedures are developed. The standard Sequential Importance Resampling (SIR) procedure is generalized to the general case of a partially observed multidimensional diffusion Markov process, where the new algorithm (17) of simulating samples should be used. New versions of branching resampling procedures are proposed, which can be implemented more easily with the use of parallel computing.
(5) An analytical investigation of observed quadratic variations is developed (for the case when the diffusion coefficients of the observed components, y(t), depend on the unobservable components x(t)) (see Section 2.3). For the first time, analytical formulae (in explicit and closed form) that determine the dispersions of the deviations of observed quadratic variations are obtained (see (38), (39)), and it is also proved that the deviations asymptotically (as Δt → 0) obey a Gaussian distribution.
(6) The important particular case of nonlinear filtering of a signal is considered in Section 2.6. A significant advantage of the new algorithms and estimates obtained in the present paper over the particle filters suggested earlier in the literature for hidden Markov models is demonstrated.
(7) In Section 2.5 it is demonstrated that the new filtering algorithms developed in the present paper make it possible to incorporate the observed quadratic variations into the set of observed data. They also make it possible to use additional precise observations when such observations are available. These results are new, and they also confirm the advantage of the proposed solution of the nonlinear filtering problem.
The obtained new algorithms and estimates extend the range of applications of particle filtering beyond Hidden Markov Models and improve performance.
Notes on contributors
Ellida M. Khazen
Ellida M. Khazen graduated from Department of Mathematics of Lomonosov Moscow State University (Moscow, Russia, USSR) with Honor Diploma in 1959 (Master Degree). She received her PhD Degree in Mathematics from Department of Mathematics of Lomonosov Moscow State University in 1962. She received her Doctor Degree in Applied Mathematics in 1994. She was working as Senior Scientist Researcher Mathematician at Moscow Scientific Research Institute of Device Automation in 1962–1996, and also as a Visiting Lecturer at Department of Mathematics of Lomonosov Moscow State University and as a Visiting Lecturer at Moscow Institute of Radio Engineering, Electronics and Automation (MIREA). She has published two scientific monographs (in Russian and in English), chapters in books (in Russian), and more than 50 scientific research papers in the areas of the theory of turbulence onset, the theory of random processes, filtering and signal detection, optimal statistical decisions and statistical sequential analysis, informational estimation of risks, and the theory of optimal control.
References
- Bernstein, S. N. (1934/1964). Principes de la theorie des equations differentielles stochastiques. Proceeding of the Phys.-Mat. Steklov Institute, 5, 95–124. Republished in Collected works of S.N. Bernstein (Vol. 4). Nauka: Moscow Publishing House ( in Russian).
- Bernstein, S. N. (1938/1964). Equations differentielles stochastiques. Actualités Scientifiques et Industrielles, 738, 5– 31. Conference International Sci. math. Univ. Geneve. Theorie des probabilites, V. Les fonctions aleatoires. Republished in Collected works of S.N. Bernstein, vol.4. Nauka: Moscow Publishing House. ( in Russian).
- Carvalho, H., Del Moral, P., Monin, A., & Salut, G. (1997). Optimal nonlinear filtering in GPS/INS integration. IEEE Transactions on Aerospace and Electronic Systems, 33, 835–850. doi:10.1109/7.599254
- Crisan, D., & Doucet, A. (2002). A survey of convergence results on particle filtering methods for practitioners. IEEE Transactions on Signal Processing, 50, 736–746. doi:10.1109/78.984773
- Del Moral, P. (1998). Measure-valued processes and interacting particle systems. Application to nonlinear filtering problems. Annals of Applied Probability, 8, 438–495.
- Del Moral, P., & Doucet, A. (2014). Particle methods: An introduction with applications. ESAIM: Proceedings, 44, 1–46. doi:10.1051/proc/201444001
- Doob, J. L. (1953). Stochastic processes. John Wiley & Sons, Chapman & Hall, New York, NY, London.
- Doucet, A., & Johansen, A. (2011). A tutorial on particle filtering and smoothing: Fifteen years later. In D. Crisan & B. Rozovsky (Eds.), The Oxford handbook of nonlinear filtering (39 pp.). Oxford: Oxford University Press. Retrieved from http://www.stats.ox.ac.uk/~doucet/doucet_johansen_tutorialPF2011.pdf
- Doucet, A., Godsill, S., & Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197–208. doi:10.1023/A:1008935410038
- Doucet, A., Freitas, N., & Gordon, N. (Eds.). (2001). Sequential Monte Carlo methods in practice. Information science and statistics. New York, NY: Springer Verlag.
- Gnedenko, B. V. (1976). The theory of probability. Moscow: Mir.
- Hendeby, G., Karlsson, R., Gustafsson, F. (2010). Particle Filtering: The Need for Speed. EURASIP Journal of Advances in Signal Processing, 2010, Article ID 181403. doi:10.1155/2010/181403
- Hull, J. C. (2000). Options, Futures, & Other derivatives (4th ed.). Upper Saddle River, NJ: Prentice Hall.
- Khazen, E. M. (1968). Methods of optimal statistical decisions and optimal control problems (in Russian). Moscow: Soviet Radio.
- Khazen, E. M. (1971). On stochastic differential equations for the a posteriori probability distribution in problems of adaptive filtering and signal detection. Automation and Remote Control, 32, 1776–1782.
- Khazen, E. M. (1977). Chapters 2, 3, 5 by Khazen, E.M., pp. 47-66, 67-102, 193-243, in the book: Petrov, B.N., Ulanov, G.M., Ulyanov, S.V., Khazen, E.M. Information theory in processes of optimal control and organization. Nauka: Moscow Publishing House. ( in Russian).
- Khazen, E. M. (2009). Methods of optimal statistical decisions, optimal control, and stochastic differential equations. Bloomington, IN: Xlibris.
- Kolmogorov, A.N. (1931/1986). Uber die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math. Ann., Bd. 104, S. 413-458. Russian translation in “Uspekhi Mat. Nauk”, 1938, issue 5, pages 5-41. Republished in Collected works of A.N. Kolmogorov, The Theory of probability and mathematical statistics. Nauka: Moscow Publishing House. ( in Russian).
- Kolmogorov, A.N. (1933/1950). Foundations of the Theory of Probability. Published at first in German as “Grundbegriffe der Wahrscheinlichkeitsrechnung” in (1933). Russian editions in 1936, 1974 (English ed.). New York, NY: Chelsea.
- Kolmogorov, A. N. (1933/1986). Zur Theorie der stetigen zufalligen Prozesse. Math. Ann., Bd. 108, S. 149–160. Republished in Collected works of A.N. Kolmogorov, The Theory of Probability and Mathematical Statistics. Nauka: Moscow Publishing House. ( in Russian).
- Kushner, H. (1971). Introduction to stochastic control. New York, NY: Holt, Rinehart, and Winston.
- Míguez, J., Crisan, D., & Djurić, P. (2013). On the convergence of two sequential Monte Carlo methods for maximum a posteriori sequence estimation and stochastic global optimization. Statistics and Computing, 23, 91–107. doi:10.1007/s11222-011-9294-4
- Thrun, S., Fox, D., Burgard, W., & Dellaert, F. (2001). Robust Monte Carlo localization for mobile robots. Artificial Intelligence, 128, 99–141. doi:10.1016/S0004-3702(01)00069-8