
State-of-the-art stochastic data assimilation methods for high-dimensional non-Gaussian problems

Pages 1-43 | Received 05 Jul 2017, Accepted 19 Feb 2018, Published online: 21 Mar 2018

Abstract

This paper compares several commonly used state-of-the-art ensemble-based data assimilation methods in a coherent mathematical notation. The study encompasses different methods that are applicable to high-dimensional geophysical systems, such as the ocean and atmosphere, and that provide an uncertainty estimate. Most variants of Ensemble Kalman Filters, Particle Filters and second-order exact methods are discussed, including Gaussian Mixture Filters, while methods that require an adjoint model or a tangent linear formulation of the model are excluded. The detailed description of all the methods in a mathematically coherent way provides both novices and experienced researchers with a unique overview and new insight into the workings and relative advantages of each method, theoretically and algorithmically, even leading to new filters. Furthermore, the practical implementation details of all ensemble and particle filter methods are discussed to show similarities and differences between the filters, aiding users in deciding which method to use when. Finally, pseudo-codes are provided for all of the methods presented in this paper.

1. Introduction

Data assimilation (DA) is the science of combining observations of a system, including their uncertainty, with estimates of that system from a dynamical model, including its uncertainty, to obtain a new and more accurate description of the system including an uncertainty estimate of that description. The uncertainty estimates point to an efficient description in terms of probability density functions, and in this paper we discuss methods that perform DA using an ensemble of model states to represent these probability density functions.

Ensemble Kalman filters are currently highly popular DA methods that are applied to a wide range of dynamical models, including oceanic, atmospheric, and land surface models. The increasing popularity of ensemble Kalman filter (EnKF) methods in these fields is due to the relative ease of the filter implementation, increasing computational power, and the natural evolution of the forecast error with the dynamical model in time in EnKF schemes. However, due to technological and scientific advances, three significant developments have occurred in the last decade that force us to look beyond standard Ensemble Kalman Filtering, which is based on linear and/or Gaussian assumptions. Firstly, the continuous increase in computational capability has recently allowed operational models to be run at high resolutions, so that the dynamical models have become increasingly non-linear due to the direct resolution of small-scale non-linear processes, e.g. small-scale turbulence. Secondly, in several geoscientific applications, such as atmosphere, ocean, land surface, hydrology and sea ice, it is of interest to estimate variables or parameters that are bounded, requiring DA methods that can deal with non-Gaussian distributions. Thirdly, the observational network around the world has increased manyfold for weather, ocean and land surface areas, providing more information about the real system with greater accuracy and higher spatial and temporal resolution. Often the observation operators that connect model states to these new observations are non-linear, again calling for non-Gaussian DA methods. Thus, research in non-linear DA methods, which can be applied to high-resolution dynamical models and/or complex observation operators, has seen major developments in the last decade, with the aim to understand how existing ensemble methods cope with non-linearity in the models, to develop new ensemble methods that are more suited to non-linear dynamical models, and to explore non-linear filters that are not limited to Gaussian distributions, such as particle filters or hybrids between particle and ensemble Kalman filters.

The origin of this paper lies in the EU-funded research project SANGOMA (Stochastic Assimilation for the Next Generation Ocean Model Applications). The project focused on generating a coherent and transparent database of current ensemble-based data assimilation methods and on developing data assimilation tools suitable for non-linear and high-dimensional systems, concentrating on methods that do not require tangent linear approximations of the model or its adjoint. The methods described within this paper have been applied in operational oceanography, for example in the TOPAZ system (Sakov et al., Citation2012) or the FOAM system of the UK Met Office (see Blockley et al., Citation2014). While TOPAZ already uses an EnKF, FOAM applies an optimal interpolation scheme that takes in less dynamical information to estimate the error covariance matrix. That said, this paper is aimed at a very broad audience, and the data assimilation methods discussed here are not limited to applications to ocean or atmosphere models; hence, the methods are presented without the context of any specific dynamical model, allowing the reader to make the most of each technique for their specific application.

A number of reviews have been published recently, each collating parts of the development in data assimilation, e.g. Bannister (Citation2017) gives a comprehensive review of operational methods of variational and ensemble-variational data assimilation, Houtekamer and Zhang (Citation2016) review the ensemble Kalman filter with a focus on application to atmospheric data assimilation, Law and Stuart (Citation2012) review variational and Kalman filter methods, and Bocquet et al. (Citation2010) present a review of concepts and ideas of non-Gaussian data assimilation methods and discuss various sources of non-Gaussianity. The merits of this paper lie in:

  • coherent mathematical description of the main methods that are used in the current data assimilation community for application to high-dimensional and non-Gaussian problems, allowing the reader to easily see the differences between the methods and compare them;

  • discussing ensemble Kalman Filters, particle filters, second-order exact filters and Gaussian Mixture Filters within the same paper using consistent notation;

  • inclusion of practical application aspects of these methods, discussing computational cost, parallelising, localisation and inflation techniques;

  • provision of pseudo-code algorithms for all of the presented methods;

along with inclusion of recent developments, such as the error-subspace transform Kalman filter (ESTKF) and recent particle filters, this paper goes beyond earlier reviews (e.g. Tippett et al., Citation2003; Hamill et al., Citation2006; van Leeuwen, Citation2009; Houtekamer and Zhang, Citation2016; Bannister, Citation2017).

The paper is organised as follows: in Section 2 the common ground through Bayes theorem is established, in Section 3 a historical overview is given for both ensemble Kalman and particle filter fields, and in Section 4 we define the basic problem solved by all of the methods presented in this paper. In Section 5 we discuss the most popular types of ensemble Kalman filter methods. Then, in Section 6, we discuss several particle filter methods that can be applied to high-dimensional problems. In Section 7, we describe ensemble filters with second-order accuracy, namely the particle filter with Gaussian resampling (PFGR), the non-linear ensemble transform filter (NETF), and the moment-matching ensemble filter. The Gaussian mixture filter is discussed in Section 8. The practical implementation of the filters including localisation, inflation, parallelisation and the computation cost as well as the aspect of non-linearity are discussed in Section 9. Finally, Appendix 1 provides pseudo codes for resampling techniques often used in particle filter methods, and Appendix 2 contains pseudo codes for all of the methods discussed in this paper.

We note that many of the filters discussed in this paper are available freely from the SANGOMA project website, along with many other tools valuable and/or necessary for data assimilation systems.

1.1. Notation

In the data assimilation community the currently most accepted notation is described in Ide et al. (Citation1997). We adhere to this notation where possible, while also making this paper accessible and intuitive not only to data-assimilation experts but also to a wider audience, including those who might like to explore data assimilation methods simply as tools for their specific needs. To this end, throughout this paper dimensions will always be denoted by a capital N with a subscript indicating the space in question, that is

  • Nx - dimension of state space;

  • Ny - dimension of observation space;

  • Ne - dimension of ensemble/particle space.

Further, the time index is always denoted in parentheses in the upper right corner of the variables, i.e. $\cdot^{(m)}$, except for operators such as M (dynamical model) and H (observation operator), where it is in the lower right corner. However, we will omit the time index when possible to ease the notation. We will refer to each ensemble member (or each particle) by $\mathbf{x}_j$, where the index $j = 1,\dots,N_e$ and $N_e$ is the total number of ensemble members (or particles).

When discussing Bayesian theory in Sections 2 and 6, purely random variables will be denoted by capital letters, and fixed, deterministic or observed quantities will be denoted by lowercase letters. Probability density functions will be denoted by $p(\cdot)$ and $q(\cdot)$, and we will use lowercase arguments in this context.

Throughout the paper, Greek letters will refer to various errors, e.g. observational or model errors. Finally, bold lowercase letters will denote vectors and bold uppercase letters will denote matrices.

2. Common ground through Bayes theorem

Various types of data assimilation methods, e.g. variational, ensemble Kalman filters, particle filters, etc., have originated from different fields and backgrounds, due to the needs of a particular community or application. However, all of these methods can be unified through Bayes theorem. In this section, we will give a summary of Bayes theorem, showing how both ensemble Kalman filter (EnKF) methods and particle filter (PF) methods are linked in this context and what problems each of them solve. For an introduction to Bayesian theory for data assimilation, the reader is referred to e.g. van Leeuwen and Evensen (Citation1996) and Wikle and Berliner (Citation2006).

Data assimilation is an approach for combining observations with model forecasts to obtain a more accurate estimate of the state and its uncertainty. In this context, we require

  • data, that is observations y and a knowledge of their associated error distributions and

  • a prior, that is a model forecast of the state, xf, and knowledge of the associated forecast and model errors;

to obtain the posterior, i.e. the analysis state $\mathbf{x}^a$, and its associated error. The posterior can be computed through Bayes theorem, which states that

(1) $p(x|y) = \dfrac{p(y|x)\,p(x)}{p(y)},$

where $p(x|y)$ is the posterior or analysis probability density function, $p(y|x)$ is the observation probability density function, also called the likelihood, $p(x)$ is the prior or forecast probability density function, and $p(y)$ is the marginal probability density function of the observations, which can be thought of as a normalising constant. From now on, for ease of readability, we will abbreviate ‘probability density function’ as ‘pdf’.

Typically, data assimilation methods make the Markovian assumption for the dynamical model M and the conditional independence assumption for the observations. That is, we assume that the model state, or the prior, at time m, when conditioned on all previous states, only depends on the state at time m-1,

(2) $p\left(x^{(0:T)}\right) = p\left(x^{(0)}\right)\prod_{m=1}^{T} p\left(x^{(m)}|x^{(m-1)}\right).$

Here, the superscript 0:T is to be read as the time indices from the initial time to time T, which is typically called the assimilation window in data assimilation. Further, observations are also usually assumed to be conditionally independent, i.e. they are assumed to be independent in time,

(3) $p\left(y^{(1:T)}|x^{(0:T)}\right) = \prod_{m=1}^{T} p\left(y^{(m)}|x^{(m)}\right).$

Using Equations (2) and (3) we can rewrite Bayes theorem in Equation (1) as

(4) $p\left(x^{(0:T)}|y^{(1:T)}\right) \propto p\left(x^{(0)}\right)\prod_{m=1}^{T} p\left(y^{(m)}|x^{(m)}\right)\,p\left(x^{(m)}|x^{(m-1)}\right).$

The Markovian assumption allows us to use new observations as they become available by updating the previous estimate of the state process without having to start the calculations from scratch. This is called sequential updating and the methods described in this paper all follow this approach.

Ensemble Kalman filter methods solve this problem using Gaussian assumptions for both prior and likelihood pdf’s. Multiplying two Gaussian pdf’s leads again to a Gaussian pdf, i.e. the posterior or analysis pdf will also be Gaussian. The posterior pdf will have only one global maximum, which will correspond to the ensemble mean (also mode and median since the pdf is Gaussian). In other words, the posterior pdf in ensemble Kalman filter methods described in Section 5 is found in terms of the first two moments (mean and covariance) of the prior and likelihood pdf’s. This is also true when ensemble Kalman filters are applied to non-linear dynamical models or observation operators, in which case the information from higher moments in an ensemble KF analysis update is ignored. This is a shortcoming in ensemble Kalman filters when applied to non-Gaussian problems. However, in general ensemble Kalman filter methods are robust when applied to non-linear models and catastrophic filter divergence, where the filter deviates strongly from the observations while producing unrealistically small error estimates, occurs mainly due to sparse or inaccurate observations (Verlaan and Heemink, Citation2001; Tong et al., Citation2016). It should, of course, be realised that in non-linear settings the estimates of the posterior mean and covariance might be off.

In particle filter methods, the posterior is obtained using the prior and likelihood pdf’s directly in Equation (1) without restricting them to being Gaussian. If both prior and likelihood are Gaussian, the resulting posterior or analysis pdf is also Gaussian and has a global maximum corresponding to the mean state. However, if either or both of the prior and likelihood pdf’s are non-Gaussian, then the resulting posterior pdf will also not be Gaussian. In other words, if the dynamical model or the mapping of the model variables to observation space is non-linear, then particle filter methods will produce an analysis pdf which provides knowledge of more than the first two statistical moments (mean and covariance), in contrast to ensemble Kalman filter methods. Thus, the analysis pdf can be skewed, multi-modal or of varying width in comparison to a Gaussian pdf. Hence, particle filters are, by design, able to produce analysis pdf’s for non-Gaussian problems. While standard particle filter methods suffer from filter divergence for large problems, several particle filter variants have recently been developed that avoid this divergence.
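To make the importance-sampling idea concrete before the detailed discussion in Section 6, the following minimal sketch computes normalised likelihood weights for a set of particles under Gaussian observation errors. It is an illustration only; the variable names (particles, y, H, R) and the toy sizes are our own assumptions, not notation or code from the paper.

```python
import numpy as np

# Minimal sketch (assumed toy setup): likelihood weights of a basic particle
# filter with Gaussian observation errors, normalised to sum to one.
def importance_weights(particles, y, H, R):
    """particles: (Ne, Nx); y: (Ny,); H maps a state vector to observation space."""
    Rinv = np.linalg.inv(R)
    log_w = np.array([-0.5 * (y - H(x)) @ Rinv @ (y - H(x)) for x in particles])
    log_w -= log_w.max()              # subtract the maximum to avoid underflow
    w = np.exp(log_w)
    return w / w.sum()                # normalised weights

# Toy usage: 10 particles of a 3-variable state, observing the first variable only.
rng = np.random.default_rng(0)
particles = rng.normal(size=(10, 3))
H = lambda x: x[:1]
w = importance_weights(particles, np.array([0.5]), H, 0.1 * np.eye(1))
print(w.sum())                        # 1.0
```

When one weight dominates all others, this is exactly the degeneracy problem mentioned above for high-dimensional systems.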

In what follows, we will describe numerous filtering methods in Sections 5, 6, 7, and 8 and discuss how each method attempts to produce an analysis pdf for non-Gaussian and high-dimensional problems. However, we first provide an overview of the historical development of both ensemble Kalman filters and particle filter methods to show how these fields have evolved and what has given rise to the development of each of the methods.

3. History of filtering for data assimilation

Before we proceed to the main point of our paper – describing in unified notation the current state-of-the-art ensemble and particle filter methods for non-linear and non-Gaussian applications, their implementation, and practical application – a short summary is in order of the historical developments in both the ensemble Kalman filter and particle filter areas.

3.1. Development history of ensemble Kalman filters

Ensemble data assimilation (EnDA) started in 1994 with the introduction of the Ensemble Kalman filter (EnKF, Evensen (Citation1994)). The use of perturbed observations was introduced a few years later simultaneously by Burgers et al. (Citation1998) and Houtekamer and Mitchell (Citation1998) to correct the previously too low spread of the analysis ensemble. This filter formulation defines today the basic ’Ensemble Kalman filter’, which we will denote as the Stochastic Ensemble Kalman Filter, with a slightly different interpretation and implementation, as will be described later. The first alternative variant of the original EnKF was introduced by Pham et al. (Citation1998a) in the form of Singular ’Evolutive’ Interpolated Kalman (SEIK) filter. The SEIK filter formulates the analysis step in the space spanned by the ensemble and hence is computationally particularly efficient. In contrast to the EnKF, which was formulated as a Monte Carlo method, the SEIK filter was designed to find the analysis ensemble by writing each posterior member as a linear combination of prior members without using perturbed observations. Another ensemble Kalman filter that uses the space spanned by the ensemble was introduced with the Error-Subspace Statistical Estimation (ESSE) method (Lermusiaux and Robinson, Citation1999).

The filters mentioned above were all introduced for data assimilation in oceanographic problems. A new set of filter methods was introduced during the years 2001 and 2002 for meteorological applications. The Ensemble Transform Kalman Filter (ETKF, Bishop et al., Citation2001) was first introduced in the context of meteorological adaptive sampling. Further, the Ensemble Adjustment Kalman Filter (EAKF, Anderson, Citation2001) and the Ensemble Square Root Filter (EnSRF, Whitaker and Hamill, Citation2002) were introduced. The motivation for these three filters was to avoid the use of perturbed observations, which were found to introduce additional sampling error into the filter solution, with the meteorological community apparently being unaware of the development of the SEIK filter. The new filters were classified as ensemble square root Kalman filters and presented in a uniform notation by Tippett et al. (Citation2003). Nerger et al. (Citation2005a) further classified the EnKF and SEIK filters as error-subspace Kalman filters because these filters compute the correction in the error subspace spanned by the ensemble. This likewise holds for the ETKF, EAKF, and EnSRF; however, these filters do not explicitly use a basis in the error subspace but use the ensemble to represent the space. When the EAKF and EnSRF formulations are used to assimilate all observations at once, these filters exhibit a much larger computational cost compared to the ETKF. To reduce the cost, the original study on the EnSRF (Whitaker and Hamill, Citation2002) already introduced a variant in which observations are assimilated sequentially, which assumes that the observation errors are uncorrelated. A similar serial formulation of the EAKF was introduced by Anderson (Citation2003). This sequential assimilation of observations was assessed by Nerger (Citation2015), who showed that this formulation can destabilise the filtering process in cases where the observations have a strong influence.

With regard to the classification as an ensemble square root Kalman filter, the SEIK filter is the first filter method that was clearly formulated in square root form. The original EnKF uses the square root form only implicitly but an explicit square root formulation of the EnKF was presented by Evensen (Citation2003).

The methods above all solve the original equations of the Kalman filter but use the sample covariance matrix of the ensemble to represent the state error covariance matrix. An alternative was introduced with the Maximum Likelihood Ensemble Filter (MLEF, Zupanski, Citation2005). This filter represents the first variant of the class of hybrid filters that were introduced in later years. The filter computes the maximum a posteriori solution (in contrast to the minimum-variance solution of the Kalman filter) by an iterative scheme.

While the EnKFs were very successful in making the application of the Kalman filter feasible for the high-dimensional problems in oceanography and meteorology, the affordable ensemble size was always very limited. To counter the issue of sampling error in ensemble covariances (the ensemble-sampled covariance has a rank of not more than the ensemble size minus one while the applications were of very high state dimension) the method of covariance localisation was introduced by Houtekamer and Mitchell (Citation1998) and Houtekamer and Mitchell (Citation2001). Later, an alternative localisation was introduced for the ETKF (LETKF, Hunt et al., Citation2007) which uses a local analysis (also used previously, e.g. by Cohn et al. (Citation1998)) where observations are down-weighted with increasing distance from the local analysis point through a tapering of the inverse observation covariances.

The relationship between the SEIK filter and the ETKF was investigated by Nerger et al. (Citation2012a). The study led to a new filter formulation, the Error-Subspace Transform Kalman Filter (ESTKF), which combines the advantages of both filter formulations.

The filters mentioned above represent main developments of the ensemble Kalman filters. However, there are many other developments, which are not included here. Some of them are discussed in the sections below, in particular with regard to localisation. Overall, while there are different reviews of selections of ensemble Kalman filters, a complete and coherent overview of the different methods is still missing.

3.2. Development history of particle filters

Particle filters, like ensemble Kalman filters, are variants of Monte Carlo methods in which the probability distribution of the model state given the observations is approximated by a number of particles; however, unlike ensemble Kalman filters, particle filters are fully non-linear data assimilation techniques. From a sampling point of view, Ensemble Kalman Filters draw samples directly from the posterior since the probability distribution function (pdf) is assumed to be a Gaussian. In a particle filter application, the shape of the posterior is not known, and hence one cannot sample directly from it. In its simplest form, samples are generated from the prior after which importance sampling is employed to turn them into samples from the posterior where each sample is weighted with its likelihood value.

Particle filters emerged before ensemble Kalman filters, and when Gordon et al. (Citation1993) introduced resampling in the sequential scheme the method became mainstream in non-linear filtering. This basic scheme has been made more efficient for specific applications in numerous ways, such as looking ahead or adding small perturbations to resampled particles to avoid identical copies (see Doucet et al., Citation2001 for a very useful review of the many methods available at that time). Attempts to apply the particle filter to geophysical systems date back to 1996 (van Leeuwen and Evensen, Citation1996), with the first partially successful application by van Leeuwen (Citation2003a). However, until recently, particle filters have been deemed computationally unfeasible for large-dimensional systems due to the filter degeneracy problem (Bengtsson et al., Citation2008; Snyder et al., Citation2008; van Leeuwen, Citation2009). This means that the likelihood weights vary substantially between the particles when the number of independent observations is large, such that one particle obtains a weight close to one, while all the others have weights very close to zero. New developments in the field have generated particle filter variants that have been shown to work for large-dimensional systems with a limited number of particles. These methods can be divided into two classes: those that use localisation (starting with van Leeuwen, Citation2003b; Bengtsson et al., Citation2003), followed more recently by local variants of the ensemble transform particle filter (ETPF, Reich, Citation2013) and the Local Particle Filter (Poterjoy, Citation2016a), and those that exploit the future observational information via proposal densities, such as the Implicit Particle Filter (Chorin and Tu, Citation2009), the Equivalent Weights Particle Filter (EWPF, van Leeuwen, Citation2010; van Leeuwen, Citation2011; Ades and van Leeuwen, Citation2013), and the Implicit Equal Weights Particle Filter (IEWPF, Zhu et al., Citation2016).

In another development, second-order exact filters have been developed that ensure that the first two moments of the posterior pdf are consistent with the particle filter, and higher-order moments are not considered. The first paper of this kind was the Particle Filter with Gaussian Resampling of Xiong et al. (Citation2006), followed by the Merging Particle Filter (Nakano et al., Citation2007) and the Moment Matching Ensemble Filter (Lei and Bickel, Citation2011). All these filters seem to have been developed independently. The Non-linear Ensemble Transform Filter (Tödter and Ahrens, Citation2015) can be considered a local version of the filter by Xiong et al. (Citation2006), ironically again developed independently.

A further approximation to particle filtering is the Gaussian Mixture Filter first introduced in the geosciences by Bengtsson et al. (Citation2003), followed by the adaptive Gaussian mixture filter variants (Hoteit et al., Citation2008; Stordal et al., Citation2011). The advantage of these filters over the standard particle filter is that each particle is ’dressed’ by a Gaussian such that the likelihood weights are calculated using a covariance that is broader than the pure observational covariance, leading to better behaving weights at the cost of reducing the influence of the observations on the posterior pdf (see e.g. van Leeuwen, Citation2009).

4. The problem

Consider the following non-linear stochastic discrete-time dynamical system at a time when observations are available:

(5) $\mathbf{x}^{(m)} = M_m\left(\mathbf{x}^{(m-1)}\right) + \boldsymbol{\beta}^{(m)},$
(6) $\mathbf{y}^{(m)} = H_m\left(\mathbf{x}^{(m)}\right) + \boldsymbol{\beta}_o^{(m)},$

where $\mathbf{x}^{(m)} \in \mathbb{R}^{N_x}$ is the $N_x$-dimensional state vector, $\mathbf{y}^{(m)} \in \mathbb{R}^{N_y}$ is the observation vector of size $N_y \ll N_x$, $M_m : \mathbb{R}^{N_x} \to \mathbb{R}^{N_x}$ is the forward model operator, $H_m : \mathbb{R}^{N_x} \to \mathbb{R}^{N_y}$ is the observation operator, $\boldsymbol{\beta}^{(m)} \in \mathbb{R}^{N_x}$ is the model noise (or error), distributed Gaussian with covariance matrix $\mathbf{Q}^{(m)}$, and $\boldsymbol{\beta}_o^{(m)} \in \mathbb{R}^{N_y}$ is the observation noise (or error), distributed Gaussian with covariance matrix $\mathbf{R}^{(m)}$.

Then we can define an ensemble of model forecasts obtained using Equation (5) for each ensemble member or particle as follows,

(7) $\mathbf{X}^{f,(m)} = \left[\mathbf{x}_1^{f,(m)}, \mathbf{x}_2^{f,(m)}, \dots, \mathbf{x}_{N_e}^{f,(m)}\right] \in \mathbb{R}^{N_x \times N_e},$

where the superscript $f$ stands for forecast.

The aim of the stochastic data assimilation methods is to produce a posterior pdf or analysis distribution of the state, $\mathbf{X}^a$, at the time of the observations by combining the ensemble model forecast $\mathbf{X}^f$ with the observations $\mathbf{y}$. In Section 5, we will discuss ensemble Kalman filter based methods, and in Sections 6–8 we will discuss particle, second-order exact, and adaptive Gaussian mixture filter methods, all achieving this aim through different approaches.
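As an illustration of the setup in Equations (5)–(7), the sketch below builds a small forecast ensemble by propagating each member with a toy non-linear model and adding Gaussian model noise, and generates a synthetic observation. The model M, observation operator H and all dimensions are assumptions chosen purely for illustration.

```python
import numpy as np

# Minimal sketch of Equations (5)-(7) with assumed toy model and dimensions.
rng = np.random.default_rng(42)
Nx, Ny, Ne = 4, 2, 20

def M(x):                       # toy non-linear forward model
    return x + 0.1 * np.sin(x)

def H(x):                       # toy observation operator: first Ny state variables
    return x[:Ny]

Q = 0.01 * np.eye(Nx)           # model error covariance Q
R = 0.10 * np.eye(Ny)           # observation error covariance R

X = rng.normal(size=(Nx, Ne))                                # previous analysis ensemble
beta = rng.multivariate_normal(np.zeros(Nx), Q, Ne).T        # model noise, one column per member
Xf = np.apply_along_axis(M, 0, X) + beta                     # forecast ensemble, Equation (7)

x_true = rng.normal(size=Nx)                                 # synthetic truth
y = H(x_true) + rng.multivariate_normal(np.zeros(Ny), R)     # observation, Equation (6)
print(Xf.shape, y.shape)                                     # (4, 20) (2,)
```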

5. Ensemble Kalman filters

Given an initial ensemble X(0)RNx×Ne, the different proposed variants of the ensemble Kalman filter have the following steps in common:

  • Forecast step: the ensemble members at each time step between the observations, $0 < k \le m$, are propagated using the full non-linear dynamical model: (8) $\mathbf{x}_j^{f,(k)} = M_k\left(\mathbf{x}_j^{f,(k-1)}\right) + \boldsymbol{\beta}_j^{(k)}$, starting from the previous analysis ensemble (if $k=1$, then this would be $\mathbf{x}_j^{f,(1)} = M_1\left(\mathbf{x}_j^{(0)}\right) + \boldsymbol{\beta}_j^{(1)}$), where $j = 1,\dots,N_e$ is the ensemble member index.

  • Analysis step: at the observation time k=m the ensemble forecast mean and covariance are updated using the available observations to obtain a new analysis ensemble.

The various ensemble methods differ in the analysis step. Here we will discuss current methods applicable to large-dimensional systems, namely, the original ensemble Kalman filter (EnKF) (Evensen, Citation1994) with stochastic innovations (Burgers et al., Citation1998; Houtekamer and Mitchell, Citation1998), the singular evolutive interpolated Kalman filter (SEIK) (Pham et al., Citation1998a), the error-subspace statistical estimation (ESSE) (Lermusiaux and Robinson, Citation1999; Lermusiaux et al., Citation2002; Lermusiaux, Citation2007), the ensemble transform Kalman filter (ETKF) (Bishop et al., Citation2001), the ensemble adjustment Kalman filter (EAKF) (Anderson, Citation2001), the original ensemble square root filter (EnSRF) (Whitaker and Hamill, Citation2002) with synchronous and serial observation treatment, the square root formulation of the EnKF (Evensen, Citation2003), the error subspace transform Kalman filter (ESTKF) (Nerger et al., Citation2012a), and the maximum likelihood ensemble filter (MLEF) (Zupanski, Citation2005; Zupanski et al., Citation2008). We will present these methods in square root form and point out the different ways the analysis ensemble is obtained in each of the methods. Tippett et al. (Citation2003) give a uniform framework for EnSRFs, which we follow closely here. In the rest of this section, for ease of notation, we omit the time index $(\cdot)^{(k)}$ since all of the analysis operations are done at time $m$.

The ensemble methods discussed in this section are based on the Kalman filter (Kalman, Citation1960), where the updated ensemble mean follows the Kalman update for the state, given by

(9) $\bar{\mathbf{x}}^a = \bar{\mathbf{x}}^f + \mathbf{K}\left(\mathbf{y} - H(\bar{\mathbf{x}}^f)\right) = \bar{\mathbf{x}}^f + \mathbf{K}\mathbf{d},$

where $\mathbf{d} = \mathbf{y} - H(\bar{\mathbf{x}}^f)$ is the innovation. The ensemble covariance update follows the covariance update equation in the Kalman Filter, given by

(10) $\mathbf{P}^a = (\mathbf{I} - \mathbf{K}\mathbf{H})\mathbf{P}^f,$

where $\mathbf{K}$ is the Kalman gain given by

(11) $\mathbf{K} = \mathbf{P}^f\mathbf{H}^T\left(\mathbf{H}\mathbf{P}^f\mathbf{H}^T + \mathbf{R}\right)^{-1}.$

The matrix $\mathbf{H}$ is the observation operator $H(\cdot)$ linearised at the forecast mean $\bar{\mathbf{x}}^f$. Initially the Kalman filter was derived for a linear observation operator, but in the Extended Kalman Filter the non-linear observation operator is used as above.
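For small systems the update in Equations (9)–(11) can be coded directly from the ensemble sample covariance, as in the following toy sketch (a linear observation operator given as a matrix and all variable names are our own assumptions).

```python
import numpy as np

# Minimal sketch of the Kalman mean update, Equations (9)-(11), for a small
# problem where the sample covariance P^f can be formed explicitly.
def kalman_mean_update(Xf, y, H, R):
    """Xf: (Nx, Ne) forecast ensemble; y: (Ny,); H: (Ny, Nx); R: (Ny, Ny)."""
    Ne = Xf.shape[1]
    xbar = Xf.mean(axis=1)
    Xp = Xf - xbar[:, None]                                 # perturbations X'^f
    Pf = Xp @ Xp.T / (Ne - 1)                               # sample covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)          # Kalman gain, Equation (11)
    d = y - H @ xbar                                        # innovation d
    return xbar + K @ d                                     # analysis mean, Equation (9)
```

In high dimensions this explicit construction of $\mathbf{P}^f$ is avoided, which is exactly the purpose of the square root formulation described next.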

Since for high-dimensional systems it is computationally not feasible to form the error covariance matrix $\mathbf{P}$, the analysis update of the covariance matrix in Equation (10) is formulated in square root form by computing a transform matrix and applying it to the ensemble perturbation matrix, which is a scaled square root of $\mathbf{P}$. The analysis ensemble is then given by

(12) $\mathbf{X}^a = \bar{\mathbf{X}}^a + \mathbf{X}'^a,$

where $\bar{\mathbf{X}}^a = (\bar{\mathbf{x}}^a, \dots, \bar{\mathbf{x}}^a) \in \mathbb{R}^{N_x \times N_e}$ is a matrix with the ensemble analysis mean in each column and the ensemble analysis perturbations $\mathbf{X}'^a$ are a scaled matrix square root of

(13) $\mathbf{P}^a = \dfrac{\mathbf{X}'^a(\mathbf{X}'^a)^T}{N_e - 1}.$

To obtain the general square root form we write, using Equation (10),

(14) $\mathbf{X}'^a(\mathbf{X}'^a)^T = \left[\mathbf{I} - \mathbf{P}^f\mathbf{H}^T\left(\mathbf{H}\mathbf{P}^f\mathbf{H}^T + \mathbf{R}\right)^{-1}\mathbf{H}\right]\mathbf{X}'^f(\mathbf{X}'^f)^T = \mathbf{X}'^f\left[\mathbf{I} - \mathbf{S}^T\mathbf{F}^{-1}\mathbf{S}\right](\mathbf{X}'^f)^T,$

where $\mathbf{S} = \mathbf{H}\mathbf{X}'^f$ is the ensemble perturbation matrix in observation space and

(15) $\mathbf{F} = \mathbf{S}\mathbf{S}^T + (N_e-1)\mathbf{R}$

is the innovation covariance.

It is possible to use a slightly different way to calculate the matrix $\mathbf{S}$ using the non-linear observation operator, as $\mathbf{S} = H(\mathbf{X}^f) - H(\bar{\mathbf{X}}^f)$, in which, with a slight abuse of notation, $H(\mathbf{X}^f) = \left[H(\mathbf{x}_1^f), \dots, H(\mathbf{x}_{N_e}^f)\right]$, and similarly for $H(\bar{\mathbf{X}}^f)$. This can be used in any of the ensemble Kalman filters discussed below.

To find the updated ensemble analysis perturbations $\mathbf{X}'^a$ we need to compute the square root $\mathbf{T}$ of the matrix

(16) $\mathbf{I} - \mathbf{S}^T\mathbf{F}^{-1}\mathbf{S} = \mathbf{T}\mathbf{T}^T,$

where T is called a transform matrix. Different ways exist to compute the transform matrix T and here we will discuss the current methods applicable to large-dimensional systems.

For the ensemble-based Kalman filters presented in this paper we can write the analysis update as linear transformations using a weight vector $\bar{\mathbf{w}}$ for the ensemble mean and a weight matrix $\mathbf{W}$ for the ensemble perturbations as

(17) $\bar{\mathbf{x}}^a = \bar{\mathbf{x}}^f + \mathbf{X}'^f\bar{\mathbf{w}},$
(18) $\mathbf{X}'^a = \mathbf{X}'^f\mathbf{W}.$

Notice that the ensemble analysis perturbation matrix, $\mathbf{X}'^a$, in Equation (18) has zero mean by construction. Further, we note that for most of the methods discussed in this section, the matrix $\mathbf{W}$ is the transform matrix $\mathbf{T}$ in Equation (16). However, this is not the case for the EnKF, SEnKF and MLEF. Further, we can compute the analysis ensemble directly by

(19) $\mathbf{X}^a = \bar{\mathbf{X}}^f + \mathbf{X}'^f\left(\bar{\mathbf{W}} + \mathbf{W}\right),$

where $\bar{\mathbf{W}} = (\bar{\mathbf{w}}, \dots, \bar{\mathbf{w}})$. In the sections below we will derive the weight matrices for each of the ensemble-based Kalman filter methods we discuss. The updated ensemble can then be obtained using Equation (19).
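All filters below therefore reduce to supplying a mean weight vector $\bar{\mathbf{w}}$ and a transform matrix $\mathbf{W}$; the generic update of Equation (19) can then be applied as in this small sketch (a toy helper with assumed inputs, not code from the paper's appendices).

```python
import numpy as np

# Minimal sketch of the generic weight-based update, Equation (19).
def apply_weights(Xf, w_bar, W):
    """Xf: (Nx, Ne) forecast ensemble; w_bar: (Ne,); W: (Ne, Ne)."""
    xbar = Xf.mean(axis=1)
    Xp = Xf - xbar[:, None]                        # forecast perturbations X'^f
    Wbar = np.tile(w_bar[:, None], Xf.shape[1])    # W_bar = (w_bar, ..., w_bar)
    return xbar[:, None] + Xp @ (Wbar + W)         # analysis ensemble X^a
```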

To aid simplicity in discussing the different methods we use the same letter for variables with the same meaning, i.e. $\mathbf{W}$ is always the perturbation analysis transform matrix that transforms $\mathbf{X}'^f$ into $\mathbf{X}'^a$. Clearly, such variables do not necessarily have the same values for the various methods listed below. Thus, we subscript these variables common to all methods with a specific letter for each method. This letter is underlined in the title of each subsection that follows, e.g. for the EnKF we use $\mathbf{W}_N$. Note that some of the variables can have the same values for different methods, though. At the end of this section we will provide a table of the common variables with their dimensions and whether they are equal to the same variable in a different method.

5.1. The Stochastic Ensemble Kalman filter (En̲KF)

The Stochastic EnKF was introduced at the same time by Burgers et al. (Citation1998) and Houtekamer and Mitchell (Citation1998). It modifies the original, under-dispersive EnKF introduced by Evensen (Citation1994) by adding measurement noise to the innovations, so that the filter maintains the correct spread in the analysis ensemble and prevents filter divergence.

Although the scheme was initially interpreted as perturbing the observations, a more consistent interpretation is that the predicted observations are perturbed with the observation noise. The reason for this is that it does not make sense to perturb the observations, since they already contain measurement noise (errors), e.g. from measuring instruments, and have thus already departed from the true state of the system. Also, Bayes theorem (see Section 2) tells us that we need the probabilities of the states given this set of observations, not a perturbed set. The idea is that each ensemble member is statistically equivalent to the true state of the system, and the true observation is a perturbed measurement of the true state. So, to compare that observation with the predicted observations, the latter have to be perturbed with the measurement noise too, to make this comparison meaningful. This reasoning is identical to that used in rank histograms, in which the observation is ranked among the perturbed predicted observations from the ensemble so that they are statistically equivalent.

Each ensemble member individually is explicitly corrected using the Kalman filter equations, and hence the square root form is only implicit, as the transform matrix and its square root are never explicitly computed. In contrast to the other filters, the stochastic EnKF perturbs the predicted observations by forming a matrix

(20) $\mathbf{Y}^f = \left[H(\mathbf{x}_1^f), H(\mathbf{x}_2^f), \dots, H(\mathbf{x}_{N_e}^f)\right] + \mathbf{Y}' \in \mathbb{R}^{N_y \times N_e},$

where the observational noise (perturbation) matrix $\mathbf{Y}'$ is given by

(21) $\mathbf{Y}' = \left[\boldsymbol{\epsilon}_1, \boldsymbol{\epsilon}_2, \dots, \boldsymbol{\epsilon}_{N_e}\right] \in \mathbb{R}^{N_y \times N_e},$

with the noise vectors $\boldsymbol{\epsilon}_j$ drawn from a Gaussian distribution with mean zero and covariance $\mathbf{R}$. We also introduce the observation matrix $\mathbf{Y} = (\mathbf{y}, \mathbf{y}, \dots, \mathbf{y}) \in \mathbb{R}^{N_y \times N_e}$, consisting of $N_e$ identical copies of the observation vector.

The Stochastic EnKF uses the matrix $\mathbf{F}$ defined in Equation (15) with the prescribed matrix $\mathbf{R}$ and proceeds by transforming all ensemble members according to

(22) $\mathbf{X}^a = \mathbf{X}^f + \dfrac{1}{N_e-1}\mathbf{X}'^f\mathbf{S}^T\mathbf{F}_N^{-1}\left(\mathbf{Y} - \mathbf{Y}^f\right).$

Similarly to Equations (17)–(19), this can be written as

(23) $\mathbf{X}^a = \mathbf{X}^f + \mathbf{X}'^f\mathbf{W}_N,$

with

(24) $\mathbf{W}_N = \dfrac{1}{N_e-1}\mathbf{S}^T\mathbf{F}_N^{-1}\left(\mathbf{Y} - \mathbf{Y}^f\right).$

Due to the use of the observation ensemble Y no explicit transformation of the ensemble mean needs to be performed.
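The following sketch shows one possible implementation of this analysis step for a linear observation operator. To stay neutral with respect to the scaling convention of $\mathbf{F}_N$, it computes the ensemble Kalman gain $\mathbf{K} = \mathbf{P}^f\mathbf{H}^T(\mathbf{H}\mathbf{P}^f\mathbf{H}^T + \mathbf{R})^{-1}$ from the forecast perturbations without forming $\mathbf{P}^f$; all inputs and sizes are illustrative assumptions, and the pseudo-algorithm in Appendix 2 remains the authoritative description.

```python
import numpy as np

# Minimal sketch of a stochastic EnKF analysis step (Section 5.1), assuming a
# linear observation operator H given as a matrix. Predicted observations are
# perturbed with draws from N(0, R) and every member is updated with the gain.
def enkf_analysis(Xf, y, H, R, rng):
    """Xf: (Nx, Ne); y: (Ny,); H: (Ny, Nx); R: (Ny, Ny)."""
    Nx, Ne = Xf.shape
    Ny = y.size
    Xp = Xf - Xf.mean(axis=1, keepdims=True)               # forecast perturbations X'^f
    S = H @ Xp                                             # perturbations in observation space
    Eps = rng.multivariate_normal(np.zeros(Ny), R, Ne).T   # observation noise matrix Y'
    Yf = H @ Xf + Eps                                      # perturbed predicted observations, Equation (20)
    Y = np.tile(y[:, None], Ne)                            # Ne copies of the observation vector
    K = (Xp @ S.T / (Ne - 1)) @ np.linalg.inv(S @ S.T / (Ne - 1) + R)  # ensemble Kalman gain
    return Xf + K @ (Y - Yf)                               # update every member

rng = np.random.default_rng(1)
Xa = enkf_analysis(rng.normal(size=(5, 30)), np.array([0.2, -0.1]),
                   np.eye(2, 5), 0.1 * np.eye(2), rng)
```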

Algorithm  in Appendix 2 gives a pseudo-algorithm of the EnKF method.

We note that, while the above description of the stochastic EnKF is widely accepted and implemented, it produces the correct posterior covariance only in a statistical sense, due to the extra sampling errors, while the ensemble mean can be kept unaffected by this sampling error by ensuring that the observation noise matrix $\mathbf{Y}'$ has zero mean. However, in the limit of infinite ensemble size, and when all sources of error (both observation and model) are correctly sampled, the stochastic EnKF does produce the correct posterior covariance (Whitaker and Hamill, Citation2002).

5.2. The singular evolutive interpolated Kalman filter (SE̲IK)

The SEIK filter (Pham et al., Citation1998b; Pham, Citation2001) was the first filter method that allowed for non-linear model evolution and that was explicitly formulated in square root form. The filter uses the Sherman–Morrison–Woodbury identity (Golub and Van Loan, Citation1996) to rewrite $\mathbf{T}\mathbf{T}^T$ (Equation (16)) as

(25) $\mathbf{T}\mathbf{T}^T = \mathbf{I} - \mathbf{S}^T\mathbf{F}^{-1}\mathbf{S} = \left[\mathbf{I} + \dfrac{1}{N_e-1}\mathbf{S}^T\mathbf{R}^{-1}\mathbf{S}\right]^{-1}.$

Note that the performance of this scheme depends on whether the product of the inverse of the observation error covariance matrix, $\mathbf{R}^{-1}$, with a given vector can be computed efficiently, which is for instance the case when the observation errors are assumed to be uncorrelated.

The SEIK filter computes the analysis step in the ensemble error subspace. This is achieved by defining a matrix

(26) $\mathbf{L}_E = \mathbf{X}^f\mathbf{A}_E,$

where $\mathbf{A}_E \in \mathbb{R}^{N_e \times (N_e-1)}$ is a matrix with full rank and zero column sums. Commonly, the matrix $\mathbf{A}_E$ is identified as

(27) $\mathbf{A}_E = \begin{bmatrix} \mathbf{I}_{(N_e-1) \times (N_e-1)} \\ \mathbf{0}_{1 \times (N_e-1)} \end{bmatrix} - \dfrac{1}{N_e}\mathbf{1}_{N_e \times (N_e-1)},$

where $\mathbf{0}$ is a matrix whose elements are all equal to zero and $\mathbf{1}$ is a matrix whose elements are all equal to one (Pham et al., Citation1998b). The matrix $\mathbf{A}_E$ implicitly subtracts the ensemble mean when the matrix $\mathbf{L}_E$ is computed. In addition, $\mathbf{A}_E$ removes the last column of $\mathbf{X}^f$. Thus, $\mathbf{L}_E$ is an $N_x \times (N_e-1)$ matrix that holds the first $N_e-1$ ensemble perturbations. The product of the square root matrices in the ensemble error space now becomes

(28) $\mathbf{T}_E\mathbf{T}_E^T = \left[\mathbf{A}_E^T\mathbf{A}_E + \dfrac{1}{N_e-1}(\mathbf{H}\mathbf{L}_E)^T\mathbf{R}^{-1}(\mathbf{H}\mathbf{L}_E)\right]^{-1}.$

The matrix $\mathbf{T}_E\mathbf{T}_E^T$ is of size $(N_e-1) \times (N_e-1)$. The square root $\mathbf{T}_E$ is obtained from the Cholesky decomposition of $(\mathbf{T}_E\mathbf{T}_E^T)^{-1}$. Then, the ensemble transformation weight matrices in Equations (17)–(19) are given by

(29) $\mathbf{W}_E = \mathbf{A}_E\mathbf{T}_E\boldsymbol{\Omega},$
(30) $\bar{\mathbf{w}}_E = \dfrac{1}{N_e-1}\mathbf{A}_E\mathbf{T}_E\mathbf{T}_E^T(\mathbf{H}\mathbf{L}_E)^T\mathbf{R}^{-1}\mathbf{d}.$

Here, $\boldsymbol{\Omega} \in \mathbb{R}^{(N_e-1) \times N_e}$ is a matrix whose rows (equivalently, the columns of $\boldsymbol{\Omega}^T$) are orthonormal and orthogonal to the vector $(1,\dots,1)^T$. $\boldsymbol{\Omega}$ can be either a random or a deterministic rotation matrix. However, if a deterministic $\boldsymbol{\Omega}$ is used, then Nerger et al. (Citation2012a) show that a symmetric square root of $\mathbf{T}_E\mathbf{T}_E^T$ should be used for a more stable ensemble.
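The sketch below constructs the matrices of Equations (26)–(28) for a linear observation operator: the projection $\mathbf{A}_E$, the error-subspace basis $\mathbf{L}_E$, and a square root $\mathbf{T}_E$ obtained via a Cholesky factorisation. It is a toy illustration with assumed inputs, and it computes one valid square root rather than reproducing the exact route of the pseudo-algorithm in Appendix 2.

```python
import numpy as np

# Minimal sketch of the SEIK error-subspace quantities, Equations (26)-(28).
def seik_transform(Xf, H, Rinv):
    """Xf: (Nx, Ne); H: (Ny, Nx); Rinv: (Ny, Ny) inverse observation error covariance."""
    Nx, Ne = Xf.shape
    A = np.vstack([np.eye(Ne - 1), np.zeros((1, Ne - 1))]) - np.ones((Ne, Ne - 1)) / Ne  # Equation (27)
    L = Xf @ A                                          # error-subspace basis L_E, Equation (26)
    HL = H @ L
    TTt_inv = A.T @ A + (HL.T @ Rinv @ HL) / (Ne - 1)   # (T_E T_E^T)^{-1}, Equation (28)
    T = np.linalg.cholesky(np.linalg.inv(TTt_inv))      # one square root of T_E T_E^T
    return A, L, T
```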

Algorithm  in Appendix 2 gives a pseudo-algorithm of the SEIK method.

5.3. The error-subspace statistical estimation (ES̲SE)

The ESSE (Lermusiaux and Robinson, Citation1999; Lermusiaux et al., Citation2002; Lermusiaux, Citation2007) method is based on evolving an error subspace of variable size, which spans and tracks the scales and processes where the dominant errors occur (Lermusiaux et al., Citation2002). Here, we follow the formulation of Lermusiaux (Citation2007), adapted to the unified notation used here.

The consideration of an evolving error subspace is analogous to the motivation of the SEIK filter. The main difference to the other subspace filters mentioned here is how the ensemble matrix is truncated. That is, the full ensemble perturbation matrix $\mathbf{X}'^f$ at the current analysis time, with columns $\mathbf{x}'^f_j = \mathbf{x}_j^f - \bar{\mathbf{x}}^f$, $j = 1,\dots,N_e$,

is approximated by the fastest growing singular vectors. The full ensemble perturbation matrix is decomposed using the reduced or thin singular value decomposition (SVD) (e.g. p. 72, Golub and Van Loan, Citation1996),

(31) $\mathbf{U}_S\boldsymbol{\Sigma}_S\mathbf{V}_S^T = \mathbf{X}'^f,$

where $\mathbf{U}_S \in \mathbb{R}^{N_x \times N_e}$ is an orthogonal matrix of left singular vectors, $\boldsymbol{\Sigma}_S \in \mathbb{R}^{N_e \times N_e}$ is a diagonal matrix with the singular values on the diagonal, and $\mathbf{V}_S^T \in \mathbb{R}^{N_e \times N_e}$ is an orthogonal matrix of right singular vectors of $\mathbf{X}'^f$. Next, normalised eigenvalues are computed via

(32) $\mathbf{E}_S = \dfrac{1}{N_e-1}\boldsymbol{\Sigma}_S^2.$

The matrices $\mathbf{U}_S$ and $\mathbf{E}_S$ are truncated to the leading eigenvalues, using $\tilde{\mathbf{U}}_S$, $\tilde{\mathbf{E}}_S$ with rank $\tilde{N}_e \le N_e$ and $\hat{\mathbf{U}}_S$, $\hat{\mathbf{E}}_S$ with rank $\hat{p} < \tilde{N}_e$, where the similarity coefficient $\rho$ is computed via

(33) $\rho = \dfrac{\mathrm{Tr}\left(\hat{\mathbf{E}}_S^{\frac{1}{2}}\hat{\mathbf{U}}_S^T\tilde{\mathbf{U}}_S\tilde{\mathbf{E}}_S^{\frac{1}{2}}\right)}{\mathrm{Tr}\left(\tilde{\mathbf{E}}_S\right)}$

and $\mathrm{Tr}(\cdot)$ is the trace of a matrix. $\rho$ measures the similarity between two subspaces of different sizes. The process of reducing the subspace is repeated until $\rho$ is close to one, i.e. $\rho > \alpha$, where $1-\epsilon \le \alpha \le 1$ is a user-selected scalar limit. The dimension of the error subspace thus varies with time and in accord with the model dynamics (Lermusiaux, Citation2007). Hence, in the following analysis update the reduced rank approximations

(34) $\tilde{\mathbf{U}}_S \approx \mathbf{U}_S,$
(35) $\tilde{\mathbf{E}}_S \approx \mathbf{E}_S,$
(36) $\tilde{\mathbf{V}}_S \approx \mathbf{V}_S$

are used, where the right singular vector matrix $\tilde{\mathbf{V}}_S$ is also truncated, to size $\tilde{N}_e \times \tilde{N}_e$.

The product of the square root matrices, using Equation (14), in the error subspace becomes

(37) $\mathbf{T}_S\mathbf{T}_S^T = \mathbf{I} - \tilde{\mathbf{S}}^T\tilde{\mathbf{F}}^{-1}\tilde{\mathbf{S}},$

where the ensemble errors in observation space are given by $\tilde{\mathbf{S}} = \mathbf{H}\tilde{\mathbf{U}}_S\tilde{\mathbf{E}}_S^{\frac{1}{2}}\tilde{\mathbf{V}}_S^T$ and the innovation covariance by $\tilde{\mathbf{F}} = \mathbf{H}\tilde{\mathbf{U}}_S\tilde{\mathbf{E}}_S\tilde{\mathbf{U}}_S^T\mathbf{H}^T + \mathbf{R}$.

The inverse of the $N_y \times N_y$ matrix $\tilde{\mathbf{F}}$ is obtained by performing the eigenvalue decomposition (EVD)

(38) $\tilde{\mathbf{F}} = \boldsymbol{\Gamma}\boldsymbol{\Lambda}\boldsymbol{\Gamma}^T,$

so that Equation (37) becomes

(39) $\mathbf{T}_S\mathbf{T}_S^T = \mathbf{I} - \tilde{\mathbf{S}}^T\boldsymbol{\Gamma}\boldsymbol{\Lambda}^{-1}\boldsymbol{\Gamma}^T\tilde{\mathbf{S}}.$

Performing another EVD in Equation (39),

(40) $\mathbf{T}_S\mathbf{T}_S^T = \mathbf{Z}\boldsymbol{\Pi}\mathbf{Z}^T,$

the symmetric square root becomes

(41) $\mathbf{T}_S = \mathbf{Z}\boldsymbol{\Pi}^{\frac{1}{2}}\mathbf{Z}^T.$

Hence, the ensemble transformation weight matrices needed to form the ensemble analysis mean and analysis perturbations in Equations (17)–(19) are given by

(42) $\mathbf{W}_S = \mathbf{Z}\boldsymbol{\Pi}^{\frac{1}{2}}\mathbf{Z}^T,$
(43) $\bar{\mathbf{w}}_S = \dfrac{1}{N_e-1}\tilde{\mathbf{S}}^T\tilde{\mathbf{F}}^{-1}\mathbf{d}.$

Note that, when computing the analysis ensemble mean and perturbations, the truncated ensemble perturbation matrix $\tilde{\mathbf{X}}'^f$ is used in the pseudo-algorithm in Appendix 2. The truncation to rank $\tilde{N}_e$ results in a reduction of the ensemble size. To avoid that the ensemble size shrinks, Lermusiaux (Citation2007) described an optional adaptive method to generate new ensemble members.

5.4. The ensemble transform Kalman filter (ET̲KF)

The ETKF (Bishop et al., Citation2001) was derived to explicitly transform the ensemble in a way that results in the correct spread of the analysis ensemble. Like the SEIK filter, the ETKF uses the Sherman–Morrison–Woodbury identity to write

(44) $\mathbf{T}_T\mathbf{T}_T^T = \left[\mathbf{I} + \dfrac{1}{N_e-1}\mathbf{S}^T\mathbf{R}^{-1}\mathbf{S}\right]^{-1}.$

In contrast to the SEIK filter, $\mathbf{T}_T\mathbf{T}_T^T$ is of size $N_e \times N_e$ and hence represents the error subspace of dimension $N_e-1$ indirectly through the full ensemble.

Currently, the most widespread method to compute the update in the ETKF appears to be the formulation of the LETKF by Hunt et al. (Citation2007), which we describe here. By performing the EVD of the symmetric matrix $(\mathbf{T}_T\mathbf{T}_T^T)^{-1} = \mathbf{U}_T\boldsymbol{\Sigma}_T\mathbf{U}_T^T$ we obtain the symmetric square root

(45) $\mathbf{T}_T = \mathbf{U}_T\boldsymbol{\Sigma}_T^{-\frac{1}{2}}\mathbf{U}_T^T.$

Using this decomposition, the ensemble transformation weight matrices needed to form the ensemble analysis mean and analysis perturbations in Equations (17)–(19) are given by

(46) $\mathbf{W}_T = \mathbf{U}_T\boldsymbol{\Sigma}_T^{-\frac{1}{2}}\mathbf{U}_T^T,$
(47) $\bar{\mathbf{w}}_T = \dfrac{1}{N_e-1}\mathbf{U}_T\boldsymbol{\Sigma}_T^{-1}\mathbf{U}_T^T(\mathbf{X}'^f)^T\mathbf{H}^T\mathbf{R}^{-1}\mathbf{d}.$

Using the symmetric square root produces a transform matrix which is closest to the identity matrix in the Frobenius norm (Hunt et al., Citation2007). Thus, the ETKF results in a minimum transform in the ensemble space, which is different from the notion of ’optimal transportation’ used in the ETPF (see Section 6.3).
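A compact sketch of the ETKF/LETKF weight computation of Equations (44)–(47) is given below, using the eigenvalue decomposition and the symmetric square root; the resulting pair $(\bar{\mathbf{w}}, \mathbf{W})$ can then be applied with Equation (19). A linear observation operator and all input names are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of the ETKF weights, Equations (44)-(47), with a linear H.
def etkf_weights(Xf, y, H, Rinv):
    """Return (w_bar, W) for the generic update in Equations (17)-(19)."""
    Ne = Xf.shape[1]
    xbar = Xf.mean(axis=1)
    Xp = Xf - xbar[:, None]
    S = H @ Xp                                        # observed ensemble perturbations
    d = y - H @ xbar                                  # innovation
    Ainv = np.eye(Ne) + S.T @ Rinv @ S / (Ne - 1)     # (T_T T_T^T)^{-1}, Equation (44)
    lam, U = np.linalg.eigh(Ainv)                     # EVD of the symmetric matrix
    W = U @ np.diag(lam ** -0.5) @ U.T                # symmetric square root, Equations (45)-(46)
    w_bar = U @ np.diag(1.0 / lam) @ U.T @ S.T @ Rinv @ d / (Ne - 1)  # Equation (47)
    return w_bar, W
```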

The original publication introducing the ETKF (Bishop et al., Citation2001) did not specify the form of the matrix square root $\mathbf{T}_T$. There are different possibilities to compute it, and taking a simple single-sided square root could lead to implementations with a biased transformation, such that the transformation by $\mathbf{W}$ would not preserve the ensemble mean. Using the symmetric square root approach, this bias is avoided. Livings (Citation2005) proposed another variant, which first normalises the forecast observation ensemble perturbation matrix so that the observations are dimensionless with standard deviation one,

(48) $\tilde{\mathbf{S}} = \dfrac{1}{\sqrt{N_e-1}}\mathbf{R}^{-\frac{1}{2}}\mathbf{S}.$

Substituting (48) into (44) gives

(49) $\mathbf{T}_T\mathbf{T}_T^T = \left[\mathbf{I} + \tilde{\mathbf{S}}^T\tilde{\mathbf{S}}\right]^{-1}.$

To find the square root form we next perform the SVD

(50) $\tilde{\mathbf{S}}^T = \mathbf{U}_T\tilde{\boldsymbol{\Sigma}}_T\tilde{\mathbf{V}}_T^T.$

In this case, the ensemble transformation weight matrices in Equations (17)–(19) become

(51) $\mathbf{W}_T = \mathbf{U}_T\left[\mathbf{I} + \tilde{\boldsymbol{\Sigma}}_T\tilde{\boldsymbol{\Sigma}}_T^T\right]^{-\frac{1}{2}}\mathbf{U}_T^T,$
(52) $\bar{\mathbf{w}}_T = \dfrac{1}{\sqrt{N_e-1}}\mathbf{U}_T\left(\mathbf{I} + \tilde{\boldsymbol{\Sigma}}_T\tilde{\boldsymbol{\Sigma}}_T^T\right)^{-1}\tilde{\boldsymbol{\Sigma}}_T\tilde{\mathbf{V}}_T^T\mathbf{R}^{-\frac{1}{2}}\mathbf{d}.$

This formulation avoids the multiplication $\mathbf{S}^T\mathbf{R}^{-1}\mathbf{S}$ and can hence prevent a possible loss of accuracy due to rounding errors. However, this formulation also requires the computation of the square root of $\mathbf{R}$, which itself can result in rounding errors if $\mathbf{R}$ is not diagonal.

Algorithm  in Appendix 2 gives a pseudo-algorithm of the ETKF method.

5.5. The ensemble adjustment Kalman filter (EA̲KF)

The EAKF was introduced by Anderson (Citation2001). Similarly to the SEIK filter and the ETKF, we require here that the matrix $\mathbf{R}^{-1}$ is readily available. Using scaled ensemble perturbations, as discussed for the ETKF formulation by Livings (Citation2005) in Equations (48)–(49), we can write

(53) $\mathbf{T}_A\mathbf{T}_A^T = \left[\mathbf{I} + \tilde{\mathbf{S}}^T\tilde{\mathbf{S}}\right]^{-1}.$

We perform the SVD on the scaled forecast ensemble observation perturbation matrix

(54) $\tilde{\mathbf{S}}^T = \mathbf{U}_A\boldsymbol{\Sigma}_A\mathbf{V}_A^T.$

Note that $\mathbf{U}_A = \mathbf{U}_T$, related to the similarity between the EAKF and the ETKF. We also use an EVD to obtain

(55) $\mathbf{P}^f = \mathbf{Z}_A\boldsymbol{\Gamma}_A\mathbf{Z}_A^T.$

The decomposition in Equation (55) is usually performed as an SVD of the ensemble perturbation matrix $\mathbf{X}'^f$, which approximates $\mathbf{P}^f$ using $N_e$ ensemble members.

Due to the ranks of the matrices decomposed in Equations (54) and (55) there are at most $q = \min(N_e-1, N_y)$ non-zero singular values in $\boldsymbol{\Sigma}_A$ and at most $N_e-1$ non-zero eigenvalues in $\boldsymbol{\Gamma}_A$. Thus, the matrices in the equations below can be truncated as follows: $\mathbf{U}_A \in \mathbb{R}^{N_e \times q}$, $\boldsymbol{\Sigma}_A \in \mathbb{R}^{q \times q}$, $\mathbf{V}_A^T \in \mathbb{R}^{q \times N_y}$ and $\boldsymbol{\Gamma}_A, \mathbf{Z}_A \in \mathbb{R}^{(N_e-1) \times (N_e-1)}$. Then, the ensemble transformation weight matrices in Equations (17)–(19) are given by

(56) $\mathbf{W}_A = \mathbf{U}_A\left[\mathbf{I} + \boldsymbol{\Sigma}_A\boldsymbol{\Sigma}_A^T\right]^{-\frac{1}{2}}\boldsymbol{\Gamma}_A^{-\frac{1}{2}}\mathbf{Z}_A^T\mathbf{X}'^f,$
(57) $\bar{\mathbf{w}}_A = \dfrac{1}{\sqrt{N_e-1}}\mathbf{U}_A\left(\mathbf{I} + \boldsymbol{\Sigma}_A^T\boldsymbol{\Sigma}_A\right)^{-1}\boldsymbol{\Sigma}_A^T\mathbf{V}_A^T\mathbf{R}^{-\frac{1}{2}}\mathbf{d}.$

Note that the EAKF perturbation weight matrix in Equation (56) is the same as applying the orthogonal matrix $\boldsymbol{\Gamma}_A^{-\frac{1}{2}}\mathbf{Z}_A^T\mathbf{X}'^f$ instead of the orthogonal matrix $\mathbf{U}_T^T$ in the ETKF perturbation transform matrix given by Equation (51) (Tippett et al., Citation2003).

The decomposition in Equation (55) is costly due to the size of the matrix to be decomposed. For this reason, the EAKF is typically applied with serial observation processing, as will be described for the EnSRF in Section 5.7.

Algorithm  in Appendix 2 gives a pseudo-algorithm of the EAKF method.

5.6. The ensemble square root filter (EnSR̲F)

The EnSRF was introduced by Whitaker and Hamill (Citation2002) to avoid the use of perturbed observations by means of a square root formulation. In the EnSRF the transform matrix is given by $\mathbf{T}_R\mathbf{T}_R^T = \mathbf{I} - \mathbf{S}^T\mathbf{F}^{-1}\mathbf{S}$.

We first perform an EVD of $\mathbf{F}$ to obtain its inverse,

(58) $\mathbf{F}^{-1} = \boldsymbol{\Gamma}_R\boldsymbol{\Lambda}_R^{-1}\boldsymbol{\Gamma}_R^T.$

Then, we can write the ensemble analysis covariance as

(59) $\mathbf{X}'^a(\mathbf{X}'^a)^T = \mathbf{X}'^f\left[\mathbf{I} - \mathbf{S}^T\boldsymbol{\Gamma}_R\boldsymbol{\Lambda}_R^{-1}\boldsymbol{\Gamma}_R^T\mathbf{S}\right](\mathbf{X}'^f)^T = \mathbf{X}'^f\left[\mathbf{I} - \mathbf{G}_R\mathbf{G}_R^T\right](\mathbf{X}'^f)^T,$

where $\mathbf{G}_R = \mathbf{S}^T\boldsymbol{\Gamma}_R\boldsymbol{\Lambda}_R^{-\frac{1}{2}}$. Decomposing $\mathbf{G}_R = \mathbf{U}_R\boldsymbol{\Sigma}_R\mathbf{V}_R^T$ using an SVD, we obtain $\mathbf{X}'^a(\mathbf{X}'^a)^T = \mathbf{X}'^f\left[\mathbf{I} - \mathbf{U}_R\boldsymbol{\Sigma}_R\mathbf{V}_R^T\left(\mathbf{U}_R\boldsymbol{\Sigma}_R\mathbf{V}_R^T\right)^T\right](\mathbf{X}'^f)^T = \mathbf{X}'^f\mathbf{U}_R\left[\mathbf{I} - \boldsymbol{\Sigma}_R\boldsymbol{\Sigma}_R^T\right]\mathbf{U}_R^T(\mathbf{X}'^f)^T.$

The diagonal matrix holding the singular values is of dimension $\boldsymbol{\Sigma}_R \in \mathbb{R}^{N_e \times N_y}$ and thus has at most $\min(N_e, N_y)$ non-zero singular values. To reduce the computational cost for the case of high-dimensional models with $N_e \ll N_y$, we can truncate to get the much smaller matrix $\boldsymbol{\Sigma}_R \in \mathbb{R}^{N_e \times \min(N_e, N_y)}$ (see Table 1). The square root form for the ensemble analysis perturbations is given by

(60) $\mathbf{X}'^a = \mathbf{X}'^f\mathbf{U}_R\left[\mathbf{I} - \boldsymbol{\Sigma}_R\boldsymbol{\Sigma}_R^T\right]^{\frac{1}{2}},$

and the ensemble transformation weight matrices needed to form the ensemble analysis mean and analysis perturbations in Equations (17)–(19) are given by

(61) $\mathbf{W}_R = \mathbf{U}_R\left[\mathbf{I} - \boldsymbol{\Sigma}_R\boldsymbol{\Sigma}_R^T\right]^{\frac{1}{2}}\mathbf{U}_R^T,$
(62) $\bar{\mathbf{w}}_R = \mathbf{S}^T\boldsymbol{\Gamma}_R\boldsymbol{\Lambda}_R^{-1}\boldsymbol{\Gamma}_R^T\mathbf{d},$

where in Equation (61) we have post-multiplied the ensemble analysis perturbations by the orthogonal matrix of left singular vectors $\mathbf{U}_R^T$ to ensure that the analysis ensemble is unbiased (Livings et al., Citation2008; Sakov and Oke, Citation2008).

Algorithm  in Appendix 2 gives a pseudo-algorithm of the EnSRF method.

5.7. EnSRF with serial observation treatment

The serial observation treatment in the EnSRF was introduced by Whitaker and Hamill (Citation2002) together with the EnSRF assimilating all observations at once. The serial treatment reduces the computing cost. Hence, the EnSRF is typically not applied with the bulk update described above, but with a serial treatment of the observations, which is possible if $\mathbf{R}$ is diagonal. In this case, each single observation can be assimilated separately. Thus, $\mathbf{F}$ reduces to the scalar $F$ and $\mathbf{S}\mathbf{S}^T$ to the scalar $S^2$. For a single observation ($N_y = 1$), the matrix $\mathbf{G}_R$ becomes a vector given by

(63) $\mathbf{G}_R = \dfrac{1}{\sqrt{F}}\mathbf{S}^T.$

All singular values of $\mathbf{G}_R$ are zero except the first, which is its norm,

(64) $\boldsymbol{\Sigma}_R = \dfrac{S}{\sqrt{F}}\,\mathbf{e},$

where $\mathbf{e}$ is a vector of length $N_e$ whose elements are zero except the first, which is one. The first column of $\mathbf{U}_R$ corresponds to the normalised vector $\mathbf{S}^T$,

(65) $\mathbf{U}_R\mathbf{e} = \dfrac{1}{S}\mathbf{S}^T.$

The square root of the diagonal matrix in Equation (61) can be written as the sum of the identity matrix and a matrix proportional to $\mathbf{e}\mathbf{e}^T$:

(66) $\left[\mathbf{I} - \boldsymbol{\Sigma}_R\boldsymbol{\Sigma}_R^T\right]^{\frac{1}{2}} = \mathbf{I} - \left(1 - \sqrt{(N_e-1)R/F}\right)\mathbf{e}\mathbf{e}^T.$

Using Equation (65) and the fact that all columns of $\mathbf{U}_R$ are orthonormal, one obtains

(67) $\mathbf{W}_R = \mathbf{I} - \dfrac{1 - \sqrt{(N_e-1)R/F}}{S^2}\,\mathbf{S}^T\mathbf{S}$

and the weight vector for the update of the ensemble mean is

(68) $\bar{\mathbf{w}}_R = \dfrac{1}{F}\mathbf{S}^T d.$

The equations above are then applied serially, once for each single observation. The equations are likewise valid when the EAKF is formulated with a serial observation treatment.
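The serial update can be coded in a few lines when $\mathbf{R}$ is diagonal, as in the sketch below: observations are processed one at a time with the scalar quantities of Equations (63)–(68). The inputs are illustrative assumptions and H is a linear operator matrix.

```python
import numpy as np

# Minimal sketch of the EnSRF with serial observation treatment, Equations (63)-(68).
def ensrf_serial(Xf, y, H, r_diag):
    """Xf: (Nx, Ne); y: (Ny,); H: (Ny, Nx); r_diag: (Ny,) observation error variances."""
    X = Xf.copy()
    Ne = X.shape[1]
    for i in range(y.size):                       # loop over single observations
        xbar = X.mean(axis=1)
        Xp = X - xbar[:, None]
        S = H[i] @ Xp                             # (Ne,) observed perturbations for observation i
        F = S @ S + (Ne - 1) * r_diag[i]          # scalar innovation covariance
        d = y[i] - H[i] @ xbar                    # scalar innovation
        w_bar = S * d / F                         # Equation (68)
        alpha = (1.0 - np.sqrt((Ne - 1) * r_diag[i] / F)) / (S @ S)   # assumes S @ S > 0
        W = np.eye(Ne) - alpha * np.outer(S, S)   # Equation (67)
        X = xbar[:, None] + Xp @ (w_bar[:, None] + W)   # generic update, Equation (19)
    return X
```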

Algorithm  in Appendix 2 gives a pseudo-algorithm of the EnSRF method with serial observation treatment.

5.8. The square root formulation of the stochastic ensemble Kalman filter (SEnKF̲)

The SEnKF was introduced by Evensen (Citation2003) as a square root formulation of the stochastic EnKF. Defining $\mathbf{Y}'$ as for the stochastic EnKF and using a matrix

(69) $\mathbf{F}_F = \mathbf{S}\mathbf{S}^T + \mathbf{Y}'\mathbf{Y}'^T,$

we obtain the matrix

(70) $\mathbf{T}_F\mathbf{T}_F^T = \mathbf{I} - \mathbf{S}^T\mathbf{F}_F^{-1}\mathbf{S}.$

We could decompose $\mathbf{F}_F$ using an EVD, but this is costly if $N_y \gg N_e$ (Evensen, Citation2003). Instead, we assume that forecast and observation errors are uncorrelated, i.e.

(71) $\mathbf{S}\mathbf{Y}'^T \approx \mathbf{0},$

so that

(72) $\mathbf{F}_F = \mathbf{S}\mathbf{S}^T + \mathbf{Y}'\mathbf{Y}'^T = \left(\mathbf{S} + \mathbf{Y}'\right)\left(\mathbf{S} + \mathbf{Y}'\right)^T.$

Now we can use an SVD to decompose $\mathbf{S} + \mathbf{Y}' = \mathbf{U}_F\boldsymbol{\Sigma}_F\mathbf{V}_F^T$, giving

(73) $\mathbf{F}_F = \mathbf{U}_F\boldsymbol{\Sigma}_F\boldsymbol{\Sigma}_F^T\mathbf{U}_F^T,$

which has a much smaller computational cost than decomposing $\mathbf{F}_F$ using an EVD when $N_y \gg N_e$.

The ensemble transformation is then computed according to Equation (23) with the weight matrix given by

(74) $\mathbf{W}_F = \mathbf{S}^T\mathbf{U}_F\boldsymbol{\Sigma}_F^{-1}\left(\boldsymbol{\Sigma}_F^{-1}\right)^T\mathbf{U}_F^T\left(\mathbf{Y} - \mathbf{Y}^f\right).$
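The factorisation trick of Equations (69)–(74) amounts to a thin SVD of $\mathbf{S} + \mathbf{Y}'$, as in this sketch (assumed toy inputs; the matrices S, Y', Y and Y^f are as defined in Sections 5.1 and 5.8, and full rank of S + Y' is assumed).

```python
import numpy as np

# Minimal sketch of the SEnKF weight matrix, Equations (69)-(74).
def senkf_weights(S, Yp, Y, Yf):
    """S, Yp, Y, Yf: (Ny, Ne) matrices as defined in Sections 5.1 and 5.8."""
    U, sig, _ = np.linalg.svd(S + Yp, full_matrices=False)   # S + Y' = U_F Sigma_F V_F^T
    Finv = U @ np.diag(sig ** -2) @ U.T                      # F_F^{-1} from Equation (73)
    return S.T @ Finv @ (Y - Yf)                             # weight matrix W_F, Equation (74)
```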

Algorithm  in Appendix 2 gives a pseudo-algorithm of the EnKF in square root form.

5.9. The error-subspace transform Kalman filter (ESTK̲F)

The ESTKF has been derived from the SEIK filter (Nerger et al., Citation2012a) by combining the advantages of the SEIK filter and the ETKF. The ESTKF exhibits better properties than the SEIK filter, such as the minimum ensemble transformation of the ETKF. However, unlike the ETKF, the ESTKF computes the ensemble transformation in the error subspace spanned by the ensemble, rather than using the ensemble representation of it. That is, the error subspace of dimension $N_e-1$ is represented directly in the ESTKF (similarly to the SEIK filter), while in the ETKF the error subspace is represented indirectly using the full ensemble of size $N_e$.

Similar to the SEIK filter, a projection matrix $\mathbf{A}_K \in \mathbb{R}^{N_e \times (N_e-1)}$ is used, whose elements are defined by

(75) $\mathbf{A}_{K\,\{i,j\}} := \begin{cases} 1 - \dfrac{1}{N_e}\dfrac{1}{\frac{1}{\sqrt{N_e}}+1} & \text{for } i = j,\ i < N_e \\[1ex] -\dfrac{1}{N_e}\dfrac{1}{\frac{1}{\sqrt{N_e}}+1} & \text{for } i \neq j,\ i < N_e \\[1ex] -\dfrac{1}{\sqrt{N_e}} & \text{for } i = N_e \end{cases}$

With this projection, the basis vectors for the error subspace are given by

(76) $\mathbf{L}_K = \mathbf{X}^f\mathbf{A}_K.$

As for the matrix $\boldsymbol{\Omega}$ in the SEIK filter, the columns of the matrix $\mathbf{A}_K$ are orthonormal and orthogonal to the vector $(1,\dots,1)^T$. When the matrix $\mathbf{L}_K$ is computed, the multiplication with $\mathbf{A}_K$ implicitly subtracts the ensemble mean. Further, $\mathbf{A}_K$ subtracts a fraction of the last column of $\mathbf{X}^f$ from all other columns. In this way, the last column of $\mathbf{X}^f$ is not just dropped as in the SEIK filter, but its information is distributed over the other columns. The product of the square root matrices in the error subspace now becomes

(77) $\mathbf{T}_K\mathbf{T}_K^T = \left[\mathbf{I} + \dfrac{1}{N_e-1}(\mathbf{H}\mathbf{L}_K)^T\mathbf{R}^{-1}(\mathbf{H}\mathbf{L}_K)\right]^{-1}.$

By performing the EVD of the symmetric matrix $(\mathbf{T}_K\mathbf{T}_K^T)^{-1} = \mathbf{U}_K\boldsymbol{\Sigma}_K\mathbf{U}_K^T$ we obtain the symmetric square root

(78) $\mathbf{T}_K = \mathbf{U}_K\boldsymbol{\Sigma}_K^{-\frac{1}{2}}\mathbf{U}_K^T.$

Then, the ensemble transformation weight matrices needed to form the ensemble analysis mean and perturbations in Equations (17)–(19) are given by

(79) $\mathbf{W}_K = \mathbf{A}_K\mathbf{T}_K\mathbf{A}_K^T,$
(80) $\bar{\mathbf{w}}_K = \dfrac{1}{N_e-1}\mathbf{A}_K\mathbf{U}_K\boldsymbol{\Sigma}_K^{-1}\mathbf{U}_K^T(\mathbf{H}\mathbf{L}_K)^T\mathbf{R}^{-1}\mathbf{d}.$

Compared to the SEIK filter, both matrices $\mathbf{A}_E$ and $\boldsymbol{\Omega}$ are replaced by $\mathbf{A}_K$ in the ESTKF. In addition, the ESTKF uses the symmetric square root of $\mathbf{T}_K\mathbf{T}_K^T$. The use of $\mathbf{A}_K$ leads to a consistent projection onto the error subspace and back onto the state space, while the symmetric square root ensures that the minimum transformation is obtained. It is also possible to apply the ESTKF with a random ensemble transformation. For this case, the rightmost matrix $\mathbf{A}_K$ in Equation (79) is replaced by a random matrix with the same properties as the deterministic $\mathbf{A}_K$.
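The sketch below assembles the ESTKF projection and weights of Equations (75)–(80) for a linear observation operator; as with the other sketches, the inputs are toy assumptions and the pseudo-algorithm in Appendix 2 remains the reference.

```python
import numpy as np

# Minimal sketch of the ESTKF projection and transform, Equations (75)-(80).
def estkf_weights(Xf, y, H, Rinv):
    """Return (w_bar, W) for the generic update in Equations (17)-(19)."""
    Nx, Ne = Xf.shape
    # Projection matrix A_K in R^{Ne x (Ne-1)}, Equation (75)
    A = np.full((Ne, Ne - 1), -1.0 / Ne / (1.0 / np.sqrt(Ne) + 1.0))
    A[:Ne - 1, :] += np.eye(Ne - 1)
    A[Ne - 1, :] = -1.0 / np.sqrt(Ne)
    L = Xf @ A                                             # error-subspace basis L_K, Equation (76)
    HL = H @ L
    d = y - H @ Xf.mean(axis=1)                            # innovation
    Cinv = np.eye(Ne - 1) + HL.T @ Rinv @ HL / (Ne - 1)    # (T_K T_K^T)^{-1}, Equation (77)
    lam, U = np.linalg.eigh(Cinv)
    T = U @ np.diag(lam ** -0.5) @ U.T                     # symmetric square root, Equation (78)
    W = A @ T @ A.T                                        # Equation (79)
    w_bar = A @ U @ np.diag(1.0 / lam) @ U.T @ HL.T @ Rinv @ d / (Ne - 1)  # Equation (80)
    return w_bar, W
```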

Algorithm  in Appendix 2 gives a pseudo-algorithm of the ESTKF method.

5.10. The Maximum Likelihood Ensemble Filter (M̲LEF)

The MLEF (Zupanski, Citation2005; Zupanski et al., Citation2008) calculates the state estimate as the maximum of the posterior probability density function (pdf). This is in contrast to the ensemble Kalman filter methods described in this paper, which are based on the minimum-variance approach and thus target the mean. The maximum of the pdf is found by an iterative minimisation of a cost function using a generalised non-linear conjugate-gradient method.

The original MLEF filter (Zupanski, Citation2005) uses a second-order Taylor approximation to the analysis increments, which requires that the cost function is twice differentiable. However, this requirement is not necessarily satisfied in many real life non-linear applications, for example where the parameterisation of some processes is used in models or for strongly non-linear observation operators. Here, we present the revised MLEF by Zupanski et al. (Citation2008) that avoids this requirement by using a non-differentiable minimisation algorithm.

In contrast to all other ensemble filters discussed above, the MLEF maintains the actual state estimate separately from the ensemble, which is used to provide the measurement of estimation error. Thus, the analysis perturbations in the MLEF are computed for each ensemble member in a square root form without re-centring them onto the analysis ensemble mean. Hence, this filter does not follow the same square root form as the filters described above and is presented last in this section.

In the MLEF, the ensemble analysis perturbations are defined using the difference between analysis and forecast for each ensemble member, $\mathbf{x}'^a_j = \mathbf{x}_j^a - \mathbf{x}_j^f$, and not between the ensemble analysis states and the analysis mean. They are found using generalised Hessian preconditioning in state space. A change of variable is performed as follows:

(81) $\mathbf{x}_j^a = \mathbf{x}_j^f + \mathbf{x}'^a_j,$
(82) $\mathbf{x}'^a_j = \mathbf{G}^{\frac{1}{2}}\boldsymbol{\xi}_j,$

where the matrix $\mathbf{G}^{\frac{1}{2}} = \mathbf{X}'^f\left[\mathbf{I} + \mathbf{C}\right]^{-\frac{1}{2}} \in \mathbb{R}^{N_x \times N_e}$ represents the inverse square root of the generalised Hessian estimated at the initial point of the minimisation, and $\boldsymbol{\xi}_j$ is a control variable defined in the ensemble subspace. The matrix $\mathbf{C}$ is the covariance matrix

(83) $\mathbf{C} = (\mathbf{X}'^f)^T\mathbf{H}^T\mathbf{R}^{-1}\mathbf{H}\mathbf{X}'^f.$

Equation (82) can be written as a transformation of ensemble perturbations by

(84) $\mathbf{x}'^a_j = \mathbf{X}'^f\mathbf{w}_{M,j},$

where the elements of the weight vector $\mathbf{w}_{M,j} \in \mathbb{R}^{N_e}$ for ensemble member $j$ are given by

(85) $\mathbf{w}_{M,j} = \left[\mathbf{I} + \mathbf{C}\right]^{-\frac{1}{2}}\boldsymbol{\xi}_j.$

Now we use an EVD of $\mathbf{C} = \boldsymbol{\Gamma}_M\boldsymbol{\Lambda}_M\boldsymbol{\Gamma}_M^T$ to write Equation (85) as

(86) $\mathbf{w}_{M,j} = \boldsymbol{\Gamma}_M\left[\mathbf{I} + \boldsymbol{\Lambda}_M\right]^{-\frac{1}{2}}\boldsymbol{\Gamma}_M^T\boldsymbol{\xi}_j.$

We note that in the linear case the matrix $\mathbf{G}^{\frac{1}{2}}$ is a square root of $\mathbf{P}^a$. Indeed, the same decomposition and inversion was used to find the square root analysis perturbations for the ETKF, see Equation (45).

After successfully accomplishing the Hessian preconditioning, the next step in the iterative minimisation is to calculate the gradient in the ensemble-spanned subspace. The preconditioned generalised gradient at the $k$-th minimisation iteration is obtained by

(87) $\mathbf{G}_J(\mathbf{x}^k) = \left[\mathbf{I} + \mathbf{C}\right]^{-1}\boldsymbol{\xi}^k - \mathbf{Z}(\mathbf{x}^k)^T\mathbf{R}^{-\frac{1}{2}}\left[\mathbf{y} - H(\mathbf{x}^k)\right],$

where

(88) $\mathbf{Z}(\mathbf{x}) = \left[\mathbf{z}_1(\mathbf{x}), \mathbf{z}_2(\mathbf{x}), \dots, \mathbf{z}_{N_e}(\mathbf{x})\right],$
(89) $\mathbf{z}_j(\mathbf{x}) = \mathbf{R}^{-\frac{1}{2}}\left[H(\mathbf{x} + \mathbf{x}'^f_j) - H(\mathbf{x})\right].$

Upon convergence we have obtained the optimal state analysis $\mathbf{x}^a = \mathbf{x}^k$.

To complete the non-differentiable formulation of the MLEF, the ensemble analysis perturbations are computed as

(90) $\mathbf{X}'^a = \mathbf{X}'^f\left[\mathbf{I} + \mathbf{Z}(\mathbf{x}^a)^T\mathbf{Z}(\mathbf{x}^a)\right]^{-\frac{1}{2}},$

where $\mathbf{Z}(\mathbf{x}^a)$ is obtained by substituting $\mathbf{x} = \mathbf{x}^a$ into Equation (89).

Algorithm  in Appendix 2 gives a pseudo-algorithm of the MLEF method.

5.11. Summary of ensemble Kalman Filter methods

In this section, we have described the ten most popular ensemble Kalman filter methods that are applicable to high-dimensional non-Gaussian problems.

This collection of methods can be categorised in different ways, for example into deterministic ensemble filters, where the analysis is found through explicit mathematical transformations (SEIK, ETKF, EAKF, EnSRF, ESTKF, MLEF), and stochastic ensemble filters, where perturbed forecasted observations are used (EnKF, SEnKF). Burgers et al. (Citation1998) and Houtekamer and Mitchell (Citation1998) showed that, in order to maintain sufficient spread in the ensemble and prevent filter divergence, the observations should be treated as random variables, i.e. perturbed, while our interpretation is slightly different, as described above. This stochasticity, of course, leads to extra sampling noise in the filters. On the other hand, Lawson and Hansen (Citation2004) showed that for large ensemble sizes, stochastic filters can handle non-linearity better than deterministic filters. This is due to the additional Gaussian observation spread normalising the ensemble update in the stochastic filter, which tends to erase the non-Gaussian higher moments that non-linear error growth has generated. However, current computational power restricts us to small ensemble sizes for high-dimensional problems, in which case stochastic filters add another source of sampling error, thus underestimating the analysis update (Whitaker and Hamill, Citation2002).

While all ensemble Kalman filter methods use low-rank approximations of the state error covariance matrix, some of the methods in this section are referred to as error-subspace ensemble filters because they directly operate in the error subspace spanned by the ensemble rather than using the ensemble representation of it. Such filters are the SEIK (see Section 5.2), ESTKF (see Section 5.9), and ESSE (see Section 5.3). Nerger et al. (Citation2005b) compare the stochastic EnKF with the SEIK filter in an idealised high-dimensional shallow water model with non-linear evolution, showing that the main difference between the filters lies in the efficiency of the representation of the covariance matrix P. In general, the EnKF will require a larger ensemble size Ne to achieve the same performance as the SEIK filter. The relation of the ETKF and SEIK methods has been studied by Nerger et al. (Citation2012a), where also the ESTKF has been derived. Apart from computing the ensemble transformation in the error subspace in the case of the SEIK and ESTKF, the three filters are essentially equivalent. However, for the SEIK filter it has been found that the application of the Cholesky decomposition can lead to unevenly distributed variance in the ensemble members. For this reason, the ESTKF and ETKF methods are preferable unless a random matrix Ω is used in the SEIK filter.

The methods described in this section each differ from one another in various nuances, but also share substantial common ground. Writing these methods in a unified mathematical notation allows us to see these algorithmic differences and commonalities more readily. Many of the filters described above share common variables; in some methods these variables differ in size, while in others they have not only the same size but also the same value. Table 1 summarises the sizes of the common variables between the methods, and below we comment on whether they have the same value.

Table 1. Overview of the sizes of matrices that are used in different filter methods.

Apart from the matrices listed in Table 1, there are the final weight matrices W and W̄. The matrix W̄ is identical for all filters. Thus, for all filters except the EnKF, SEnKF and MLEF, which do not use this matrix, the mean of the analysis ensemble is identical. The EnKF and SEnKF are an exception because they do not explicitly transform the ensemble mean and introduce sampling error through the perturbed observations. The maximum likelihood approach of the MLEF also results in a different analysis state estimate if the ensemble distribution is not Gaussian and the observation operator is non-linear. Further, the product WW^T is identical for all filters except the EnKF, SEnKF and MLEF. Thus, the analysis covariance matrix P^a will be identical for these filters.

In contrast to the equality of the matrix W̄, the matrix W is different for almost all methods. Thus, while many methods yield the same analysis ensemble covariance matrix, their ensemble perturbations are distinct. In the ETKF and ESTKF methods, the matrices T, U, and Σ have distinct dimensions. However, the ensemble transformation weight matrices W of both methods are identical (Nerger et al., Citation2012a).

In general, the choice of a particular ensemble method depends on a number of the system's components: the dynamical model at hand, the model error, the number and types of observations, and the ensemble size. Given these degrees of freedom it is not possible to declare one data assimilation method better suited for the general situation. In practice, operational meteorological applications most widely use the LETKF and the serial formulations of the EnSRF and EAKF, while in oceanography there are many applications of the stochastic EnKF, the SEIK filter, and the ESTKF. There are fewer applications of the ESSE and MLEF, despite the fact that these filters are algorithmically interesting because of the ensemble-size adaptivity of the ESSE and the maximum-likelihood solution of the MLEF. From the algorithmic viewpoint, the stochastic EnKF will be useful if the stochasticity can be an advantage and if large ensembles can be used. Further, the filters SEIK, ETKF and ESTKF differ from the EnKF, EnSRF and EAKF also in the application of distinct localisation methods (see Section 9.1 for the discussion of localisation). The EnKF, EnSRF and EAKF allow for localisation in state space, which can be advantageous for some observation types (Campbell et al., Citation2010). The serial formulation of the EnSRF and EAKF requires that the observation error covariance matrix is diagonal. Thus, these filters cannot directly be applied if the observation errors are correlated. A transformation into uncorrelated variables is possible in theory, but it is most likely not practical for large sets of observations.

6. Particle filters

In this section we will consider the standard particle filter followed by three efficient variants of the particle filter: the Equivalent Weights Particle Filter (EWPF, van Leeuwen, Citation2010; van Leeuwen, Citation2011; Ades and van Leeuwen, Citation2013), the Implicit Equal-Weights Particle Filter (Zhu et al., Citation2016) and the Ensemble Transform Particle Filter (Reich, Citation2013). Other variants of local particle filters are discussed in Section 9 on practical implementation. Another interesting particle filter for high-dimensional systems, the so-called implicit particle filter (Chorin and Tu, Citation2009; Chorin et al., Citation2010; Morzfeld et al., Citation2012; van Leeuwen et al., Citation2015), is not discussed here as it needs a 4D-Var-like minimisation for each particle. The Multivariate Rank Histogram Filter (MRHF, Metref et al., Citation2014b), based on the Rank-Histogram Filter of Anderson (Citation2010) that performs well in highly non-Gaussian regimes, has been developed recently in the European project Sangoma. However, it is still under development for high-dimensional systems and its idea is only briefly described in Section 9.1.5. Often particle filters are defined as providing approximations of $p\left(x(0{:}m)|y(1{:}m)\right)$, but we restrict ourselves to particle filters that approximate the marginal posterior pdf $p\left(x(m)|y(1{:}m)\right)$, as there are at present no efficient algorithms for the former for high-dimensional geophysical systems, and we have forecasting in mind. Furthermore, for ease of presentation we take all earlier observations for granted, leading to the marginal posterior at time $m$ being denoted as $p\left(x(m)|y(m)\right)$.

6.1. The standard particle filter

This particle filter is also known as the bootstrap filter or Sequential Importance Resampling (SIR) filter. The probability density function (pdf) in particle filtering, represented by $N_e$ particles or ensemble members at time $m$, is given by (91) $p\left(x(m)\right) = \frac{1}{N_e}\sum_{j=1}^{N_e}\delta\left(x(m) - x_j(m)\right)$,

where $x(m) \in \mathbb{R}^{N_x}$ is the $N_x$-dimensional state of the system that has been integrated forward in time using the stochastic forward model and $\delta(x)$ is a Dirac delta function. We let time $m$ be the time of the current set of observations, with the previous observation set at time 0. Then the stochastic forward model for times $0 < k \le m$ for each particle $j = 1, \ldots, N_e$ is given by (92) $x_j(k) = M_k\left(x_j(k-1)\right) + \beta_j(k)$,

where $\beta_j(k) \in \mathbb{R}^{N_x}$ are random terms representing the Gaussian-distributed model errors with mean zero and covariance matrix $Q$, and $M_k: \mathbb{R}^{N_x} \to \mathbb{R}^{N_x}$ is the deterministic model from time $k-1$ to $k$. Thus, the model state transition from time $k-1$ to $k$ is fully described by the transition density (93) $p\left(x_j(k)|x_j(k-1)\right) = N\left(M_k\left(x(k-1)\right), Q\right)$,

which will be of later use.

Using Bayes' theorem (94) $p\left(x_j(m)|y(m)\right) = \frac{p\left(y(m)|x_j(m)\right)}{p\left(y(m)\right)}\, p\left(x_j(m)\right)$

and the Markovian property of the model, the full posterior at observation time $m$ is written as (95) $p\left(x(m)|y(m)\right) = \sum_{j=1}^{N_e} w_j(m)\,\delta\left(x(m) - x_j(m)\right)$,

where the weights $w_j(m)$ are given by (96) $w_j(m) \propto p\left(y(m)|x_j(m)\right)\, p\left(x_j(m)|x_j(m-1)\right)\, w_j(m-1)$,

and each $w_j(m-1)$ is the product of all the weights from all time steps $0 < k \le m-1$. The conditional pdf $p\left(y(m)|x(m)\right)$ is the pdf of the observations given the model state $x(m)$, which is often taken to be Gaussian: (97) $p\left(y(m)|x(m)\right) \propto \exp\left[-\frac{1}{2}\left(y(m) - H_m\left(x(m)\right)\right)^T R^{-1}\left(y(m) - H_m\left(x(m)\right)\right)\right]$.

To obtain equal-weight posterior particles one applies resampling, in which particles with high weights are duplicated, while particles with low weights are abandoned. Several schemes have been developed to perform resampling, and three of the most-used schemes are presented in Appendix 1.

The problem in high-dimensional spaces with a large number of independent observations is that these weights vary enormously over the particles, with one particle obtaining a weight close to one, while all the others have a weight very close to zero. This is the so-called degeneracy problem related to the ‘curse of dimensionality’: any resampling scheme will produce Ne copies of the particle with the highest weight, and all variation in the ensemble has disappeared.
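To make the weighting and resampling steps concrete, the sketch below (Python with NumPy; the state and observation sizes, the diagonal observation error covariance, and the simple multinomial resampling are all illustrative assumptions on our part) evaluates the Gaussian likelihood weights of Equation (97) in log space, normalises them, and resamples. With many independent observations the largest normalised weight is typically very close to one, which is exactly the degeneracy described above.

import numpy as np

rng = np.random.default_rng(1)
Nx, Ny, Ne = 1000, 500, 20                  # toy sizes (illustrative)
r = 0.5                                     # observation error variance, R = r I (assumption)

truth = rng.standard_normal(Nx)
particles = rng.standard_normal((Ne, Nx))
H = np.eye(Ny, Nx)                          # observe the first Ny state variables
y = H @ truth + np.sqrt(r) * rng.standard_normal(Ny)

# log-likelihood weights of Equation (97), kept in log space for numerical stability
innov = y[None, :] - particles @ H.T
logw = -0.5 * np.sum(innov**2, axis=1) / r
w = np.exp(logw - logw.max())
w /= w.sum()
print("largest normalised weight:", w.max())   # typically close to 1: degeneracy

# multinomial resampling: duplicate high-weight particles, abandon low-weight ones
idx = rng.choice(Ne, size=Ne, p=w)
particles = particles[idx]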

Hence, as mentioned at the beginning of this section, to apply a particle filter to a high-dimensional system additional information is needed to limit the search space of the filter. One option is to use localisation directly on the standard particle filter. Local particle filters, like the so-called Local Particle Filter (Poterjoy, Citation2016a), will be discussed in the section on localisation in particle filters. We next discuss the proposal-density particle filters since this technique could be applied to all filters and permits us to achieve equal weights for the particles in a different way.

6.2. Proposal-density particle filters

To avoid ensemble degeneracy we aim to ensure that equally significant particles are drawn from the posterior density. To do this we have to ensure that all particles end up in the high-probability area of the posterior pdf, and that they have very similar, or even equal, weights. For the former we can use a scheme that pulls the particles towards the observations. Several methods can be used for this, including traditional methods like 4DVar, a variational method, and ensemble Kalman filters and smoothers. However, the main ingredient in efficient particle filters is the step that ensures that the weights of the different particles are close before any resampling step.

We start by writing the prior at time $m$ as follows: (98) $p\left(x(m)\right) = \int p\left(x(m)|x(m-1)\right)\, p\left(x(m-1)\right)\, dx(m-1)$.

Without loss of generality, but for simplicity, we assume that the particle weights in the ensemble at the previous time step $m-1$ are equal, so (99) $p\left(x(m-1)\right) = \frac{1}{N_e}\sum_{j=1}^{N_e}\delta\left(x(m-1) - x_j(m-1)\right)$.

Using Equation (99) in Equation (98) leads directly to: (100) $p\left(x(m)\right) = \frac{1}{N_e}\sum_{j=1}^{N_e} p\left(x(m)|x_j(m-1)\right)$,

hence, from Equation (93) the prior can be seen as a mixture density, with each density centred around one of the forecast particles.

One can now multiply and divide each term in Equation (100) by the same factor $q\left(x(m)|x_{1:N_e}(m-1), y(m)\right)$, in which $x_{1:N_e}(m-1)$ is defined as the collection of all particles at time $m-1$, and the conditioning denotes that each particle in general has a different parent to start from. This leads to (101) $p\left(x(m)\right) = \frac{1}{N_e}\sum_{j=1}^{N_e}\frac{p\left(x(m)|x_j(m-1)\right)}{q\left(x(m)|x_{1:N_e}(m-1), y(m)\right)}\, q\left(x(m)|x_{1:N_e}(m-1), y(m)\right)$,

where $q\left(x(m)|x_{1:N_e}(m-1), y(m)\right)$ is the so-called proposal transition density, or proposal for short, whose support should be equal to or larger than that of $p\left(x(m)|x(m-1)\right)$. Note that the proposal density as formulated here is slightly more general than the usual $q\left(x(m)|x_j(m-1), y(m)\right)$ through allowing for the explicit dependence on all particles at time $m-1$.

Drawing from this density we find for the posterior: (102) $p\left(x(m)|y(m)\right) = \frac{p\left(y(m)|x(m)\right)}{p\left(y(m)\right)}\, p\left(x(m)\right) = \frac{1}{N_e}\sum_{j=1}^{N_e} w_j\,\delta\left(x(m) - x_j(m)\right)$,

where $w_j$ are the particle weights given by (103) $w_j = \frac{p\left(y(m)|x_j(m)\right)}{p\left(y(m)\right)}\, \frac{p\left(x_j(m)|x_j(m-1)\right)}{q\left(x_j(m)|x_{1:N_e}(m-1), y(m)\right)}$.

Using Bayes' theorem, the numerator in the expression for the weights can be expressed as (104) $p\left(y(m)|x(m)\right)\, p\left(x(m)|x_j(m-1)\right) = p\left(x(m)|x_j(m-1), y(m)\right)\, p\left(y(m)|x_j(m-1)\right)$.

Therefore, the particle weight of ensemble member $j$ can be written as: (105) $w_j = \frac{p\left(y(m)|x_j(m-1)\right)}{p\left(y(m)\right)}\, \frac{p\left(x_j(m)|x_j(m-1), y(m)\right)}{q\left(x_j(m)|x_{1:N_e}(m-1), y(m)\right)}$.

In the so-called optimal proposal density (Doucet et al., Citation2000) one chooses $q\left(x_j(m)|x_{1:N_e}(m-1), y(m)\right) = p\left(x_j(m)|x_j(m-1), y(m)\right)$,

leading to weights $w_j \propto p\left(y(m)|x_j(m-1)\right)$. For systems with a large number of independent observations these weights are again degenerate (see, e.g. Snyder et al., Citation2008; Ades and van Leeuwen, Citation2013; Snyder et al., Citation2015).

Several efficient particle filter schemes have been developed that utilise the proposal density to avoid this degeneracy. Here we discuss the Equivalent-Weights Particle Filter (EWPF) and the Implicit Equal-Weights Particle Filter (IEWPF). As mentioned, the Implicit Particle Filter (Chorin et al., Citation2010), which allows for an extension of the one-time-step optimal proposal particle filter to a full time window, explores a 4DVar-like method on each particle. Since it needs an adjoint of the underlying model, it is not discussed in this paper.

6.2.1. The equivalent-weights particle filter

The EWPF works as follows:

(1)

Determine the optimal proposal weight $w_j \propto p\left(y(m)|x_j(m-1)\right)$ for each particle. Note that these weights vary enormously in high-dimensional systems.

(2)

Choose a target weight $w_{\mathrm{target}}$, based on these weights, that a certain percentage of particles can reach. For instance, if the target weight is set to the lowest of these weights we keep 100% of the particles. A choice of 50% means that the target weight is set to the median of these weights.

(3)

Calculate the position in state space of each particle such that it has a weight exactly equal to the target weight. This is where the proposal density comes in. Note that some of the particles cannot reach this target weight no matter how we move them, and these are brought back into the ensemble via the resampling step in point 5.

(4)

Add a small random perturbation to each particle and recalculate its weight.

(5)

Resample all particles such that their weights are equal again.

It is in step 3 that we use the fact that the proposal density is dependent on all previous particles, and not just particle j. This step is the main reason for the efficiency of the filter.

As an example, when the error in the model equations is additive Gaussian and the observation operator is linear, an analytical solution can be found for the maximum weight of each particle $j$, or actually, for the minimum of minus the log of that weight, called $\phi_j$: (106) $\phi_j = \left(y(m) - HM\left(x_j(m-1)\right)\right)^T\left(HQH^T + R\right)^{-1}\left(y(m) - HM\left(x_j(m-1)\right)\right)$.

Then a target weight is set from these $\phi_j$'s. The target weight splits the ensemble of particles into two groups: those particles that have a higher optimal proposal weight, and those with a lower optimal proposal weight. The latter are abandoned at this point, and will be regenerated in the resampling step 5.

For the retained particles, there is an infinite number of ways to move a particle in state space such that it reaches the target weight. In the EWPF that problem is solved by assuming (107) $\hat{x}_j(m) = M\left(x_j(m-1)\right) + \alpha_j\,\Upsilon\left(y(m) - HM\left(x_j(m-1)\right)\right)$,

in which $\alpha_j$ is a scalar, and $\Upsilon$ is defined as (108) $\Upsilon = QH^T\left(HQH^T + R\right)^{-1}$.

Under this assumption the number of solutions is reduced to two, and the two values for $\alpha_j$ are given by (109) $\alpha_j = 1 \pm \sqrt{1 - b_j/a_j}$,

in which (110) $a_j = \frac{1}{2}\left(y(m) - HM\left(x_j(m-1)\right)\right)^T R^{-1} H\Upsilon\left(y(m) - HM\left(x_j(m-1)\right)\right)$

and (111) $b_j = \frac{1}{2}\left(y(m) - HM\left(x_j(m-1)\right)\right)^T R^{-1}\left(y(m) - HM\left(x_j(m-1)\right)\right) + \log w_{\mathrm{target}} - \log w_j(m-1)$,

in which $w_j(m-1)$ is the weight of particle $j$ accumulated over previous time steps, included here for completeness. Note that $w_{\mathrm{target}}$ is the target weight selected from the $\phi_j$'s in Equation (106) (e.g. if we choose to keep 80% of the particles, $-\log\left(w_{\mathrm{target}}\right) = \tilde{\phi}_{0.8 N_e}$, where $\{\tilde{\phi}_j\}_{j=1,\ldots,N_e}$ is the sorted list of the optimal-proposal weights of the particles), and that $\alpha_j = 1$ pushes a particle to its optimal-proposal weight position. The solution resembles the optimal proposal solution in which the deterministic part of the proposal is scaled to ensure equal weights. Also note the resemblance of the deterministic part to the shape of that used in a Kalman filter when we replace $Q$ with the ensemble covariance of the state.
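A minimal sketch of steps 1 to 3 for this linear, Gaussian case is given below (Python with NumPy). All sizes, the diagonal Q and R, the assumption of equal previous weights, the choice of the 80% target, the selection of the lower root of Equation (109), and the clipping of negative square-root arguments are our own illustrative choices, not prescriptions from the EWPF literature.

import numpy as np

rng = np.random.default_rng(2)
Nx, Ny, Ne = 40, 10, 25                      # toy sizes (illustrative)
q, r = 0.1, 0.5                              # Q = q I, R = r I (assumptions)

H = np.eye(Ny, Nx)                           # observe the first Ny variables
Mx = rng.standard_normal((Ne, Nx))           # stands in for M(x_j(m-1)) for each particle
y = rng.standard_normal(Ny)

S = q * H @ H.T + r * np.eye(Ny)             # H Q H^T + R
Sinv = np.linalg.inv(S)
Ups = q * H.T @ Sinv                         # Equation (108)

d = y[None, :] - Mx @ H.T                    # innovations for all particles
phi = np.einsum('ji,ik,jk->j', d, Sinv, d)   # Equation (106)

# keep 80% of the particles: target from the sorted phi's, -log(w_target) = phi_target
phi_target = np.sort(phi)[int(0.8 * Ne) - 1]
keep = phi <= phi_target

a = 0.5 * np.einsum('ji,ik,jk->j', d, (1.0 / r) * (H @ Ups), d)   # Equation (110)
b = 0.5 * np.sum(d**2, axis=1) / r - phi_target                   # Equation (111), equal previous weights
alpha = 1.0 - np.sqrt(np.maximum(1.0 - b / a, 0.0))               # lower root of Equation (109)

# deterministic move of the retained particles, Equation (107)
x_hat = Mx + alpha[:, None] * (d @ Ups.T)
x_hat = x_hat[keep]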

When the number of independent observations is large the optimal proposal density particle filter is degenerate, meaning that one particle gets a much larger weight than all the others. The EWPF is not degenerate because a set percentage of all particles has a similar weight (before the resampling step). The EWPF does not, however, converge for large $N_e$ to the posterior pdf because of this equivalent-weights construction, in which high-weight particles are moved such that their weight becomes lower, equal to the target weight. So the scheme is biased. However, the large-$N_e$ limit is not that relevant in practice as the affordable number of particles will be low, below say 10,000, and typically of O(20-100). In that setting, the Monte Carlo error will be substantial, and the bias should be measured against the Monte Carlo error. As long as the latter is larger than the former the scheme is a valid alternative in high-dimensional systems.

Algorithm  in Appendix 2 gives a pseudo-algorithm for the EWPF.

6.2.2. The implicit equal-weights particle filter

This scheme is very similar to that of the EWPF:

(1)

Determine the optimal proposal weight $w_j \propto p\left(y(m)|x_j(m-1)\right)$ for each particle. Note that these weights vary enormously in high-dimensional systems.

(2)

Choose a target weight based on these weights that a certain percentage of particles can reach. Typically the target weight is chosen as the minimum of the maximal weights, so that all particles are kept.

(3)

Draw a random perturbation vector for each particle, and add this to the particle position that leads to maximal weight. So far the scheme is the same as that used in the optimal proposal density.

(4)

Scale each random vector such that each particle will reach the target weight.

(5)

Resample the particles such that their weights are equal in case the kept percentage is lower than 100%.

The main difference between this scheme and the EWPF is that in the EWPF we scale the deterministic part of the optimal proposal to reach a target weight, while here we scale the random part of the optimal proposal.

The implicit part of our scheme follows from drawing samples implicitly from a standard-Gaussian-distributed proposal density $q(\xi)$ instead of the original one $q\left(x(m)|x(m-1), y(m)\right)$, as in Chorin and Tu (Citation2009). These two pdfs are related by: (112) $q\left(x(m)|x_{1:N_e}(m-1), y(m)\right) = \frac{q(\xi_j)}{\left|\frac{dx}{d\xi_j}\right|}$,

where $\left|\frac{dx}{d\xi_j}\right|$ denotes the absolute value of the determinant of the Jacobian matrix of the $\mathbb{R}^{N_x} \to \mathbb{R}^{N_x}$ transformation $x_j = g_j(\xi_j)$. The transformation $g_j(\cdot)$ is now defined via the following implicit relation between the variables $x_j(m)$ and $\xi_j$: (113) $x_j(m) = x_j^a + \alpha_j^{1/2} P^{1/2}\xi_j(m)$,

where $x_j^a$ is the mode of $p\left(x_j(m)|x_j(m-1), y(m)\right)$, $P$ a measure of the width of that pdf, and $\alpha_j$ a scalar that depends on $\xi_j(m)$.

The $\alpha_j$ are now chosen such that all particles get the same weight $w_{\mathrm{target}}$, so the scalar $\alpha_j$ is determined for each particle from: (114) $w_j = \frac{p\left(x_j(m)|x_j(m-1), y(m)\right)\, p\left(y(m)|x_j(m-1)\right)}{q(\xi_j)}\left|\frac{dx}{d\xi_j}\right|\, w_j(m-1) = w_{\mathrm{target}}$.

This ensures that the filter is not degenerate in systems with arbitrary dimensions and an arbitrary number of independent observations. Because of the target-weight construction the filter does not converge to the correct posterior pdf, and the same discussion as for the EWPF applies here, namely that as long as this bias is smaller than the Monte-Carlo error this filter is a valid candidate for high-dimensional non-linear filtering.

As an example we assume now that the observation errors and model errors are Gaussian and that the observation operator $H \in \mathbb{R}^{N_y \times N_x}$ is linear. Then we find that (115) $p\left(y(m)|x(m)\right)\, p\left(x(m)|x_j(m-1)\right) = \frac{1}{A}\exp\left[-\frac{1}{2}\left(y(m) - Hx(m)\right)^T R^{-1}\left(y(m) - Hx(m)\right) - \frac{1}{2}\left(x(m) - Mx_j(m-1)\right)^T Q^{-1}\left(x(m) - Mx_j(m-1)\right)\right] = \frac{1}{A}\exp\left[-\frac{1}{2}\left(x(m) - x_j^a\right)^T P^{-1}\left(x(m) - x_j^a\right)\right]\exp\left[-\frac{1}{2}\phi_j\right] = p\left(x(m)|x_j(m-1), y(m)\right)\, p\left(y(m)|x_j(m-1)\right)$,

where (116) $P = \left(Q^{-1} + H^T R^{-1} H\right)^{-1}$, (117) $x_j^a = Mx_j(m-1) + \Upsilon\left(y(m) - HMx_j(m-1)\right)$,

and (118) $\phi_j = \left(y(m) - HMx_j(m-1)\right)^T\left(HQH^T + R\right)^{-1}\left(y(m) - HMx_j(m-1)\right)$.

This leads to a complicated non-linear differential equation for $\alpha_j$ that involves the determinant of $P$. Since we are interested in high-dimensional problems we consider this equation in the limit of large state dimension $N_x$. In that limit it turns out that we can integrate this equation, leading to the much simpler equation (see the Appendix in Zhu et al. (Citation2016)): (119) $\left(\alpha_j - 1\right)\gamma_j - N_x\log\left(\alpha_j\right) + \phi_j - \log w_j(m-1) = \log w_{\mathrm{target}}$,

in which $\gamma_j = \xi_j^T\xi_j$. This equation can be solved numerically, for example with Newton's method, but analytical solutions based on the so-called Lambert W function also exist. We do not elaborate on these here.
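Since Equation (119) is a scalar equation per particle, a simple Newton iteration suffices as an illustration. The sketch below (Python with NumPy) follows Equation (119) as written; the numerical values of $\gamma_j$, $\phi_j$, $N_x$ and the target, the starting guess and the tolerance are all illustrative assumptions, and a production implementation would typically use the analytical Lambert W solution instead.

import numpy as np

def solve_alpha(gamma, phi, Nx, log_wtarget, log_wprev=0.0,
                alpha0=0.5, tol=1e-10, maxit=100):
    # Newton iteration for Equation (119):
    # (alpha - 1)*gamma - Nx*log(alpha) + phi - log_wprev = log_wtarget
    alpha = alpha0
    for _ in range(maxit):
        f = (alpha - 1.0) * gamma - Nx * np.log(alpha) + phi - log_wprev - log_wtarget
        fp = gamma - Nx / alpha
        step = f / fp
        alpha = max(alpha - step, 1e-12)     # keep alpha positive
        if abs(step) < tol:
            break
    return alpha

# purely illustrative numbers
alpha_j = solve_alpha(gamma=1000.0, phi=40.0, Nx=1000, log_wtarget=45.0)
print(alpha_j)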

Algorithm  in Appendix 2 gives a pseudo-algorithm for the IEWPF.

6.2.3. Between observations: relaxation steps

If the system is not observed at every time step, the schemes mentioned above can be used over the time window between observations. No analytical solutions can be obtained in this case, so the solution has to be found iteratively. However, this procedure is rather expensive as it typically involves solving a problem similar to a 4DVar on each particle. Thus, typically simpler schemes are employed between observation times. These schemes will be less efficient, although we can ensure that Bayes' theorem is fulfilled exactly for each particle.

In the following, we demonstrate the use of relaxation between observation times. We use the future observations to relax the particles at time $k$ towards the observations at the next observation time $m > k$ by using, instead of Equation (92), the modified forward model (120) $x_j(k) = M_k\left(x_j(k-1)\right) + \tilde{\beta}_j(k) + \tilde{\Upsilon}\left(y(m) - H_k x_j(k-1)\right)$,

where $\tilde{\beta}_j(k) \in \mathbb{R}^{N_x}$ are random terms representing the model error, distributed according to a given covariance matrix $\tilde{Q}$, $M_k$ is the same deterministic model as in Equation (92), and $\tilde{\Upsilon}$ is a relaxation matrix given by (121) $\tilde{\Upsilon} = \tau(k)\, QH^T R^{-1}$.

Here, $\tau(k)$ is a time-dependent scalar that determines the strength of the relaxation, $y(m) \in \mathbb{R}^{N_y}$ is the vector of $N_y$ observations at time $m$, and $H_k: \mathbb{R}^{N_x} \to \mathbb{R}^{N_y}$ is the observation operator mapping model space into observation space. Note that the observations $y(m)$ exist at the later time $m > k$. The modified transition density is now given by (122) $q\left(x_j(k)|x_j(k-1), y(m)\right) = N\left(M_k\left(x(k-1)\right) + \tilde{\Upsilon}\left(y(m) - H_k x(k-1)\right), \tilde{Q}\right)$,

and the modified weights $w_j(k)$ are accumulated as (123) $w_j(k) \propto \frac{p\left(x_j(k)|x_j(k-1)\right)}{q\left(x_j(k)|x_j(k-1), y(m)\right)}\, w_j(k-1) \propto \prod_{t=1}^{k}\frac{p\left(x_j(t)|x_j(t-1)\right)}{q\left(x_j(t)|x_j(t-1), y(m)\right)}$.

This simple modification of the forward model to include information about future observations using a relaxation term is only consistent with Bayes' theorem when the weights that are introduced by this modification are properly taken into account, and it leads to efficient schemes if it is used in combination with an equal-weight scheme, like the EWPF or the IEWPF. Algorithm in Appendix 2 gives a pseudo-algorithm of the relaxation step used in the EWPF and the IEWPF.
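As an illustration of Equations (120) to (123), the sketch below (Python with NumPy) performs one relaxed forward step and accumulates the corresponding log-weight for each particle. The linear model, the diagonal Q, Q-tilde and R, and the relaxation strength tau are all illustrative assumptions on our part.

import numpy as np

rng = np.random.default_rng(3)
Nx, Ny, Ne = 30, 10, 15
q, qt, r, tau = 0.2, 0.2, 0.5, 0.4        # Q, Q~, R variances and tau(k) (assumptions)

M = 0.95 * np.eye(Nx)                     # toy linear deterministic model
H = np.eye(Ny, Nx)
y_future = rng.standard_normal(Ny)        # observation valid at the later time m

Ups_t = tau * q * H.T / r                 # Equation (121) for diagonal Q and R

x = rng.standard_normal((Ne, Nx))
logw = np.zeros(Ne)

# one relaxed step, Equation (120)
beta = np.sqrt(qt) * rng.standard_normal((Ne, Nx))
relax = (y_future[None, :] - x @ H.T) @ Ups_t.T
x_new = x @ M.T + beta + relax

# weight update of Equation (123): log p(x_new|x) - log q(x_new|x, y)
# (normalisation constants omitted; they cancel here because Q = Q~)
dp = x_new - x @ M.T
dq = x_new - (x @ M.T + relax)
logw += -0.5 * np.sum(dp**2, axis=1) / q + 0.5 * np.sum(dq**2, axis=1) / qt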

Note that it would also be possible to use other methods like ensemble smoothers or ensemble 4DVar-like methods to move particles between observations, but we will not elaborate on those here.

6.3. The Ensemble Transform Particle Filter (ETPF)

The idea of the Ensemble Transform Particle Filter (Reich, Citation2013) is to avoid resampling by finding a linear transportation map between the prior and the posterior ensemble such that the prior particles are minimally modified, while ensuring that the posterior particles have equal weight. We write each posterior particle as a linear combination of the prior particles as (124) $x_j^a = N_e\sum_{i=1}^{N_e} x_i^f\, t_{ij}$,

in which we ensure that the particles have the correct mean via (125) $\sum_{i=1}^{N_e} t_{ij} = \frac{1}{N_e}, \qquad \sum_{j=1}^{N_e} t_{ij} = w_i$.

This still leaves $N_e^2 - 2$ undetermined elements $t_{ij}$. These are found by minimising the movement from old to new particles, i.e. by minimising (126) $J(T) = \sum_{i,j}^{N_e} t_{ij}\left\|x_i^f - x_j^f\right\|^2$

under the condition that $t_{ij} \ge 0$. The above formulation is an example of an optimal transportation algorithm, see e.g. the review by Chen and Reich in van Leeuwen et al. (Citation2015). This scheme can be combined with any proposal density discussed in the previous section.
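The transport problem of Equations (124) to (126) is a small linear program, and for illustration it can be handed to a general-purpose LP solver. The sketch below (Python with NumPy and SciPy's linprog) does exactly that; the toy dimensions and random weights are our own assumptions, and in practice dedicated optimal-transport solvers are far more efficient than a generic LP routine.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
Nx, Ne = 5, 8                                  # toy sizes (illustrative)

Xf = rng.standard_normal((Nx, Ne))             # forecast particles as columns
logw = -0.5 * rng.standard_normal(Ne)**2
w = np.exp(logw - logw.max()); w /= w.sum()    # normalised particle weights

# cost c_ij = |x_i^f - x_j^f|^2, Equation (126)
diff = Xf[:, :, None] - Xf[:, None, :]
C = np.sum(diff**2, axis=0)

# equality constraints of Equation (125): column sums 1/Ne, row sums w_i
A_eq = np.zeros((2 * Ne, Ne * Ne))
b_eq = np.zeros(2 * Ne)
for j in range(Ne):                            # sum_i t_ij = 1/Ne
    A_eq[j, j::Ne] = 1.0
    b_eq[j] = 1.0 / Ne
for i in range(Ne):                            # sum_j t_ij = w_i
    A_eq[Ne + i, i * Ne:(i + 1) * Ne] = 1.0
    b_eq[Ne + i] = w[i]

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
T = res.x.reshape(Ne, Ne)

Xa = Ne * (Xf @ T)                             # posterior particles, Equation (124)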

If the dynamical model is deterministic one needs to add some small random noise to the particles to avoid ensemble collapse. Typically this noise is assumed to be Gaussian with zero mean and covariance $h^2 P^f$, with $0 < h < 1$ a free parameter. This term is an ad hoc addition related to inflation in ensemble Kalman filters.

Algorithm  in Appendix 2 gives a pseudo-algorithm of the ETPF.

7. Second-order exact ensemble Kalman filters

Several extensions to ensemble Kalman filters have been proposed to overcome the linearity or Gaussianity assumptions. A large number of filters exist that try to bridge the ensemble Kalman filter and the particle filter by defining smoothly varying parameters that move the filter between these two extremes based on the degeneracy of the particle filter. In high-dimensional systems, however, all of these filters become ensemble Kalman filters as any particle filter contribution results in complete degeneracy. These filters (not discussed here) will become useful when localisation is applied.

In a non-linear, non-Gaussian case the ensemble Kalman filters will necessarily produce an analysis where the mean and covariance are biased due to the assumption of a Gaussian prior pdf (Lei and Bickel, Citation2011). Here, we will discuss four ensemble filters that concentrate on getting the first two moments of the posterior distribution correct in non-linear situations. These are the Particle Filter with Gaussian Resampling of Xiong et al. (Citation2006), the Non-linear Ensemble Transform Filter (Tödter and Ahrens, Citation2015), the Moment-Matching Ensemble Filter (Lei and Bickel, Citation2011), and the Merging Particle Filter (Nakano et al., Citation2007).

7.1. Particle Filter with Gaussian Resampling (PFGR) and Non-linear Ensemble Transform Filter (NETF)

The Particle Filter with Gaussian Resampling (PFGR, Xiong et al. (Citation2006)) introduced an explicit ensemble transformation matching the mean and covariance matrix. The Non-linear Ensemble Transform Filter (NETF, Tödter and Ahrens, Citation2015) is a recent reinvention of this algorithm formulated to obtain an ensemble transformation that is analogous to that of the ETKF. In addition, the NETF was introduced with localisation, so that the filter can be applied to high-dimensional systems (see Section 9.1). The presentation here follows the more modern formulation of the NETF in analogy to the ETKF presented before. As a novel feature, the presented formulation avoids the explicit computation of the analysis state, which is given by the weighted ensemble mean.

The PFGR and the NETF are designed to exactly match the first two moments of the posterior pdf in Bayes' theorem without assuming that the prior or likelihood are normally distributed. The forecast ensemble is transformed into an analysis ensemble by applying a weight vector to obtain the analysis mean state and a transform matrix to obtain analysis ensemble perturbations, analogous in form to a square root filter (Equations (17) to (19)).

As in most particle filters, the likelihood weights that arise from Bayes' theorem, (127) $w_j = \frac{p\left(y|x_j\right)}{\sum_{k=1}^{N_e} p\left(y|x_k\right)}$,

are used. For normally distributed observation errors, the weight of each member is at first given by (128) $w_j \propto \exp\left[-\frac{1}{2}\left(y - Hx_j^f\right)^T R^{-1}\left(y - Hx_j^f\right)\right]$

and then normalised so that the weights sum up to one. Before the weights are computed, the ensemble perturbations should be inflated by an inflation factor $\gamma > 1$ as in the ensemble-based Kalman filters (for inflation see Section 9.2). Using the weight vector $w = \left(w_1, \ldots, w_{N_e}\right)^T$ the transform matrix is (129) $TT^T = N_e\left(\mathrm{diag}(w) - ww^T\right)$.

Here, $\mathrm{diag}(w)$ is a diagonal matrix that contains the weights $w_j$ on the diagonal. The factor $N_e$ was not present in the formulation by Xiong et al. (Citation2006). It was introduced by Tödter and Ahrens (Citation2015) to ensure that the ensemble has the correct analysis variance. As in the ensemble Kalman filters, the eigenvalue decomposition $TT^T = U\Sigma U^T$ yields the ensemble transformation (130) $T = U\Sigma^{1/2}U^T$.

Combining the weight vector and transform matrix as in Equation (19), the analysis ensemble is given by (131) $X^a = X^f\left(T\Lambda + [w, \ldots, w]\right)$.

Here, $\Lambda$ is a random matrix. Xiong et al. (Citation2006) use a random matrix sampled from a normal distribution with mean zero and standard deviation one. They use $\Lambda$ because in Equation (130) they omit all eigenvalues that are very close to zero and need to restore an ensemble of full size. In contrast, Tödter and Ahrens (Citation2015) use a mean-preserving orthogonal matrix (see Pham, Citation2001) analogous to that used in the SEIK filter. They motivate the use of $\Lambda$ also by the reduction of ensemble outliers and show experimentally that the random transformation with mean-preserving properties leads to a more stable data assimilation process.

Note that the transformation in Equation (131) is applied to the full ensemble matrix $X^f$ instead of the ensemble perturbation matrix $X'^f$, without subsequent addition of the analysis mean state (see e.g. Equation (19)). This is possible because of the property of $T$ to implicitly subtract the ensemble mean, while the multiplication of $X^f$ with the array of weight vectors adds the analysis mean state.
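A compact sketch of the PFGR/NETF update of Equations (127) to (131) is given below (Python with NumPy). For simplicity the random matrix Lambda is set to the identity, the observation errors are taken as diagonal, and all dimensions are illustrative; none of these choices are prescribed by the original papers.

import numpy as np

rng = np.random.default_rng(5)
Nx, Ny, Ne = 60, 20, 12
r = 0.5                                           # R = r I (assumption)

Xf = rng.standard_normal((Nx, Ne))                # full forecast ensemble as columns
H = np.eye(Ny, Nx)
y = rng.standard_normal(Ny)

# likelihood weights, Equations (127)-(128)
innov = y[:, None] - H @ Xf
logw = -0.5 * np.sum(innov**2, axis=0) / r
w = np.exp(logw - logw.max()); w /= w.sum()

# transform matrix, Equations (129)-(130)
A = Ne * (np.diag(w) - np.outer(w, w))
lam, U = np.linalg.eigh(A)
T = U @ np.diag(np.sqrt(np.maximum(lam, 0.0))) @ U.T   # clip tiny negative eigenvalues

# analysis ensemble, Equation (131), with Lambda = I for simplicity
W_mean = np.tile(w[:, None], (1, Ne))             # the array [w, ..., w]
Xa = Xf @ (T + W_mean)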

For high-dimensional systems, a localisation of the analysis step is required. It was introduced by Tödter and Ahrens (Citation2015) in analogy to the localisation of the ETKF and SEIK filters (see Sec. 9.1). Algorithm  in Appendix 2 gives a pseudo-algorithm of the PFGR and NETF and Algorithm shows the computation of the weights for Gaussian observation errors.

7.2. Moment-Matching Ensemble Filter (MMEF)

A stochastic algorithm that has second-order correct statistics was developed by Lei and Bickel (Citation2011). In this Moment-Matching Ensemble Filter (MMEF) we generate an ensemble of perturbed pseudo-observations, $y_j^f$, as in the SEnKF (see Equations (20) and (21)): (132) $y_j^f \sim p_{H(x_j)}\left(y|x_j\right)$,

using $H(x_j)$ as the variable in the density, so $y$ is fixed. Then the analysis mean for each particle is generated using the corresponding pseudo-observation as follows: (133) $\bar{x}^a\left(y_j^f\right) = \sum_{k=1}^{N_e} w_k\left(y_j^f\right) x_k = X^f w\left(y_j^f\right)$,

in which $w_k\left(y_j^f\right)$ is given by (134) $w_k\left(y_j^f\right) = \frac{p\left(y_j^f|x_k\right)}{\sum_{l=1}^{N_e} p\left(y_j^f|x_l\right)}$.

Similarly, the analysis mean for the actual observations is computed via (135) $\bar{x}^a(y) = \sum_{k=1}^{N_e} w_k(y)\, x_k = X^f w(y)$.

Furthermore, we calculate the equivalent expressions for the covariances for perturbed and actual observations as follows: (136) $P^a\left(y_j^f\right) = \sum_{k=1}^{N_e} w_k\left(y_j^f\right)\left(x_k - \bar{x}^a\left(y_j^f\right)\right)\left(x_k - \bar{x}^a\left(y_j^f\right)\right)^T$, (137) $P^a(y) = \sum_{k=1}^{N_e} w_k(y)\left(x_k - \bar{x}^a(y)\right)\left(x_k - \bar{x}^a(y)\right)^T$.

Then each of the ensemble members or particles is updated via (138) $x_j^a = \bar{x}^a(y) + P^a(y)^{1/2} P^a\left(y_j^f\right)^{-1/2}\left(x_j - \bar{x}^a\left(y_j^f\right)\right)$.

This filter gives the correct posterior mean and covariance in the large-ensemble limit (Lei and Bickel, Citation2011). To see this, note that $x_j - \bar{x}^a\left(y_j^f\right)$ is distributed according to $N\left(0, P^a\left(y_j^f\right)\right)$, so $P^a\left(y_j^f\right)^{-1/2}\left(x_j - \bar{x}^a\left(y_j^f\right)\right)$ is distributed as $N(0, I)$, and hence the distribution of $x_j^a$ is $N\left(\bar{x}^a(y), P^a(y)\right)$.

This filter cannot be used in high-dimensional systems, even when localisation is applied, because it needs the evaluation of several full covariance matrices. However, we can explore ensemble perturbations that are used to calculate these covariances, as in all ensemble Kalman filter schemes. The following was not discussed by Lei and Bickel (Citation2011), but is a practical way to make the filter useful in high-dimensional systems.

We can express each covariance matrix $P^a\left(y_j^f\right)$ directly in terms of the forecast ensemble as (139) $P^a\left(y_j^f\right) = X^f T\left(y_j^f\right)\left(X^f\right)^T$,

where the matrix $T\left(y_j^f\right)$ is given by (140) $T\left(y_j^f\right) = \mathrm{diag}\left(w\left(y_j^f\right)\right) - w\left(y_j^f\right) w\left(y_j^f\right)^T$.

The square root of this matrix is (141) $P^a\left(y_j^f\right)^{1/2} = X^f T\left(y_j^f\right)^{1/2}$.

To find the inverse of this matrix we perform an SVD of the prior ensemble matrix (142) $X^f = U\Lambda V^T$

and also compute the EVD of the much smaller square matrices $T\left(y_j^f\right)$: (143) $T\left(y_j^f\right) = \tilde{U}_j\tilde{\Lambda}_j\tilde{U}_j^T$.

Using Equations (142) and (143) we find (144) $P^a\left(y_j^f\right)^{-1/2} = \tilde{U}_j^T\tilde{\Lambda}_j^{1/2} V\Lambda^{-1}U^T$.

Hence, we can write the update equation of the MMEF as (145) $x_j^a = \bar{x}^a + X^f T^{1/2}\tilde{U}_j^T\tilde{\Lambda}_j^{1/2} V\Lambda^{-1}U^T\left(x_j - \bar{x}^a\left(y_j^f\right)\right)$.

This expression is suitable for high-dimensional applications when the matrices T(yjf) are computed with localisation.

7.3. Merging Particle Filter (MPF)

The merging particle filter generates several sets of posterior ensembles and merges them via a weighted average to obtain a new set of particles that has the correct mean and covariance but is more robust than the standard particle filter. Specifically, the method draws a set of $q$ ensembles, each of size $N_e$, from the weighted prior ensemble at the resampling step. Denote each ensemble member as $x_{j,i}$ for ensemble member $j$ in ensemble $i$. Then the new merged ensemble members are generated via (146) $x_j^a = \sum_{i=1}^{q}\alpha_i x_{j,i}$.

To ensure that the new ensemble has the correct mean and covariance, the coefficients $\alpha_i$ need to fulfil the two conditions (147) $\sum_{i=1}^{q}\alpha_i = 1, \qquad \sum_{i=1}^{q}\alpha_i^2 = 1$,

where each $\alpha_i$ also has to be a real number.

When $q > 3$ there is no unique solution for the $\alpha$'s, while for $q = 3$ we find: (148) $\alpha_1 = \frac{3}{4}, \qquad \alpha_2 = \frac{\sqrt{13} + 1}{8}, \qquad \alpha_3 = -\frac{\sqrt{13} - 1}{8}$.
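As a quick numerical check (Python with NumPy), these three coefficients indeed satisfy both conditions of Equation (147):

import numpy as np

alpha = np.array([3/4, (np.sqrt(13) + 1) / 8, -(np.sqrt(13) - 1) / 8])
print(alpha.sum(), (alpha**2).sum())   # both equal 1 up to rounding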

Although not discussed by Nakano et al. (Citation2007), this scheme will be degenerate for high-dimensional problems. However, we can make the $\alpha$'s space-dependent when $q > 3$ and then apply localisation.

8. Adaptive Gaussian mixture filter

Both the ensemble Kalman and the Monte Carlo-based techniques discussed in Sections 5 and 6, respectively, have their drawbacks. The Gaussian mixture filter (Anderson and Anderson, Citation1999; Bengtsson et al., Citation2003; Hoteit et al., Citation2008) attempts to avoid these by approximating an arbitrary form of the prior by combining multiple Gaussian priors. This gives it the advantage that both the local Kalman-filter-type correction step and the weighting and resampling step of a particle filter can be applied. This also makes it applicable to highly non-linear and high-dimensional systems. In this paper, we discuss the adaptive Gaussian mixture filter developed by Stordal et al. (Citation2011) as a representative scheme out of all the Gaussian mixture filters that have been proposed.

In the Gaussian mixture filter, the prior distribution is approximated by a kernel density mixture (Silverman, Citation1986) in which each ensemble member forms the centre of a Gaussian density function: (149) $p\left(x^f\right) = \sum_{j=1}^{N_e}\frac{1}{N_e} N\left(x_j^f, \tilde{P}^f\right)$,

where $N\left(x_j, \tilde{P}\right)$ denotes a multivariate Gaussian kernel density with ensemble member $x_j$ as mean and covariance matrix $\tilde{P}^f = h^2 P^f$, in which $P^f$ is the covariance of the whole forecast ensemble and $h$ is a bandwidth parameter. Stordal et al. (Citation2011) discuss that the optimal choice of the bandwidth is $h_{\mathrm{opt}} \propto N_e^{-1/5}$ if we are only interested in the marginal properties of the individual components of $x$, but that it might be beneficial to choose $h > h_{\mathrm{opt}}$ to reduce the risk of filter divergence, since the choice of the bandwidth parameter determines the magnitude of the Kalman filter update step. Thus, the parameter $h$ is treated as a design parameter and is defined by the user. Note that each particle represents the mean of a Gaussian kernel and that the uncertainty associated with each particle is given by the covariance of that Gaussian kernel (Stordal et al., Citation2011).

If the likelihood is Gaussian, the posterior pdf is again a Gaussian mixture, now with pdf (150) $p\left(x^a|y\right) = \sum_{j=1}^{N_e} w_j N\left(x_j^a, \tilde{P}^a\right)$.

Here, the weights $w_j$ are proportional to $N\left(y - Hx_j^f, R^a\right)$ with $\sum_j w_j = 1$ and $R^a = H\tilde{P}^f H^T + R$. So, compared to the particle filter, the covariance used in the weights is inflated by a term $H\tilde{P}^f H^T$, leading to more equal weights. Each mean $x_j^a$ and the covariance matrix $\tilde{P}^a$ are obtained using one of the EnKF variants.

In high-dimensional systems, the covariance matrices are never formed explicitly, and the algorithm in Stordal et al. (Citation2011) cannot be used. Hoteit et al. (Citation2008) used an update based on the SEIK filter (see Section 5.2). For a more modern formulation, we provide here an algorithm based on Stordal et al. (Citation2011) but explore an ETKF to avoid the explicit computation of $\tilde{P}$. First, the matrix (151) $T_{GM}T_{GM}^T = \left[I + \frac{h^2}{N_e - 1} S^T R^{-1} S\right]^{-1}$

is generated with $S = HX^f$, similar to Equation (44) but including the factor $h^2$. Then, we perform an EVD of the symmetric matrix $\left(T_{GM}T_{GM}^T\right)^{-1} = U_{GM}\Sigma_{GM}U_{GM}^T$ and obtain the symmetric square root (152) $T_{GM} = U_{GM}\Sigma_{GM}^{-1/2}U_{GM}^T$.

This is used to update the mean of each Gaussian kernel by calculating the ETKF update on each of the prior particles as (153) $\bar{w}_{j,GM} = \frac{1}{N_e - 1} U_{GM}\Sigma_{GM}^{-1}U_{GM}^T\left(hX^f\right)^T H^T R^{-1} d_j$,

in which $d_j = y - Hx_j^f$. The new centres of the Gaussian mixture densities are now found as (154) $x_j^a = x_j^f + hX^f\bar{w}_{j,GM}$.

A square root of the posterior covariance of each Gaussian mixture density is found by (155) $Z = hX^f W_{GM}$,

in which (156) $W_{GM} = U_{GM}\Sigma_{GM}^{-1/2}U_{GM}^T$.

Thus, for Equation (150) we have $\tilde{P}^a = (N_e - 1)^{-1} ZZ^T$, but to evaluate that equation one can use the square root $Z$, so it is not required to compute $\tilde{P}^a$ explicitly.

Until this point, the algorithm is the standard Gaussian mixture filter. The adaptive part of the filter was introduced by Stordal et al. (Citation2011) and has been demonstrated to avoid filter divergence due to ensemble degeneration. Further, the adaptivity allows us to choose smaller values of the bandwidth parameter $h$. To stabilise the Gaussian mixture filter, we interpolate the original analysis weights with a uniform weight as (157) $w_j^{\alpha} = \alpha w_j + (1 - \alpha) N_e^{-1}$.

For the adaptivity, $\alpha$ is chosen to be (158) $\alpha = N_{\mathrm{eff}} N_e^{-1}$,

where $N_{\mathrm{eff}} = 1/\sum_{l=1}^{N_e} w_l^2$ is the effective ensemble size. To avoid ensemble degeneration one can further add a resampling step as in particle filters. It is performed if $N_{\mathrm{eff}} < N_c$, with $N_c$ a value that can be chosen freely, for instance $N_c = 0.5 N_e$. The full scheme then becomes:

(a)

When $N_{\mathrm{eff}} \ge N_c$ no resampling is needed, so the weights are calculated as above and transported with each particle to the next set of observations.

(b)

When $N_{\mathrm{eff}} < N_c$ we will resample according to any of the resampling schemes in Appendix 1. This leads to a new set of states for the centres of the Gaussian mixtures, denoted $x_{j,(i)}$, in which $j$ denotes the index of the state before resampling, and $i$ its index after resampling. Note that several of the new states will coincide. To avoid identical samples we draw our final new ensemble from the Gaussian mixtures, as follows: (159) $\xi_i \sim N(0, I)$, (160) $x_i^a = x_{j,(i)} + Z\xi_i$, (161) $w_i = \frac{1}{N_e}$. Further, we set $\tilde{P}^a = (N_e - 1)^{-1} X^a\left(X^a\right)^T$, but use this only in factorised form.

Note that in this scheme we never calculate a full state covariance.
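The adaptive weighting and the resampling trigger are simple to express in code. The sketch below (Python with NumPy) follows Equations (157) and (158); the ensemble size, the illustrative weight values, the use of the original weights for computing the effective ensemble size, and the threshold Nc = 0.5 Ne are all our own assumptions.

import numpy as np

rng = np.random.default_rng(6)
Ne = 50

# unnormalised Gaussian-mixture analysis weights (illustrative values only)
logw = -10.0 * rng.exponential(size=Ne)
w = np.exp(logw - logw.max()); w /= w.sum()

# adaptive interpolation towards uniform weights, Equations (157)-(158)
Neff = 1.0 / np.sum(w**2)
alpha = Neff / Ne
w_alpha = alpha * w + (1.0 - alpha) / Ne

# resampling trigger with Nc = 0.5 Ne as suggested in the text;
# after resampling one would draw new states from the Gaussian kernels (Equations (159)-(161))
Nc = 0.5 * Ne
if Neff < Nc:
    idx = rng.choice(Ne, size=Ne, p=w_alpha)
    print("resampled indices:", idx)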

It is important to realise what the adaptive part does. Indeed, by construction, the filter is not degenerate, but at the expense of strongly reducing the influence of the observations when $\alpha$ is small. In high-dimensional systems with a large number of independent observations, localisation is essential to avoid the scheme reducing to a sum of ensemble Kalman filters only.

In the scheme by Bengtsson et al. (Citation2003), the mean of each Gaussian pdf is chosen at random from the ensemble, and the covariance in each Gaussian pdf is estimated from the ensemble members which are local in state space, including a localisation and smoothing step. Since the scheme has not been applied to high-dimensional systems it will not be discussed here.

9. Practical implementation of the ensemble methods

This section is devoted to issues related to the practical implementation of the ensemble methods. In particular, we address the need for localisation and inflation in some of the presented ensemble methods to counteract the issues arising from ensemble undersampling in large scale problems such as ocean and atmosphere prediction. We also discuss the computational cost of each method as presented and the parallelisation of ensemble data assimilation methods. We will conclude this section with a discussion on the suitability of the ensemble data assimilation methods applied to non-linear dynamical models.

9.1. Localisation in EnDA

The success of the EnDA methods is highly dependent on the size of the ensemble being adequate for the system we apply these methods to. Thus, for large-scale problems, where the number of state variables is many orders of magnitude larger than the number of ensemble members, ensemble undersampling can cause major problems in EnDA methods: underestimated ensemble variance, filter divergence, and errors in estimated correlations, in particular spurious long-range correlations. In such cases, spatial localisation is a necessary tool to minimise the effect of undersampling.

Localisation damps long-range correlations, e.g. in the ensemble covariance matrix ('covariance localisation', see Section 9.1.2). This damping can be applied to the extent that only correlations over limited distances are kept and long-range correlations are erased in the analysis step. Thus, localisation decouples the analysis update at distant locations in a model grid. The underlying assumption of localisation is that the assimilation problem has in fact a local structure. This means that correlation length scales are much shorter than the extent of the model grid, so that only correlations over short distances are relevant, while for long distances the sampling error in the ensemble-estimated covariance matrices dominates (see, e.g. Morzfeld et al., Citation2017). This seems to be fulfilled for many oceanic and atmospheric applications. For example, Patil et al. (Citation2001) described a locally low dimension for atmospheric dynamics. The success of localised filters in oceanic and atmospheric data assimilation applications also shows that this condition is dominantly fulfilled, even though it is known that long-range correlations (teleconnections) exist in the atmosphere and ocean. However, if a modelling problem does not have a local structure, if too few observations are available, or if the observations only represent long-range properties of the system, localisation cannot be applied.

Localisation is usually applied either explicitly by considering only observations from a region surrounding the location of the analysis, or implicitly by modifying P or R so that observations from beyond a certain distance cannot affect the analysis state. The way in which such localisation is applied is still an active field of research and many variants of localisation schemes have emerged over the last decade. There are two main types of spatial localisation techniques (or simply localisation) that are widely used in ensemble data assimilation: covariance localisation (also termed P- or B-localisation) and observation localisation (also denoted R-localisation). Both methods will be discussed here together with domain localisation, which is required for the application of observation localisation. In addition, a number of adaptive localisation schemes have been developed over recent years. A selection of these schemes is discussed in Section 9.1.4.

In general, all localisation schemes are empirical. While they improve the estimations by ensemble filters, they can disturb balances in the model state (Lorenc, Citation2003; Kepert, Citation2009). Further, the interaction of localisation with the serial observation processing usually applied with the EnSRF and EAKF methods can reduce the stability of these filters (Nerger, Citation2015).

9.1.1. Domain localisation

Domain localisation or local analysis is the oldest localisation technique. For ensemble Kalman filters it was first applied by Houtekamer and Mitchell (Citation1998), but the method was also applied in earlier schemes of optimal interpolation (see Cohn et al., Citation1998). In domain localisation we only use the ensemble perturbations that belong to the domain $D_\gamma$ in which the analysis correction of the state estimate is computed. For example, this domain can be a vertical column of grid points or a single grid point. Thus, we use a linear transformation $D_\gamma$ to obtain (162) $x_{j,\gamma} = D_\gamma x_j^f$,

where $j = 1, \ldots, N_e$ and $\gamma = 1, \ldots, \Gamma$, with $\Gamma$ being the total number of subdomains. To localise, we now only use observations within a specified distance (the localisation radius) around the local domain $D_\gamma$. This defines a local observation domain $\hat{D}_\gamma$. Using the corresponding linear transformation $\hat{D}_\gamma$ we can transform the observation error covariance matrix $R$, the global observation vector $y$, and the global observation operator $H$ analogously to Equation (162) to their local parts: (163) $y_\gamma = \hat{D}_\gamma y$, (164) $H_\gamma = \hat{D}_\gamma H$, (165) $R_\gamma = \hat{D}_\gamma R\hat{D}_\gamma^T$.

Thus, we neglect observations that are outside of the domain $\hat{D}_\gamma$. Then a general local analysis state is given by (166) $X_\gamma^a = \bar{X}_\gamma^f + X_\gamma^f\left(\bar{W}_\gamma + W_\gamma\right)$,

where $\bar{W}_\gamma$ and $W_\gamma$ are computed using local ensemble forecast perturbations and local observations from the domain $\hat{D}_\gamma$. For a complete analysis update, a loop over all local analysis domains has to be performed, with a local analysis update for each domain.
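Schematically, this is a loop over local domains, each using only the observations within the localisation radius. The sketch below (Python with NumPy) illustrates the bookkeeping on a 1-D grid with one domain per grid point; the grid, the radius, and the local_update placeholder are our own illustrative choices, and the placeholder stands in for whichever square root filter of Section 5 is used locally.

import numpy as np

rng = np.random.default_rng(7)
Nx, Ny, Ne = 200, 60, 20
radius = 5.0                                   # localisation radius (illustrative)

grid = np.arange(Nx, dtype=float)              # 1-D grid point positions
obs_pos = np.sort(rng.uniform(0, Nx, Ny))      # observation positions
y = rng.standard_normal(Ny)
Xf = rng.standard_normal((Nx, Ne))
Xa = Xf.copy()

def local_update(x_loc, y_loc):
    # placeholder for the local analysis of any filter from Section 5
    return x_loc

for gamma in range(Nx):                        # one local analysis domain per grid point
    obs_sel = np.abs(obs_pos - grid[gamma]) <= radius
    if not np.any(obs_sel):
        continue                               # no nearby observations: keep the forecast
    Xa[gamma, :] = local_update(Xf[gamma, :], y[obs_sel])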

Applying domain localisation allows significant savings in computing time since the analysis update is not solved globally but on much smaller local domains. Accordingly, updates on the smaller-scale domains can be done independently and therefore in parallel (Nerger et al., Citation2006), even if the observation domains overlap. In ensemble-based Kalman filters, domain localisation has been used predominantly with filters that use the analysis error covariance matrix for the calculation of the gain, like the SEIK, ETKF, and ESTKF, all discussed in detail in Section 5. In these algorithms, the forecast error covariance matrix is never explicitly computed. Examples of the application of domain localisation can be found, e.g. in Brusdal et al. (Citation2003) and Testut et al. (Citation2003).

Blindly using domain localisation can result in boxed analysis fields if neighbouring local domains are updated using significantly different observation sets. Thus, great care needs to be taken to choose domains so that they overlap sufficiently to produce smooth global analysis fields with minimal increase in computational cost. Today, domain localisation is typically applied with observation localisation (Hunt et al., Citation2007), which is discussed in Section 9.1.3.

9.1.2. Covariance localisation

Covariance localisation (also termed P-localisation or B-localisation, depending on whether the background covariance matrix is denoted P or B as in variational assimilation schemes) is a localisation method that is directly applied to the ensemble covariance matrix. The ensemble undersampling causes spurious cross-correlations between state variables. As realistic long-range correlations are typically small, the sampling errors are particularly pronounced for long distances. The direct covariance localisation can be used to reduce the long-range correlations in the forecast error covariance and hence damp the spurious correlations. In addition, the rank of the ensemble covariance is increased, giving more degrees of freedom to the analysis update (Hamill et al., Citation2001; Whitaker and Hamill, Citation2002).

Typically, covariance localisation is applied by first forming a correlation matrix $C$ and then taking the Schur product (an element-by-element matrix multiplication) of this correlation matrix and the forecast error covariance. Thus, given some $P^f$, our localised forecast error covariance will be (167) $P_L^f = C \circ P^f$.

The localisation matrix $C$ is usually formed of correlation functions with compact support, similar in shape to a Gaussian function (e.g. Gaspari and Cohn, Citation1999). Practically, the computation of the covariance matrix $P^f$ can be avoided by applying the localisation matrix to the matrices $P^fH^T$ and $HP^fH^T$ (see Equation (11)).
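For illustration, the sketch below (Python with NumPy) builds a localisation matrix from the compactly supported fifth-order polynomial of Gaspari and Cohn (Citation1999) on a 1-D grid and applies the Schur product of Equation (167) to an ensemble covariance; the grid, ensemble size, and length scale are our own illustrative choices, and in practice one would apply the weights to $P^fH^T$ and $HP^fH^T$ rather than to the full covariance.

import numpy as np

def gaspari_cohn(dist, c):
    # compactly supported 5th-order correlation function (Gaspari and Cohn, 1999)
    z = np.abs(dist) / c
    f = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z <= 2.0)
    z1, z2 = z[m1], z[m2]
    f[m1] = -0.25*z1**5 + 0.5*z1**4 + 0.625*z1**3 - (5/3)*z1**2 + 1.0
    f[m2] = (1/12)*z2**5 - 0.5*z2**4 + 0.625*z2**3 + (5/3)*z2**2 - 5.0*z2 + 4.0 - 2.0/(3.0*z2)
    return f

rng = np.random.default_rng(8)
Nx, Ne = 100, 15
grid = np.arange(Nx, dtype=float)

Xf = rng.standard_normal((Nx, Ne))
Xp = Xf - Xf.mean(axis=1, keepdims=True)
Pf = Xp @ Xp.T / (Ne - 1)                        # ensemble covariance

C = gaspari_cohn(grid[:, None] - grid[None, :], c=10.0)
Pf_loc = C * Pf                                  # Schur (element-wise) product, Equation (167)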

We note that, of all the ensemble-based Kalman filter methods presented in Section 5, covariance localisation can only be applied to the EnSRF and EAKF, for which observations can be processed serially, and to the stochastic EnKF.

9.1.3. Observation localisation

In the case of the square root filters presented in Section 5, the full covariance matrix is never formed. Instead, only the ensemble perturbation matrix $X^f$ is calculated at each analysis step. Petrie and Dance (Citation2010) showed that for square root filters the localised covariance cannot be decomposed approximately using a square root of the correlation matrix $\rho$, (168) $\left(\rho\rho^T\right)\circ\left(XX^T\right) \neq \left(\rho \circ X\right)\left(\rho \circ X\right)^T$,

thus covariance localisation cannot be applied. For such filters, e.g. the SEIK, ETKF and ESTKF, observation localisation is a more natural choice and is currently used instead of covariance localisation (Hunt et al., Citation2007; Miyoshi and Yamane, Citation2007; Janjić et al., Citation2011).

Observation localisation is applied by modifying the observation error covariance matrix $R$. More specifically, one modifies its inverse $R^{-1}$ so that the inverse observation variance decreases to zero with the distance of an observation from the analysis grid point. To be able to define this distance, it is necessary to perform the analysis with the domain localisation method described in Section 9.1.1. An abrupt cutoff could be obtained by setting the inverse observation variances to zero beyond a given distance. This would be equivalent to the simple domain localisation of Section 9.1.1 and could result in non-smooth analysis updates. For a smooth analysis, e.g. Brankart et al. (Citation2003) described increasing the observation error variance with increasing distance from the analysis grid point. Hunt et al. (Citation2007) proposed to use a gradual observation localisation in the LETKF acting on $R^{-1}$, which is likewise applicable with the SEIK filter and the ESTKF. In this case, elements of $R^{-1}$ are multiplied by a smoothly decreasing function of the distance from the analysis grid point. This modification smoothly reduces the observation influence and excludes observations outside a defined radius by prescribing their error to be infinitely large. As for covariance localisation, the method uses a Schur product as (169) $\tilde{R}^{-1} = \tilde{C} \circ R^{-1}$.

Here, the same correlation function (Gaspari and Cohn, Citation1999) as for covariance localisation can be used to construct the localisation matrix. However, in contrast to covariance localisation, $\tilde{C}$ is not a correlation matrix, as the values on the diagonal of this matrix vary with the distance between the observation and the local analysis domain. Then, the analysis update is computed as in the case of domain localisation, but using the weight-localised matrix $\tilde{R}$. For computational savings we would in practice also discard any observations with zero weight from the analysis computations.
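For a diagonal R, observation localisation amounts to scaling each inverse observation error variance by a distance-dependent weight for the local analysis domain at hand. The sketch below (Python with NumPy) illustrates this; the simple quadratic weight function is only a stand-in for the Gaspari-Cohn function that would normally be used, and the observation positions, variances, analysis point and radius are illustrative assumptions.

import numpy as np

def loc_weight(dist, radius):
    # smooth weight decreasing to zero at 2*radius; a stand-in for Gaspari-Cohn
    z = np.abs(dist) / radius
    return np.clip(1.0 - 0.5 * z, 0.0, 1.0) ** 2

rng = np.random.default_rng(9)
Ny = 40
obs_pos = np.sort(rng.uniform(0.0, 100.0, Ny))
r_var = 0.5 * np.ones(Ny)                        # diagonal R (assumption)

analysis_point = 50.0                            # centre of the local analysis domain
radius = 8.0

w = loc_weight(obs_pos - analysis_point, radius)
Rinv_loc = np.diag(w / r_var)                    # weighted R^{-1}: zero weight excludes an observation
keep = w > 0.0                                   # in practice, drop zero-weight observations entirely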

Both observation and covariance localisation can lead to similar assimilation results. In general, the optimal localisation radius has been found to be somewhat larger for covariance localisation than for observation localisation (Greybush et al., Citation2011). The reason for this difference lies in the different effect of the two localisations in the Kalman gain, as explained by Nerger et al. (Citation2012b).

9.1.4. Adaptive localisation schemes

The localisation methods described above are widely used and can be applied without much additional computing cost. However, the optimal localisation radius is a priori unknown and needs to be tuned in numerical experiments. For the tuning, one performs several data assimilation experiments with different localisation radii, perhaps over shorter time periods, and selects the radius that results in the smallest estimation errors. Regarding the theoretical understanding of localisation, Kirchgessner et al. (Citation2014) showed for the case of observation localisation, when each grid point is observed, that the optimal localisation radius should be reached when the sum over the observation weights equals the ensemble size. This finding allows for a simple form of adaptivity or a starting point for further tuning. Further, Perianez et al. (Citation2014) showed that both the sampling error in the ensemble covariance matrix and the observation error influence the optimal localisation radius. As the sampling error has the largest influence when the true correlations are small, the dynamically generated correlations also influence the optimal localisation radius (Zhen and Zhang, Citation2014; Flowerdew, Citation2015).

To avoid the need for numerical tuning and to better adapt the localisation to the dynamically created correlation structure, several adaptive localisation methods have been developed, which we briefly mention here. A common approach is to damp the spurious correlations that are caused by sampling errors due to the small ensemble size. Anderson (Citation2007) developed a hierarchical localisation method, in which the ensemble is partitioned into sub-ensembles. Then, the sub-ensembles are used to estimate the sampling errors. Bishop and Hodyss (Citation2009) proposed an adaptive localisation method that uses a power of the correlations to damp small correlations and accentuate those correlations that are significant. This method can find correlations even at longer distances. Further, methods have been developed to find empirical localisation functions. In these methods, one attempts to find, for a single observation, the weight factor that minimises the deviation from a true solution (Anderson, Citation2012; Lei and Anderson, Citation2014; Flowerdew, Citation2015). These methods are typically tuned once based on observation system simulation experiments (OSSEs), in which one knows the true state. When the OSSEs are configured realistically, the obtained localisation functions should be applicable for the assimilation of real observations after the tuning.

The major advantage of the methods proposed so far is that they are able to adaptively specify the localisation function or radius according to the dynamically generated covariance structure. However, the methods still need tuning, which can be even more costly than for the fixed covariance and observation localisation methods. For example, the method by Bishop and Hodyss (Citation2009) requires the specification of an envelope function around the locations that are found by powering the correlations and the number of powers that are computed. Lei and Anderson (Citation2014) also showed that the localisation function can change when it is applied iteratively such that a sufficient number of iterations have to be computed.

Apart from the adaptive localisation methods, further methods like spectral localisation (Buehner and Charron, Citation2007) and localisation in different variables (i.e. stream function, velocity potential, Kepert, Citation2006) have been developed. However, none of these methods are yet a standard for operational centres.

9.1.5. Localisation in particle filtering

Several variants of the Particle Filter that explore localisation have been developed recently, following its success in Ensemble Kalman Filters. An issue with directly localising R or using domain localisation is that the weight of each particle is a global property of the filter (van Leeuwen, Citation2009). That is, the same particle could have a high weight in one area and low weight in another making it ambiguous whether this particle should be resampled or not. Keeping parts of a number of particles that all perform well in a certain area of the domain and parts of other particles in other areas of the domain would lead to balance problems between variables and sharp gradients in the fields. In contrast, when performing parameter estimation a smooth variation of parameter values is less likely to cause imbalances in the model variables, and localisation is straightforward, as pioneered by Vossepoel and Van Leeuwen (Citation2006).

Particle filters that use a proposal density, such as the EWPF discussed in Section 6.2.1 indirectly use localisation through the model error covariance matrix Q. This localisation does not explicitly work on the weights but on how the states are updated, because a natural choice is to pre-multiply each update of a particle with that matrix. Since the model error covariance matrix will mainly contain short length-scale correlations related to missing or inaccurate physics at the model grid scale, each point in the state space is only influenced by observations within the radius set by that covariance matrix. In fact, as noted in Section 6.2.1, we do have the freedom to choose this matrix differently from Q, so other choices closer to our needs are possible. This is because the effects of this choice will be taken into account in the computation of the weights of each particle. This has not been explored in any detail in the literature.

Of the full particle filters, the ETPF (Reich, Citation2013) can easily be localised by taking for each grid point only observations close to that grid point into account and making the transformation matrix space-dependent to ensure smooth transitions between different regions. This can, for example, be achieved by calculating the transformation matrix at a limited number of grid points and interpolating that matrix between grid points. This would also reduce the number of computations, which would otherwise be prohibitive (see Section 9.4 on computational costs).

The PFGR and the NETF perform an ensemble transformation similar to the ETKF, but with a transform matrix T computed from particle filter weights. Accordingly, observation localisation can be applied to the NETF (Tödter and Ahrens, Citation2015) by smoothing the weight matrix over space. This can also be applied to the MMPF in the high-dimensional implementation. Also the MPF can be localised by making the weights local and using a systematic resampling method like Stochastic Universal Resampling (see Appendix 1). In practice, more might be needed, e.g. the extra averaging as advocated by Penny and Miyoshi (Citation2016) described below.

Several localisation schemes have been proposed and discussed in the review van Leeuwen (Citation2009) and those will not be repeated here. The most obvious thing to do is to weight and resample locally, and somehow glue the resampled particles together via averaging at the edges between resampled local particles (van Leeuwen, Citation2003b). Recently, Penny and Miyoshi (Citation2016) used this idea with more extensive averaging, and their scheme runs as follows. First, for each grid point j the observations close to that grid point are found and the weight of each particle i is calculated based on the likelihood of only those observations:
\[ w_{i,j} = \frac{p(y_j \mid x_{i,j})}{\sum_{k=1}^{N_e} p(y_j \mid x_{k,j})} \qquad (170) \]

in which y_j denotes the set of observations within the localisation area. This is followed by resampling via Stochastic Universal Resampling to obtain ensemble members x^a_{i,j} with i = 1, …, N_e for each grid point j. As mentioned before, the issue is that two neighbouring grid points can have different sets of particles, and smoothing is needed to ensure that the posterior ensemble consists of smooth particles. This smoothing is performed for each grid point j and each particle i by averaging over the N_p neighbouring points within the localisation area around grid point j:
\[ x^a_{i,j} = \frac{1}{2}\, x^a_{i,j} + \frac{1}{2 N_p} \sum_{k=1}^{N_p} x^a_{i,j_k} \qquad (171) \]

in which j_k for k = 1, …, N_p denotes the grid point index for those points in the localisation area around grid point j. The resampling via Stochastic Universal Resampling is done such that the weights are sorted before resampling, so that high-weight particles are joined up to reduce spurious gradients.
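To make the sequence of operations concrete, the following Python sketch applies the local weighting of Equation (170), Stochastic Universal Resampling and the neighbourhood smoothing of Equation (171) on a periodic one-dimensional grid. The Gaussian likelihood, the identity observation operator and all function and variable names are illustrative assumptions; this is a minimal sketch, not the implementation of Penny and Miyoshi's scheme.

```python
import numpy as np

def sur_resample(w, rng):
    """Stochastic Universal Resampling: one random draw gives Ne equally spaced pointers."""
    ne = w.size
    cdf = np.cumsum(w)
    cdf[-1] = 1.0                                   # guard against rounding in the normalised weights
    pointers = rng.uniform(0.0, 1.0 / ne) + np.arange(ne) / ne
    return np.searchsorted(cdf, pointers)

def local_pf_analysis(X, y, y_grid, obs_err, radius, rng):
    """Local particle filter analysis in the spirit of the scheme described above.
    X: prior ensemble (Ne, Nx); y: observations; y_grid: grid index of each observation.
    Assumes a periodic 1-D grid, radius >= 1 and H = identity at observed grid points."""
    ne, nx = X.shape
    Xa = np.empty_like(X)
    for j in range(nx):
        dist = np.minimum(np.abs(y_grid - j), nx - np.abs(y_grid - j))   # periodic distance
        sel = dist <= radius                        # observations within the localisation area
        if not np.any(sel):
            Xa[:, j] = X[:, j]
            continue
        # Eq. (170): weights from the likelihood of the local observations only
        innov = y[sel][None, :] - X[:, y_grid[sel]]
        logw = -0.5 * np.sum((innov / obs_err) ** 2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        order = np.argsort(w)                       # sort weights so high-weight particles are joined up
        idx = order[sur_resample(w[order], rng)]
        Xa[:, j] = X[idx, j]
    # Eq. (171): smooth each resampled particle over the neighbouring grid points
    Xs = Xa.copy()
    npts = 2 * radius
    for j in range(nx):
        nbrs = [(j + k) % nx for k in range(-radius, radius + 1) if k != 0]
        Xs[:, j] = 0.5 * Xa[:, j] + (0.5 / npts) * Xa[:, nbrs].sum(axis=1)
    return Xs
```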

While this scheme does solve the degeneracy problem in simple one-dimensional systems it is unclear if it will work well in complex systems such as the atmosphere in which fronts can easily be smoothed out, and non-linear balances broken, see e.g. the discussion in van Leeuwen (Citation2009).

A new scheme has recently been proposed in Poterjoy (Citation2016a), which involves a very careful process of ensuring smooth posterior particles and retaining non-linear relations. The filter processes each observation sequentially, as follows. First, adapted weights are calculated for the first element y_1 of the observation vector, as
\[ \tilde{w}_i = \alpha\, p(y_1 \mid x_i) + 1 - \alpha \qquad (172) \]

These weights are then normalised by their sum \tilde{W}. Then we resample the ensemble according to these normalised weights to form particles x_{k_i}.

Here, α is an important parameter in this scheme, with α = 1 leading to standard weighting and α = 0 leading to all weights being equal to 1. Its importance lies in the fact that the weights are always larger than 1 − α, so even a value close to one, say α = 0.99, leads to a minimum weight of 0.01. That might seem small, but it means that particles that are more than 1.7 observational standard deviations away from the observations have their weights cut off to something close to 1 − α, which seriously limits the influence the observation can have on the ensemble. Furthermore, the influence of α depends on the size of the observational error, which is perhaps not what one would like. The parameter is included to avoid losing any particle.

Now, we do the following for each grid point j. For each member i we calculate a weight
\[ \tilde{w}_i = \alpha\, \rho(y_1, x_j, r)\, p(y_1 \mid x_i) + 1 - \alpha\, \rho(y_1, x_j, r) \qquad (173) \]

in which ρ(·) is the localisation function with localisation radius r. The normalised weights for this grid point, w_i, are obtained by dividing \tilde{w}_i by the summed weights over all the particles. Note, again, the role played by α. Then, the posterior mean for this observation at this grid point is calculated as
\[ \bar{x}_j = \sum_{i=1}^{N_e} w_i\, x_{i,j} \qquad (174) \]

in which x_{i,j} is grid point j of particle i. Next, a number of scalars are calculated that ensure smooth posterior fields (Poterjoy, Citation2016a):
\[ \sigma_j^2 = \sum_{i=1}^{N_e} w_i\, (x_{i,j} - \bar{x}_j)^2, \qquad c_j = \frac{N_e\,\big(1 - \alpha\,\rho(x_j, y_1, r)\big)}{\alpha\,\rho(x_j, y_1, r)\,\tilde{W}}, \]
\[ r_{1,j} = \sqrt{\frac{\sigma_j^2}{\frac{1}{N_e - 1}\sum_{i=1}^{N_e}\big(x_{k_i,j} - \bar{x}_j + c_j\,(x_{i,j} - \bar{x}_j)\big)^2}}, \qquad r_{2,j} = c_j\, r_{1,j} \qquad (175) \]

so that the final estimate becomes:
\[ x^a_{i,j} = \bar{x}_j + r_{1,j}\,(x_{k_i,j} - \bar{x}_j) + r_{2,j}\,(x_{i,j} - \bar{x}_j). \qquad (176) \]

This procedure is followed for each grid point, so that at the end we have an updated set of particles that have incorporated the first observation. As a next step the whole process is repeated for the next observation, with the small change that the new \tilde{w}_i is multiplied by the \tilde{w}_i from the previous observation, until all observations have been assimilated. In this way, the full weight of all observations is accumulated in the algorithm. Now the importance of α becomes fully apparent: without it the ensemble would collapse because the \tilde{w}_i would become degenerate as observations are accumulated.
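The per-grid-point arithmetic of Equations (172)–(176) is summarised in the following sketch for a single scalar observation on a one-dimensional grid. The Gaussian likelihood, the simple localisation function, the identity observation operator and the omission of the higher-order correction described below are all simplifying assumptions; this is not Poterjoy's actual implementation.

```python
import numpy as np

def localisation_weight(dist, r):
    """Illustrative compactly supported localisation function (a Gaspari-Cohn function could be used instead)."""
    return max(0.0, 1.0 - dist / r)

def assimilate_one_obs(X, y1, obs_idx, obs_err, r, alpha, rng):
    """Local particle filter update for one scalar observation, following Eqs. (172)-(176).
    X: prior ensemble (Ne, Nx); obs_idx: grid index of the observation; assumes 0 < alpha <= 1."""
    ne, nx = X.shape
    like = np.exp(-0.5 * ((y1 - X[:, obs_idx]) / obs_err) ** 2)
    # Eq. (172): adapted global weights, normalisation and SUR resampling indices k_i
    w_tilde = alpha * like + (1.0 - alpha)
    W = w_tilde.sum()
    cdf = np.cumsum(w_tilde / W)
    cdf[-1] = 1.0
    k = np.searchsorted(cdf, rng.uniform(0.0, 1.0 / ne) + np.arange(ne) / ne)
    Xa = np.empty_like(X)
    for j in range(nx):
        rho = localisation_weight(abs(j - obs_idx), r)
        if rho <= 0.0:                               # outside the localisation area: keep the prior
            Xa[:, j] = X[:, j]
            continue
        # Eq. (173): localised weights for this grid point, then normalisation
        wj = alpha * rho * like + (1.0 - alpha * rho)
        wj /= wj.sum()
        xbar = np.sum(wj * X[:, j])                  # Eq. (174): local posterior mean
        # Eq. (175): scalars blending resampled and prior deviations from the local mean
        sigma2 = np.sum(wj * (X[:, j] - xbar) ** 2)
        c = ne * (1.0 - alpha * rho) / (alpha * rho * W)
        dev = (X[k, j] - xbar) + c * (X[:, j] - xbar)
        denom = np.sum(dev ** 2) / (ne - 1)
        r1 = np.sqrt(sigma2 / denom) if denom > 0.0 else 0.0
        r2 = c * r1
        # Eq. (176): final local update
        Xa[:, j] = xbar + r1 * (X[k, j] - xbar) + r2 * (X[:, j] - xbar)
    return Xa
```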

The final estimate shows that each particle at grid point j is the posterior mean at that point plus a contribution from the deviation of the posterior resampled particle from that mean and a contribution from the deviation of the prior particle from that mean. So each particle is a mixture of posterior and prior particles, and departures from the prior are suppressed. When α = 1, so for a full particle filter, we find for grid points at the observation locations that c_j = 0 because ρ(y_1, x_j, r) = 1 there. Accordingly, r_{2,j} = 0 and r_{1,j} ≈ 1, and indeed the scheme gives back the full particle filter.

Between observation locations it can be shown that the particles have the correct first and second order moments, but higher-order moments are not conserved. To remedy this a probabilistic correction is applied at each grid point as follows. The prior particles are dressed by Gaussians with width 1 and weighted by the likelihood weights to generate the correct posterior pdf. The posterior particles are dressed in the same way, each with weight 1/Ne. Then the cumulative distribution functions (cdf’s) for the two densities are calculated using a trapezoidal rule integration. A cubic spline is used to find the prior cdf values at each prior particle i, denoted by cdf(i). Then a cubic spline is fitted to the other cdf, and the posterior particle i is found as the inverse of its cdf at value cdf(i). See Poterjoy (Citation2016a) for details. The result of this procedure is that higher order moments are brought back into the ensemble between observed points.

This scheme, although rather complicated, is the only local particle filter scheme that has been applied to high-dimensional geophysical systems based on primitive equations in Poterjoy and Anderson (Citation2016b). (van Leeuwen, Citation2003b applied a local particle filter to a high-dimensional quasi-geostrophic system, but that system is quite robust to sharp gradients as it does not allow gravity waves.)

Another interesting local particle filter is the Multivariate Rank Histogram Filter (Metref et al., Citation2014a). The idea is to write the posterior pdf in terms of an observed marginal multiplied by a set of conditional pdfs. For example, for a 3-dimensional system in which variable x_1 is observed we have:
\[ p(x_1, x_2, x_3 \mid y) = \frac{p(y \mid x_1)}{p(y)}\, p(x_1, x_2, x_3) = \frac{p(y \mid x_1)}{p(y)}\, p(x_1, x_2)\, p(x_3 \mid x_1, x_2) = \frac{p(y \mid x_1)}{p(y)}\, p(x_1)\, p(x_2 \mid x_1)\, p(x_3 \mid x_1, x_2). \qquad (177) \]

The filter now uses the rank-histogram idea of Anderson (Citation2010) on each component, resulting in a fully non-Gaussian update of each component. Localisation can be easily applied directly in this algorithm as it is a transformation algorithm and the transformation can be made local. Unfortunately, this procedure becomes too expensive when the system is high dimensional. However, via a so-called mean-field approximation we suppress the conditioning on non-observed variables, so that we find:
\[ p(x_1, x_2, x_3 \mid y) \approx \frac{p(y \mid x_1)}{p(y)}\, p(x_1)\, p(x_2 \mid x_1)\, p(x_3 \mid x_1). \qquad (178) \]

This will make the algorithm parallelisable and suitable for high-dimensional applications, although that has not been explored yet.

9.2. Ensemble covariance inflation

In practice, an ensemble Kalman filter can diverge from the truth due to systematic underestimation of the error variances in the filter, possibly caused by model errors or ensemble undersampling as discussed in Section 9.1. In particular, overestimating long-range correlations reduces the estimated variance too strongly. Regardless of the cause, underestimating the uncertainty leads to a filter that is overly confident in the state estimate. Thus, the analysis step of the filter puts increasingly more weight on the ensemble background estimate than on the observations and, at some point, it disregards the observations completely. Localisation is one method to reduce the undersampling. However, for high-dimensional systems, localisation alone is not sufficient to ensure a stable assimilation process, and covariance inflation is applied to further increase the sampled variance and thus stabilise the filter. In addition, the inflation can partly account for model error in case of an imperfect model (Pham et al., Citation1998b; Hamill, Citation2001; Anderson, Citation2001; Whitaker and Hamill, Citation2002; Hunt et al., Citation2007).

Most common is a fixed multiplicative covariance inflation (Anderson and Anderson, Citation1999). The method uses the inflation factor r to perform a multiplicative inflation for each ensemble member x_j^{a,f}. With j = 1, …, N_e being ensemble member indices, it is given by
\[ x_j^{a,f} = r\,\big(x_j^{a,f} - \bar{x}^{a,f}\big) + \bar{x}^{a,f} \qquad (179) \]

where r is usually chosen to be slightly greater than one. The specification of an optimal inflation factor may vary according to the size of the ensemble (Hamill, Citation2001; Whitaker and Hamill, Citation2002), and the choice of r will depend on various factors, such as the dynamics of the model, the type of ensemble filter used, as well as the length scale of the covariance localisation.
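As an illustration, Equation (179) amounts to the following few lines; this is a minimal sketch, and the array layout with ensemble members as columns is an assumption.

```python
import numpy as np

def inflate_ensemble(X, r):
    """Multiplicative covariance inflation, Eq. (179): spread each member about the ensemble mean by the factor r."""
    xbar = X.mean(axis=1, keepdims=True)          # ensemble mean over the Ne columns
    return r * (X - xbar) + xbar

# The sampled variance grows by r**2, consistent with the relation rho = r**(-2) discussed below.
X = np.random.default_rng(0).normal(size=(3, 20))                     # Nx = 3, Ne = 20
ratio = inflate_ensemble(X, 1.05).var(axis=1, ddof=1) / X.var(axis=1, ddof=1)
print(ratio)                                                          # approximately 1.05**2 for every row
```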

Related to covariance inflation is the so-called 'forgetting factor' ρ introduced by Pham et al. (Citation1998b). The forgetting factor is usually chosen to be slightly lower than one and is typically applied in the square root filters like the ETKF, SEIK and ESTKF. For example, in the ETKF it is applied to TT^T, e.g. in Equation (44), as
\[ TT^T = \Big[\rho\, I + \frac{1}{N_e - 1}\, S^T R^{-1} S\Big]^{-1}. \qquad (180) \]

In this way, the inflation and forgetting factors are related as ρ = r^{-2}. Equation (180) allows one to apply the inflation in a computationally very efficient way, because TT^T is much smaller than the ensemble of states to which the inflation is applied in Equation (179).

Next to the multiplicative inflation, an additive inflation has been proposed. The multiplicative inflation leads to an inflation that is relative to the variance level, so large variances will be inflated much more than small variances. This behaviour can be avoided with additive inflation (Ott et al., Citation2004), which can also be applied in combination with the multiplicative inflation. In additive inflation, all variances are inflated by the same amount, rather than by a relative factor. This difference can be useful if the variances vary strongly, as in this case the additive inflation acts more strongly on the very small variances.

The optimal strength of the inflation is usually determined by tuning experiments, i.e. running experiments with different inflation values and analysing which value results in the smallest estimation errors. Usually a single fixed value of r or ρ is chosen for all grid points. This situation is mainly motivated by the fact that a manual tuning of spatially varying inflations is not feasible for high-dimensional models. To avoid the tuning, several adaptive inflation methods have been proposed. Brankart et al. (Citation2003) proposed to use the relation
\[ \mathrm{tr}\big(\rho^{-1} S S^T + R\big) = \big(y - H(\bar{x}^f)\big)^T \big(y - H(\bar{x}^f)\big) \qquad (181) \]

to estimate a temporally variable forgetting factor ρ for multiplicative inflation. This equation is one of the statistical consistency relations in observation space that Kalman filters should fulfil (Desroziers et al., Citation2005). Further, Anderson (Citation2009) proposed a method to adaptively estimate spatially and temporally varying inflation factors. This method also aims to fulfil Equation (Equation181) but uses Bayesian estimation to obtain the inflation values. All of these adaptive methods do assume that we have a very good knowledge of the error covariance of the observations. Apart from adaptively inflating the ensemble spread, adaptive inflation of observation errors has been proposed by Minamide and Zhang (Citation2017) for assimilating all-sky satellite brightness temperatures.
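To illustrate how Equation (181) can be used, the following sketch solves it for ρ given the innovation d = y − H(x̄^f), assuming a diagonal observation error covariance. The clipping bounds and all names are illustrative assumptions, not the scheme of Brankart et al. or Anderson.

```python
import numpy as np

def forgetting_factor_from_innovations(S, r_diag, d, rho_min=0.05, rho_max=1.0):
    """Solve the consistency relation (181), tr(rho^{-1} S S^T + R) = d^T d, for rho.
    S: observed ensemble perturbation matrix (Ny, Ne); r_diag: diagonal of R; d: innovation vector."""
    tr_SST = np.sum(S * S)                   # tr(S S^T) without forming the Ny x Ny matrix
    excess = np.dot(d, d) - np.sum(r_diag)   # innovation variance not explained by the observation errors
    if excess <= 0.0:
        return rho_max                       # ensemble spread already large enough: no inflation
    return float(np.clip(tr_SST / excess, rho_min, rho_max))
```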

An alternative to the inflation can be to explicitly account for the sampling error caused by the finite ensemble size as is done in the finite-size ensemble transform Kalman filter (Bocquet, Citation2011). This method, while still denoted a 'Kalman filter', requires the iterative minimisation of a cost functional and is hence distinct from the Ensemble Kalman filter variants in Section 5, which compute a one-step analysis update.

9.3. Parallelisation of EnDA

The need to integrate an ensemble of model states leads to large computational costs, because instead of a single model integration, as in normal modelling applications, an ensemble of O(10–100) members has to be propagated. To reduce the time needed for these costly computations, one can parallelise the data assimilation program and use high-performance computers with a large number of processors. The ensemble integrations, as the most costly part of the computations, can be easily parallelised. In fact, the integration of each ensemble state is independent of the other states. Thus, this step could be parallelised by simply starting the numerical model N_e times. Each model state has to be initialised from a different restart file, and one has to store the final state of each model integration to keep the information on the forecast ensemble. Subsequent to the ensemble forecasts, one starts the data assimilation program, which reads the ensemble information from the files, computes the analysis step, and writes a set of new restart files to prepare the next forecast phase. The computations of the analysis step can also be parallelised, as outlined below. This implementation scheme of data assimilation can be termed 'offline coupling' (Nerger and Hiller, Citation2013). While being flexible, the frequent writing and reading of the large files holding the ensemble states can take a significant amount of time.

A more sophisticated parallelisation of the ensemble data assimilation problem with a high-dimensional ocean model was discussed by Keppenne and Rienecker (Citation2002) and Keppenne and Rienecker (Citation2003). This method applied a domain-decomposition to the model and then integrated several ensemble states concurrently. The forecast ensemble was then collected by the use of the parallelisation technique SHMEM, which was also used for exchanging data in between processors during the analysis step of the EnKF applied in this study. Keeping the analysis step and the ensemble forecasts within one program reduced the overall computing time because the writing and reading of model state files is reduced.

The analysis step of the ensemble filters can also be parallelised using parallelisation methods like the Message Passing Interface (MPI, Gropp et al., Citation1994). The parallelisation differs depending on whether localisation is used and on which of the filters is used. For the filter methods that assimilate all observations at once (in contrast to the serial observation processing of the EAKF and EnSRF), using the domain decomposition of a model was found to be more efficient than distributing the ensemble members over several processors, because the amount of data that has to be exchanged using MPI is smaller for domain decomposition (Nerger et al., Citation2005a).

For the ensemble Kalman filters with domain localisation, the local analysis update is independent for each local domain. Thus, this part is naturally parallel and can be distributed with MPI, the shared-memory standard OpenMP, or a combination of both. However, because the observation domains have a larger spatial extent they can reach into the grid domain held by neighbouring processors. The local analysis step needs the difference (innovation) between the observation and the corresponding part of the observed state vector. These differences need to be first computed by the processor that holds the sub-domain and then exchanged in between the different processors computing the analysis step. This computation of the observation innovations and their exchange using MPI is only required once before the loop over all local analysis domains can be computed in parallel (Nerger and Hiller, Citation2013). The cost for these operations depends on the total number of observations and on their distribution over the model grid. For many observations this can limit the parallel speedup of the analysis update as was shown for the localised SEIK filter by Nerger and Hiller (Citation2013).
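The communication pattern described above can be sketched with mpi4py as follows. The routine local_analysis, the domain objects and the overall data layout are placeholders, so this is only an illustration of where the single exchange of innovations occurs, not an operational implementation.

```python
import numpy as np
from mpi4py import MPI

def domain_decomposed_analysis(X_sub, obs_coords_sub, innov_sub, local_domains, local_analysis):
    """Sketch of a parallel local analysis step with domain decomposition and observation localisation.
    X_sub: ensemble of the model sub-domain held by this process (n_sub, Ne).
    innov_sub: innovations y - H(x) for observations located in this sub-domain.
    local_analysis: user-supplied routine that updates one local analysis domain."""
    comm = MPI.COMM_WORLD

    # Single communication step: make all innovations (and their coordinates) available everywhere,
    # because observation domains can reach into sub-domains held by neighbouring processes.
    all_innov = np.concatenate(comm.allgather(innov_sub))
    all_coords = np.concatenate(comm.allgather(obs_coords_sub))

    # The loop over local analysis domains needs no further communication and can also be
    # threaded with OpenMP-style parallelism in a compiled implementation.
    for dom in local_domains:
        X_sub[dom["state_rows"], :] = local_analysis(X_sub[dom["state_rows"], :],
                                                     all_coords, all_innov, dom)
    return X_sub
```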

The EAKF and EnSRF are typically applied with serial observation processing and covariance localisation. In this case, the parallelisation of the analysis step has to take into account that for each assimilated observation the full model state has to be updated. Hence, the innovations between the not-yet-assimilated observations and the corresponding observed model state also change after each update. Anderson (Citation2007) proposed to let each processor update the innovations separately so that the required parallel communication is limited. This parallelisation does not take the localisation into account. Taking into account that localisation limits the reach of the observation influence, Wang et al. (Citation2013) proposed another parallelisation strategy.

The analysis step of the ensemble Kalman filters requires only the model states. This allows for a generic coupling between the model and the analysis step. In particular, one can implement filter algorithms such that they can be coupled in the same way with different models. This allows one to build generic frameworks for ensemble data assimilation (Nerger et al., Citation2005a; Nerger and Hiller, Citation2013; Browne and Wilson, Citation2015). In the generic form, the ensemble forecast can still be computed by concurrent parallel model forecasts. The transfer of the forecast state information can then be performed either directly in memory by subroutine calls (Nerger and Hiller, Citation2013) or by parallel communication using MPI (Nerger, Citation2004; Browne and Wilson, Citation2015). These strategies allow a tight ’online’ coupling of the model and the data assimilation code that computes the analysis updates. The coupling can be achieved with minimal changes in the model code.

For the implementation of the EWPF and IEWPF, different parallelisation schemes are applicable for the computations at each nudging step in between observations (Equations (120) and (123)) and at observation time, for the EWPF between Equations (106) and (111) and for the IEWPF between Equations (112) and (114). Before the observation time, the computations for the random forcing β̃_j^{(m)} in Equation (120) are independent for each particle, since a different forcing is drawn from the covariance for each of them. Similarly, the nudging term in Equation (120) and the update of the weights are independent for each particle. Thus, these operations can be performed in parallel and there is no need to gather all particles on a single process. The computation of the matrix Υ in Equations (108) and (117) is computationally the most expensive part. When the observation operator does not change over time, this matrix can be precomputed before beginning the assimilation. The downside of this approach is that the matrix can be huge and requires a lot of memory if the state dimension and number of observations are large. Otherwise, since the same matrix is used by all particles, it is possible to distribute the computation over all processes allocated for the particles, e.g. using a parallel matrix solver. At observation time, most of the computations are again independent for all particles. Only the maximum weight obtained from Equations (106) and (118) for the EWPF and IEWPF, respectively, must be exchanged over all processes holding particles, so that the target weight w_target can be computed. Further parallelisation, e.g. using the domain decomposition of the model, might also be possible. However, the matrix Q is frequently implemented in the form of operators. As the parallelisation always depends on the particular implementation of the matrix Q, it cannot be generalised for all models.

9.4. Computational cost

In Section 5, we presented the various ensemble-based Kalman filter methods in a clean mathematical way for ease of comparison and clarity. In Appendix 2, we give practical and precise pseudo-algorithms describing how to implement each method. Providing detailed operation counts for all the ensemble methods presented in this paper would be too lengthy; more importantly, the actual operation count depends on many details, such as the operators H and R, which are specific to the model and observations. The operation counts provided here have been obtained by counting the operations in the pseudo-codes in Appendix 2.

Generally, the leading-order operation counts in the different filters are those that scale with the third order of any of the dimensions N_x, N_y and N_e. For the SEIK, ETKF, ESTKF and EAKF methods, the leading-order operation count is O(N_y N_e^2 + N_e^3 + N_x N_e^2) if the observation error covariance matrix is diagonal. The main cost is the update of the ensemble by multiplying with the weight matrix in Equation (19), which has a complexity of O(N_x N_e^2). Computing the matrix TT^T, e.g. in Equation (25), involves multiplications of the matrix S with R^{-1}, which have a complexity of O(N_y N_e^2). Finally, computing the transform matrix T by a Cholesky decomposition or an EVD of an N_e × N_e matrix has a computational complexity of O(N_e^3). While the leading-order operation count is identical for all four filters, the SEIK and ESTKF are in general computationally faster than the ETKF or the bulk formulation of the EAKF, due to details in the algorithms. For the EAKF, computing the SVDs of the matrices X^f and S̃, whose costs scale with O(N_x N_e^2) and O(N_y N_e^2), respectively, increases the computing time without changing the leading order of the operation count. Thus, the leading-order operation count does not fully reflect the computing speed.
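To see where these terms arise, consider the following generic square-root (ETKF-like) analysis for a diagonal R; it is a schematic sketch annotated with the leading-order cost of each step, not the exact pseudo-code of Appendix 2, and the symmetric square-root transform is one possible choice.

```python
import numpy as np

def etkf_like_analysis(X, HX, y, r_diag):
    """Schematic ensemble square-root (ETKF-like) analysis for a diagonal R,
    annotated with the leading-order cost of each step.
    X: forecast ensemble (Nx, Ne); HX: observed ensemble, columns H(x_i) (Ny, Ne)."""
    nx, ne = X.shape
    xbar, ybar = X.mean(axis=1), HX.mean(axis=1)
    Xp = X - xbar[:, None]                      # ensemble perturbations
    Yp = HX - ybar[:, None]
    C = Yp.T / r_diag                           # Y'^T R^{-1}:               O(Ny Ne)
    A = (ne - 1) * np.eye(ne) + C @ Yp          # (Ne-1) I + Y'^T R^{-1} Y':  O(Ny Ne^2)
    lam, U = np.linalg.eigh(A)                  # EVD of an Ne x Ne matrix:   O(Ne^3)
    Pa = U @ np.diag(1.0 / lam) @ U.T           # analysis covariance in ensemble space
    Wa = U @ np.diag(np.sqrt((ne - 1) / lam)) @ U.T   # symmetric square-root transform
    wbar = Pa @ (C @ (y - ybar))                # weights for the mean update
    return xbar[:, None] + Xp @ (Wa + wbar[:, None])  # ensemble update:      O(Nx Ne^2), usually dominant
```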

The serial observation handling that is usually applied in the EAKF and EnSRF leads to a leading-order operation count of O(N_y N_x N_e). Because only the ensemble updates are of third-order complexity in the serial update, it can be faster than the bulk updates that assimilate all observations at once. This is even the case when localisation is used. However, in combination with localisation, the stability of the serial formulations can deteriorate (Nerger, Citation2015).

The leading-order operation count of the stochastic EnKF with perturbed forecasted observations is O(N_y N_e^2 + N_y^2 N_e + N_y^3 + N_x N_e^2). Here again the ensemble update, which scales as O(N_x N_e^2), is usually the most costly operation. However, the EnKF is usually more costly than the filters mentioned before because of the inversion of the N_y × N_y matrix FF (Equation (69)), which has a complexity of O(N_y^3). Parallelising this inversion can help to reduce the computing time. A computational cost of O(N_y^3) also occurs for the bulk formulation of the EnSRF due to the EVD computed in Equation (58).

When localisation is used, the change in cost compared to the global formulation depends on the localisation method. For covariance localisation (Section 9.1.2), the cost of computing the weight matrix C and of applying it to P^f, or to the matrices P^f H^T and H P^f H^T, is added. For observation localisation (Section 9.1.3), the cost to compute the analysis for a local analysis domain with the bulk update methods is O(N_{y,γ} N_e^2 + N_e^3 + N_{x,γ} N_e^2), where N_{y,γ} is the number of local observations and N_{x,γ} is the size of the local state vector that is corrected. Because both N_{y,γ} and N_{x,γ} are usually much smaller than the global dimensions N_y and N_x, a single local analysis update is cheaper than the global update. However, the local analysis update has to be computed for each local analysis domain. Thus, the cost of the analysis with observation localisation is usually significantly higher than that of the global analysis. However, the local analysis can easily be parallelised to reduce the computing time, as described in Section 9.3.

The computing cost of the ETKF can be reduced by using a projection matrix A, analogous to the SEIK and ESTKF methods. For the ETKF, this projection is a square matrix with diagonal entries of 1 − 1/N_e and off-diagonal entries of −1/N_e. The advantage of using A is that one can avoid the explicit computation of the ensemble perturbation matrix in favour of applying A to smaller matrices when evaluating the analysis equations (Nerger et al., Citation2012a).
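The structure of A and the reason it can be moved onto smaller matrices are easily checked numerically; the dimensions below are arbitrary example values used only for this minimal sketch.

```python
import numpy as np

ne = 5
A = np.eye(ne) - np.ones((ne, ne)) / ne              # diagonal 1 - 1/Ne, off-diagonal -1/Ne
rng = np.random.default_rng(1)
X = rng.normal(size=(3, ne))                         # small ensemble with Nx = 3
Xp = X - X.mean(axis=1, keepdims=True)               # explicitly computed perturbation matrix

print(np.allclose(X @ A, Xp))                        # True: right-multiplying by A removes the ensemble mean
B = rng.normal(size=(ne, ne))                        # some Ne x Ne matrix appearing in the analysis
print(np.allclose((X @ A) @ B, X @ (A @ B)))         # True: A can be applied to the small matrix instead
```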

Table 2. Overview of filter methods available from Sangoma project website.

The computational cost for the particle-based non-linear filter NETF (Section 7.1) is similar to that of the ETKF since the analysis is performed in the Ne-dimensional subspace spanned by the ensemble members. In addition, the NETF does not compute an inverse matrix thus avoiding computational instabilities caused by small singular values, which are sometimes neglected in ETKF implementations for that reason (Sakov et al., Citation2012). If localisation is applied to the NETF, the local analysis computations are independent and can be evaluated in parallel as for the ETKF. The generation of random rotation matrices consumes additional resources; however, it is possible to resort to a collection of pre-calculated random matrices since they only depend on ensemble size Ne.

9.5. Ensemble data assimilation and non-linearity

The original EnKF was developed to overcome stability problems of the extended Kalman filter (see Jazwinski, Citation1970) that were discovered with ocean data assimilation applications (Evensen, Citation1993). Due to its use of an ensemble to propagate the state error covariance matrix, the EnKF is suited for non-linear models in the forecast phase. However, the analysis step is based on the Kalman filter and is only optimal for Gaussian distributions. Obviously, a non-linear model forecast will transform a Gaussian distribution into a non-Gaussian distribution. Hence, the optimality of the Kalman filters is no longer preserved and the estimated analysis state and the error estimates will be suboptimal. This is a common issue for all ensemble filters whose analysis step is based on the equations of the Kalman filter. Nonetheless, the many existing data assimilation studies with non-linear models, e.g. of the ocean or atmosphere, with different formulations of the ensemble Kalman Filters show that these filters are rather stable with regard to non-linearity.

Second-order accurate ensemble filters, like the NETF, MMEF and MPF in Section 7 as well as the adaptive Gaussian mixture filter described in Section 8 avoid the assumption that the forecast ensemble has a Gaussian distribution. Thus, they should be better suited for non-linear systems. When the methods are applied with localisation, they can also be applied with large systems (e.g. the NETF in Tödter et al., Citation2016). However, filters like the NETF are still approximations to the full non-linear analysis that is performed by particle filters.

Particle filters do not rely on any assumption on the error distribution of the state estimate. However, the observation errors are frequently assumed to be Gaussian, as is the case for the particle filters presented in Section 6. Additionally, while the EWPF and IEWPF do not require knowledge of forecast errors, they both require good knowledge of the model errors, i.e. Q. Of course, good knowledge of model errors is always beneficial to forecasting, irrespective of the data assimilation method used, but for the application of the EWPF and IEWPF model errors are essential. While standard particle filters suffer from the curse of dimensionality when applied to large systems, the EWPF and IEWPF are by construction designed to work for high-dimensional models, including those that are highly non-linear, with a small number of particles, see e.g. Ades and van Leeuwen (Citation2013) and Zhu et al. (Citation2016). However, when applying the relaxation scheme between observations in an EWPF, it is important to keep in mind that the relaxation term Υ̃ in Equation (120) has to be chosen very carefully. We can choose this term to suit the needs of our model, and indeed we need to do so carefully by selecting an appropriate relaxation strength function ρ and covariance matrix. The relaxation term can be chosen constant between observation times, but that would not be a good idea if the system experiences oscillations between observations. In that case, the strength can be chosen to increase linearly with the time lag to the next observation we are nudging the particles to, or non-linearly with maximum strength close to the observations.

In all local particle filters that we discussed the posterior particles are linear combinations of the prior particles. This has the potential to break non-linear balances between variables in the model. However, the linear combinations are typically formed such that only prior particles are added that are close to each other in state space, and hence quite similar. So this is not necessarily a disadvantage.

10. Summary and conclusion

This overview paper provides a coherent algorithmic summary of, and highlights differences between, many currently used ensemble data assimilation methods that can be applied to high-dimensional and non-linear problems such as ocean or weather prediction, including well-known ensemble-based Kalman filters as well as recently developed particle filter methods and the Gaussian mixture filter.

We have presented these methods in a mathematically coherent way, allowing the reader to compare many methods easily. In particular, we have presented all ensemble-based Kalman filter methods in the form of a square root filter. In addition, we have included practical pseudo-algorithms for all methods, since for computational reasons many of them would not be implemented in the form in which they are mathematically described. For some of the particle filters and for the Gaussian mixture filter we have presented the theory along with the step-by-step algorithm.

Finally, we have discussed important issues for practical implementation of the ensemble methods including various methods of localisation, inflation, parallelisation, computational cost and ensemble applicability to non-linear problems.

In conclusion, a wealth of ensemble-based data-assimilation methods has been developed, and although they seem quite different in theory, their numerical implementations are remarkably similar. This even holds for the particle filters, including those that explore a proposal density, in which the state covariances that play an essential role in the Kalman filters are replaced by the covariance of the model errors. The main difference is that the state covariances evolve over time and are always of low rank, while the model error covariance is given and of full rank but sparse. This means that different numerical algorithms need to be used to solve the equations when the system of interest has a high dimension.

11. Code availability

We note that many of the algorithms discussed here have been efficiently implemented as part of the Sangoma project and are freely available to everyone on the project website http://sourceforge.net/p/sangoma/, along with many other tools useful for data assimilation. Table 2 provides a list of the available filters. Please note that the filter implementation was done independently from this paper, so not all filters described here are available. For simplicity, these filters have been implemented without parallelisation and are hence only usable for moderately large problems with a state dimension of O(10^5).

Further, all of these analysis methods have been implemented in at least one of the toolboxes connected to the Sangoma project; these are: EMPIRE, OAK, SESAM, OPENDA, BELUGA/SEQUOIA, NERSC and PDAF. For example, the set of filters listed in Table 2, plus the SEIK filter (Section 5.2) with localisation, are available in a parallelised implementation for high-dimensional problems in the freely available data assimilation framework PDAF (Nerger et al., Citation2005a; Nerger and Hiller, Citation2013). Further, the EWPF (Section 6.2.1) and IEWPF (Section 6.2.2) are available in EMPIRE (Browne and Wilson, Citation2015).

Acknowledgements

PJvL thanks the European Research Council (ERC) for funding of the CUNDA project under the European Union's Horizon 2020 research and innovation programme.

Additional information

Funding

This work was supported by the SANGOMA EU Project [grant number FP7-SPACE-2011-1-CT-283580-621 SANGOMA].

Notes

No potential conflict of interest was reported by the authors.

2 The discussion of the rapidly growing developments in hybrid data assimilation methods is beyond the scope of this paper; instead, we refer the reader to a very recent review article by Bannister (Citation2017), and to the papers by Frei and Künsch (Citation2013) and Chustagulprom et al. (Citation2016) that aim to bridge particle and ensemble Kalman filter methods.

3 Note that ρ=1 if p^=N~e.

4 Many of the analysis methods discussed in this paper, including the MRHF, have been implemented in Sangoma and are available for free download from www.data-assimilation.net, as well as many other data assimilation tools for diagnostics, utilities, etc.

5 Interestingly, the ECMWF is using an ensemble of 4DVars for their weather forecasting scheme, and it is relatively easy to turn this into a set of particles using 4DVar as proposal (see e.g. van Leeuwen et al., Citation2015).

6 The model error covariance matrices are usually assumed to be equal, i.e. Q̃ = Q.

References

  • Ades, M. and van Leeuwen, P. J. 2013. An exploration of the equivalent weights particle filter. Q. J. R. Meteorol. Soc. 139, 820–840.
  • Anderson, J. 2003. A local least squares framework for ensemble filtering. Mon. Wea. Rev. 131, 634–642.
  • Anderson, J. L. 2001. An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev. 129, 2884–2903.
  • Anderson, J. L. 2007. Exploring the need for localization in ensemble data assimilation using a hierarchical ensemble filter. Physica D 230, 99–111.
  • Anderson, J. L. 2009. Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus 61A, 72–83.
  • Anderson, J. L. 2010. A non-Gaussian ensemble filter update for data assimilation. Mon. Wea. Rev. 138(11), 4186–4198.
  • Anderson, J. L. 2012. Localization and sampling error correction in ensemble Kalman filter data assimilation. Mon. Wea. Rev. 140, 2359–2371.
  • Anderson, J. L. and Anderson, S. L. 1999. A Monte Carlo implementation of the non-linear filtering problem to produce ensemble assimilations and forecasts. Mon. Wea. Rev. 126, 2741–2758.
  • Bannister, R. N. 2017. A review of operational methods of variational and ensemble-variational data assimilation. Q. J. R. Meteorol. Soc. 143, 607–633.
  • Bengtsson, T., Snyder, C. and Nychka, D. 2003. Toward a nonlinear ensemble filter for high-dimensional systems. J. Geophys. Res. 108, 8775–8785.
  • Bengtsson, T., Bickel, P. and Li, B. 2008. Curse-of-dimensionality revisited: collapse of the particle filter in very large scale systems. IMS Collections: Prob. Stat. Essays Honor David A. Freedman 2, 316–334.
  • Bishop, C. H. and Hodyss, D. 2009. Ensemble covariances adaptively localized with ECO-RAP. Part 2: a strategy for the atmosphere. Tellus 61A, 97–111.
  • Bishop, C. H., Etherton, B. J. and Majumdar, S. J. 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: theoretical aspects. Mon. Wea. Rev. 129, 420–436.
  • Blockley, E. W., Martin, M. J., McLaren, A. J., Ryan, A. G., Waters, A., and co-authors. 2014. Recent development of the Met Office operational ocean forecasting system: an overview and assessment of the new global FOAM forecasts. Geosci. Mod. Dev. 7, 2613–2638.
  • Bocquet, M. 2011. Ensemble Kalman filtering without the intrinsic need for inflation. Nonl. Proc. Geophy. 18, 735–750.
  • Bocquet, M., Pires, C. A. and Wu, L. 2010. Beyond Gaussian statistical modelling in geophysical data assimilation. Mon. Wea. Rev. 138, 2997–3023.
  • Bolić, M., Djurić, P. M. and Hong, S. 2003. New resampling algorithms for particle filters. Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP ’03). 2003 IEEE International Conference, Vol. 2, IEEE, pp. II--589–592.
  • Brankart, J.-M., Testut, C.-E., Brasseur, P. and Verron, J. 2003. Implementation of a multivariate data assimilation scheme for isopycnic coordinate ocean models: application to a 1993–1996 hindcast of the North Atlantic ocean circulation. J. Geophys. Res. 108(C3), 3074.
  • Browne, P. A. and Wilson, S. 2015. A simple method for integrating a complex model into an ensemble data assimilation system using MPI. Env. Modell. Software 68, 122–128.
  • Brusdal, K., Brankart, J. M., Halberstadt, G., Evensen, G., Brasseur, P., and co-authors. 2003. A demonstration of ensemble based assimilation methods with a layered OGCM from the perspective of operational ocean forecasting systems. J. Mar. Syst. 40–41, 253–289.
  • Buehner, M. and Charron, M. 2007. Spectral and spatial localization of background-error correlations for data assimilation. Q. J. R. Meteorol. Soc. 133, 615–630.
  • Burgers, G., van Leeuwen, P. J. and Evensen, G. 1998. Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev. 126(6), 1719–1724.
  • Campbell, W. F., Bishop, C. H. and Hodyss, D. 2010. Vertical covariance localization for satellite radiances in ensemble Kalman filters. Mon. Wea. Rev. 138, 282–290.
  • Chorin, A. J. and Tu, X. 2009. Implicit sampling for particle filters. PNAS 106, 17249–17254.
  • Chorin, A. J., Morzfeld, M. and Tu, X. 2010. Interpolation and iteration for nonlinear filters. Commun. Appl. Math. Comput. Sci. 5, 221–240.
  • Chustagulprom, N., Reich, S. and Reinhardt, M. 2016. A hybrid ensemble transform filter for nonlinear and spatially extended dynamical systems. SIAM/ASA J. Uncert. Quant. 4, 552–591.
  • Cohn, S. E., Da Silva, A., Guo, J., Sienkiewicz, M. and Lamich, D. 1998. Assessing the effects of data selection with the DAO physical-space statistical analysis system. Mon. Wea. Rev. 126, 2913–2926.
  • Desroziers, G., Berre, L., Chapnik, B. and Poli, P. 2005. Diagnosis of observation, background and analysis-error statistics in observation space. Q. J. R. Meteorol. Soc. 131, 3385–3396.
  • Doucet, A., Godsill, S. and Andrieu, C. 2000. On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 10, 197–208.
  • Doucet, A., de Freitas, N., Gordon, N. 2001. Sequential Monte-Carlo Methods in Practice. Springer-Verlag, New York.
  • Evensen, G. 1993. Open boundary conditions for the extended Kalman filter with a quasi-geostrophic ocean model. J. Geophys. Res. 98(C9), 16529–16546.
  • Evensen, G. 1994. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte-Carlo methods to forecast error statistics. J. Geophys. Res. 99, 10143–10162.
  • Evensen, G. 2003. The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dyn. 53, 343–367.
  • Flowerdew, J. 2015. Towards a theory of optimal localisation. Tellus A 67, 25257.
  • Frei, M. and Künsch, H. R. 2013. Bridging the ensemble Kalman and particle filters. Biometrika 100, 781–800.
  • Gaspari, G. and Cohn, S. 1999. Construction of correlation functions in two and three dimensions. Q. J. R. Meteorol. Soc. 125, 723–757.
  • Golub, G. H. and Van Loan, C. F. 1996. Matrix computations, 3rd ed. The Johns Hopkins University Press, Baltimore and London.
  • Gordon, N. J., Salmond, D. J. and Smith, A. F. M. 1993. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEEE Proc. 140, 107–113.
  • Greybush, S. J., Kalnay, E., Miyoshi, T., Ide, K. and Hunt, B. R. 2011. Balance and ensemble Kalman filter localization techniques. Mon. Wea. Rev. 139, 511–522.
  • Gropp, W., Lusk, E. and Skjellum, A. 1994. Using MPI: Portable Parallel Programming with the Message-Passing Interface The MIT Press, Cambridge, Massachusetts.
  • Hamill, T. M. 2001. Interpretation of rank histograms for verifying ensemble forecasts. Mon. Wea. Rev. 129, 550–560.
  • Hamill, T. M. 2006. Ensemble-based atmospheric data assimilation. In: Predictability of Weather and Climate (eds. T. Palmer and R. Hagedorn). Cambridge University Press, New York, pp. 124–156. chapter 6.
  • Hamill, T. M., Whitaker, J. S. and Snyder, C. 2001. Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev. 129, 2776–2790.
  • Hoteit, I., Pham, D.-T., Triantafyllou, G. and Korres, G. 2008. A new approximate solution of the optimal nonlinear filter for data assimilation in meteorology and oceanography. Mon. Wea. Rev. 136, 317–334.
  • Houtekamer, P. L. and Mitchell, H. L. 1998. Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev. 126, 796–811.
  • Houtekamer, P. L. and Mitchell, H. L. 2001. A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev. 129, 123–137.
  • Houtekamer, P. L. and Zhang, F. 2016. Review of the ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev. 144, 4489–4532.
  • Hunt, B. R., Kostelich, E. J. and Szunyogh, I. 2007. Efficient data assimilation for spatiotemporal chaos: a local ensemble transform Kalman filter. Physica D 230, 112–126.
  • Ide, K., Courtier, P., Ghil, M. and Lorenc, A. C. 1997. Unified notation for data assimilation: operational, sequential and variational. J. Meteor. Soc. Jpn. 75, 181–189.
  • Janjić, T., Nerger, L., Albertella, A., Schroeter, J. and Skachko, S. 2011. On domain localization in ensemble-based Kalman filter algorithms. Mon. Wea. Rev. 139, 2046–2060.
  • Jazwinski, A. H. 1970. Stochastic Processes and Filtering Theory Academic Press, New York.
  • Kalman, R. E. 1960. A new approach to linear filtering and prediction problems. Trans. ASME, J. Basic Eng. 82(D), 35–45.
  • Kepert, J. D. 2006. Localisation, balance and choice of analysis variable in an ensemble Kalman filter. Q. J. R. Meteorol. Soc. 135(642), 1157–1176.
  • Kepert, J. D. 2009. Covariance localisation and balance in an ensemble Kalman filter. Q. J. R. Meteorol. Soc. 135, 1157–1176.
  • Keppenne, C. L. and Rienecker, M. M. 2002. Initial testing of a massively parallel ensemble Kalman filter with the Poseidon isopycnal ocean circulation model. Mon. Wea. Rev. 130, 2951–2965.
  • Keppenne, C. L. and Rienecker, M. M. 2003. Assimilation of temperature into an isopycnal ocean general circulation model using a parallel ensemble Kalman filter. J. Mar. Syst. 40–41, 363–380.
  • Kirchgessner, P., Nerger, L. and Bunse-Gerstner, A. 2014. On the choice of an optimal localization radius in ensemble Kalman filter methods. Mon. Wea. Rev. 142, 2165–2175.
  • Law, K. J. H. and Stuart, A. M. 2012. Evaluating data assimilation algorithms. Mon. Wea. Rev. 140, 3757–3782.
  • Lawson, W. G. and Hansen, J. A. 2004. Implications of stochastic and deterministic filters as ensemble-based data assimilation methods in varying regimes of error growth. Mon. Wea. Rev. 132, 1966–1989.
  • Lei, J. and Bickel, P. 2011. A moment matching ensemble filter for nonlinear non-Gaussian data assimilation. Mon. Wea. Rev. 139, 3964–3973.
  • Lei, L. and Anderson, J. 2014. Empirical localization of observations for serial ensemble Kalman filter data assimilation in an atmospheric general circulation model. Mon. Wea. Rev. 142, 1835–1851.
  • Lermusiaux, P. F. J. 2007. Adaptive modelling, adaptive data assimilation and adaptive sampling. Physica D 230, 172–196.
  • Lermusiaux, P. F. J. and Robinson, A. R. 1999. Data assimilation via error subspaces statistical estimation, part I: theory and schemes. Mon. Wea. Rev. 127, 1385–1407.
  • Lermusiaux, P. F. J., Robinson, A. R., Haley, P. J. and Leslie, W. G. 2002. Advanced interdisciplinary data assimilation: filtering and smoothing via error subspace statistical estimation. Proceedings of “The OCEANS 2002 MTS/IEEE conference, Holland”, Vol. 230, pp. 795–802.
  • Livings, D. 2005. Aspects of the ensemble Kalman filter [Master’s thesis] Department of Mathematics, University of Reading, UK.
  • Livings, D., Dance, S. L. and Nichols, N. K. 2008. Unbiased ensemble square root filters. Physica D 237, 1021–1028.
  • Lorenc, A. C. 2003. The potential of the ensemble Kalman filter for NWP - a comparison with 4D-Var. Q. J. R. Meteorol. Soc. 129, 3183–3203.
  • Metref, S., Cosme, E., Snyder, C. and Brasseur, P. 2014a. A non-Gaussian analysis scheme using rank histograms for ensemble data assimilation. Nonlin. Processes Geophys. 21, 869–885.
  • Metref, S., Cosme, E., Snyder, C. and Brasseur, P. 2014b. A non-Gaussian analysis scheme using rank histogram for ensemble data assimilation. Nonlinear Proc. Geophys. 21, 869–885.
  • Minamide, M. and Zhang, F. 2017. Adaptive observation error inflation for assimilating all-sky satellite radiance. Mon. Wea. Rev. 145, 1063–1081.
  • Miyoshi, T. and Yamane, S. 2007. Local ensemble transform Kalman filter with an AGCM at a T159/L48 resolution. Mon. Wea. Rev. 135, 3841–3861.
  • Morzfeld, M., Tu, X., Atkins, E. and Chorin, A. J. 2012. A random map implementation of implicit filters. J. Comput. Phys. 231, 2049–2066.
  • Morzfeld, M., Hodyss, D. and Snyder, C. 2017. What the collapse of the ensemble Kalman filter tells us about particle filters. Tellus A 69, 1–14.
  • Nakano, S., Ueno, G. and Higuchi, T. 2007. Merging particle filter for sequential data assimilation. Nonlinear Process. Geophys. 14, 395–408.
  • Nerger, L. 2004. Parallel Filter Algorithms for Data Assimilation in Oceanography Number 487 in Reports on Polar and Marine Research, Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany, [PhD Thesis]. University of Bremen, Germany.
  • Nerger, L. 2015. On serial observation processing on localized ensemble Kalman filters. Mon. Wea. Rev. 143, 1554–1567.
  • Nerger, L. and Hiller, W. 2013. Software for ensemble-based data assimilation systems - implementation strategies and scalability. Comput. Geosci. 55, 110–118.
  • Nerger, L., Hiller, W. and Schröter, J. 2005a. PDAF -- the parallel data assimilation framework: experiences with Kalman filtering. In: Use of High Performance Computing in Meteorology: Proceedings of the Eleventh ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 25--29 October 2004 (eds. W. Zwieflhofer and G. Mozdzynski). World Scientific, Singapore, pp. 63–83.
  • Nerger, L., Hiller, W. and Schröter, J. 2005b. A comparison of error subspace Kalman filters. Tellus 57A, 715–735.
  • Nerger, L., Danilov, S., Hiller, W. and Schröter, J. 2006. Using sea level data to constrain a finite-element primitive-equation ocean model with a local SEIK filter. Ocean Dyn. 56, 634–649.
  • Nerger, L., Janjić, T., Schroeter, J. and Hiller, W. 2012a. A unification of ensemble square root filters. Mon. Wea. Rev. 140, 2335–2345.
  • Nerger, L., Janjić, T., Schröter, J. and Hiller, W. 2012b. A regulated localization scheme for ensemble-based Kalman filters. Q. J. Roy. Meteor. Soc. 138, 802–812.
  • Ott, E., Hunt, B. R., Szunyogh, I., Zimin, A. V., Kostelich, E. J., and co-authors. 2004. A local ensemble Kalman filter for atmospheric data assimilation. Tellus 56A, 415–428.
  • Patil, D. J., Hunt, B. R., Kalnay, E., Yorke, J. A. and Ott, E. 2001. Local low dimensionality of atmospheric dynamics. Phys. Rev. Lett. 86(26), 5878–5881.
  • Penny, S. and Miyoshi, T. 2016. A local particle filter for high-dimensional geophysical systems. Nonlinear Processes Geophys. 23, 391–405.
  • Perianez, A., Reich, H. and Potthast, R. 2014. Optimal localization for ensemble Kalman filter systems. J. Meteorol. Soc. Jpn. 92, 585–597.
  • Petrie, R. E. and Dance, S. L. 2010. Ensemble-based data assimilation and the localisation problem. Weather 65(3), 65–69.
  • Pham, D. T. 2001. Stochastic methods for sequential data assimilation in strongly nonlinear systems. Mon. Wea. Rev. 129, 1194–1207.
  • Pham, D. T., Verron, J. and Gourdeau, L. 1998a. Singular evolutive Kalman filters for data assimilation in oceanography. C. R. Acad. Sci. Ser. II 326(4), 255–260.
  • Pham, D. T., Verron, J. and Roubaud, M. C. 1998b. A singular evolutive extended Kalman filter for data assimilation in oceanography. J. Mar. Syst. 16, 323–340.
  • Poterjoy, J. 2016a. A localized particle filter for high-dimensional nonlinear systems. Mon. Wea. Rev. 144, 59–76.
  • Poterjoy, J. and Anderson, J. L. 2016b. Efficient assimilation of simulated observations in a high-dimensional geophysical system using a localized particle filter. Mon. Wea. Rev. 144, 2007–2020.
  • Reich, S. 2013. A nonparametric ensemble transform method for Bayesian inference. SIAM Journal on Scientific Computing 4(35), 2013–2024.
  • Sakov, P. and Oke, P. R. 2008. Implications of the form of the ensemble transformation in the ensemble square root filters. Mon. Wea. Rev. 136, 1042–1053.
  • Sakov, P., Counillon, F., Bertino, L., Lisaeter, K. A., Oke, P. R. and co-authors. 2012. TOPAZ4: an ocean-sea ice data assimilation system for the North Atlantic and Arctic. Ocean Sci. 8, 633–656.
  • Silverman, B. W. 1986. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York.
  • Snyder, C., Bengtsson, T., Bickel, P. and Anderson, J. 2008. Obstacles to high-dimensional particle filtering. Mon. Wea. Rev. 136, 4629–4640.
  • Snyder, C., Bengtsson, T. and Morzfeld, M. 2015. Performance bounds for particle filters using the optimal proposal. Mon. Wea. Rev. 143(11), 4750–4761.
  • Stordal, A. S., Karlsen, H. A., Nævdal, G., Skaug, H. J. and Valles, B. 2011. Bridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter. Comput. Geosci. 15, 293–305.
  • Testut, C.-E., Brasseur, P., Brankart, J.-M. and Verron, J. 2003. Assimilation of sea-surface temperature and altimetric observations during 1992–1993 into an eddy-permitting primitive equation model of the North Atlantic ocean. J. Mar. Sys. 40–41, 291–316.
  • Tippett, M. K., Anderson, J. L., Bishop, C. H., Hamill, T. M. and Whitaker, J. S. 2003. Ensemble square root filters. Mon. Wea. Rev. 131, 1485–1490.
  • Tödter, J. and Ahrens, B. 2015. A Second-Order Exact Ensemble Square Root Filter for Nonlinear Data Assimilation. Mon. Wea. Rev. 143(4), 1347–1367.
  • Tödter, J., Kirchgessner, P., Nerger, L. and Ahrens, B. 2016. Assessment of a nonlinear ensemble transform filter for high-dimensional data assimilation. Mon. Wea. Rev. 144, 409–427.
  • Tong, X. T., Majda, A. J. and Kelly, D. 2016. Nonlinear stability and ergodicity of ensemble based Kalman filters. Nonlinearity 29, 657–691.
  • van Leeuwen, P. J. 2003a. A variance-minimizing filter for nonlinear dynamics. Mon. Wea. Rev. 131, 2071–2084.
  • van Leeuwen, P. J. 2003b. Nonlinear ensemble data assimilation for the ocean. Recent Developments in data assimilation for atmosphere and ocean, ECMWF Seminar 8--12 September 2003, Reading, United Kingdom, pp. 265–286.
  • van Leeuwen, P. J. 2009. Particle filtering in geophysical systems. Mon. Wea. Rev. 137, 4089–4114.
  • van Leeuwen, P. J. 2010. Nonlinear data assimilation in Geosciences: an extremely efficient particle filter. Q. J. R. Meteorol. Soc. 136, 1991–1999.
  • van Leeuwen, P. J. 2011. Efficient nonlinear data-assimilation in geophysical fluid dynamics. Comput. Fluids 46, 52–58.
  • van Leeuwen, P. J., Cheng, Y. and Reich, S. 2015. Nonlinear data assimilation. Frontiers in Applied Dynamical Systems: Reviews and Tutorials 2 Springer.
  • van Leeuwen, P. J. and Evensen, G. 1996. Data assimilation and inverse methods in terms of a probabilistic formulation. Mon. Wea. Rev. 124, 2898–2913.
  • Verlaan, M. and Heemink, A. W. 2001. Nonlinearity in data assimilation applications: a practical method for analysis. Mon. Wea. Rev. 129(6), 1578–1589.
  • Vossepoel, F. C. and Van Leeuwen, P. J. 2006. Parameter estimation using a particle method: inferring mixing coefficients from sea-level observations. Mon. Wea. Rev. 135, 1006–1020.
  • Wang, Y., Jung, Y., Supine, T. A. and Xue, M. 2013. A hybrid MPI-OpenMP parallel algorithm and performance analysis for an ensemble square root filter designed for multiscale observations. J. Atm. and Oce. Tech. 30, 1382–1397.
  • Whitaker, J. S. and Hamill, T. M. 2002. Ensemble data assimilation without perturbed observations. Mon. Wea. Rev. 130, 1913–1924.
  • Wikle, C. K. and Berliner, L. M. 2006. A Bayesian tutorial for data assimilation. Physica D, 230(1):1–16.
  • Xiong, X., Navon, I. M. and Uzunoglu, B. 2006. A note on the particle filter with posterior Gaussian resampling. Tellus A 58(4), 456–460.
  • Zhen, Y. and Zhang, F. 2014. A probabilistic approach to adaptive covariance localization for serial ensemble square root filters. Mon. Wea. Rev. 142, 4499–4518.
  • Zhu, M., van Leeuwen, P. J. and Amezcua, J. 2016. Implicit equal-weights particle filter. Q. J. R. Meteorol. Soc. 142, 1904–1919.
  • Zupanski, M. 2005. Maximum likelihood ensemble filter: Theoretical aspects. Mon. Wea. Rev. 133, 1710–1726.
  • Zupanski, M., Michael, N. I. and Zupanski, D. 2008. The maximum likelihood ensemble filter as a non-differentiable minimization algorithm. Q. J. R. Meteorol. Soc. 134, 1039–1050.

Appendix 1

Resampling methods

In this section, we give descriptions of a number of resampling techniques that can be applied in the particle filter and Gaussian mixture filter methods to turn weighted particles into equal-weight particles. The resampling techniques included here are probabilistic resampling, stochastic universal resampling and residual resampling. They are by no means exhaustive, and other techniques could be used.

Probabilistic resampling (PR)

The probabilistic resampling or the basic random resampling is the most straightforward to implement as we sample directly from the density given by the weights.

Given the weights {w_j}_{j=1}^{N_e} associated with the ensemble of particles, where the sum of the weights is equal to one, the total number of particles N_e and the number of particles to be generated Ñ_e, we generate an index of the sampled particles using the corresponding pseudo-algorithm.

The required input for the PR is: w ∈ ℝ^{N_e}, a vector of particle weights; N_e, the total number of particles in the filter; and Ñ_e, the number of particles to be sampled. The method returns an index vector I of length Ñ_e, which can then be used to select the sampled particles x_j = x_{I(j)} for j = 1, …, Ñ_e.

Note that this scheme introduces sampling noise by drawing Ñ_e times from a uniform distribution.
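A minimal Python sketch of probabilistic resampling, assuming the weights are already normalised (the paper's own pseudo-code is Fortran-style; this is only an illustration):

```python
import numpy as np

def probabilistic_resampling(w, n_out, rng=None):
    """Basic random resampling: draw n_out indices directly from the discrete density given by the weights w."""
    rng = np.random.default_rng() if rng is None else rng
    cdf = np.cumsum(w)
    cdf[-1] = 1.0                       # guard against rounding in the normalised weights
    u = rng.uniform(size=n_out)         # n_out independent draws from U(0,1)
    return np.searchsorted(cdf, u)      # index array I, used as x_j = x_I(j)

# Example: five samples from three particles with weights 0.7, 0.2, 0.1
print(probabilistic_resampling(np.array([0.7, 0.2, 0.1]), 5))
```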

Stochastic universal resampling (SUR)

Stochastic universal resampling is also known as systematic resampling. It performs resampling in the same way as the basic random resampling algorithm, except that instead of drawing each u_j independently from U(0,1) for j = 1, …, N_e, it uses a single uniform random number u ∼ U[0, 1/N_e] and sets u_j = u + (j − 1)/N_e (Bolić et al., Citation2003).

Given the weights {w_j}_{j=1}^{N_e} associated with the ensemble of particles, where the sum of the weights is equal to one, the total number of particles N_e and the number of particles to be generated Ñ_e, we generate an index of the sampled particles using the corresponding pseudo-algorithm.

The required input for the SUR is: w ∈ ℝ^{N_e}, a vector of particle weights; N_e, the total number of particles in the filter; and Ñ_e, the number of particles to be sampled. The method returns an index vector I of length Ñ_e, which can then be used to select the sampled particles x_j = x_{I(j)} for j = 1, …, Ñ_e.

Note that this method has lower sampling noise than probabilistic resampling, since only one random number is drawn.
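A corresponding sketch of stochastic universal resampling; here the pointer spacing is taken as 1/Ñ_e so that any number of particles can be drawn, which is an assumption (the description above uses N_e):

```python
import numpy as np

def stochastic_universal_resampling(w, n_out, rng=None):
    """Systematic resampling: a single draw u ~ U[0, 1/n_out] fixes all n_out equally spaced pointers."""
    rng = np.random.default_rng() if rng is None else rng
    cdf = np.cumsum(w)
    cdf[-1] = 1.0
    pointers = rng.uniform(0.0, 1.0 / n_out) + np.arange(n_out) / n_out
    return np.searchsorted(cdf, pointers)

print(stochastic_universal_resampling(np.array([0.7, 0.2, 0.1]), 5))
```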

Residual resampling (RR)

The RR algorithm samples the particles in two stages. In the first stage, the number of replications of each particle is calculated; since this does not guarantee that the number of resampled particles equals Ñ_e, the residual N_r is computed. The second stage requires resampling, which produces the remaining N_r of the final Ñ_e particles. In the pseudo-algorithm this is done by PR, but other resampling techniques can be used.

The required input for the RR is: w ∈ ℝ^{N_e}, a vector of particle weights; N_e, the total number of particles in the filter; and Ñ_e, the number of particles to be sampled. The method returns an index vector I of length Ñ_e, which can then be used to select the sampled particles x_j = x_{I(j)} for j = 1, …, Ñ_e. Note that the PR method is used to obtain an array I_RR of length N_r with the indices of the additionally sampled particles, which are then stored in the remaining empty cells of the index array I.

Note that this method reduces the sampling noise, but not as much as the SUR method.
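A sketch of residual resampling in the same style; the residual particles are drawn with probabilistic resampling, as in the description above:

```python
import numpy as np

def residual_resampling(w, n_out, rng=None):
    """Residual resampling: deterministic replication by floor(n_out * w_j), then the residual
    N_r particles are drawn by probabilistic resampling from the residual weights."""
    rng = np.random.default_rng() if rng is None else rng
    n_copies = np.floor(n_out * w).astype(int)            # guaranteed number of copies per particle
    idx = np.repeat(np.arange(w.size), n_copies)          # deterministic part of the index array
    n_r = n_out - n_copies.sum()                          # residual number of particles still to be drawn
    if n_r > 0:
        w_res = n_out * w - n_copies                      # residual weights
        w_res /= w_res.sum()
        cdf = np.cumsum(w_res)
        cdf[-1] = 1.0
        idx = np.concatenate([idx, np.searchsorted(cdf, rng.uniform(size=n_r))])
    return idx

print(residual_resampling(np.array([0.7, 0.2, 0.1]), 5))
```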

Appendix 2

Filter algorithms for practical implementation

This Appendix contains practical pseudo-algorithms for all the ensemble filter methods presented in Sections 5–7. To discuss the computational cost of each method in Section 9.4 we used the algorithms presented in this appendix, because for some filter methods they are more computationally efficient or numerically stable than the mathematically elegant versions given in Section 5. The algorithms are written in the way one would implement them efficiently in Fortran. For compactness, the algorithms do not show that the final step of the ensemble filters can usually be written in a blocked form, so that only the allocation of one large ensemble array X is required. This is different for the MLEF, where two arrays of size N_x × N_e are required. If indices are given for matrices, the notation follows Fortran in that the first index defines the row, while the second index specifies the column.

Note that in all the algorithms that follow in this appendix, any variable with a number subscript is a temporary variable used to reduce the computational time and storage space needed for the algorithm. Further, for ease of reading these algorithms, we use the abbreviations SVD for singular value decomposition and EVD for eigenvalue decomposition. The values in the right column of each algorithm give the dimension of the resulting array, which helps to determine the computational cost of the operations.

Below, we use HX^f as shorthand for applying the possibly non-linear observation operator H individually to each ensemble state in X^f.