Review Article

A selective review of statistical methods using calibration information from similar studies

Pages 175-190 | Received 04 Jan 2021, Accepted 10 Jan 2022, Published online: 17 Feb 2022

Abstract

In the era of big data, divide-and-conquer, parallel, and distributed inference methods have become increasingly popular. How to effectively use the calibration information from each machine in parallel computation has become a challenging task for statisticians and computer scientists. Many newly developed methods have roots in traditional statistical approaches that make use of calibration information. In this paper, we first review some classical statistical methods for using calibration information, including simple meta-analysis methods, parametric likelihood, empirical likelihood, and the generalized method of moments. We further investigate how these methods incorporate summarized or auxiliary information from previous studies, related studies, or populations. We find that the methods based on summarized data usually have little or no efficiency loss compared with the corresponding methods based on the full individual data. Finally, we review some recently developed big data analysis methods, including communication-efficient distributed approaches, renewal estimation, and incremental inference, as examples of the latest developments in methods using calibration information.

1. Introduction

Statistical inference with big data can be extremely challenging owing to the high volume and large variety of observed quantities. Currently, one of the most popular approaches to this problem in statistics and computer science is the divide-and-conquer paradigm. The basic idea of this method is to break down a problem recursively into two or more sub-problems of the same or related type, such that each sub-problem becomes simple enough to be solved easily. The solution to the original problem is the optimal combination of the solutions to the sub-problems. A closely related statistical method is called parallel and distributed inference. In essence, large amounts of observed data are stored in different machines in a distributed manner. The computation is often relatively inexpensive in each machine. Then, communication is essential to enable assembly of the available results from all machines. Many related references can be found in, for example, Jordan et al. (Citation2019). Although many new statistical methods have been developed for big data analysis, most of them have roots in traditional statistical methods of combining auxiliary information.

Combining information from similar studies has been and will continue to be an extremely important strategy in statistical inference. The most popular example of such methods is meta-analysis, in which the published results of multiple similar scientific studies are pooled to produce an enhanced estimate without using the raw individual data from each study. We refer to Borenstein et al. (Citation2009) for a comprehensive introduction to meta-analysis. For various reasons such as privacy or capacity of computer storage, in massive data inference, only summarized data rather than the original individual data may be available. This poses a very challenging problem: how to conduct efficient updated inference by making full use of the summarized data? In recent years, many methods of combining information have been developed in economic studies, machine learning, and distributed statistical inference. The goal of this paper is to selectively review a few popular methods that are able to integrate information in different disciplines.

Utilizing external summary data or auxiliary information to obtain more accurate inference is an old and effective method in survey sampling. Owing to restrictions such as cost effectiveness or convenience, the variable of interest Y may be available for only a small portion of individuals. However, the explanatory variable X associated with Y may readily be available for all individuals. Cochran (Citation1977) presented a comprehensive discussion on regression-type estimators making use of the summarized information from X. Chen and Qin (Citation1993), Chen et al. (Citation2002), and Wu and Sitter (Citation2001) used empirical likelihood (EL; Owen, Citation1988) to incorporate such information in finite populations.

With advances in technology, many summarized statistical results have become available in public domains. For example, many aggregated demographic and socioeconomic status data are provided in the US census reports. The Surveillance, Epidemiology, and End Results (SEER) programme of the National Cancer Institute provides population-based cancer survival statistics such as covariate-specific survival probabilities. Imbens and Lancaster (Citation1994) combined micro and macro data in economic studies through the generalized method of moments (GMM). Chaudhuri et al. (Citation2008) showed that inclusion of population-level information could reduce bias and increase the efficiency of the parameter estimates in a generalized linear model setup. Wu and Thompson (Citation2020) published an excellent monograph on combining auxiliary information in survey sampling.

In this paper, we consider two situations. In the first, the summarized information from different studies was derived using the same statistical model. In the second, the summarized information was derived using statistical models that were similar but not exactly the same. In general, combining information in the former case is easier. The latter case is more complex, as one has to take into consideration the heterogeneity among the different studies.

The rest of this paper is organized as follows. In Section 2, we briefly review two simple and popular meta-analysis methods for combining similar results. In Section 3, we review Owen's (Citation1988) EL method and Qin and Lawless's (Citation1994) over-identified parameter problem as examples of general tools for synthesizing information from summarized data. In particular, we present a new way of deriving the lower information bound for the over-identified parameter problem. Section 4 discusses enhanced inference by utilizing auxiliary information. Section 5 presents results on more flexible meta-analyses where information on different covariates are available in similar studies. Calibration of information from previous studies is described in Section 6. We discuss methods of using disease prevalence information for more efficient estimation in case–control studies in Section 7. The popular communication-efficient distributed statistical inference method used in machine learning is discussed in Section 8. Renewal estimation and incremental inference are briefly presented in Section 9. Finally, some further discussion is presented in Section 10.

2. Two simple information-combining methods

2.1. Convex combination

Suppose that $\hat\theta_1$ and $\hat\theta_2$ are two asymptotically unbiased estimators for $\theta$ from two independent studies, and that they satisfy $\hat\theta_i \sim N(\theta,\sigma_i^2)$, $i=1,2$. The most straightforward way of combining $\hat\theta_1$ and $\hat\theta_2$ is a convex combination, $\hat\theta = \alpha\hat\theta_1 + (1-\alpha)\hat\theta_2$, $0<\alpha<1$. The asymptotic variance of $\hat\theta$ is $\sigma^2 = \alpha^2\sigma_1^2 + (1-\alpha)^2\sigma_2^2$, which attains its minimum at $\alpha = \sigma_2^2/(\sigma_1^2+\sigma_2^2)$. This suggests combining $\hat\theta_1$ and $\hat\theta_2$ by
\[ \hat\theta = \frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\hat\theta_1 + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\hat\theta_2 = \frac{\hat\theta_1/\sigma_1^2 + \hat\theta_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \]
an inverse-variance weighting estimator. In general, $\sigma_1^2$ and $\sigma_2^2$ are unknown; we may replace them by their estimators $\hat\sigma_1^2$ and $\hat\sigma_2^2$, respectively, which leads to
\[ \hat\theta = \frac{\hat\sigma_2^2}{\hat\sigma_1^2+\hat\sigma_2^2}\hat\theta_1 + \frac{\hat\sigma_1^2}{\hat\sigma_1^2+\hat\sigma_2^2}\hat\theta_2 = \frac{\hat\theta_1/\hat\sigma_1^2 + \hat\theta_2/\hat\sigma_2^2}{1/\hat\sigma_1^2 + 1/\hat\sigma_2^2}. \]
Alternatively, we may use the maximum likelihood method to argue that this is the best estimator. We can treat $\hat\theta_i$ as a direct observation from $\hat\theta_i\mid\theta \sim N(\theta,\sigma_i^2)$, $i=1,2$. Then, the log-likelihood (regarding $\sigma_1^2$ and $\sigma_2^2$ as known constants) is $-(\hat\theta_1-\theta)^2/(2\sigma_1^2) - (\hat\theta_2-\theta)^2/(2\sigma_2^2)$. Maximizing this likelihood with respect to $\theta$, or setting the score function to zero, we end up with the same inverse-variance weighting estimator.
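
The following minimal sketch (in Python, assuming numpy is available; the numbers are made up for illustration) computes this inverse-variance weighting estimator and its variance.

```python
import numpy as np

def inverse_variance_combine(estimates, variances):
    """Combine independent estimates of a common parameter by inverse-variance weighting.

    Returns the combined estimate and its (estimated) variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    theta_hat = np.sum(weights * estimates) / np.sum(weights)
    var_hat = 1.0 / np.sum(weights)
    return theta_hat, var_hat

# Example: two hypothetical study-level estimates of the same mean
theta_hat, var_hat = inverse_variance_combine([1.8, 2.3], [0.04, 0.09])
print(theta_hat, var_hat)  # weighted toward the more precise study
```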

2.2. Random-effect meta-analysis

DerSimonian and Laird (Citation1986) proposed a moment-based estimation method using a random-effect model for meta-analysis. Let $\hat\theta_i$ be an estimator of $\theta_i$ from the $i$-th study, $i=1,2,\ldots,K$. For example, $\hat\theta_i$ could be the estimated mean response from the $i$-th study. When the sample size $n_i$ in the $i$-th study is reasonably large, we may assume that
\[ \hat\theta_i \mid \theta_i \sim N(\theta_i, w_i^{-1}), \qquad \theta_i \sim N(\theta, \tau^2), \qquad i=1,2,\ldots,K, \]
where the $w_i^{-1}$'s are treated as known. Although the normal models hold only approximately, we assume that they are exactly true for ease of theoretical development. The goal here is to better estimate $\theta$ by combining the results from all the studies.

Unconditionally, we have $\hat\theta_i \sim N(\theta, w_i^{-1}+\tau^2)$. Consider the following inverse-variance weighting estimator for $\theta$:
\[ \hat\theta = \frac{\sum_{i=1}^K \hat\theta_i w_i}{\sum_{i=1}^K w_i} \qquad\text{with variance}\qquad \mathrm{Var}(\hat\theta) = \frac{\sum_{i=1}^K w_i^2(w_i^{-1}+\tau^2)}{\bigl(\sum_{i=1}^K w_i\bigr)^2}. \]
Define
\[ Q = \sum_{i=1}^K w_i(\hat\theta_i-\hat\theta)^2 = \sum_{i=1}^K w_i(\hat\theta_i-\theta)^2 - (\hat\theta-\theta)^2\sum_{i=1}^K w_i. \]
We can easily check that
\[ E(Q) = (K-1) + \tau^2\Bigl(\sum_{i=1}^K w_i - \sum_{i=1}^K w_i^2\Big/\sum_{j=1}^K w_j\Bigr), \]
which implies that a natural estimator of $\tau^2$ is
\[ \hat\tau^2 = \frac{Q-(K-1)}{\sum_{i=1}^K w_i - \sum_{i=1}^K w_i^2\big/\sum_{j=1}^K w_j}. \]
For small sample sizes, there is no guarantee that this estimator is non-negative; one may replace it by $\max(\hat\tau^2, 0)$.
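
A short sketch of the moment estimator above. The final re-weighting step, which combines the studies with weights $1/(w_i^{-1}+\hat\tau^2)$, is the usual follow-up in practice rather than something spelled out in the text; the study-level inputs are hypothetical.

```python
import numpy as np

def dersimonian_laird(theta_hat, w):
    """Moment-based random-effect meta-analysis sketched above.

    theta_hat : study-level estimates
    w         : known within-study precisions (inverse variances)
    """
    theta_hat, w = np.asarray(theta_hat, float), np.asarray(w, float)
    K = len(theta_hat)
    theta_fixed = np.sum(w * theta_hat) / np.sum(w)     # inverse-variance weighted estimate
    Q = np.sum(w * (theta_hat - theta_fixed) ** 2)      # heterogeneity statistic
    tau2 = max((Q - (K - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)), 0.0)
    w_star = 1.0 / (1.0 / w + tau2)                     # random-effect weights (usual follow-up)
    theta_re = np.sum(w_star * theta_hat) / np.sum(w_star)
    return theta_re, tau2

theta_re, tau2 = dersimonian_laird([0.3, 0.5, 0.1, 0.7], [50, 80, 40, 60])
print(theta_re, tau2)
```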

Alternatively, we may estimate $\tau$ using the likelihood approach. The joint likelihood based on the $\hat\theta_i$'s is
\[ \ell(\theta,\tau) = -\frac{1}{2}\sum_{i=1}^K \frac{(\hat\theta_i-\theta)^2}{\tau^2+w_i^{-1}} - \frac{1}{2}\sum_{i=1}^K \log(\tau^2+w_i^{-1}). \]
Maximizing $\ell$ with respect to $\theta$ and $\tau^2$ gives their maximum likelihood estimators (MLEs).

Lin and Zeng (Citation2010) compared the relative efficiency of using summary statistics versus individual-level data in meta-analysis. They found that in general there was no information loss when using the summarized information compared with inference based on the original individual data when available.

3. Empirical likelihood and general estimating equations

In this section we briefly review Owen's (Citation1988) EL and Qin and Lawless's (Citation1994) estimating equations approaches, as those methods represent general tools for assembling information from different sources. The maximum likelihood method for regular parametric models is among the most popular methods in statistical inference, as it has many nice properties. However, model mis-specification is a major concern, as a mis-specified model may lead to biased results. For the case when the underlying distribution is multinomial, Hartley and Rao (Citation1968) proposed a mean constrained estimator for the population total in survey sampling problems. To mimic the parametric likelihood but discard parametric model assumptions, Owen (Citation1988) and Owen (Citation1990) proposed the EL method, which is a natural generalization of the multinomial likelihood when the number of categories is equal to the sample size. The EL approach can be thought of as a bootstrap that does not resample, or as a likelihood without parametric assumptions (Owen, Citation2001).

3.1. Definition of empirical likelihood

Suppose that $X_1,\ldots,X_n$ are $n$ independent and identically distributed observations from $X$, with cumulative distribution function $F$. For convenience, we assume there are no ties, i.e., no two observations are equal; the techniques developed below can easily be adapted to handle ties. Let $p_i = dF(X_i)$, $i=1,2,\ldots,n$, be the jumps of $F(x)$ at the observed data points. The nonparametric likelihood is $L(F)=\prod_{i=1}^n p_i$. It is clear that if any $p_i=0$, then $L(F)=0$, and if $\sum_{i=1}^n p_i<1$, then $L(F)<L(F^*)$, where $F^*(x)=\sum_{i=1}^n p_i I(X_i\le x)\big/\sum_{i=1}^n p_i$. According to the likelihood principle (that parameters with larger likelihoods are preferable), one need only consider distribution functions $F(x)$ with $p_i>0$ and $\sum_{i=1}^n p_i=1$.

If we maximize the log-likelihood
\[ \ell(F) = \sum_{i=1}^n \log p_i \tag{1} \]
subject to the constraints
\[ \sum_{i=1}^n p_i = 1, \qquad p_i \ge 0, \tag{2} \]
then we obtain $p_i = 1/n$, $i=1,2,\ldots,n$. Therefore, the maximum EL estimator of $F$ is $F_n(x) = \sum_{i=1}^n p_i I(X_i\le x) = n^{-1}\sum_{i=1}^n I(X_i\le x)$. This is why the empirical distribution is called the nonparametric MLE of $F(x)$.

Suppose we are interested in constructing a confidence interval for $\mu = E(X) = \int x\,dF(x)$, the mean of $X$. Since we have discretized $F$ at each of the observed data points, the integral becomes $\mu = \sum_{i=1}^n p_i X_i$. Next, we maximize the nonparametric log-likelihood subject to an extra constraint:
\[ \sum_{i=1}^n p_i (X_i-\mu) = 0. \tag{3} \]
Maximizing the log-likelihood (1) subject to constraints (2) and (3), the Lagrange multiplier method gives the profile log-likelihood of $\mu$,
\[ \ell_n(\mu) = -\sum_{i=1}^n \log\{1+\lambda(X_i-\mu)\} - n\log n, \tag{4} \]
where $\lambda$ is the solution to $\sum_{i=1}^n (X_i-\mu)/\{1+\lambda(X_i-\mu)\}=0$. We can treat $\ell_n(\mu)$ as a parametric likelihood of $\mu$. Based on this likelihood, the maximum EL estimator of $\mu$ is $\hat\mu = \bar X = n^{-1}\sum_{i=1}^n X_i$, which is exactly the sample mean. We define the likelihood ratio function as $R_n(\mu) = 2\{\max_\mu \ell_n(\mu) - \ell_n(\mu)\} = 2\{\ell_n(\bar X) - \ell_n(\mu)\}$. Under the regularity conditions specified in Owen (Citation1988) and Owen (Citation1990), as $n$ goes to infinity, $R_n(\mu_0)$ converges in distribution to the $\chi^2$ distribution with $p$ degrees of freedom, where $p$ is the dimension of $\mu$ and $\mu_0$ is the true value of $\mu$.
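
The profile log-likelihood (4) is easy to compute numerically: for a fixed $\mu$, one solves the score equation for $\lambda$ by root finding and then evaluates $R_n(\mu)=2\sum_i\log\{1+\lambda(X_i-\mu)\}$. A minimal sketch for a scalar mean (assuming scipy is available; the bracket for $\lambda$ is the standard one that keeps all implied weights positive):

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """Empirical-likelihood log-ratio statistic R_n(mu) for a scalar mean,
    a sketch of the profile likelihood (4).  Requires min(x) < mu < max(x)."""
    x = np.asarray(x, float)
    n = len(x)
    d = x - mu
    # lambda must keep all weights p_i = 1/{n(1 + lambda*d_i)} in (0, 1)
    lo = (1.0 / n - 1.0) / d.max() + 1e-10
    hi = (1.0 / n - 1.0) / d.min() - 1e-10
    score = lambda lam: np.sum(d / (1.0 + lam * d))   # derivative of sum log(1 + lam*d)
    lam = brentq(score, lo, hi)
    # R_n(mu) = 2 * sum log(1 + lambda*(x_i - mu)); compare with a chi-square(1) quantile
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100)
print(el_log_ratio(x, mu=1.0))   # typically below the chi2(1) 95% cutoff 3.84
```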

3.2. General estimating equations

The original EL was mainly used to make inferences for linear functionals of the underlying population distribution, such as the population mean (Owen, Citation1988, Citation1990). Qin and Lawless (Citation1994) applied this method to general estimating equations, which greatly broadened its applications. Specifically, suppose the population of interest satisfies a general estimating equation
\[ E\{g(X,\theta)\} = 0 \tag{5} \]
for an $r\times 1$ vector-valued function $g$ and some $\theta$, which is a $p\times 1$ parameter to be estimated. We assume $r\ge p$, as otherwise the true parameter value of $\theta$ would be undefined.

For general estimating equations with $r>p$, or over-identified models, Hansen (Citation1982) proposed the celebrated GMM, which has become one of the most popular methods in the econometric community. In essence, the GMM minimizes
\[ \Bigl\{\sum_{i=1}^n g(X_i,\theta)\Bigr\}^\top \Sigma^{-1} \Bigl\{\sum_{i=1}^n g(X_i,\theta)\Bigr\} \]
with respect to $\theta$, where $\Sigma$ is the variance matrix of the estimating function $g(X,\theta)$. If $\Sigma$ is unknown, we may replace it by the sample variance $\hat\Sigma = n^{-1}\sum_{i=1}^n g(X_i,\tilde\theta)g^\top(X_i,\tilde\theta)$, where $\tilde\theta$ is an initial, consistent estimate of $\theta$.
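
A sketch of the two-step GMM just described, for a hypothetical over-identified problem: an exponential mean $\theta$ with the two moment conditions $E(X-\theta)=0$ and $E(X^2-2\theta^2)=0$. The first step uses the identity weight matrix to obtain the initial consistent estimate $\tilde\theta$, and the second step re-weights by $\hat\Sigma^{-1}$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical over-identified example: X ~ Exponential with mean theta, so
# E(X - theta) = 0 and E(X^2 - 2*theta^2) = 0 (r = 2 moments, p = 1 parameter).
def g(x, theta):
    return np.column_stack([x - theta, x**2 - 2 * theta**2])

def gmm_objective(theta, x, W):
    gbar = g(x, theta).sum(axis=0)
    return gbar @ W @ gbar

def two_step_gmm(x):
    # Step 1: identity weighting gives an initial consistent estimate
    step1 = minimize_scalar(gmm_objective, bounds=(1e-6, 100),
                            args=(x, np.eye(2)), method="bounded")
    # Step 2: re-weight by the inverse sample variance of the moment functions
    G = g(x, step1.x)
    Sigma_hat = G.T @ G / len(x)
    step2 = minimize_scalar(gmm_objective, bounds=(1e-6, 100),
                            args=(x, np.linalg.inv(Sigma_hat)), method="bounded")
    return step2.x

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)   # true theta = 2
print(two_step_gmm(x))
```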

Instead of the GMM, Qin and Lawless (Citation1994) used the EL to make inferences for parameters defined by a general estimating equation. For a discretized $F(x)$ satisfying (2), Equation (5) becomes
\[ \sum_{i=1}^n p_i\, g(X_i,\theta) = 0. \tag{6} \]
Maximizing the log-likelihood (1) subject to (2) and (6), we have the following profile log-likelihood of $\theta$ (up to a constant):
\[ \ell_n(\theta) = -\sum_{i=1}^n \log\{1+\lambda^\top g(X_i,\theta)\}, \]
where $\lambda$ is the Lagrange multiplier determined by $\sum_{i=1}^n g(X_i,\theta)/\{1+\lambda^\top g(X_i,\theta)\}=0$. We then estimate $\theta$ by the maximizer $\hat\theta = \arg\max_\theta \ell_n(\theta)$, whose limiting distribution is established in the following theorem. Hereafter, we use $\nabla_\theta$ to denote the differentiation operator with respect to $\theta$.

Theorem 3.1

Qin & Lawless, Citation1994

Denote $g = g(X,\theta_0)$ and $\nabla_\theta g = \nabla_\theta g(X,\theta_0)$. Suppose that (1) $E(gg^\top)$ is positive definite, (2) $\nabla_\theta g(X,\theta)$ is continuous in a neighbourhood of $\theta_0$, (3) $\|\nabla_\theta g(X,\theta)\|$ and $\|g(X,\theta)\|^3$ can be bounded by some integrable function $G(X)$ in this neighbourhood, and (4) $E(\nabla_\theta g)$ is of full rank. Then, as $n\to\infty$, $\sqrt{n}(\hat\theta-\theta_0) \stackrel{d}{\to} N(0,V)$, where $\stackrel{d}{\to}$ means 'convergence in distribution' and
\[ V = \bigl\{E(\nabla_\theta g)^\top (E gg^\top)^{-1} E(\nabla_\theta g)\bigr\}^{-1}. \tag{7} \]
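
For illustration, a rough numerical sketch of the maximum EL estimator of Theorem 3.1: for each candidate $\theta$ the inner problem solves for the Lagrange multiplier $\lambda$ (here via Owen's pseudo-logarithm, an implementation device not discussed in the text), and the outer step optimizes the profile criterion. The moment function reuses the hypothetical exponential-mean example above.

```python
import numpy as np
from scipy.optimize import minimize

def log_star(z, eps):
    # Owen's pseudo-logarithm: equals log(z) for z >= eps and is quadratic below,
    # which keeps the inner optimisation well defined for any lambda.
    return np.where(z >= eps,
                    np.log(np.maximum(z, eps)),
                    np.log(eps) - 1.5 + 2 * z / eps - z**2 / (2 * eps**2))

def profile_el(theta, g_fun, x):
    """Profile EL criterion at theta: 2 * max over lambda of sum log*(1 + lambda'g_i).
    Up to a constant this is -2 times the profile empirical log-likelihood,
    so minimising it over theta gives the maximum EL estimator."""
    G = g_fun(x, theta)                       # n x r matrix of g(X_i, theta)
    n = G.shape[0]
    inner = lambda lam: -np.sum(log_star(1.0 + G @ lam, 1.0 / n))
    lam_hat = minimize(inner, np.zeros(G.shape[1]), method="BFGS").x
    return 2.0 * np.sum(log_star(1.0 + G @ lam_hat, 1.0 / n))

def max_el_estimator(g_fun, x, theta0):
    """Maximum EL estimator: minimise the profile criterion over theta."""
    return minimize(lambda th: profile_el(th, g_fun, x), theta0, method="Nelder-Mead").x

# Same hypothetical over-identified exponential-mean example as before
g_fun = lambda x, th: np.column_stack([x - th[0], x**2 - 2 * th[0]**2])
rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=500)
print(max_el_estimator(g_fun, x, np.array([1.0])))
```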

3.3. Calculation of the information bound

Assuming that the parameter of interest satisfies the general estimating equation $E\{g(X,\theta)\}=0$, we next consider how well we can estimate $\theta$ based on this model, and whether the maximum EL estimator is optimal. To answer these questions, we consider an ideal situation in which the probability function of $X$ has a parametric form $f(x,\theta)$, known up to $\theta$. We define
\[ h(x,\eta,\theta) = \frac{\exp\{\eta^\top g(x,\theta)\}\, f(x,\theta)}{\int \exp\{\eta^\top g(t,\theta)\}\, f(t,\theta)\,dt}, \]
implicitly assuming that $\int \exp\{\eta^\top g(t,\theta)\} f(t,\theta)\,dt < \infty$. Clearly, $h(x,\eta,\theta)$ is an enlarged parametric model of $f(x,\theta)$, as it reduces to $f(x,\theta)$ when $\eta=0$. As the parametric form $f(x,\theta)$ is unknown in practice, we anticipate that any estimator based on the moment constraint $E\{g(X,\theta)\}=0$ should have a variance that is no less than that of the MLE derived from the enlarged model. We show that even if the form of $f(x,\theta)$ is available, the MLE of $\theta$ based on $h(x,\eta,\theta)$ has the same asymptotic variance as the maximum EL estimator.

With the parametric model $h$, we can estimate $\theta$ by maximizing $L(\theta,\eta)=\prod_{i=1}^n h(X_i,\eta,\theta)$ with respect to $(\theta,\eta)$. We denote the resulting MLE by $(\tilde\theta,\tilde\eta)$. We show in Section 3.4 that under some regularity conditions on $h$ (see, e.g., Theorems 5.14 and 5.23 of van der Vaart (Citation2000)), as $n\to\infty$,
\[ \sqrt{n}(\tilde\theta-\theta_0) \stackrel{d}{\to} N(0,V), \tag{8} \]
where $V$ is defined in (7). In general, the parametric form $f(x,\theta)$ is unknown; hence, we expect that the best estimator of $\theta$ should have an asymptotic variance at least as large as $V$. As the maximum EL estimator of $\theta$ of Qin and Lawless (Citation1994) has asymptotic variance $V$, we conclude that it achieves the lower information bound.

Remark 3.1

If $g(x,\theta)$ is an unbounded function of $x$ for each $\theta$, we may construct a new density
\[ h(x,\theta,\eta) = \frac{\psi\{\eta^\top g(x,\theta)\}\, f(x,\theta)}{\int \psi\{\eta^\top g(t,\theta)\}\, f(t,\theta)\,dt}, \]
where $\psi(x) = 2(1+e^{-2x})^{-1}$, so that $\psi(0)=\psi'(0)=1$. Clearly, $\psi$ is bounded. We may go through the same derivations to reach the same conclusion.

Remark 3.2

Back and Brown (Citation1992) established a similar result by constructing an exponential family. In particular, they defined $h(x,\theta) = \exp\{\xi^\top(\theta) g(x,\theta_0) - a(\theta)\}\, f_0(x)$, where $f_0(x)=f(x,\theta_0)$ and $\xi(\theta)$ is determined implicitly by the following conditions: $\xi(\theta_0)=0$, $a(\theta_0)=0$, $\int \exp\{\xi^\top(\theta) g(x,\theta_0) - a(\theta)\} f_0(x)\,dx = 1$, and $\int g(x,\theta)\exp\{\xi^\top(\theta) g(x,\theta_0) - a(\theta)\} f_0(x)\,dx = 0$. In Back and Brown's approach, $\xi(\theta)$ is determined implicitly by the above constraint equations, whereas in our new approach, $\eta$ is an independent parameter.

3.4. A sketched proof of (8)

The log-likelihood based on the enlarged model is $\ell(\theta,\eta) = \sum_{i=1}^n \log\{h(X_i,\eta,\theta)\}$, where
\[ \log\{h(x,\eta,\theta)\} = \eta^\top g(x,\theta) + \log f(x,\theta) - \log\Bigl[\int \exp\{\eta^\top g(t,\theta)\}\, f(t,\theta)\,dt\Bigr]. \]
If $\log\{h(x,\eta,\theta)\}$ satisfies the conditions of Theorem 5.14 of van der Vaart (Citation2000) on $m_\theta(x)$, then $(\tilde\theta,\tilde\eta)$ is consistent for $(\theta_0,0)$.

Result (8) follows from Theorem 5.23 of van der Vaart (Citation2000). With tedious algebra, we find that
\[ \nabla_\theta\log\{h(x,0,\theta_0)\}=0, \qquad \nabla_\eta\log\{h(x,0,\theta_0)\}=g(x,\theta_0), \]
\[ E\,\nabla_{\theta\theta^\top}\log\{h(X,0,\theta_0)\}=0, \qquad E\,\nabla_{\eta\eta^\top}\log\{h(X,0,\theta_0)\}=-E(gg^\top), \]
\[ E\,\nabla_{\eta\theta^\top}\log\{h(X,0,\theta_0)\} = E\,\nabla_\theta g(X,\theta_0)-E\{\nabla_\theta g(X,\theta_0)+g(X,\theta_0)\nabla_\theta^\top\log f(X,\theta_0)\}. \]
Under some mild assumptions, such as that $\int g(x,\theta)f(x,\theta)\,dx=0$ holds for $\theta$ in a neighbourhood of $\theta_0$, differentiating both sides with respect to $\theta$ leads to
\[ E\{\nabla_\theta g(X,\theta)\}+E\{g(X,\theta)\nabla_\theta^\top\log f(X,\theta)\}=0, \]
which means $E\,\nabla_{\eta\theta^\top}\log\{h(X,0,\theta_0)\}=E\{\nabla_\theta g(X,\theta_0)\}$. As $(\tilde\theta,\tilde\eta)$ is consistent for $(\theta_0,0)$, by Theorem 5.23 of van der Vaart (Citation2000), we have
\[ \sqrt{n}\begin{pmatrix}\tilde\theta-\theta_0\\ \tilde\eta-0\end{pmatrix}
 = -\begin{pmatrix}0 & E(\nabla_\theta g)^\top\\ E(\nabla_\theta g) & E(gg^\top)\end{pmatrix}^{-1}
 \begin{pmatrix}0\\ n^{-1/2}\sum_{i=1}^n g(X_i,\theta_0)\end{pmatrix}+o_p(1). \tag{9} \]
This, together with the fact that $n^{-1/2}\sum_{i=1}^n g(X_i,\theta_0)\stackrel{d}{\to}N(0,E(gg^\top))$ as $n$ goes to infinity, implies (8).

3.5. Empirical entropy family

Again we assume that the available information is given by the estimating equation $E\{g(X,\theta)\}=0$. The enlarged parametric model $h(x,\eta,\theta)$ satisfies $\int h(x,\eta,\theta)g(x,\theta)\,dx = 0$ only if $\eta=0$. Naturally, one may instead require $\eta=\eta(\theta)$ to satisfy
\[ \int g(x,\theta)\exp\{\eta^\top g(x,\theta)\}\, f(x,\theta)\,dx = 0. \]
It is often too restrictive to assume a known underlying parametric model $f(x,\theta)$ in the construction of the enlarged parametric model $h(x,\eta,\theta)$. We may replace the cumulative distribution function $F(x,\theta)=\int_{-\infty}^x f(t,\theta)\,dt$ by the empirical distribution $F_n(x)=n^{-1}\sum_{i=1}^n I(X_i\le x)$. In this situation, $\eta=\eta(\theta)$ is the solution to $\sum_{i=1}^n g(X_i,\theta)\exp\{\eta^\top g(X_i,\theta)\}=0$.

Let $H(x,\eta,\theta)=\int_{-\infty}^x h(t,\eta,\theta)\,dt$. For fixed parameter values $(\eta,\theta)$, the jump of $H$ at $x=X_i$ is
\[ dH(X_i,\eta,\theta) = \frac{\exp\{\eta^\top(\theta)\, g(X_i,\theta)\}}{\sum_{j=1}^n \exp\{\eta^\top(\theta)\, g(X_j,\theta)\}}, \]
and the likelihood becomes
\[ \prod_{i=1}^n dH(X_i,\eta,\theta) = \prod_{i=1}^n \frac{\exp\{\eta^\top(\theta)\, g(X_i,\theta)\}}{\sum_{j=1}^n \exp\{\eta^\top(\theta)\, g(X_j,\theta)\}}. \]
In fact, this is equivalent to the EL $\prod_{i=1}^n p_i$, where the $p_i$'s minimize the Kullback–Leibler divergence (up to a constant), or minus the exponential tilting likelihood $\sum_{i=1}^n p_i\log(p_i)$, subject to the constraints $\sum_{i=1}^n p_i=1$, $p_i\ge 0$, and $\sum_{i=1}^n p_i g(X_i,\theta)=0$. See Schennach (Citation2007) for more details. We call this the empirical entropy family induced by the estimating equation $E\{g(X,\theta)\}=0$.
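
A small sketch of the exponentially tilted weights for a scalar estimating function: $\eta(\theta)$ is obtained by root finding (which requires the $g$ values to take both signs), and the resulting weights satisfy the tilting constraint. The numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import brentq

def tilted_weights(g_vals):
    """Exponentially tilted weights p_i proportional to exp(eta * g_i) with
    sum_i p_i g_i = 0, for a scalar estimating function g(X_i, theta) = g_vals[i]."""
    def score(eta):
        # stabilised: same sign as sum_i g_i * exp(eta * g_i)
        z = eta * g_vals
        return np.sum(g_vals * np.exp(z - z.max()))
    eta = brentq(score, -20, 20)
    w = np.exp(eta * g_vals - (eta * g_vals).max())
    return w / w.sum()

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=200)
p = tilted_weights(x - 1.8)   # tilt the sample so the weighted mean equals 1.8
print(p @ x)                  # approximately 1.8
```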

4. Enhancing efficiency using auxiliary information

In this section, we discuss methods of incorporating auxiliary information to enhance estimation efficiency. This aspect was also investigated by Qin (Citation2000). We assume a parametric model $f(y|x,\beta)$ for the conditional density function of $Y$ given $X$ and leave the marginal distribution $G(x)$ of $X$ unspecified. We wish to make inferences for $\beta$ when some auxiliary information is summarized through an estimating equation $E\{\phi(X,\beta)\}=0$. For example, if we know the mean $\mu$ of $Y$, then we can construct an estimating equation $E(Y-\mu)=0$ and take
\[ \phi(X,\beta) = \int (y-\mu)\, f(y|X,\beta)\,dy = \int y\, f(y|X,\beta)\,dy - \mu. \]
Furthermore, we allow the response $Y$ to have missing values. Let $D$ be the non-missingness indicator, which takes the value 1 if $Y$ is available and 0 otherwise. We assume a missing-at-random model $\mathrm{pr}(D=1|Y=y,X=x)=\mathrm{pr}(D=1|X=x)=\pi(x)$, where $\pi(x)$ depends only on $x$. We denote the observed data by $(d_i, d_i y_i, x_i)$ $(i=1,2,\ldots,n)$ and let $p_i = dG(x_i)$. The likelihood of $(\beta,G)$ is
\[ L = \prod_{i=1}^n \{\pi(x_i) f(y_i|x_i,\beta)\,dG(x_i)\}^{d_i}\,[\{1-\pi(x_i)\}\,dG(x_i)]^{1-d_i}
 = \prod_{j=1}^n \{\pi(x_j)\}^{d_j}\{1-\pi(x_j)\}^{1-d_j}\,\prod_{i=1}^n \{f(y_i|x_i,\beta)\}^{d_i} p_i. \]
We can maximize this likelihood subject to the constraints
\[ \sum_{i=1}^n p_i = 1, \qquad p_i\ge 0, \qquad \sum_{i=1}^n p_i\, \phi(x_i,\beta) = 0. \]
As $\prod_{j=1}^n \{\pi(x_j)\}^{d_j}\{1-\pi(x_j)\}^{1-d_j}$ is not a function of $\beta$, the profile hybrid empirical log-likelihood (up to a constant) is
\[ \ell(\beta) = \sum_{i=1}^n \bigl[d_i\log f(y_i|x_i,\beta) - \log\{1+\lambda^\top\phi(x_i,\beta)\}\bigr], \tag{10} \]
where $\lambda$ is the Lagrange multiplier determined by
\[ \sum_{i=1}^n \frac{\phi(x_i,\beta)}{1+\lambda^\top\phi(x_i,\beta)} = 0. \tag{11} \]
For the special case where data are missing completely at random, i.e., $\pi(x)$ is a constant function of $x$, Qin (Citation2000) established the following theorem.

Theorem 4.1

Let $\beta_0$ be the true parameter value, let $\hat\beta$ be the maximum hybrid EL estimator, i.e., the maximizer of (10), and let $\hat\lambda$ be the corresponding Lagrange multiplier. Denote $\phi=\phi(X,\beta_0)$, $\nabla_\beta\phi=\nabla_\beta\phi(X,\beta_0)$, and
\[ J = E\{-D\,\nabla_{\beta\beta^\top}\log f(Y|X,\beta_0)\} = \mathrm{Var}\{D\,\nabla_\beta\log f(Y|X,\beta_0)\}. \]
Under some regularity conditions, when $n$ goes to infinity, we have $\sqrt{n}\,\bigl((\hat\beta-\beta_0)^\top, \hat\lambda^\top\bigr)^\top \stackrel{d}{\to} N(0,\Sigma)$, where $\Sigma=\mathrm{diag}(\Sigma_{11},\Sigma_{22})$ with
\[ \Sigma_{11} = \{J + E(\nabla_\beta\phi)^\top (E\phi\phi^\top)^{-1} E(\nabla_\beta\phi)\}^{-1}, \qquad
   \Sigma_{22} = \{E(\nabla_\beta\phi)\, J^{-1} E(\nabla_\beta\phi)^\top + E(\phi\phi^\top)\}^{-1}. \tag{12} \]
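
As an illustration of the hybrid likelihood (10)–(11), the following sketch assumes a normal linear model $f(y|x,\beta)=N(\beta_0+\beta_1 x,1)$, a known population mean of $Y$, and responses missing completely at random; the inner root finding for $\lambda$ mirrors the EL computation of Section 3. All settings are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq, minimize

def hybrid_el_loglik(beta, x, y, d, mu_y):
    """Profile hybrid empirical log-likelihood (10) for a normal linear model
    f(y|x, beta) = N(beta0 + beta1*x, 1) with known population mean mu_y of Y.
    d[i] = 1 if y[i] is observed, 0 otherwise (missing completely at random)."""
    b0, b1 = beta
    phi = b0 + b1 * x - mu_y                      # phi(x, beta) = E(Y|X=x) - mu_y
    if phi.min() >= 0 or phi.max() <= 0:
        return -np.inf                            # constraint (11) has no solution
    n = len(x)
    lo = (1.0 / n - 1.0) / phi.max() + 1e-10
    hi = (1.0 / n - 1.0) / phi.min() - 1e-10
    lam = brentq(lambda l: np.sum(phi / (1.0 + l * phi)), lo, hi)
    loglik_param = -0.5 * np.sum(d * (y - b0 - b1 * x) ** 2)
    return loglik_param - np.sum(np.log1p(lam * phi))

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
d = rng.binomial(1, 0.7, size=n)                  # non-missingness indicators
y = np.where(d == 1, y, 0.0)                      # unobserved responses are not used
fit = minimize(lambda b: -hybrid_el_loglik(b, x, y, d, mu_y=1.0),
               x0=np.array([0.0, 1.0]), method="Nelder-Mead")
print(fit.x)                                      # close to (1, 2)
```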

Remark 4.1

Imbens and Lancaster (Citation1994) studied the same problem using the GMM. In particular, they directly combined the conditional score estimating function $\nabla_\beta\log f(y|x,\beta)$ and $\phi(x,\beta)$. Even though the first-order large-sample results are the same, the hybrid-EL-based approach is more appealing, as it respects the parametric conditional likelihood and replaces only the marginal likelihood with the EL. See Qin (Citation2000) for numerical comparisons of the two methods.

5. Combining summary information: a more flexible method for meta-analysis

Developing systematic methods for combining published information is one of the main goals of meta-analysis, which has become increasingly popular since little extra cost is needed. The main restriction in meta-analysis is that all studies must include the same variables in their analyses. The only difference allowed is in the sample sizes. Thus, studies must be discarded if they contain different variables from those in other studies.

Summarized information is often available from publications such as census reports and results of national health studies. For reasons including confidentiality, it is typically not possible to gain access to the original data, only the summarized reports. Suppose we are interested in conducting a new study that may contain some new variables of interest that are not available in the summarized information, for example, a genetic study involving newly discovered biomarkers or genes. Below we discuss a more flexible method that could be used to combine published information and individual study data for enhanced inference in such cases. Chatterjee et al. (Citation2016) discussed a related problem on the utilization of auxiliary information. As Han and Lawless (Citation2016) pointed out, however, their methodology and theoretical results had already been developed by Imbens and Lancaster (Citation1994) and Qin (Citation2000) in the absence of selection bias in sampling.

We consider two cases. (I) The sample size for the summarized information is much larger than that of the new study. (II) Sample sizes from the two data sources are comparable. In Case I, we can treat the summarized information as known, i.e., the variation in the summarized data is negligible compared with the variation in the new study. In Case II, we have to take the variation in the summarized information into consideration as it is comparable to the variation in the new study. We focus on Case I in this section and study Case II in Section 6.

5.1. Setup and solution

Suppose that the summarized results were obtained from statistical analyses of a response $Y$ and covariates $X$ (although the original data are not available), and that the new study includes an extra covariate $Z$ in addition to $(Y,X)$. We are interested in fitting a parametric model $f(y|x,z,\beta)$ for the conditional density function of $Y$ given $X$ and $Z$. Let $(y_1^*,x_1^*),\ldots,(y_N^*,x_N^*)$ denote the historical data, even though they are unavailable. The published information can be summarized in two ways:

  (I) $\bar h = N^{-1}\sum_{i=1}^N h(y_i^*,x_i^*)$ is known; and

  (II) $\gamma$ is the solution of an estimating equation $\sum_{i=1}^N h(y_i^*,x_i^*,\gamma)=0$, where the function $h(y,x,\gamma)$ is known up to $\gamma$.

Let $(y_1,x_1,z_1),\ldots,(y_n,x_n,z_n)$ be the observed data from the new study. The basic assumption is that $(y_i,x_i)$, $i=1,2,\ldots,n$, and $(y_i^*,x_i^*)$ have the same distribution. To utilize the summarized information, we can define the estimating functions $g=(g_1^\top,g_3^\top)^\top$ with
\[ g_1(y,x,z) = \nabla_\beta\log f(y|x,z,\beta), \qquad g_3(y,x) = h(y,x) - \bar h \]
in Scenario (I), and $g=(g_1^\top,g_3^\top)^\top$ with
\[ g_1(y,x,z) = \nabla_\beta\log f(y|x,z,\beta), \qquad g_3(y,x) = h(y,x,\gamma) \]
in Scenario (II). We consider only the situation where $n/N\to 0$; in other words, the variation in the auxiliary information is negligible.

The EL approach amounts to maximizing $\sum_{i=1}^n \log p_i$ subject to the constraints
\[ \sum_{i=1}^n p_i\, g(y_i,x_i,z_i,\beta) = 0, \qquad p_i\ge 0, \qquad \sum_{i=1}^n p_i = 1. \]
According to Qin and Lawless (Citation1994), the asymptotic variance of the maximum EL estimator $\hat\beta$ based on the estimating function $g$ is $[E(\nabla_\beta g)^\top\{E(gg^\top)\}^{-1}E(\nabla_\beta g)]^{-1}$, where $\nabla_\beta g = \partial g(y,x,z,\beta)/\partial\beta|_{\beta=\beta_0}$, $g = g(y,x,z,\beta_0)$, and $\beta_0$ is the true value of $\beta$. We denote
\[ A = E(gg^\top) = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad A_{22\cdot 1} = A_{22} - A_{21}A_{11}^{-1}A_{12}. \]
Equivalently, the asymptotic variance can be written as
\[ \bigl[E(\nabla_\beta g_1)^\top A_{11}^{-1}E(\nabla_\beta g_1) + E(\nabla_\beta g_1)^\top A_{11}^{-1}A_{12}\, A_{22\cdot 1}^{-1}\, A_{21} A_{11}^{-1} E(\nabla_\beta g_1)\bigr]^{-1}, \]
or $(J + A_{12}A_{22\cdot 1}^{-1}A_{21})^{-1}$, where $A_{11}=J$ is Fisher's information matrix.

In the above approach, the estimating function $g_3 = h(y,x)-\bar h$ does not involve the parameter $\beta$. However, there are ways to achieve higher efficiency. For example, we can define $g_2(x,z,\beta)=\psi(x,z,\beta)$ with
\[ \psi(x,z,\beta) = E\{h(Y,X)\,|\,X=x,Z=z\} - \bar h = \int h(y,x)\, f(y|x,z,\beta)\,dy - \bar h. \]
Then, $E\{g_2(X,Z,\beta)\}=0$. If we combine the empirical log-likelihood based on the estimating function $g_2$ and the log-likelihood $\sum_{i=1}^n \log f(y_i|x_i,z_i,\beta)$ as in the previous section (see Equation (12)), then the asymptotic variance of the resulting MLE $\hat\beta$ is
\[ \{J + E(\nabla_\beta\psi)^\top (E\psi\psi^\top)^{-1} E(\nabla_\beta\psi)\}^{-1}. \]
In general, this approach achieves better efficiency.

5.2. A comparison

Given two pairs of estimating functions, $\{g_1,g_3\}$ and $\{g_1,g_2\}$, we may wonder which pair leads to a better estimator if we directly compare their asymptotic variance formulae. Alternatively, we may ask whether we should combine all three constraints $g=(g_1^\top,g_2^\top,g_3^\top)^\top$ together. Write $g_{12}=(g_1^\top,g_2^\top)^\top$, $a=E\{\nabla_\beta\log f(y|x,z,\beta)\, h^\top(y,x)\}$, and
\[ E(gg^\top) = \begin{pmatrix} J & 0 & a \\ 0 & E(\psi\psi^\top) & E(\psi\psi^\top) \\ a^\top & E(\psi\psi^\top) & E(hh^\top) \end{pmatrix}
 = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \qquad
 B_{11} = \begin{pmatrix} J & 0 \\ 0 & E(\psi\psi^\top) \end{pmatrix}, \qquad
 B_{12} = \begin{pmatrix} a \\ E(\psi\psi^\top) \end{pmatrix}. \]
Using results from Qin and Lawless (Citation1994) and
\[ \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}^{-1}
 = \begin{pmatrix} I & -B_{11}^{-1}B_{12} \\ 0 & I \end{pmatrix}
   \begin{pmatrix} B_{11}^{-1} & 0 \\ 0 & B_{22\cdot 1}^{-1} \end{pmatrix}
   \begin{pmatrix} I & 0 \\ -B_{21}B_{11}^{-1} & I \end{pmatrix} \]
with $B_{22\cdot 1} = B_{22} - B_{21}B_{11}^{-1}B_{12}$, we find that the asymptotic variance of $\hat\beta$ obtained by combining the three estimating functions and $\sum_{i=1}^n\log f(y_i|x_i,z_i,\beta)$ is
\[ \bigl[J + E(\nabla_\beta\psi)^\top\{E(\psi\psi^\top)\}^{-1}E(\nabla_\beta\psi)
 + E(\nabla_\beta g_{12})^\top B_{11}^{-1}B_{12}\, B_{22\cdot 1}^{-1}\, B_{21}B_{11}^{-1}E(\nabla_\beta g_{12})\bigr]^{-1}. \]
It can be shown that $E(\nabla_\beta g)^\top = (-J, a, 0)$ and $E(\nabla_\beta g_{12})^\top = (-J, a)$. Immediately, we have
\[ E(\nabla_\beta g_{12})^\top B_{11}^{-1}B_{12}
 = (-J,\ a)\begin{pmatrix} J^{-1} & 0 \\ 0 & \{E(\psi\psi^\top)\}^{-1}\end{pmatrix}\begin{pmatrix} a \\ E(\psi\psi^\top)\end{pmatrix} = 0, \]
which implies that the asymptotic variance in the case where $g_1$, $g_2$, and $g_3$ are combined is the same as that in the case where only $g_1$ and $g_2$ are combined. This indicates that taking $g_3$ into account leads to no efficiency gain in the estimation of $\beta$.

The method of combining $g_2$ and the parametric likelihood $\prod_{i=1}^n f(y_i|x_i,z_i,\beta)$ is better than that of combining $g_1$, $g_3$, and the parametric likelihood. To see this, recall that the asymptotic variances of the MLEs of $\beta$ under the two methods are
\[ V_1 = \{J + E(\nabla_\beta\psi)^\top (E\psi\psi^\top)^{-1} E(\nabla_\beta\psi)\}^{-1}
 \qquad\text{and}\qquad
 V_2 = (J + A_{12}A_{22\cdot 1}^{-1}A_{21})^{-1}. \]
It suffices to show that $V_2 - V_1 \ge 0$, namely, that $V_2-V_1$ is non-negative definite.

5.3. Proof of $V_2-V_1\ge 0$

For convenience, we assume that $E(h)=0$. As $E(\nabla_\beta\psi)=A_{21}$ and $\psi=E(h|X,Z)$, it suffices to show that
\[ A_{22\cdot 1} - E(\psi\psi^\top) = (A_{22} - A_{21}A_{11}^{-1}A_{12}) - E\{E(h|X,Z)E(h|X,Z)^\top\} \ge 0. \tag{13} \]
Let $E^*$ and $\mathrm{Var}^*$ denote $E(\cdot|X,Z)$ and $\mathrm{Var}(\cdot|X,Z)$, respectively. As
\[ \begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}
 = E\Bigl\{\begin{pmatrix} g_1\\ h\end{pmatrix}\begin{pmatrix} g_1\\ h\end{pmatrix}^\top\Bigr\}
 = E\Bigl\{\mathrm{Var}^*\begin{pmatrix} g_1\\ h\end{pmatrix}\Bigr\}
 + \mathrm{Var}\Bigl\{E^*\begin{pmatrix} g_1\\ h\end{pmatrix}\Bigr\} \]
and $E^*(g_1)=0$, it follows that
\[ \begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}
 \ge \mathrm{Var}\Bigl\{E^*\begin{pmatrix} g_1\\ h\end{pmatrix}\Bigr\}
 = \begin{pmatrix} 0 & 0\\ 0 & E\{E^*(h)E^*(h)^\top\}\end{pmatrix}. \]
Multiplying both sides by $(-A_{21}A_{11}^{-1},\ I)$ from the left and by $(-A_{21}A_{11}^{-1},\ I)^\top$ from the right, we arrive at
\[ A_{22} - A_{21}A_{11}^{-1}A_{12} \ge E\{E^*(h)E^*(h)^\top\}, \]
that is, inequality (13) holds, which implies $V_2 - V_1 \ge 0$.

6. Calibration of information from previous studies

We consider calibration of information using parametric likelihood, EL (Owen, Citation1988), and GMM (Hansen, Citation1982). When only summary information from previous studies is available, these three well-known methods can be used to calibrate such summary information and to make inferences about the unknown parameters of interest. We may wonder whether doing so results in efficiency loss compared with inferences based on the pooled data if they were all available. Zeng and Lin (Citation2015) found that parametric-likelihood-based meta-analysis of summarized information retained first-order asymptotic efficiency compared with analysis based on individual data. We show here that EL and GMM also possess this property. This is extremely important, as individual data may involve privacy issues, whereas summarized information does not.

6.1. Efficiency comparison

Suppose that (Yij,Xij) (j=1,2,,ni;i=1,2,,K) are independent observations from the same population. We consider two scenarios according to the model's assumption about the population.

  (I) The conditional probability function (i.e., the probability density/mass function of a continuous/discrete random variable) of $Y$ given $X$ has a parametric form $f(y|x,\beta)$.

  (II) The population satisfies $E\{g(Y,X,\beta)\}=0$.

Here, $\beta$ is a finite-dimensional unknown parameter, and $\beta^*$ is its true value. Assume that data are available batch by batch, and that $n_i/n = \rho_i \in (0,1)$, where $n=\sum_{i=1}^K n_i$. For the $i$-th batch $(i=1,2,\ldots,K)$ of data:

  (a) under assumption (I), the parametric log-likelihood function of $\beta$ is $\ell_{i,PL}(\beta)=\sum_{j=1}^{n_i}\log\{f(Y_{ij}|X_{ij},\beta)\}$;

  (b) under assumption (II), we define an empirical log-likelihood function
\[ \ell_{i,EL}(\beta) = \sup\Bigl\{\sum_{j=1}^{n_i}\log(n_i p_j): p_j\ge 0,\ \sum_{j=1}^{n_i}p_j = 1,\ \sum_{j=1}^{n_i}p_j\, g(Y_{ij},X_{ij};\beta)=0\Bigr\}
 = -\sum_{j=1}^{n_i}\log\{1+\lambda_i^\top g(Y_{ij},X_{ij};\beta)\}, \]
where $\lambda_i$ satisfies $\sum_{j=1}^{n_i} g(Y_{ij},X_{ij};\beta)/\{1+\lambda_i^\top g(Y_{ij},X_{ij};\beta)\}=0$;

  (c) under assumption (II), we define the objective function of the GMM method (GMM log-likelihood for short) as
\[ \ell_{i,GMM}(\beta) = -\Bigl\{\sum_{j=1}^{n_i}g(Y_{ij},X_{ij};\beta)\Bigr\}^\top \Omega^{-1}\Bigl\{\sum_{j=1}^{n_i}g(Y_{ij},X_{ij};\beta)\Bigr\}, \]
where $\Omega=\mathrm{Var}\{g(Y,X,\beta^*)\}$ and $\beta^*$ is the true value of $\beta$. In practice, $\beta^*$ is generally replaced by a consistent estimator of $\beta$ in the expression for $\Omega$; using the true value $\beta^*$ does not affect the theoretical analysis presented in this section.

Let $\ell_i(\beta) = \ell_{i,PL}(\beta)$, $\ell_{i,EL}(\beta)$, or $\ell_{i,GMM}(\beta)$. Under certain regularity conditions, it can be verified that for $\beta = \beta^* + O_p(n^{-1/2})$,
\[ \ell_i(\beta) = U_i^\top\sqrt{n_i}\,(\beta-\beta^*) - \frac{n_i}{2}(\beta-\beta^*)^\top V(\beta-\beta^*) + o_p(1). \tag{14} \]
In Case (a),
\[ U_i = n_i^{-1/2}\sum_{j=1}^{n_i}\nabla_\beta\log\{f(Y_{ij}|X_{ij},\beta^*)\}, \qquad V = \mathrm{Var}[\nabla_\beta\log\{f(Y|X,\beta^*)\}]. \]
In Case (b),
\[ U_i = A_{12}A_{22}^{-1}\, n_i^{-1/2}\sum_{j=1}^{n_i} g(Y_{ij},X_{ij};\beta^*), \qquad V = A_{12}A_{22}^{-1}A_{21}, \]
where
\[ A = \begin{pmatrix} 0 & E\{\nabla_\beta g(Y,X;\beta^*)\}^\top \\ E\{\nabla_\beta g(Y,X;\beta^*)\} & E\{g(Y,X;\beta^*)\, g^\top(Y,X;\beta^*)\} \end{pmatrix}
 \equiv \begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}. \]
In Case (c),
\[ U_i = \{E\nabla_\beta g(Y,X,\beta^*)\}^\top\Omega^{-1}\, n_i^{-1/2}\sum_{j=1}^{n_i} g(Y_{ij},X_{ij},\beta^*), \qquad
   V = \{E\nabla_\beta g(Y,X,\beta^*)\}^\top\Omega^{-1}\{E\nabla_\beta g(Y,X,\beta^*)\}. \]
We denote the MLE of $\beta$ based on the $i$-th batch of data by $\hat\beta_i = \arg\max_\beta \ell_i(\beta)$. The above approximation implies that
\[ \sqrt{n_i}\,(\hat\beta_i - \beta^*) = V^{-1}U_i + o_p(1) \stackrel{d}{\to} N(0, V^{-1}). \]
When the $K$-th batch of individual data becomes available, we no longer have access to the individual data of the previous $K-1$ batches but only to the summarized information $(\hat\beta_i,\hat\Sigma_i)$, $i=1,2,\ldots,K-1$, where $\hat\beta_i$ is the MLE based on the $i$-th batch of data and $\hat\Sigma_i = V^{-1}/n_i + o(n^{-1})$. We can then define an augmented log-likelihood
\[ \ell_A(\beta) = \ell_K(\beta) - \frac{1}{2}\sum_{i=1}^{K-1}(\hat\beta_i-\beta)^\top\hat\Sigma_i^{-1}(\hat\beta_i-\beta) \]
and the corresponding MLE $\hat\beta_A = \arg\max_\beta \ell_A(\beta)$. For $\beta = \beta^* + O_p(n^{-1/2})$, using the approximation in (14), we have
\[ \ell_A(\beta) = U_K^\top\sqrt{n_K}\,(\beta-\beta^*) - \frac{n_K}{2}(\beta-\beta^*)^\top V(\beta-\beta^*)
 - \frac{1}{2}\sum_{i=1}^{K-1} n_i(\beta-\beta^*)^\top V(\beta-\beta^*)
 + \sum_{i=1}^{K-1} n_i(\hat\beta_i-\beta^*)^\top V(\beta-\beta^*) + C + o_p(1) \]
\[ \phantom{\ell_A(\beta)} = \Bigl\{\sum_{i=1}^K \sqrt{n_i}\,U_i\Bigr\}^\top(\beta-\beta^*) - \frac{n}{2}(\beta-\beta^*)^\top V(\beta-\beta^*) + C + o_p(1), \]
where the constant $C$ differs from equation to equation.

For comparison, based on the pooled data, in Case (a) we define the parametric log-likelihood as $\ell_{PL}(\beta) = \sum_{i=1}^K\sum_{j=1}^{n_i}\log\{f(Y_{ij}|X_{ij},\beta)\}$; in Case (b) we define the empirical log-likelihood function as
\[ \ell_{EL}(\beta) = \sup\Bigl\{\sum_{i=1}^K\sum_{j=1}^{n_i}\log(n p_{ij}): p_{ij}\ge 0,\ \sum_{i=1}^K\sum_{j=1}^{n_i}p_{ij}=1,\ \sum_{i=1}^K\sum_{j=1}^{n_i}p_{ij}\, g(Y_{ij},X_{ij};\beta)=0\Bigr\}
 = -\sum_{i=1}^K\sum_{j=1}^{n_i}\log\{1+\lambda^\top g(Y_{ij},X_{ij};\beta)\}, \]
where $\lambda$ satisfies $\sum_{i=1}^K\sum_{j=1}^{n_i} g(Y_{ij},X_{ij};\beta)/\{1+\lambda^\top g(Y_{ij},X_{ij};\beta)\}=0$; and in Case (c) we define the GMM log-likelihood as
\[ \ell_{GMM}(\beta) = -\Bigl\{\sum_{i=1}^K\sum_{j=1}^{n_i}g(Y_{ij},X_{ij};\beta)\Bigr\}^\top\Omega^{-1}\Bigl\{\sum_{i=1}^K\sum_{j=1}^{n_i}g(Y_{ij},X_{ij};\beta)\Bigr\}. \]
Let the log-likelihood based on the pooled data be $\ell_{pooled}(\beta) = \ell_{PL}(\beta)$, $\ell_{EL}(\beta)$, or $\ell_{GMM}(\beta)$ in Cases (a), (b), and (c), respectively. Then, it can be shown that
\[ \ell_{pooled}(\beta) = \Bigl\{\sum_{j=1}^K\sqrt{n_j}\,U_j\Bigr\}^\top(\beta-\beta^*) - \frac{n}{2}(\beta-\beta^*)^\top V(\beta-\beta^*) + C + o_p(1) \]
for some constant $C$. Let $\hat\beta_{pooled} = \arg\max_\beta \ell_{pooled}(\beta)$. By comparing $\ell_{pooled}(\beta)$ and $\ell_A(\beta)$, we obtain $\ell_{pooled}(\beta) = \ell_A(\beta) + C + o_p(1)$ and
\[ \sqrt{n}\,(\hat\beta_A-\beta^*) = \sqrt{n}\,(\hat\beta_{pooled}-\beta^*) + o_p(1) = V^{-1} n^{-1/2}\sum_{j=1}^K\sqrt{n_j}\,U_j + o_p(1) \stackrel{d}{\to} N(0, V^{-1}). \]
This indicates that, compared with the methods based on all individual data (parametric likelihood, EL, or GMM), the calibration method based on the last batch of individual data and the summary results of the previous batches incurs no efficiency loss.
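
A compact sketch of this calibration idea for logistic regression: earlier batches are reduced to $(\hat\beta_i,\hat\Sigma_i)$ and the last batch's log-likelihood is augmented by the quadratic calibration terms. The data-generating settings are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def logistic_negloglik(beta, X, y):
    """Negative log-likelihood of a logistic regression with design X (n x p)."""
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

def fit_batch(X, y):
    """Per-batch MLE and estimated covariance (inverse observed information)."""
    p = X.shape[1]
    beta_hat = minimize(logistic_negloglik, np.zeros(p), args=(X, y), method="BFGS").x
    mu = 1.0 / (1.0 + np.exp(-X @ beta_hat))
    info = X.T @ (X * (mu * (1 - mu))[:, None])          # observed information
    return beta_hat, np.linalg.inv(info)

def calibrated_fit(X_last, y_last, summaries):
    """Maximise the augmented log-likelihood: last-batch likelihood plus
    quadratic calibration terms from previous-batch summaries (beta_i, Sigma_i)."""
    def neg_aug(beta):
        val = logistic_negloglik(beta, X_last, y_last)
        for b_i, S_i in summaries:
            d = b_i - beta
            val += 0.5 * d @ np.linalg.solve(S_i, d)
        return val
    p = X_last.shape[1]
    return minimize(neg_aug, np.zeros(p), method="BFGS").x

# Hypothetical streaming example with three batches and a common beta
rng = np.random.default_rng(5)
beta_true = np.array([0.5, -1.0])
batches = []
for n_i in (400, 400, 400):
    X = np.column_stack([np.ones(n_i), rng.normal(size=n_i)])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))
    batches.append((X, y))
summaries = [fit_batch(X, y) for X, y in batches[:-1]]   # keep only summary statistics
print(calibrated_fit(*batches[-1], summaries))           # close to the pooled MLE
```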

6.2. When nuisance parameters are present

For batch $i$, assume that the data $(Y_{ij},X_{ij})$ $(j=1,2,\ldots,n_i)$ satisfy either $\mathrm{pr}(Y_{ij}=y|X_{ij}=x)=f(y|x,\beta,\gamma_i)$ or $E\{g(Y,X,\beta,\gamma_i)\}=0$, where $\beta$ is common across batches but $\gamma_i$ is a batch-specific parameter. We define $\ell_i(\beta,\gamma_i)$ in the same way as $\ell_i(\beta)$. Let $(\hat\beta_i,\hat\gamma_i)$ be the MLE of $(\beta,\gamma_i)$ based on the $i$-th batch of data, and assume that approximately
\[ \bigl((\hat\beta_i-\beta)^\top,(\hat\gamma_i-\gamma_i)^\top\bigr)^\top \sim N(0,\hat\Sigma_i), \qquad \hat\Sigma_i = (\hat\Sigma_{i,rs})_{1\le r,s\le 2}. \]

We have two ways of combining information from previous studies. If we use all the previous summary information, we can define
\[ \ell_A^{(1)}(\beta,\gamma_1,\ldots,\gamma_K) = \ell_K(\beta,\gamma_K)
 - \frac{1}{2}\sum_{i=1}^{K-1}\bigl((\hat\beta_i-\beta)^\top,(\hat\gamma_i-\gamma_i)^\top\bigr)\hat\Sigma_i^{-1}\bigl((\hat\beta_i-\beta)^\top,(\hat\gamma_i-\gamma_i)^\top\bigr)^\top. \tag{15} \]
As $\hat\beta_i\mid\hat\gamma_i \sim N(\beta, \hat\Sigma_{i,11\cdot 2})$, where $\hat\Sigma_{i,11\cdot 2} = \hat\Sigma_{i,11} - \hat\Sigma_{i,12}\hat\Sigma_{i,22}^{-1}\hat\Sigma_{i,21}$, using only this summary information we can define
\[ \ell_A^{(2)}(\beta,\gamma_K) = \ell_K(\beta,\gamma_K) - \frac{1}{2}\sum_{i=1}^{K-1}(\hat\beta_i-\beta)^\top\hat\Sigma_{i,11\cdot 2}^{-1}(\hat\beta_i-\beta). \]
Below we show that the MLEs of $\beta$ based on these two likelihoods are actually equal to each other. In other words, there is no efficiency loss when estimating $\beta$ based on $\ell_A^{(2)}(\beta,\gamma_K)$ instead of $\ell_A^{(1)}(\beta,\gamma_1,\ldots,\gamma_K)$.

To see this, it suffices to show that
\[ \sup_{\gamma_1,\ldots,\gamma_{K-1}} \ell_A^{(1)}(\beta,\gamma_1,\ldots,\gamma_K) = \ell_A^{(2)}(\beta,\gamma_K). \tag{16} \]
We denote the inverse matrix of $\hat\Sigma_i$ by $\hat\Sigma_i^{-1} = (\hat\Sigma_i^{rs})_{1\le r,s\le 2}$, where
\[ \hat\Sigma_i^{11} = \hat\Sigma_{i,11\cdot 2}^{-1}, \qquad
   \hat\Sigma_i^{21} = -\hat\Sigma_{i,22}^{-1}\hat\Sigma_{i,21}\hat\Sigma_{i,11\cdot 2}^{-1}, \qquad
   \hat\Sigma_i^{12} = -\hat\Sigma_{i,11\cdot 2}^{-1}\hat\Sigma_{i,12}\hat\Sigma_{i,22}^{-1}, \qquad
   \hat\Sigma_i^{22} = \hat\Sigma_{i,22}^{-1} + \hat\Sigma_{i,22}^{-1}\hat\Sigma_{i,21}\hat\Sigma_{i,11\cdot 2}^{-1}\hat\Sigma_{i,12}\hat\Sigma_{i,22}^{-1}. \]
It can be seen that
\[ \ell_A^{(1)}(\beta,\gamma_1,\ldots,\gamma_K) = \ell_K(\beta,\gamma_K)
 - \frac{1}{2}\sum_{i=1}^{K-1}(\hat\beta_i-\beta)^\top\hat\Sigma_i^{11}(\hat\beta_i-\beta)
 + \sum_{i=1}^{K-1}(\hat\beta_i-\beta)^\top\hat\Sigma_i^{12}(\gamma_i-\hat\gamma_i)
 - \frac{1}{2}\sum_{i=1}^{K-1}(\gamma_i-\hat\gamma_i)^\top\hat\Sigma_i^{22}(\gamma_i-\hat\gamma_i). \]
Setting $\partial\ell_A^{(1)}(\beta,\gamma_1,\ldots,\gamma_K)/\partial\gamma_i = 0$ $(1\le i\le K-1)$ gives $(\gamma_i-\hat\gamma_i) = (\hat\Sigma_i^{22})^{-1}\hat\Sigma_i^{21}(\hat\beta_i-\beta)$. Substituting this back into $\ell_A^{(1)}(\beta,\gamma_1,\ldots,\gamma_K)$ gives
\[ \sup_{\gamma_1,\ldots,\gamma_{K-1}}\ell_A^{(1)}(\beta,\gamma_1,\ldots,\gamma_K)
 = \ell_K(\beta,\gamma_K) - \frac{1}{2}\sum_{i=1}^{K-1}(\hat\beta_i-\beta)^\top\{\hat\Sigma_i^{11} - \hat\Sigma_i^{12}(\hat\Sigma_i^{22})^{-1}\hat\Sigma_i^{21}\}(\hat\beta_i-\beta) + C
 = \ell_K(\beta,\gamma_K) - \frac{1}{2}\sum_{i=1}^{K-1}(\hat\beta_i-\beta)^\top\hat\Sigma_{i,11\cdot 2}^{-1}(\hat\beta_i-\beta) + C, \]
where we used the definition of $\hat\Sigma_{i,11\cdot 2}$ in the last equation. We arrive at Equation (16) after comparing this with the definition of $\ell_A^{(2)}(\beta,\gamma_K)$.
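
The partitioned-inverse identities quoted above can be checked numerically; the following small sketch does so for an arbitrary positive-definite matrix.

```python
import numpy as np

# Small numerical check of the partitioned-inverse identities quoted above,
# using an arbitrary symmetric positive-definite matrix Sigma.
rng = np.random.default_rng(6)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)
p = 2                                            # dimension of the beta block
S11, S12 = Sigma[:p, :p], Sigma[:p, p:]
S21, S22 = Sigma[p:, :p], Sigma[p:, p:]
S11_2 = S11 - S12 @ np.linalg.solve(S22, S21)    # Sigma_{11.2}

Inv = np.linalg.inv(Sigma)
I11, I21, I22 = Inv[:p, :p], Inv[p:, :p], Inv[p:, p:]

print(np.allclose(I11, np.linalg.inv(S11_2)))                                 # Sigma^{11}
print(np.allclose(I21, -np.linalg.solve(S22, S21) @ np.linalg.inv(S11_2)))    # Sigma^{21}
print(np.allclose(I22, np.linalg.inv(S22)
                  + np.linalg.solve(S22, S21) @ np.linalg.inv(S11_2)
                  @ S12 @ np.linalg.inv(S22)))                                # Sigma^{22}
```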

7. Using covariate-specific disease prevalent information

As discussed in the previous section, summarized statistics from previous studies can sometimes be utilized to enhance the estimation efficiency in a current study. This is especially important in the big data era, when many types of information can be found through the internet. More specifically, suppose the prevalence of a disease is known at various levels of a known risk factor X. In this section, we combine this type of information in a case–control biased sampling setup.

7.1. Induced estimating equations under case–control sampling

Case–control sampling is among the most popular methods in cancer epidemiological studies, mainly because it is convenient, economical, and effective. In the study of rare diseases in particular, one would have to collect a very large sample in order to obtain a reasonable number of cases under prospective sampling, which may not be practical. Using case–control sampling, a pre-specified number of cases ($n_1$) and controls ($n_0$) are collected retrospectively from the case and control populations, respectively. Typically, this can be accomplished by sampling cases from hospitals and controls from the general disease-free population.

For a given risk factor $X$, let $F_i(x) = \mathrm{pr}(X\le x|D=i)$ for $i=0,1$. Given $X$ in a range $(a,b]$, the disease prevalence is $\mathrm{pr}(D=1|a<X\le b)=\phi(a,b)$, where $\phi(a,b)$ is known. Using Bayes' formula, we have
\[ \phi(a,b) = \frac{\pi\int_a^b dF_1(x)}{\mathrm{pr}(a<X\le b)}, \qquad 1-\phi(a,b) = \frac{(1-\pi)\int_a^b dF_0(x)}{\mathrm{pr}(a<X\le b)}, \]
with $\pi = \mathrm{pr}(D=1)$. It follows that
\[ \int_a^b dF_1(x) = \frac{1-\pi}{\pi}\,\frac{\phi(a,b)}{1-\phi(a,b)}\int_a^b dF_0(x),
 \quad\text{or}\quad
 E_1\{I(a<X\le b)\} = \frac{1-\pi}{\pi}\,\frac{\phi(a,b)}{1-\phi(a,b)}\,E_0\{I(a<X\le b)\}, \]
where $E_0$ and $E_1$ denote the expectation operators with respect to $F_0$ and $F_1$, respectively.

We assume that given covariates $X$ and $Y$, the underlying disease model is the conventional logistic regression
\[ \mathrm{pr}(D=1|x,y) = \frac{\exp(\alpha + x\beta + y\gamma + xy\xi)}{1+\exp(\alpha + x\beta + y\gamma + xy\xi)}. \tag{17} \]
Let $\alpha^* = \alpha + \eta$ with $\eta = \log\{(1-\pi)/\pi\}$. It can be shown (see Qin, Citation2017) that (17) is equivalent to the exponential tilting model
\[ f_1(x,y) = f(x,y|D=1) = \exp(\alpha^* + x\beta + y\gamma + xy\xi)\, f_0(x,y), \]
where $f_0(x,y)=f(x,y|D=0)$. As a consequence,
\[ E_0\{I(a<X\le b)\exp(\eta + \alpha + \beta X + \gamma Y + \xi XY)\}
 = \frac{1-\pi}{\pi}\,\frac{\phi(a,b)}{1-\phi(a,b)}\,E_0\{I(a<X\le b)\}, \]
or
\[ E_0\Bigl\{I(a<X\le b)\exp(\alpha + \beta X + \gamma Y + \xi XY) - \frac{\phi(a,b)}{1-\phi(a,b)}\,I(a<X\le b)\Bigr\} = 0. \tag{18} \]
We denote $g_0(X,Y) = \exp(\eta + \alpha + \beta X + \gamma Y + \xi XY) - 1$ and the auxiliary-information estimating functions
\[ g_i(X,Y) = I(a_{i-1}<X\le a_i)\exp(\alpha + \beta X + \gamma Y + \xi XY)
 - \frac{\phi(a_{i-1},a_i)}{1-\phi(a_{i-1},a_i)}\,I(a_{i-1}<X\le a_i), \qquad i=1,2,\ldots,I. \]
Then $E_0\{g(X,Y)\}=0$, where $g(X,Y) = \bigl(g_0(X,Y), g_1(X,Y),\ldots,g_I(X,Y)\bigr)^\top$.

7.2. Empirical likelihood approach

The log-likelihood is
\[ \ell = \sum_{i=1}^n d_i(\eta + \alpha + \beta x_i + \gamma y_i + \xi x_i y_i) + \sum_{i=1}^n \log(p_i), \tag{19} \]
where $p_i = dF_0(x_i,y_i)$, $i=1,2,\ldots,n$, and the constraints are
\[ p_i\ge 0, \qquad \sum_{i=1}^n p_i = 1, \qquad \sum_{i=1}^n p_i\, g(x_i,y_i) = 0. \]
The profile log-likelihood is
\[ \ell = \sum_{i=1}^n d_i(\eta + \alpha + \beta x_i + \gamma y_i + \xi x_i y_i) - \sum_{i=1}^n \log\{1+\lambda^\top g(x_i,y_i)\}, \]
where the Lagrange multiplier $\lambda$ is determined by $\sum_{i=1}^n g(x_i,y_i)/\{1+\lambda^\top g(x_i,y_i)\}=0$. Finally, the underlying parameters can be estimated by maximizing $\ell$.

If the overall disease prevalence $\pi=\mathrm{pr}(D=1)$ is known, then $\eta=\log\{(1-\pi)/\pi\}$ is known. On the other hand, if it is unknown but $I\ge 1$, then $\pi$ is identifiable. If $I>1$, then we have an over-identified estimating equation problem. This can be treated as a generalization of the EL method for estimating functions (Qin & Lawless, Citation1994) to biased sampling problems. Qin et al. (Citation2015) considered the case where $\eta$ is unknown and $I\ge 1$.

Let $\omega = (\eta,\alpha,\beta,\gamma,\xi,\lambda^\top)^\top$ and let $\hat\omega$ be its maximum EL estimator. While the first estimating function $g_0$ corrects the biased sampling in a case–control study, the remaining estimating functions $g_1,\ldots,g_I$ are used to improve efficiency. When $n$ goes to infinity, it can be shown that the limit of $\lambda$ is an $(I+1)$-dimensional vector whose first component is $\lim_{n\to\infty}(n_1/n)$ and whose remaining components are all zero. Qin et al. (Citation2015) showed that if $\rho = n_1/n_0$ remains constant as $n\to\infty$, with $\rho\in(0,1)$, then under suitable regularity conditions $\sqrt{n}(\hat\omega-\omega_0)$ is asymptotically normally distributed with mean zero. Moreover, the estimation of the logistic regression parameters $(\beta,\gamma,\xi)$ improves as the number $I$ of estimating functions increases. This means that a richer set of auxiliary information leads to better estimators. In practice, however, this gain must be balanced against the numerical difficulty of solving a larger number of equations.

Notably, the auxiliary information is informative for estimating $\beta$ and $\xi$ but not for estimating $\gamma$. This can be seen from the following identity, obtained with the change of variable $s=\gamma y$:
\[ \int I(a<x\le b)\exp(\alpha+\beta x+\gamma y+\xi xy)\,dF_0(x,y)
 = \int I(a<x\le b)\exp\{\alpha+\beta x+ s+\xi x(s/\gamma)\}\,dF_0(x,s/\gamma). \]
As the underlying distribution $F_0(x,y)$ is unspecified, we can treat $F_0(x,s/\gamma)$ as a new underlying distribution $F_0^*(x,s)$. With $F_0$ profiled out, the auxiliary-information equation does not involve $\gamma$ if $\xi=0$. Hence, even if $\xi\ne 0$, the information for $\gamma$ is minimal, as $\gamma$ and $\xi$ cannot be separated.

7.3. Generalizations

The simulation results of Qin et al. (Citation2015) indicate that when covariate-specific auxiliary information is employed, the estimator of the coefficient $\beta$ of $X$ enjoys the largest variance reduction, whereas the variance reductions for the other coefficients are small. If auxiliary information of the form $\mathrm{pr}(D=1|b_{j-1}<Y\le b_j)=\psi_j=\psi(b_{j-1},b_j)$, $j=1,2,\ldots,J$, is also available, we can combine it through the estimating functions
\[ g_i(X,Y) = I(a_{i-1}<X\le a_i)\,e^{\alpha+\beta X+\gamma Y+\xi XY} - \frac{\phi(a_{i-1},a_i)}{1-\phi(a_{i-1},a_i)}\,I(a_{i-1}<X\le a_i), \]
\[ h_j(X,Y) = I(b_{j-1}<Y\le b_j)\,e^{\alpha+\beta X+\gamma Y+\xi XY} - \frac{\psi(b_{j-1},b_j)}{1-\psi(b_{j-1},b_j)}\,I(b_{j-1}<Y\le b_j). \]
It would be even more informative if auxiliary information of the form $\mathrm{pr}(D=1|a<X\le b, c<Y\le d)$ were available.

7.4. More on the use of auxiliary information

Under a logistic regression model, the case and control densities are linked by the exponential tilting model
\[ \mathrm{pr}(x,y|D=1) = \mathrm{pr}(x,y|D=0)\exp(\alpha^* + x\beta + y\gamma + \xi xy). \tag{20} \]
Suppose that for the general population $E(X)=\mu_1$, $E(Y)=\mu_2$, and $E(XY)=\mu_3$ are all known, and that $\pi=\mathrm{pr}(D=1)$ is known or can be estimated using external data. Under the exponential tilting model (20), the density $\mathrm{pr}(x,y)$ in the general population and the density $\mathrm{pr}(x,y|D=0)$ in the control population are linked by
\[ \mathrm{pr}(x,y) = \{\pi e^{\alpha^*+x\beta+y\gamma+\xi xy} + (1-\pi)\}\,\mathrm{pr}(x,y|D=0). \]
As a consequence,
\[ E(X) = E_0[X\{\pi e^{\alpha^*+X\beta+Y\gamma+\xi XY} + (1-\pi)\}] = \mu_1, \]
where $E_0$ denotes expectation with respect to $\mathrm{pr}(x,y|D=0)$. Let $h(x,y) = (x-\mu_1,\, y-\mu_2,\, xy-\mu_3)^\top$ with known $\mu_1$, $\mu_2$, and $\mu_3$. The log-likelihood under case–control data is still (19), where the $p_i$'s satisfy the following constraints:
\[ \sum_{i=1}^n p_i = 1, \qquad p_i\ge 0, \qquad \sum_{i=1}^n p_i\, e^{\alpha^*+x_i\beta+y_i\gamma+\xi x_i y_i} = 1, \qquad
   \sum_{i=1}^n p_i\, h(x_i,y_i)\{\pi e^{\alpha^*+x_i\beta+y_i\gamma+\xi x_i y_i} + (1-\pi)\} = 0. \]
More generally, any information about the general population of the form $E\{\psi(Y,X)\}=0$ can be converted into an equation for the control population,
\[ E_0[\{\pi e^{\alpha^*+X\beta+Y\gamma+\xi XY} + (1-\pi)\}\,\psi(Y,X)] = 0. \]
Therefore, the results developed by Qin et al. (Citation2015) can be applied. The results of Chatterjee et al. (Citation2016) for case–control data can be regarded as a special case of Qin et al. (Citation2015).

8. Communication-efficient distributed inference

In the era of big data, it is commonplace for data analyses to run on hundreds or thousands of machines, with the data distributed across those machines and no longer available in a single central location. Recently, parallel and distributed inference has become popular in the statistical literature in both frequentist and Bayesian settings. In essence, data-parallel procedures break the overall dataset into subsets that are processed independently. To the extent that communication-avoiding procedures have been discussed explicitly, the focus has been on one-shot or embarrassingly parallel approaches that use only one round of communication, in which estimators or posterior samples are first obtained in parallel on local machines, then communicated to a centre node, and finally combined to form a global estimator or approximation to the posterior distribution (Lee et al., Citation2017; Neiswanger et al., Citation2015; Wang & Dunson, Citation2015; Zhang et al., Citation2013). In the frequentist setting, most one-shot approaches rely on averaging (Zhang et al., Citation2013), where the global estimator is the average of the local estimators. Lee et al. (Citation2017) extend this idea to high-dimensional sparse linear regression by combining local debiased Lasso estimates (van de Geer et al., Citation2014). Recent work by Duchi et al. (Citation2015) shows that under certain conditions, these averaging estimators can attain the information-theoretic complexity lower bound for linear regression, and that bits of order dk must be communicated in order to attain the minimax rate of parameter estimation, where d is the dimension of the parameter and k is the number of machines. This result holds even in the sparse setting (Braverman et al., Citation2016).

The method of Jordan et al. (Citation2019) proceeds as follows. Suppose the big data consist of $N$ observations stored on $k$ machines. For convenience of presentation, we assume that each machine has $n$ observations, i.e., $N=nk$. Denote the full-data log-likelihood by $\ell_N(\theta) = \frac{1}{k}\sum_{j=1}^k \ell_j(\theta)$, where $\ell_j(\theta)$ is the log-likelihood based on the data from the $j$-th machine. For $\theta$ near its target value $\bar\theta$,
\[ \ell_N(\theta) = \ell_N(\bar\theta) + (\theta-\bar\theta)^\top\nabla_\theta \ell_N(\theta)\big|_{\theta=\bar\theta} + R_N(\theta), \qquad
   \ell_1(\theta) = \ell_1(\bar\theta) + (\theta-\bar\theta)^\top\nabla_\theta \ell_1(\theta)\big|_{\theta=\bar\theta} + R_1(\theta), \]
where $R_N(\theta)$ and $R_1(\theta)$ are remainders. Observing that $R_N\approx R_1$, define a surrogate log-likelihood
\[ \bar\ell(\theta) = \ell_N(\bar\theta) + (\theta-\bar\theta)^\top \nabla_\theta \ell_N(\theta)\big|_{\theta=\bar\theta}
 + \bigl\{\ell_1(\theta) - \ell_1(\bar\theta) - (\theta-\bar\theta)^\top \nabla_\theta \ell_1(\theta)\big|_{\theta=\bar\theta}\bigr\}. \]
Ignoring the constant terms, the surrogate log-likelihood is
\[ \bar\ell(\theta) = \ell_1(\theta) + \theta^\top\bigl\{\nabla_\theta \ell_N(\theta)\big|_{\theta=\bar\theta} - \nabla_\theta \ell_1(\theta)\big|_{\theta=\bar\theta}\bigr\}. \]
The score equation based on the surrogate likelihood is
\[ \nabla_\theta\bar\ell(\theta) = \nabla_\theta \ell_1(\theta) + \bigl\{\nabla_\theta \ell_N(\theta)\big|_{\theta=\bar\theta} - \nabla_\theta \ell_1(\theta)\big|_{\theta=\bar\theta}\bigr\} = 0. \]
Let $\hat\theta$ be the solution. Expanding it at $\theta_0$ and using the fact that $N^{-1}\{\nabla_{\theta\theta^\top}\ell_1(\theta_0) - \nabla_{\theta\theta^\top}\ell_N(\theta_0)\}\to 0$ in probability, we can easily show that if $\bar\theta-\theta_0 = O_p(N^{-1/2})$, then
\[ \hat\theta-\theta_0 = -\{\nabla_{\theta\theta^\top}\ell_N(\theta_0)\}^{-1}\nabla_\theta\ell_N(\theta_0) + o_p(N^{-1/2}). \]
If we let $\bar\theta$ be the MLE based on $\ell_1(\theta)$, the surrogate log-likelihood simplifies to $\bar\ell(\theta) = \ell_1(\theta) + \theta^\top\nabla_\theta\ell_N(\bar\theta)$, because $\nabla_\theta\ell_1(\bar\theta)=0$.
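
A toy sketch of the surrogate likelihood for distributed logistic regression: $\bar\theta$ is the machine-1 MLE, each machine communicates only its average gradient at $\bar\theta$, and machine 1 maximizes $\ell_1(\theta)/n + \theta^\top\{\nabla\ell_N(\bar\theta)-\nabla\ell_1(\bar\theta)\}$ (everything is scaled per observation, which does not change the maximizer). The simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, X, y):
    """Negative logistic log-likelihood for one machine's data."""
    eta = X @ theta
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

def neg_score(theta, X, y):
    mu = 1.0 / (1.0 + np.exp(-X @ theta))
    return X.T @ (mu - y)

def surrogate_mle(machines):
    """One round of the surrogate-likelihood idea sketched above: theta_bar is the
    local MLE on machine 1; each machine communicates only its gradient at theta_bar."""
    X1, y1 = machines[0]
    p = X1.shape[1]
    theta_bar = minimize(neg_loglik, np.zeros(p), args=(X1, y1),
                         jac=neg_score, method="BFGS").x
    # average per-observation gradient over all machines (the only communication needed)
    grad_N = np.mean([-neg_score(theta_bar, X, y) / len(y) for X, y in machines], axis=0)
    grad_1 = -neg_score(theta_bar, X1, y1) / len(y1)
    shift = grad_N - grad_1
    # surrogate: l_1(theta)/n + theta'(grad_N - grad_1); maximise over theta
    neg_surrogate = lambda th: neg_loglik(th, X1, y1) / len(y1) - th @ shift
    return minimize(neg_surrogate, theta_bar, method="BFGS").x

rng = np.random.default_rng(7)
theta_true = np.array([0.3, -0.8])
machines = []
for _ in range(10):
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ theta_true)))
    machines.append((X, y))
print(surrogate_mle(machines))   # typically closer to theta_true than the machine-1 MLE alone
```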

If the dimension of $\theta$ is high, one may add a penalty function to the surrogate log-likelihood and estimate $\theta$ by $\tilde\theta = \arg\max_{\theta\in\Theta}\{\bar\ell(\theta) - \lambda\|\theta\|_1\}$, where $\|\theta\|_1$ is the $L_1$-norm of $\theta$. Similarly, Bayesian inference can be adapted to the surrogate likelihood as well.

Duan et al. (Citation2020) proposed distributed algorithms that account for heterogeneous distributions by allowing site-specific nuisance parameters. The proposed methods extend the surrogate likelihood approach (Jordan et al., Citation2019; Wang et al., Citation2017) to the heterogeneous setting by applying a novel density ratio tilting method to the efficient score function. Asymptotically, the approach described in Section 6.2 on nuisance parameters is equivalent to that of Duan et al. (Citation2020).

9. Renewal estimation and incremental inference

Let $U(D_1,\beta) = \nabla_\beta M(D_1,\beta)$ be a score function of $\beta$ based on some objective function $M(D_1,\beta)$ from the first batch of data, where $M$ can be either the log-likelihood $M(D_1,\beta) = \sum_{i=1}^{n_1}\log f(y_{1i}|x_{1i},\beta)$ or a pseudo log-likelihood.

Let $\hat\beta_1$ be the solution to $U(D_1,\beta)=0$ when only the first batch of data $D_1$ is available, and let $D_2$ denote the second batch of data. If both batches are available, let $\hat\beta_2$ be the solution to the pooled score equation $U(D_1,\beta)+U(D_2,\beta)=0$. Clearly, $\hat\beta_2$ is the most efficient estimator of $\beta$ when $D_1$ and $D_2$ are both available.

If $D_2$ is available but $D_1$ is not, with only the summary information $\hat\beta_1$ and $\hat\Sigma_1$ in its place, how can we utilize the summary information efficiently? It is not feasible to estimate $\beta$ by directly solving $U(\beta)\equiv U(D_1,\beta)+U(D_2,\beta)=0$, which involves the individual data of the unavailable $D_1$. Luo (Citation2020) considered expanding $U(D_1,\beta)$ at $\beta=\hat\beta_1$, i.e.,
\[ U(D_1,\beta) = U(D_1,\hat\beta_1) + \nabla_\beta U(D_1,\hat\beta_1)(\beta-\hat\beta_1) + O(\|\beta-\hat\beta_1\|^2) \]
for $\beta$ close to $\hat\beta_1$. As $U(D_1,\hat\beta_1)=0$, it follows that
\[ U(\beta) = U(D_2,\beta) + \nabla_\beta U(D_1,\hat\beta_1)(\beta-\hat\beta_1) + O(\|\beta-\hat\beta_1\|^2). \]
Luo (Citation2020) proposed obtaining an updated estimator of $\beta$ by solving
\[ \nabla_\beta U(D_1,\hat\beta_1)(\beta-\hat\beta_1) + U(D_2,\beta) = 0. \tag{21} \]
Alternatively, we may understand this renewal estimation strategy in the manner of Zhang et al. (Citation2020), who proposed estimating $\beta$ by maximizing
\[ \sum_{i=1}^{n_2}\log f(y_{2i}|x_{2i},\beta) - \frac{1}{2}n_1(\hat\beta_1-\beta)^\top\Sigma(\hat\beta_1-\beta), \tag{22} \]
where $\Sigma = E\{\nabla_\beta\log f(Y|X,\beta)\,\nabla_\beta^\top\log f(Y|X,\beta)\}$ is the Fisher information. If both batches were available, the score for $\beta$ would be
\[ S(\beta) = \sum_{i=1}^{n_1}\nabla_\beta\log f(y_{1i}|x_{1i},\beta) + \sum_{i=1}^{n_2}\nabla_\beta\log f(y_{2i}|x_{2i},\beta). \]
After recording $\hat\beta_1$ and $\Sigma$, we no longer have the raw data $\{(y_{1i},x_{1i}), i=1,2,\ldots,n_1\}$. As
\[ \hat\beta_1-\beta = n_1^{-1}\Sigma^{-1}\sum_{i=1}^{n_1}\nabla_\beta\log f(y_{1i}|x_{1i},\beta) + o_p(n_1^{-1/2}), \]
differentiating (22) with respect to $\beta$ gives
\[ \sum_{i=1}^{n_2}\nabla_\beta\log f(y_{2i}|x_{2i},\beta) + n_1\Sigma(\hat\beta_1-\beta)
 = \sum_{i=1}^{n_1}\nabla_\beta\log f(y_{1i}|x_{1i},\beta) + \sum_{i=1}^{n_2}\nabla_\beta\log f(y_{2i}|x_{2i},\beta) + o_p(n^{1/2}). \]
Here, we have assumed that $n_1 = O(n_2) = O(n)$. This indicates that estimating $\beta$ by maximizing (22) results in no asymptotic efficiency loss compared with the MLE based on all individual data, where the latter is infeasible in the current situation.
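
A minimal sketch of the renewal update (21) for logistic regression, assuming scipy is available: only $\hat\beta_1$ and the batch-1 observed information are retained, and the updated estimator is obtained by solving the combined equation with the batch-2 raw data. The simulation settings are hypothetical.

```python
import numpy as np
from scipy.optimize import root

def score(beta, X, y):
    """Logistic-regression score U(D, beta) = X'(y - mu(beta))."""
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (y - mu)

def neg_hessian(beta, X):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (X * (mu * (1 - mu))[:, None])

def renew(beta1, J1, X2, y2):
    """Renewal estimator solving (21): -J1 (beta - beta1) + U(D2, beta) = 0,
    where J1 = -grad U(D1, beta1) is the batch-1 observed information."""
    eq = lambda b: -J1 @ (b - beta1) + score(b, X2, y2)
    return root(eq, beta1).x

rng = np.random.default_rng(8)
beta_true = np.array([0.2, 1.0])
def simulate(n):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    return X, rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Batch 1: fit, then keep only beta1 and its observed information J1
X1, y1 = simulate(500)
beta1 = root(lambda b: score(b, X1, y1), np.zeros(2)).x
J1 = neg_hessian(beta1, X1)

# Batch 2 arrives; update without revisiting the batch-1 raw data
X2, y2 = simulate(500)
print(renew(beta1, J1, X2, y2))   # close to the estimator from the pooled score equation
```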

10. Concluding remarks

Rapid growth in hardware technology has made data collection much easier and more effective. In many applications, data often arrive in streams and chunks, which leads to batch-by-batch or streaming data. For example, web sites served by widely distributed web servers may need to coordinate many distributed clickstream analyses, e.g., to track heavily accessed web pages as part of their real-time performance monitoring. Other examples include financial applications, network monitoring, security, telecommunications data management, manufacturing, and sensor networks (Babcock et al., Citation2002; Nguyen et al., Citation2021). The continuous arrival of such data in multiple, rapid, time-varying, possibly unpredictable and unbounded streams not only yields many fundamentally new research problems but also provides various forms of auxiliary information.

Assembling information from different data sources has become indispensable in big data and artificial intelligence research. Statistical tools play an essential part in updating information. In this paper, we have presented a selective review of several traditional statistical methods, including meta-analysis, calibration information methods in survey sampling, and EL together with over-identified estimating equations and GMM. We have also briefly reviewed some recently developed statistical methods, including communication-efficient distributed statistical inference and renewal estimation with incremental inference, which can be regarded as the latest developments of calibration information methods in the era of big data. Although these methods were developed in different fields and in different statistical frameworks, in principle they are asymptotically equivalent to well-known methods developed for meta-analysis. These methods incur little or no information loss compared with the case when the full data are available.

Finally, we apologize to those whose work has inadvertently been left out of our reference list.

Acknowledgments

The authors thank the editor and two referees for constructive comments and suggestions that led to significant improvements in this paper.

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This research was supported by the National Natural Science Foundation of China [grant numbers 71931004, 12171157, and 32030063], the 111 Project [grant number B14019], the Development Fund for Shanghai Talents and the Natural Sciences and Engineering Research Council of Canada (grant number RGPIN-2020-04964).

References

  • Babcock, B., Babu, S., Datar, M., Motwani, R., & Widom, J. (2002). Models and issues in data stream systems. In Proceedings of the 21 ACM SIGMOD-SIGACT-SIGART symposium on principles of database systems (pp. 1–16). ACM.
  • Back, K., & Brown, D. P. (1992). GMM, maximum likelihood, and nonparametric efficiency. Economics Letters, 39(1), 23–28. https://doi.org/10.1016/0165-1765(92)90095-G
  • Braverman, M., Garg, A., Ma, T., Nguyen, H., & Woodruff, D. (2016). Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the 48th annual ACM symposium on theory of computing (pp. 1011–1020). ACM.
  • Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. (2009). Introduction to meta-analysis. Wiley.
  • Chatterjee, N., Chen, Y.-H., Maas, P., & Carroll, R. J. (2016). Constrained maximum likelihood estimation for model calibration using summary-level information from external big data sources. Journal of the American Statistical Association, 111(513), 107–117. https://doi.org/10.1080/01621459.2015.1123157
  • Chaudhuri, S., Handcock, M. S., & Rendall, M. S. (2008). Generalized linear models incorporating population level information: an empirical likelihood based approach. Journal of the Royal Statistical Society: Series B, 70(2), 311–328. https://doi.org/10.1111/rssb.2008.70.issue-2
  • Chen, J., & Qin, J. (1993). Empirical likelihood estimation for finite populations and the effective usage of auxiliary information. Biometrika, 80(1), 107–116. https://doi.org/10.1093/biomet/80.1.107
  • Chen, J., Sitter, R., & Wu, C. (2002). Using empirical likelihood methods to obtain range restricted weights in regression estimators for surveys. Biometrika, 89(1), 230–237. https://doi.org/10.1093/biomet/89.1.230
  • Cochran, W. G. (1977). Sampling techniques (3rd ed.). Wiley.
  • DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7(3), 177–188. https://doi.org/10.1016/0197-2456(86)90046-2
  • Duan, R., Ning, Y., & Chen, Y. (2020). Heterogeneity-aware and communication-efficient distributed statistical inference. arXiv:1912.09623v1.
  • Duchi, J., Jordan, M., Wainwright, M., & Zhang, Y. (2015). Optimality guarantees for distributed statistical estimation. arXiv:1405.0782.
  • Han, P., & Lawless, J. (2016). Comment. Journal of the American Statistical Association, 111(513), 118–121. https://doi.org/10.1080/01621459.2016.1149399
  • Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica, 50(4), 1029–1054. https://doi.org/10.2307/1912775
  • Hartley, H. O., & Rao, J. N. K. (1968). A new estimation theory for sample surveys. Biometrika, 55(3), 547–557. https://doi.org/10.1093/biomet/55.3.547
  • Imbens, G., & Lancaster, T. (1994). Combining micro and macro data in microeconometric models. Review of Economic Studies, 61(4), 655–680. https://doi.org/10.2307/2297913
  • Jordan, M. I., Lee, J. D., & Yang, Y. (2019). Communication-efficient distributed statistical inference. Journal of the American Statistical Association, 114(526), 668–681. https://doi.org/10.1080/01621459.2018.1429274
  • Lee, J., Liu, Q., Sun, Y., & Taylor, J. (2017). Communication-efficient sparse regression. Journal of Machine Learning Research, 18, 1–30. http://jmlr.org/papers/v18/16-002.html
  • Lin, D. Y., & Zeng, D. (2010). On the relative efficiency of using summary statistics versus individual-level data in meta-analysis. Biometrika, 97(2), 321–332. https://doi.org/10.1093/biomet/asq006
  • Luo, L. (2020). Renewable estimation and incremental inference in generalized linear models with streaming data sets. Journal of the Royal Statistical Society, Series B, 82(1), 69–97. https://doi.org/10.1111/rssb.12352
  • Neiswanger, W., Wang, C., & Xing, E. (2015). Asymptotically exact, embarrassingly parallel MCMC. In Proceedings of the 30th conference on uncertainty in artificial intelligence (pp. 623–632). AUAI Press.
  • Nguyen, T. D., Shih, M. H., Srivastava, D., Tirthapura, S., & Xu, B. (2021). Stratified random sampling from streaming and stored data. Distributed and Parallel Databases, 39(3), 665–710. https://doi.org/10.1007/s10619-020-07315-w
  • Owen, A. B. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75(2), 237–249. https://doi.org/10.1093/biomet/75.2.237
  • Owen, A. B. (1990). Empirical likelihood ratio confidence regions. Annals of Statistics, 18(1), 90–120. https://doi.org/10.1214/aos/1176347494
  • Owen, A. B. (2001). Empirical likelihood. CRC.
  • Qin, J. (2000). Combining parametric and empirical likelihoods. Biometrika, 87(2), 484–490. https://doi.org/10.1093/biomet/87.2.484
  • Qin, J. (2017). Biased sampling, over-identified parameter problems and beyond. Springer.
  • Qin, J., & Lawless, J. (1994). Empirical likelihood and general estimating equations. Annals of Statistics, 22(1), 300–325. https://doi.org/10.1214/aos/1176325370
  • Qin, J., Zhang, H., Li, P., Albanes, D., & Yu, K. (2015). Using covariate specific disease prevalence information to increase the power of case-control study. Biometrika, 102(1), 169–180. https://doi.org/10.1093/biomet/asu048
  • Schennach, S. M. (2007). Point estimation with exponentially tilted empirical likelihood. Annals of Statistics, 35(2), 634–672. https://doi.org/10.1214/009053606000001208
  • Tian, L., & Gu, Q. (2016). Communication-efficient distributed sparse linear discriminant analysis. arXiv:1610.04798.
  • van de Geer, S., Buhlmann, P., Ritov, Y., & Dezeure, R. (2014). On asymptotically optimal confidence regions and tests for high dimensional models. Annals of Statistics, 42(3), 1166–1202. https://doi.org/10.1214/14-AOS1221
  • van der Vaart, A. W. (2000). Asymptotic statistics. Cambridge University Press.
  • Wang, X., & Dunson, D. (2015). Parallelizing MCMC via Weierstrass sampler. arXiv:1312.4605.
  • Wang, J., Kolar, M., Srebro, N., & Zhang, T. (2017). Efficient distributed learning with sparsity. In Proceedings of the 34th international conference on machine learning, Sydney, Australia, PMLR 70 (pp. 3636–3645).
  • Wu, C., & Sitter, R. R. (2001). A model-calibration approach to using complete auxiliary information from survey data. Journal of the American Statistical Association, 96(453), 185–193. https://doi.org/10.1198/016214501750333054
  • Wu, C., & Thompson, M. E. (2020). Sampling theory and practice. Springer.
  • Zeng, D. & Lin, D. Y. (2015). On random-effects meta-analysis. Biometrika, 102(2), 281–294.
  • Zhang, Y., Duchi, J., & Wainwright, M. (2013). Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14, 3321–3363.
  • Zhang, H., Deng, L., Schiffman, M., Qin, J., & Yu, K. (2020). Generalized integration model for improved statistical inference by leveraging external summary data. Biometrika, 107(3), 689–703. https://doi.org/10.1093/biomet/asaa014