Abstract
There is an increasing number of goodness-of-fit tests whose test statistics measure deviations between the empirical characteristic function and an estimated characteristic function of the distribution in the null hypothesis. To overcome certain computational difficulties in the calculation of some of these test statistics, a transformation of the data is considered. To apply such a transformation, the data are assumed to be continuous with arbitrary dimension, but we also provide a modification for discrete random vectors. Practical considerations leading to analytic formulas for the test statistics are studied, as well as theoretical properties such as the asymptotic null distribution, the validity of the corresponding bootstrap approximation, and the consistency of the test against fixed alternatives. Five applications are provided to illustrate the theory; these applications also include numerical comparisons with other existing techniques for testing goodness-of-fit.
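As background for the statistics described above, the empirical characteristic function of a d-dimensional sample is φn(t) = n⁻¹ Σⱼ exp(i t′Xⱼ). A minimal numerical sketch follows; the helper `ecf` and the N(0, 1) sanity check are illustrative assumptions, not the paper's code.

```python
import numpy as np

def ecf(t, x):
    """Empirical characteristic function phi_n(t) = (1/n) sum_j exp(i t'X_j).

    t: (m, d) array of evaluation points; x: (n, d) data matrix.
    (Hypothetical helper; the paper's test statistics integrate a weighted
    squared distance between this and an estimated model cf.)
    """
    return np.exp(1j * x @ t.T).mean(axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal((10000, 1))      # univariate N(0, 1) sample, d = 1
t = np.array([[0.0], [1.0]])             # two evaluation points
vals = ecf(t, x)
print(vals)
```

For N(0, 1) data the real part should approach exp(−t²/2) as n grows (about 0.607 at t = 1), which gives a quick sanity check on the implementation.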
Appendix
The list of required assumptions is as follows.
Assumption 1.
Let X1, X2, …, Xn be iid random vectors with df F and let . Assume that there exists θ ∈ intΘ, with θ = θ0 if F(.) = F(.; θ0), such that and the components of l(x; θ) = (l1(x; θ), l2(x; θ), …, lp(x; θ))′ satisfy
Assumption 2.
, for some θ ∈ intΘ, with θ = θ0 if F(.) = F(.; θ0).
Assumption 3.
The marginal and the conditional distributions of F(x; γ) are continuously differentiable with respect to γ, ∀γ in an open neighborhood of θ, and satisfy where and τ(x; θ) is as defined in (4) and (5).
A sufficient condition for Assumption 3 to hold is that where G(x; θ) represents any marginal or conditional distribution of F(x; θ).
Assumption 4.
Σl(γ) is continuous at γ = θ, where Σl(γ) = ∫l(x; γ)l(x; γ)′ dF(x; γ) and l(x; γ) is defined in Assumption 1.
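Under the iid sampling in Assumption 1, Σl(θ) = ∫l(x; θ)l(x; θ)′ dF(x; θ) can be approximated by the sample average of the outer products l(Xⱼ; θ)l(Xⱼ; θ)′. A hedged sketch, using the score of a hypothetical N(μ, 1) location model (p = 1) in place of the paper's unspecified l:

```python
import numpy as np

def score_normal_mean(x, mu):
    """Score l(x; mu) = d/dmu log N(mu, 1) density = x - mu (p = 1).
    Illustrative stand-in for the l of Assumption 1."""
    return (x - mu).reshape(-1, 1)

rng = np.random.default_rng(2)
mu = 0.0
x = rng.normal(mu, 1.0, size=20000)          # sample from F(.; theta)
l = score_normal_mean(x, mu)                 # (n, p) matrix of scores
sigma_hat = (l.T @ l) / len(x)               # plug-in estimate of Sigma_l(theta)
print(sigma_hat)
```

For this model Σl(θ) is the Fisher information of the mean of a unit-variance normal, so the estimate should be close to 1.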
Assumption 5.
The marginal and the conditional distributions of F(x; γ) are differentiable with respect to γ, ∀γ in an open neighborhood of θ, and satisfy where and τ(r)(x; θ) denotes the Rosenblatt transformation for the r-th coordinate permutation, 1 ⩽ r ⩽ d!.
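For concreteness, the Rosenblatt transformation maps X to (F1(X1), F2|1(X2 | X1), …), which is uniformly distributed on the unit cube when the model holds; applying it after permuting coordinates gives τ(r). A sketch for a standard bivariate normal with correlation ρ (an illustrative model choice, not the paper's example):

```python
import numpy as np
from scipy.stats import norm

def rosenblatt_bivariate_gaussian(x, rho):
    """Rosenblatt transform for a standard bivariate normal with
    correlation rho (illustrative choice; any F with tractable
    conditionals works the same way).

    Maps X = (X1, X2) to U = (F1(X1), F2|1(X2 | X1)), uniform on
    [0, 1]^2 when X truly follows the assumed model."""
    u1 = norm.cdf(x[:, 0])
    # Conditional law: X2 | X1 = x1 ~ N(rho * x1, 1 - rho**2)
    u2 = norm.cdf((x[:, 1] - rho * x[:, 0]) / np.sqrt(1.0 - rho**2))
    return np.column_stack([u1, u2])

rng = np.random.default_rng(0)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
x = rng.multivariate_normal(np.zeros(2), cov, size=5000)
u = rosenblatt_bivariate_gaussian(x, rho)
print(u.mean(axis=0))  # both coordinates should be close to 0.5
```

Under the true model the transformed coordinates are independent Uniform(0, 1), so their sample means hover near 0.5 and their sample correlation near 0.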
Assumption 6.
As γ → ∞, where Λ is a neighborhood of θ.
Proof of Theorems 3.1 and 4.1.
Theorems 3.1 and 4.1 follow from Theorems 3.1 and 4.1 in Meintanis and Swanepoel (Citation2007), respectively, by applying the Cramér-Wold device. Although Theorems 3.1 and 4.1 in Meintanis and Swanepoel (Citation2007) assume that the data are univariate, the dimension of the data plays no role in their proof, so the results in these theorems are also valid for d-dimensional data, for any fixed d ⩾ 1.
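For reference, the Cramér-Wold device reduces d-dimensional weak convergence to that of all one-dimensional linear projections:

```latex
% Cramér-Wold device: for random vectors X_n, X in R^d,
X_n \xrightarrow{d} X
\quad \Longleftrightarrow \quad
a' X_n \xrightarrow{d} a' X \ \text{ for every fixed } a \in \mathbb{R}^d .
```

This is why the univariate arguments of Meintanis and Swanepoel (Citation2007) lift to fixed dimension d without change.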
Proof of Theorem 5.1.
Since and , by the Dominated Convergence Theorem and the SLLN, to prove the result it suffices to show that (17) holds, where g is as defined in (8). A first-order Taylor expansion gives (18) where , for some α ∈ (0, 1). By Assumption 5 and , the right-hand side of (18) is oP(1), and thus (17) holds.
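The exact display (18) is not reproduced in this excerpt; as a sketch, a first-order (mean-value) expansion of this kind, applied to a function γ ↦ G(x; γ) that is differentiable in the parameter, has the generic form

```latex
G(x; \hat{\theta}_n) - G(x; \theta)
  = (\hat{\theta}_n - \theta)'
    \left. \frac{\partial G(x; \gamma)}{\partial \gamma}
    \right|_{\gamma = \theta + \alpha (\hat{\theta}_n - \theta)},
  \qquad \text{for some } \alpha \in (0, 1),
```

so that the remainder is controlled by the derivative bound in Assumption 5 together with the consistency of the estimator.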