Abstract
The article analyzes nonnegative multivariate time series, which we interpret as weighted networks. We introduce a model where each coordinate of the time series represents a given edge across time. The number of time periods is treated as large compared to the size of the network. The model specifies the temporal evolution of a weighted network that combines classical autoregression with nonnegativity, a positive probability of vanishing, and peer effect interactions between weights assigned to edges in the process. The main results provide criteria for stationarity versus explosiveness of the network evolution process, techniques for estimation of the parameters of the model, and techniques for prediction of its future values. Natural applications arise in networks with a fixed number of agents, such as countries, large corporations, or small social communities. The article provides an empirical implementation of the approach to monthly trade data in the European Union. Overall, the results confirm that incorporating nonnegativity of dependent variables into the model matters and that incorporating peer effects improves predictive power.
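As a toy illustration of the kind of dynamics the abstract describes (nonnegative autoregression, a positive probability of an edge vanishing, and peer effects), the following simulation uses an assumed max(0, ·) specification with hypothetical coefficients alpha, beta, gamma; it is a sketch under these assumptions, not the paper's Equation (1).

```python
import numpy as np

# Illustrative simulation of a nonnegative autoregressive edge-weight process
# with peer effects. The functional form (max(0, ...) with a mean-of-other-edges
# peer term) and the coefficients are assumptions for illustration only.
rng = np.random.default_rng(0)
n_edges, T = 6, 200
alpha, beta, gamma = 0.1, 0.6, 0.2  # beta + gamma < 1: a stable, non-explosive regime

W = np.zeros((T, n_edges))
W[0] = rng.uniform(0.5, 1.5, n_edges)
for t in range(T - 1):
    peers = (W[t].sum() - W[t]) / (n_edges - 1)  # average weight of the other edges
    shock = rng.normal(0.0, 0.3, n_edges)
    # Censoring at zero keeps weights nonnegative and gives a positive
    # probability that an edge vanishes in a given period.
    W[t + 1] = np.maximum(0.0, alpha + beta * W[t] + gamma * peers + shock)

print((W >= 0).all(), (W == 0).any())  # nonnegative throughout; some edges vanish
```

With beta + gamma below one the simulated process settles into a stationary-looking regime, while larger coefficients would make it explosive, mirroring the stationarity-versus-explosiveness dichotomy in the main results.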
Supplementary Material
See “Evolution of Networks: Supplementary Material” for all proofs, additional tables, a game theoretical justification of Equation (3), and results on OLS and MLE estimation.
Acknowledgments
The author would like to thank Donald Andrews, Vadim Gorin, Bruce Hansen, Peter Phillips, Jack Porter, and Larry Samuelson as well as anonymous referees for valuable comments and suggestions.
Notes
1 Equation (1) can also be obtained as a solution to a utility maximization problem; see the supplementary material.
2 For two matrices A and B of the same dimension m × n, the Hadamard product A ⊙ B is an m × n matrix with elements given by (A ⊙ B)_{ij} = A_{ij} B_{ij}.
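The Hadamard product is simply elementwise multiplication, which NumPy's `*` operator performs directly; the matrices below are arbitrary examples.

```python
import numpy as np

# Hadamard (elementwise) product of two matrices of the same dimension:
# (A ⊙ B)_{ij} = A_{ij} * B_{ij}.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
H = A * B  # NumPy's * on same-shaped arrays is elementwise, i.e., Hadamard
print(H)   # [[ 5. 12.] [21. 32.]]
```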
3 In this and other theorems we state the results in the networks notation. When the coordinates of the process have another meaning, the same results follow by straightforward renaming of variables.
4 In the supplementary material, we show how to correct the OLS procedure to restore consistency. This requires throwing out observations, and the accuracy of the estimation decreases. Further, in the special case when the errors are Gaussian, we prove that the maximum likelihood estimator (MLE) is consistent. In contrast, our approach based on LAD does not require Gaussianity.
5 The most challenging part of the extension is showing asymptotic normality of the LAD estimator. This would require a potentially different proof, and the asymptotic variance would need to be updated. The much more complicated formula for the variance can be guessed by comparing with de Jong and Herrera (2011).
6 Convergence to stationarity will be used to show consistency and asymptotic normality of our estimators of the model parameters. Yet, it is plausible that one can relax Assumption 2 and still get a consistent estimator as long as the first several moments of the network evolution process are uniformly bounded. We further discuss estimation without stationarity in Remark 3 and bounds on moments in Theorem 1.
7 Formally, this means that the finite-dimensional distributions of the process converge to those of a process that is stationary in τ as the time index goes to infinity.
8 Linear independence means that no nontrivial linear combination of the random variables and the constant 1 equals zero almost surely (1 represents a constant random variable).
9 For example, see Theorem 1 which guarantees uniformly bounded first moment.
10 Notice that in Theorem 3 we had α instead. This is because we assumed a different normalization of the median of the errors in Theorem 3 than the one used now.
11 Suppose that and γ are known. To see that the optimal 1-step-ahead prediction in L1 is , define and write (9). Because , minimizing Equation (9) gives and .
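The footnote's argument rests on the fact that the median minimizes expected absolute loss, so the L1-optimal point forecast is a (conditional) median rather than a mean. A small numerical check of this fact (the exponential distribution, sample size, and grid are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Draws from an asymmetric distribution, where mean and median differ:
# the L1-optimal point forecast is the median, not the mean.
x = rng.exponential(scale=1.0, size=100_000)

med, mean = np.median(x), np.mean(x)
loss = lambda c: np.mean(np.abs(x - c))  # empirical L1 risk of forecasting c
assert loss(med) <= loss(mean)           # median attains (weakly) lower L1 risk

# Scanning a grid of candidate forecasts confirms the minimizer
# sits at the sample median, not at the sample mean.
grid = np.linspace(0.0, 3.0, 301)
best = grid[np.argmin([loss(c) for c in grid])]
print(med, mean, best)
```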
12 Other nearby values would be as good, and the precise value of T is ad hoc in this sense.
13 See supplementary material for a similar table with peer effects at t – 1.
14 One needs to compare the 10% column of the two-sided test with the 5% column of the one-sided test.