
Time Series Approach to the Evolution of Networks: Prediction and Estimation

 

Abstract

The article analyzes nonnegative multivariate time series which we interpret as weighted networks. We introduce a model in which each coordinate of the time series represents a given edge across time. The number of time periods is treated as large compared to the size of the network. The model specifies the temporal evolution of a weighted network, combining classical autoregression with nonnegativity, a positive probability of vanishing, and peer-effect interactions between the weights assigned to edges in the process. The main results provide criteria for stationarity versus explosiveness of the network evolution process, techniques for estimating the parameters of the model, and methods for predicting its future values. Natural applications arise in networks with a fixed number of agents, such as countries, large corporations, or small social communities. The article provides an empirical implementation of the approach to monthly trade data in the European Union. Overall, the results confirm that incorporating the nonnegativity of dependent variables into the model matters, and that incorporating peer effects improves predictive power.
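The evolution rule summarized above (autoregression censored at zero, with a peer-effect term) can be illustrated with a minimal simulation sketch. The scalar-edge functional form $y_{ij,t+1}=[\alpha+\beta y_{ij,t}+\gamma z_{ij,t}+u_{ij,t}]_+$, the definition of the peer variable $z$, and all parameter values below are illustrative assumptions, not the article's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                                # number of agents (edges are ordered pairs)
T = 500                              # time periods, large relative to network size
alpha, beta, gamma = 0.1, 0.5, 0.2   # illustrative parameters, common to all edges

# y[t] holds the weighted adjacency matrix at time t (diagonal unused)
y = np.zeros((T, n, n))
y[0] = rng.uniform(0, 1, (n, n))

for t in range(T - 1):
    # peer effect: average weight of the other edges pointing into the same node j
    z = (y[t].sum(axis=0, keepdims=True) - y[t]) / (n - 1)
    u = rng.normal(0, 0.2, (n, n))          # median-zero shocks
    # censored autoregression: weights stay nonnegative and can hit exactly 0
    y[t + 1] = np.maximum(alpha + beta * y[t] + gamma * z + u, 0)

off = ~np.eye(n, dtype=bool)
print("share of exact zeros:", (y[:, off] == 0).mean())
print("long-run mean weight:", y[T // 2:, off].mean())
```

With $\beta+\gamma<1$ the simulated weights settle into a stationary regime, and the positive probability of vanishing shows up as a nonzero share of exact zeros.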

Supplementary Material

See “Evolution of Networks: Supplementary Material” for all proofs, additional tables, a game-theoretical justification of Equation (3), and results on OLS and MLE estimation.

Acknowledgments

The author would like to thank Donald Andrews, Vadim Gorin, Bruce Hansen, Peter Phillips, Jack Porter, and Larry Samuelson as well as anonymous referees for valuable comments and suggestions.

Notes

1 Equation (1) can also be obtained as the solution to a utility maximization problem; see the supplementary material.

2 For two matrices $A$ and $B$ of the same dimension $m_1\times m_2$, the Hadamard product $A\circ B$ is the $m_1\times m_2$ matrix with elements $(A\circ B)_{ij}=A_{ij}B_{ij}$.
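The definition can be checked numerically; the sketch below (a throwaway illustration, not from the article) uses the fact that NumPy's elementwise multiplication is exactly the Hadamard product.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# NumPy's elementwise * implements the Hadamard product A ∘ B
H = A * B

# check against the definition (A ∘ B)_ij = A_ij * B_ij
for i in range(2):
    for j in range(2):
        assert H[i, j] == A[i, j] * B[i, j]

print(H)  # [[ 5. 12.] [21. 32.]]
```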

3 In this and other theorems we state the results in the network notation $y_t=(y_{ijt})_{i,j}$. When the coordinates of $y_t$ have another meaning, the same results follow by a straightforward renaming of variables.

4 In the supplementary material, we show how to correct the OLS procedure to restore consistency. This requires discarding observations, so the accuracy of the estimation decreases. Further, in the special case when the errors $u_t$ are Gaussian, we prove that the maximum likelihood estimator (MLE) is consistent. In contrast, our approach based on LAD does not require Gaussianity.
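The LAD idea referenced here can be illustrated on a toy censored AR(1). In the sketch below, the model form, parameter values, and the crude grid-search fit are my own illustrative assumptions, not the article's procedure; the point is that minimizing absolute one-step residuals recovers the parameters under non-Gaussian, median-zero errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate a censored AR(1): y[t+1] = [alpha + beta*y[t] + u[t]]_+
alpha_true, beta_true, T = 0.3, 0.6, 2000
u = rng.laplace(0.0, 0.3, T)        # heavy-tailed, median-zero, non-Gaussian errors
y = np.zeros(T + 1)
for t in range(T):
    y[t + 1] = max(alpha_true + beta_true * y[t] + u[t], 0.0)

def lad_loss(a, b):
    # sum of absolute deviations from the censored one-step prediction
    pred = np.maximum(a + b * y[:-1], 0.0)
    return np.abs(y[1:] - pred).sum()

# crude grid search keeps the sketch dependency-free; a real implementation
# would use a proper optimizer for the nonsmooth objective
grid = np.linspace(0.0, 1.0, 101)
_, a_hat, b_hat = min((lad_loss(a, b), a, b) for a in grid for b in grid)
print("LAD estimates:", a_hat, b_hat)   # lands near (0.3, 0.6)
```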

5 The most challenging part of the extension is showing asymptotic normality of the LAD estimator. This would require a potentially different proof, and the asymptotic variance would need to be updated. The much more complicated formula for the variance can be guessed by comparison with de Jong and Herrera (2011).

6 Convergence to stationarity will be used to show consistency and asymptotic normality of our estimators of $\{\alpha_{ij},\beta_{ij},\gamma_{ij}\}_{i,j}$. Yet, it is plausible that one can relax Assumption 2 and still get a consistent estimator as long as the first several moments of the network evolution process $\{y_{ijt}: i,j=1,\dots,n\}_{t\ge 1}$ are uniformly bounded. We further discuss estimation without stationarity in Remark 3 and bounds on moments in Theorem 1.

7 Formally, this means that the finite-dimensional distributions of the process $\{y_{ij,t+\tau}: i,j=1,\dots,n\}_{\tau\in\mathbb{Z}}$ converge to those of a process stationary in $\tau$ as $t\to\infty$.

8 Linear independence of $1,y,z$ means that there do not exist constants $\lambda_1,\lambda_2,\lambda_3$, not all zero, such that $\lambda_1+\lambda_2 y+\lambda_3 z=0$ almost surely ($1$ represents the constant random variable).
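Numerically, linear independence of $1,y,z$ is equivalent to nonsingularity of the Gram matrix $E[vv^\top]$ for $v=(1,y,z)^\top$. The sketch below (my own illustration, not from the article) contrasts an independent case with the exactly dependent case $z=2y-1$.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 100_000
y = rng.normal(size=m)

def min_gram_eig(z):
    # sample Gram matrix of v = (1, y, z); it is singular exactly when some
    # nontrivial combination lam1 + lam2*y + lam3*z vanishes almost surely
    v = np.column_stack([np.ones(m), y, z])
    return np.linalg.eigvalsh(v.T @ v / m)[0]

eig_indep = min_gram_eig(rng.normal(size=m))  # z independent of 1 and y
eig_dep = min_gram_eig(2.0 * y - 1.0)         # z = 2y - 1: dependent case
print(eig_indep, eig_dep)                     # bounded away from 0 vs. ~0
```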

9 For example, see Theorem 1, which guarantees a uniformly bounded first moment.

10 Notice that in Theorem 3 we had $\alpha$ instead of $\alpha+\mathbb{E}u_t$. This is because we assumed $\mathbb{E}u_t=0$ in Theorem 3, while now we use a different normalization, $\operatorname{med}(u_t)=0$.

11 Suppose that $\alpha$, $\beta$, and $\gamma$ are known. To see that the optimal 1-step-ahead prediction in $L_1$ is $[\alpha+\beta y_t+\gamma z_t]_+$, define $\Delta\hat y_{t+1}=\hat y_{t+1}-\alpha-\beta y_t-\gamma z_t$ and write

$$\int\left|y_{t+1}-\hat y_{t+1}\right|f(u)\,du=\int\left|[\alpha+\beta y_t+\gamma z_t+u]_+-\hat y_{t+1}\right|f(u)\,du=\int\left|\max(u,-\alpha-\beta y_t-\gamma z_t)-\Delta\hat y_{t+1}\right|f(u)\,du. \quad (9)$$

Notice that
$$\max(u,-\alpha-\beta y_t-\gamma z_t)=\begin{cases}u,&u\ge-\alpha-\beta y_t-\gamma z_t,\\-\alpha-\beta y_t-\gamma z_t,&u<-\alpha-\beta y_t-\gamma z_t,\end{cases}$$
so that
$$\operatorname{med}\big(\max(u,-\alpha-\beta y_t-\gamma z_t)\big)=\begin{cases}0,&\alpha+\beta y_t+\gamma z_t\ge 0,\\-\alpha-\beta y_t-\gamma z_t,&\alpha+\beta y_t+\gamma z_t<0.\end{cases}$$

Because $\arg\min_C \mathbb{E}|v-C|=\operatorname{med}(v)$, minimizing Equation (9) gives

$$\Delta\hat y_{t+1}=\begin{cases}0,&\alpha+\beta y_t+\gamma z_t\ge 0,\\-\alpha-\beta y_t-\gamma z_t,&\alpha+\beta y_t+\gamma z_t<0,\end{cases}$$

and $\hat y_{t+1}=[\alpha+\beta y_t+\gamma z_t]_+$.
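The censoring argument in this footnote can be sanity-checked by simulation: the censored predictor $[\alpha+\beta y_t+\gamma z_t]_+$ should attain a smaller mean absolute error than the uncensored linear predictor. A minimal check (all parameter values are my own, chosen so that censoring sometimes binds):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

alpha, beta, gamma = -0.2, 0.5, 0.3   # negative alpha so the latent index can go below 0
yt = rng.uniform(0.0, 1.0, N)
zt = rng.uniform(0.0, 1.0, N)
u = rng.normal(0.0, 0.5, N)           # median-zero errors

latent = alpha + beta * yt + gamma * zt
y_next = np.maximum(latent + u, 0.0)  # realized y_{t+1}

mae_censored = np.abs(y_next - np.maximum(latent, 0.0)).mean()   # [.]_+ predictor
mae_uncensored = np.abs(y_next - latent).mean()                  # ignores censoring
print(mae_censored, mae_uncensored)
```

Since $\max(\text{latent},0)$ is the conditional median of $y_{t+1}$, the first mean absolute error comes out smaller, in line with the derivation.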

12 $T=195$ or $T=205$ would serve equally well; the precise value of $T$ is ad hoc in this sense.

13 See the supplementary material for a similar table with peer effects at $t-1$.

14 One needs to compare the 10% column of the two-sided test with the 5% column of the one-sided test.
