References
- Battey, H., Tan, K. M., & Zhou, W.-X. (2021). Communication-efficient distributed quantile regression with optimal statistical guarantees. Preprint. arXiv:2110.13113
- Cai, T. T., & Zhang, L. (2021). A convex optimization approach to high-dimensional sparse quadratic discriminant analysis. The Annals of Statistics, 49(3), 1537–1568. https://doi.org/10.1214/20-AOS2012
- Chen, X., Liu, W., & Zhang, Y. (2021). First-order Newton-type estimator for distributed estimation and inference. Journal of the American Statistical Association, 1–17. https://doi.org/10.1080/01621459.2021.1891925
- Choromanska, A., Henaff, M., Mathieu, M., Arous, G. B., & LeCun, Y. (2015). The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics (pp. 192–204). PMLR. http://proceedings.mlr.press/v38/choromanska15.pdf
- Du, B., & Zhao, J. (2021). Hypothesis testing of one-sample mean vector in distributed frameworks. Preprint. arXiv:2110.02588
- Jordan, M. I., Lee, J. D., & Yang, Y. (2018). Communication-efficient distributed statistical inference. Journal of the American Statistical Association, 114(526), 668–681. https://doi.org/10.1080/01621459.2018.1429274
- Lalitha, A., Shekhar, S., Javidi, T., & Koushanfar, F. (2018). Fully decentralized federated learning. In Third Workshop on Bayesian Deep Learning (NeurIPS). http://bayesiandeeplearning.org/2018/papers/140.pdf
- Lian, H., Liu, J., & Fan, Z. (2021). Distributed learning for sketched kernel regression. Neural Networks, 143, 368–376. https://doi.org/10.1016/j.neunet.2021.06.020
- Lin, S.-B., Wang, D., & Zhou, D.-X. (2020). Distributed kernel ridge regression with communications. Journal of Machine Learning Research, 21(93), 1–38. https://jmlr.org/papers/volume21/19-592/19-592.pdf
- McMahan, B., Moore, E., Ramage, D., & Hampson, S. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics (pp. 1273–1282). PMLR.
- Ormándi, R., Hegedűs, I., & Jelasity, M. (2013). Gossip learning with linear models on fully distributed data. Concurrency and Computation: Practice and Experience, 25(4), 556–571. https://doi.org/10.1002/cpe.v25.4
- Pan, R., Ren, T., Guo, B., Li, F., Li, G., & Wang, H. (2021). A note on distributed quantile regression by pilot sampling and one-step updating. Journal of Business & Economic Statistics, 1–10. https://doi.org/10.1080/07350015.2021.1961789
- Shamir, O., Srebro, N., & Zhang, T. (2014). Communication-efficient distributed optimization using an approximate Newton-type method. In International Conference on Machine Learning (pp. 1000–1008). PMLR.
- Shi, J., Qin, G., Zhu, H., & Zhu, Z. (2021). Communication-efficient distributed M-estimation with missing data. Computational Statistics & Data Analysis, 161, Article 107251. https://doi.org/10.1016/j.csda.2021.107251
- Sun, Z., & Lin, S.-B. (2020). Distributed learning with dependent samples. Preprint. arXiv:2002.03757
- Tang, H., Lian, X., Yan, M., Zhang, C., & Liu, J. (2018). D2: Decentralized training over decentralized data. In International Conference on Machine Learning (pp. 4848–4856). PMLR.
- Wang, J., Kolar, M., Srebro, N., & Zhang, T. (2017). Efficient distributed learning with sparsity. In International Conference on Machine Learning (pp. 3636–3645). PMLR.
- Wu, S., Li, Z., & Zhu, X. (2020). Distributed community detection for large scale networks using stochastic block model. Preprint. arXiv:2009.11747
- Xu, C., Zhang, Y., Li, R., & Wu, X. (2016). On the feasibility of distributed kernel regression for big data. IEEE Transactions on Knowledge and Data Engineering, 28(11), 3041–3052. https://doi.org/10.1109/TKDE.2016.2594060
- Yu, Y., Chao, S.-K., & Cheng, G. (2020). Simultaneous inference for massive data: Distributed bootstrap. In International Conference on Machine Learning (pp. 10892–10901). PMLR.
- Yu, Y., Chao, S.-K., & Cheng, G. (2021). Distributed bootstrap for simultaneous inference under high dimensionality. Preprint. arXiv:2102.10080
- Zhou, X., Chang, L., Xu, P., & Lv, S. (2021). Communication-efficient Byzantine-robust distributed learning with statistical guarantee. Preprint. arXiv:2103.00373