Abstract
Penalized quantile regression is a widely used tool for analyzing high-dimensional data with heterogeneity. Although its estimation theory has been well studied in the literature, its computation remains challenging in big-data settings because the check loss function is nonsmooth and the penalty term may be nonconvex. In this article, we propose QPADM-slack, a parallel algorithm formulated via the alternating direction method of multipliers (ADMM) that supports penalized quantile regression on big data. Unlike the recent QPADM algorithm, our proposal uses the slack variable representation of the quantile regression problem. Simulation studies demonstrate that this new formulation is significantly faster than QPADM, especially when the sample size n or the dimension p is large, and that it attains favorable estimation accuracy in both nondistributed and distributed environments. We further illustrate the practical performance of QPADM-slack by analyzing a news popularity dataset.
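For context, the slack variable representation mentioned above is the standard linear-programming reformulation of quantile regression; the sketch below uses our own notation (check loss $\rho_\tau$, design matrix $X$, penalty $P_\lambda$) and is illustrative rather than the exact formulation in the article.

```latex
% Penalized quantile regression at quantile level \tau:
%   \min_{\beta} \sum_{i=1}^{n} \rho_\tau(y_i - x_i^{\top}\beta) + P_\lambda(\beta),
% where \rho_\tau(r) = r\{\tau - I(r < 0)\}.
%
% Introducing nonnegative slack vectors u, v \in \mathbb{R}^n with
% y - X\beta = u - v gives the equivalent constrained problem
\begin{equation*}
  \min_{\beta,\, u \ge 0,\, v \ge 0}
    \ \tau \mathbf{1}_n^{\top} u + (1-\tau)\, \mathbf{1}_n^{\top} v
    + P_\lambda(\beta)
  \quad \text{s.t.} \quad X\beta + u - v = y.
\end{equation*}
```

Because the check loss now appears only through linear terms in $(u, v)$ under a linear equality constraint, the problem splits naturally across the ADMM updates.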
Supplementary Materials
In Section 1 of the Supplementary Materials, we give the closed-form solution of the -update under the SCAD and MCP penalties. Section 2 presents additional simulation results, and Section 3 reports the real data analysis results under the MCP penalty.
Acknowledgments
The authors thank the Editor, an associate editor, and two anonymous reviewers for their helpful comments and suggestions, which greatly improved the article. This work is supported in part by the NVIDIA GPU Grant Program. We thank NVIDIA for donating a Titan V GPU to carry out our work.