ABSTRACT
In this paper, a family of coordinate majorization descent algorithms is proposed for solving nonconvex penalized learning problems, including SCAD and MCP estimation. In these algorithms, each coordinate descent step is replaced by a coordinate-wise majorization descent operation, and the convergence of the algorithms is discussed for linear models. In addition, we apply the algorithms to logistic models. Our simulation studies and data examples indicate that the coordinate majorization descent algorithms select the true model with higher probability while keeping the fitted model sparse, and that they improve the accuracy of parameter estimation under the SCAD and MCP penalties.
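To illustrate the kind of update the abstract describes, the following is a minimal sketch of a coordinate-wise majorization descent loop for MCP-penalized least squares. It is not the paper's implementation: the function names, the choice of standardized columns (which makes the coordinate-wise quadratic majorizer have unit curvature), and the default `gamma = 3.0` are all illustrative assumptions.

```python
import numpy as np

def mcp_threshold(z, lam, gamma):
    # Closed-form minimizer of 0.5*(b - z)^2 + MCP(b; lam, gamma), gamma > 1.
    if abs(z) <= gamma * lam:
        return np.sign(z) * max(abs(z) - lam, 0.0) / (1.0 - 1.0 / gamma)
    return z  # beyond gamma*lam the MCP penalty is flat, so z is unshrunk

def cmd_mcp(X, y, lam, gamma=3.0, n_sweeps=100):
    # Coordinate-wise majorization descent for MCP-penalized least squares.
    # Assumes the columns of X are standardized (mean 0, variance 1), so each
    # coordinate-wise quadratic majorizer has unit curvature.
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta  # current residual
    for _ in range(n_sweeps):
        for j in range(p):
            z = beta[j] + X[:, j] @ r / n      # gradient step along coordinate j
            b_new = mcp_threshold(z, lam, gamma)
            r += X[:, j] * (beta[j] - b_new)   # incremental residual update
            beta[j] = b_new
    return beta
```

Each inner step minimizes a one-dimensional quadratic surrogate of the loss plus the exact MCP penalty, which is what distinguishes a majorization step from a plain coordinate descent step on the original objective.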
Disclosure statement
No potential conflict of interest was reported by the author(s).
Correction Statement
This article has been republished with minor changes. These changes do not impact the academic content of the article.