ABSTRACT
Support vector machine (SVM) has proved to be a successful approach for machine learning. Two typical SVM models are the L1-loss model for support vector classification (SVC) and the ε-L1-loss model for support vector regression (SVR). Due to the non-smoothness of the L1-loss function in the two models, most of the traditional approaches focus on solving the dual problem. In this paper, we propose an augmented Lagrangian method for the L1-loss model, which is designed to solve the primal problem. By tackling the non-smooth term in the model with Moreau–Yosida regularization and the proximal operator, the subproblem in the augmented Lagrangian method reduces to a non-smooth linear system, which can be solved via the quadratically convergent semismooth Newton's method. Moreover, the high computational cost of semismooth Newton's method can be significantly reduced by exploiting the sparse structure in the generalized Jacobian. Numerical results on various datasets in LIBLINEAR show that the proposed method is competitive with the most popular solvers in both speed and accuracy.
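To make the Moreau–Yosida/proximal step mentioned above concrete, the following is a minimal sketch, not the paper's implementation, of the proximal operator of a hinge-type L1 loss written elementwise as f(u) = C·max(0, u), e.g. after the substitution u_i = 1 − y_i w⊤x_i in the SVC model. The function name prox_l1_loss and the parameters sigma (the prox parameter) and C (the penalty) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_l1_loss(v, sigma, C):
    """Elementwise proximal operator of f(u) = C * max(0, u):
        argmin_u  C * max(0, u) + (1 / (2 * sigma)) * (u - v)**2.
    Closed form (illustrative derivation, not from the paper):
        v - sigma*C  if v > sigma*C;  0  if 0 <= v <= sigma*C;  v  if v < 0.
    """
    return np.where(v > sigma * C, v - sigma * C, np.minimum(v, 0.0))

# Quick numerical check of the closed form against a brute-force
# grid minimization of the prox objective.
rng = np.random.default_rng(0)
v = rng.normal(size=5)
sigma, C = 0.7, 1.5
u_grid = np.linspace(-5.0, 5.0, 200001)
for vi, pi in zip(v, prox_l1_loss(v, sigma, C)):
    obj = C * np.maximum(0.0, u_grid) + (u_grid - vi) ** 2 / (2 * sigma)
    assert abs(u_grid[np.argmin(obj)] - pi) < 1e-3
print(prox_l1_loss(v, sigma, C))
```

The piecewise-linear form of this operator is what makes the generalized Jacobian in the semismooth Newton step sparse and cheap to apply, which is the structure the abstract refers to.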
Acknowledgments
We would like to thank the three anonymous reviewers for their valuable comments, which helped improve the paper significantly. We would also like to thank Prof. Chao Ding from the Chinese Academy of Sciences for providing the important reference of Prof. Jie Sun's PhD thesis, and Dr. Xudong Li from Fudan University for helpful discussions.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 For the same reason as in the L1-loss SVC model, we omit the bias term b.
Additional information
Funding
Notes on contributors
Yinqiao Yan
Yinqiao Yan received his Bachelor of Science degree in statistics from Beijing Institute of Technology, China, in 2019. He is pursuing his PhD degree at the Institute of Statistics and Big Data, Renmin University of China.
Qingna Li
Qingna Li received her Bachelor's degree in information and computing science and her Doctorate in computational mathematics from Hunan University, China, in 2005 and 2010, respectively. Currently, she is an associate professor at the School of Mathematics and Statistics, Beijing Institute of Technology. Her research interests include continuous optimization and its applications.