Abstract
Outliers occur widely in big-data applications and can severely affect statistical estimation and inference. In this article, a framework of outlier-resistant estimation is introduced to robustify an arbitrarily given loss function. It has a close connection to the method of trimming and includes explicit outlyingness parameters for all samples, which in turn facilitates computation, theory, and parameter tuning. To tackle the issues of nonconvexity and nonsmoothness, we develop scalable algorithms that are easy to implement and enjoy guaranteed fast convergence. In particular, a new technique is proposed to relax the requirement on the starting point, so that on regular datasets the number of data resamplings can be substantially reduced. Combining statistical and computational treatments, we are able to perform nonasymptotic analysis beyond M-estimation. The obtained resistant estimators, though not necessarily globally or even locally optimal, enjoy minimax rate optimality in both low and high dimensions. Experiments in regression, classification, and neural networks demonstrate the excellent performance of the proposed methodology in the presence of gross outliers. Supplementary materials for this article are available online.
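The trimming connection mentioned in the abstract can be illustrated with a minimal sketch. The function `trimmed_ls` below is a hypothetical name, not from the paper: it robustifies the squared-error loss by introducing a per-sample outlyingness parameter gamma_i and alternating between a least-squares refit and a hard-thresholding update of gamma. This is only an illustrative instance of trimming under assumed settings (known number of outliers, squared loss), not the article's actual algorithm.

```python
import numpy as np

def trimmed_ls(X, y, n_outliers, n_iter=50):
    """Illustrative trimmed least squares via alternating minimization.

    Alternates between (i) an OLS fit on the outlier-adjusted responses
    y - gamma and (ii) a hard-thresholding update of the outlyingness
    parameters gamma, which are nonzero only for the n_outliers samples
    with the largest current residuals.
    """
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from plain OLS
    gamma = np.zeros(n)
    for _ in range(n_iter):
        r = y - X @ beta
        gamma = np.zeros(n)
        idx = np.argsort(np.abs(r))[-n_outliers:]  # flag largest residuals
        gamma[idx] = r[idx]                        # absorb them into gamma
        beta = np.linalg.lstsq(X, y - gamma, rcond=None)[0]
    return beta, gamma
```

In a toy setting with a fraction of responses shifted by a large constant, this alternating scheme typically flags exactly the shifted samples (their gamma entries become nonzero) and recovers a near-clean least-squares fit, whereas plain OLS is visibly biased by the contamination.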
Supplementary Materials
The supplementary materials provide all proof details for the theorems, as well as further discussion of the tails of the effective noise and a notion of stochastic breakdown point. Additional simulation results in different setups are also included. Matlab implementations of the proposed algorithms are available online.
Acknowledgments
The first author would like to thank Weixin Yao for helpful discussions, Dongrui Zhong for his assistance in the real data analysis, and Peter Rousseeuw, in particular, for his inspiration. We would like to thank the editor, the associate editor, and the anonymous referees for their careful comments and useful suggestions that significantly improved the quality of the article.