Abstract
We develop a new globally convergent optimization method for solving the constrained minimization problem underlying the minimum density power divergence estimator for univariate Gaussian data in the presence of outliers. Our hybrid procedure combines the classical Newton method with a gradient descent iteration equipped with a step control mechanism based on Armijo's rule to ensure global convergence. Extensive simulations comparing the resulting estimation procedure with a prominent robust competitor, the Minimum Covariance Determinant (MCD) estimator, across a wide range of breakdown point values suggest improved efficiency of our method. An application to estimation and inference for a real-world dataset is also presented.
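The hybrid iteration described above can be sketched in generic form as follows. This is a minimal illustration, not the authors' exact procedure: it assumes a safeguarded Newton step that falls back to the negative gradient whenever the Newton direction fails to be a descent direction (or the Hessian is singular), with Armijo backtracking supplying the step control; all function names and tolerances here are illustrative.

```python
import numpy as np

def hybrid_newton(f, grad, hess, x0, tol=1e-8, max_iter=100,
                  c1=1e-4, beta=0.5):
    """Illustrative hybrid iteration: Newton step with a gradient
    descent fallback, globalized by Armijo backtracking line search.
    (A sketch of the general technique, not the paper's algorithm.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # gradient small: stop
            break
        try:
            d = np.linalg.solve(hess(x), -g)  # Newton direction
            if g @ d >= 0:                    # not a descent direction
                d = -g                        # fall back to steepest descent
        except np.linalg.LinAlgError:
            d = -g                            # singular Hessian: fall back
        # Armijo rule: shrink the step until sufficient decrease holds
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + c1 * t * (g @ d):
            t *= beta
        x = x + t * d
    return x
```

For instance, applied to a smooth nonconvex test objective such as the Rosenbrock function, the iteration retains Newton's fast local convergence near the minimizer while the Armijo safeguard keeps early, badly scaled steps under control.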
Acknowledgements
The authors thank Mr. Guillermo Lopez-Ramirez for tabulating the 2004–2017 City of El Paso (TX) Department of Public Health notifiable conditions data used in Section 6. The authors also acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing the high-performance computing resources used extensively for the simulations reported in this manuscript. URL: http://www.tacc.utexas.edu. Comments and suggestions from the Editor-in-Chief, Professor Narayanaswamy Balakrishnan, the Associate Editor, and an anonymous referee are greatly appreciated.
Disclosure Statement
The authors have no conflicts of interest to declare.
Data Availability Statement
All data presented in the article are available in the Supplementary Material.