Abstract
Recent advances in mathematical programming have made mixed integer optimization a competitive alternative to popular regularization methods for selecting features in regression problems. The approach exhibits unquestionable foundational appeal and versatility, but also poses important challenges. Here, we propose MIP-BOOST, a revision of standard mixed integer programming feature selection that reduces the computational burden of tuning the critical sparsity bound parameter and improves performance in the presence of feature collinearity and of signals that vary in nature and strength. The final outcome is a more efficient and effective L0 feature selection method for applications of realistic size and complexity, grounded on rigorous cross-validation tuning and exact optimization of the associated mixed integer program. Computational viability and improved performance in realistic scenarios are achieved through three independent but synergistic proposals. Supplementary materials including additional results, pseudocode, and computer code are available online.
Supplementary Materials
Supplement-MIP-BOOST.pdf: Contains pseudocode, additional simulation results, and a convergence proof sketch for bisection with feelers.
MIPBOOSTex.jl: Julia code running a simulated example of the methods described in the article. Contains functions for each component for general use.
Acknowledgments
We thank Matthew Reimherr for useful discussions and comments.
Notes
1 Calculations were performed on the Roar Supercomputer at Penn State, using the basic memory option on the ACI-B cluster with an Intel Xeon 24 core processor at 2.2 GHz and 128 GB of RAM. The multi-thread option in Gurobi was limited to a maximum of 5 threads for consistency across settings.
2 For SNR 10, MIP-BOOST was run with a more conservative threshold for BF; a quick LASSO run that selects a very large number of features can inexpensively diagnose the need for a restrictive BF. We stress, however, that the BF used in MIP-BOOST does not suffer from the lack of specificity that hinders the LASSO at high SNRs. As the SNR increases, the true k0 in fact becomes easier for MIP to identify; the issue is simply that the BF search may be pushed toward the right of the k range if seemingly large differences in CVMSE are not “ignored” by setting a stricter threshold. In , the elbow at 10 is clearly visible when , but the strong signal also produces distinct drops in the CVMSE at large k values. A more restrictive threshold easily improves solution quality. In contrast, a more conservative tuning for the LASSO (e.g., the parsimonious LSD) still fails to increase specificity.
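To illustrate the thresholding idea in this note, the following is a minimal, hypothetical Julia sketch (it is not the BF implementation from MIPBOOSTex.jl): a larger sparsity bound k is accepted only when it improves the cross-validated MSE by more than a relative threshold `tau`, so that the small but "distinct" drops at large k produced by a strong signal are ignored. The function name `pick_k` and the example CVMSE values are invented for illustration.

```julia
# Hypothetical sketch of a thresholded elbow rule over CVMSE values.
# cvmse[k] is assumed to hold the cross-validated MSE at sparsity level k;
# tau is the relative-improvement threshold (stricter, i.e. larger, at high SNR).
function pick_k(cvmse::Vector{Float64}, tau::Float64)
    k = 1
    for j in 2:length(cvmse)
        # Accept a larger sparsity bound only if it lowers CVMSE by more
        # than a fraction tau of the current best; smaller drops are ignored.
        if (cvmse[k] - cvmse[j]) / cvmse[k] > tau
            k = j
        end
    end
    return k
end

# Example: a clear elbow at k = 3; with tau = 0.05 the small later
# drops are ignored and the rule stops at 3.
pick_k([10.0, 6.0, 3.0, 2.9, 2.85], 0.05)
```

With a looser threshold (e.g., `tau = 0.01`) the same scan would be pulled toward larger k by the residual drops, which is precisely the behavior a more restrictive threshold guards against at high SNR.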