Abstract
We test the comparative ability of representative machine-learning algorithms – Logistic Regression, Random Forest Classifier, AdaBoost Classifier and Multi-Layer Perceptron Classifier – to predict the likelihood that an acquirer will be forcibly delisted for performance reasons after the close of a deal. We find that the Multi-Layer Perceptron Classifier, AdaBoost and Random Forest perform similarly, while Logistic Regression is the poorest performer among the models we study. On feature importance, the results suggest that firm size, leverage, and profitability are the most informative features for predicting the likelihood of performance-induced delisting. Deal-related characteristics and agency problems do not drive performance-induced involuntary delisting of acquirers. Taken together, the results suggest that acquirers delisted for performance-induced reasons within five years post-merger were already poor performers pre-merger, and their condition likely worsened because they undertook a merger they should not have pursued.
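The comparison described above can be sketched with scikit-learn, which provides all four classifier families named in the abstract. This is an illustrative sketch only: the synthetic data, class imbalance, and evaluation metric are assumptions, not the paper's sample or specification.

```python
# Hypothetical sketch of the four-model comparison; data and settings are
# illustrative, not the paper's actual sample or hyperparameters.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# An imbalanced binary target mimics the rarity of forced delisting.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "Random Forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(max_iter=300, random_state=0)),
}

# Balanced accuracy is one metric robust to class imbalance (see note 7).
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name}: {scores.mean():.3f}")
```

Scaling matters for the Logistic Regression and MLP pipelines but not for the tree-based ensembles, which is why only those two are wrapped with a `StandardScaler` here.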
Disclosure statement
No potential conflict of interest was reported by the authors. This study is partially supported by a Korea University Business School Research Grant. We are grateful to the editor and two anonymous reviewers, whose valuable comments improved this paper.
Notes
1 See https://www.metrowestdailynews.com/story/business/2009/10/27/fairpoint-files-chapter-11/41339258007/ and https://www.reuters.com/article/idUSBNG480400/
2 Listing on an exchange in the first place involves non-trivial costs: registration costs and underwriting fees, annual listing fees imposed by exchanges, and trading, compliance, and agency costs.
3 As a robustness check, we winsorize leverage and profitability at both the 1% and 5% levels; the results remain qualitatively the same in both instances, confirming that our results are not driven by outliers in these variables.
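Winsorization as described in note 3 can be sketched as follows; the variable name and distribution are illustrative assumptions, not the paper's data.

```python
# Illustrative winsorization at symmetric percentile cutoffs; "leverage"
# is a hypothetical skewed series, not the paper's actual variable.
import numpy as np

def winsorize(x, lower_pct=1, upper_pct=99):
    """Clip values below/above the given percentiles (1% winsorization
    by default, matching the note's first specification)."""
    lo, hi = np.percentile(x, [lower_pct, upper_pct])
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
leverage = rng.lognormal(size=1000)        # right-skewed, with outliers
leverage_w1 = winsorize(leverage)          # 1% winsorization
leverage_w5 = winsorize(leverage, 5, 95)   # 5% winsorization
```

Unlike trimming, winsorization keeps every observation but caps extreme values, so the sample size is unchanged while outlier influence is reduced.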
4 For brevity, we do not discuss the theory behind these models. See Hastie et al. (Citation2009) for an excellent discussion.
5 Though not reported, we also obtain qualitatively similar results when we tune the hyperparameters of the machine-learning methodologies using balanced F-score, which is also robust to the class imbalance in classification problems (Onan, Citation2019; Kelleher et al., Citation2020). The results for the models selected using balanced F-score are readily available upon request from the authors.
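Tuning with the balanced F-score as the selection metric, as in note 5, can be sketched with scikit-learn's `GridSearchCV`; the model, grid, and data below are assumptions for illustration, not the paper's specification.

```python
# Sketch of hyperparameter tuning scored by F1 (the balanced F-score,
# i.e. F-beta with beta = 1); grid and data are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [3, None]},
    scoring="f1",   # F1 is less distorted by class imbalance than accuracy
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Swapping `scoring="f1"` for `scoring="balanced_accuracy"` reproduces the alternative selection criterion mentioned in note 7.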
6 Note that this method is not compatible with models such as MLP that do not inherently produce coefficients or feature importances.
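The incompatibility mentioned in note 6 can be seen directly in scikit-learn (used here purely for illustration; the paper does not name its implementation): fitted tree ensembles expose `feature_importances_` and linear models expose `coef_`, but `MLPClassifier` exposes neither attribute.

```python
# Illustrates note 6: which fitted estimators expose importances or
# coefficients. Data is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)
mlp = MLPClassifier(max_iter=300, random_state=0).fit(X, y)

print(hasattr(rf, "feature_importances_"))   # True
print(hasattr(lr, "coef_"))                  # True
# The MLP stores raw layer weights (coefs_) but no single importance
# or coefficient vector per feature:
print(hasattr(mlp, "feature_importances_"), hasattr(mlp, "coef_"))  # False False
```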
7 The results remain qualitatively similar whether we maximize accuracy or balanced accuracy during cross-validation.