Research Article

On modeling acquirer delisting post-merger using machine learning techniques

Received 02 Mar 2023, Accepted 24 Apr 2024, Published online: 20 May 2024
 

Abstract

We test the comparative ability of representative machine-learning algorithms – Logistic Regression, Random Forest Classifier, AdaBoost Classifier and Multi-Layer Perceptron Classifier – to predict the likelihood that an acquirer will be forcibly delisted for performance reasons after the close of a deal. We find that the Multi-Layer Perceptron, AdaBoost and Random Forest classifiers perform similarly, while Logistic Regression performs the worst among the models we study. The feature-importance results suggest that firm size, leverage and profitability are the most informative features for predicting the likelihood of performance-induced delisting. Deal-related characteristics and agency problems do not drive performance-induced involuntary delisting of acquirers. Taken together, the results suggest that acquirers delisted within five years post-merger for performance-induced reasons were already poor-performing firms pre-merger, and their condition was likely worsened by undertaking a merger they should not have undertaken.
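For illustration only, the sketch below (not the authors' code; the file name, features and model settings are placeholders) shows how a comparison of the four classifiers could be set up in scikit-learn, scoring each model on held-out data:

```python
# Illustrative sketch only: placeholder data path and default hyperparameters,
# not the paper's actual sample, variables or tuned models.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import balanced_accuracy_score

# Hypothetical input: one row per acquirer, "delisted" = 1 if the firm was
# involuntarily delisted for performance reasons within five years of the deal.
df = pd.read_csv("acquirer_sample.csv")  # placeholder path
X, y = df.drop(columns=["delisted"]), df["delisted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "Random Forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    score = balanced_accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: balanced accuracy = {score:.3f}")
```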


Disclosure statement

No potential conflict of interest was reported by the authors. This study is partially supported by a Korea University Business School Research Grant. We are grateful to the editor and two anonymous reviewers for valuable comments that improved this paper.

Notes

2 Listing on an exchange in the first place involves non-trivial costs: registration and underwriting fees, annual listing fees imposed by exchanges, and trading, compliance and agency costs.

3 As a robustness check, we winsorize leverage and profitability at the 1% and 5% levels; the results remain qualitatively the same in both cases, confirming that they are not driven by outliers in these variables.
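A minimal sketch of such a winsorization check, assuming pandas/SciPy and toy heavy-tailed data in place of the paper's sample:

```python
# Illustrative only: toy data stand in for the paper's leverage and
# profitability variables; winsorize clips each tail at the given fraction.
import numpy as np
import pandas as pd
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "leverage": rng.lognormal(size=1_000),
    "profitability": rng.normal(size=1_000),
})

for limit in (0.01, 0.05):
    trimmed = df.copy()
    for col in ("leverage", "profitability"):
        trimmed[col] = winsorize(df[col].to_numpy(), limits=(limit, limit))
    # ...refit the models on `trimmed` and verify the results are unchanged...
```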

4 For brevity, we do not discuss the theory behind these models. See Hastie et al. (2009) for an excellent discussion.

5 Though not reported, we also obtain qualitatively similar results when we tune the hyperparameters of the machine-learning methodologies using the balanced F-score, which is also robust to class imbalance in classification problems (Onan, 2019; Kelleher et al., 2020). The results for the models selected using the balanced F-score are available from the authors upon request.
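A hedged sketch of what such tuning could look like (not the authors' grid or estimator; the parameter values are illustrative, and scikit-learn's "f1" scorer is used here as the balanced, beta = 1 F-score):

```python
# Illustrative hyperparameter search scored with the F1 (balanced F) measure.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {"n_estimators": [100, 300], "max_depth": [3, 5, None]}  # illustrative grid

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="f1",  # F-score is less sensitive to class imbalance than plain accuracy
    cv=5,
)
# search.fit(X_train, y_train) then selects the hyperparameters with the best F1.
```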

6 Note that this method is not compatible with models such as MLP that do not inherently produce coefficients or feature importances.
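For example, in scikit-learn the linear and tree-ensemble models expose per-feature coefficients or impurity-based importances directly, while MLPClassifier does not (toy data below are for illustration only):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

print(LogisticRegression(max_iter=1000).fit(X, y).coef_)                      # per-feature coefficients
print(RandomForestClassifier(random_state=0).fit(X, y).feature_importances_)  # impurity-based importances
mlp = MLPClassifier(max_iter=500, random_state=0).fit(X, y)
print(hasattr(mlp, "coef_"), hasattr(mlp, "feature_importances_"))            # False False
```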

7 The results remain qualitatively similar whether we maximize accuracy or balanced accuracy during cross-validation.
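A small sketch of that comparison, using an imbalanced toy dataset rather than the paper's sample:

```python
# Illustrative only: the same model cross-validated under both scoring rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)  # imbalanced toy data
clf = RandomForestClassifier(random_state=0)

for metric in ("accuracy", "balanced_accuracy"):
    scores = cross_val_score(clf, X, y, scoring=metric, cv=5)
    print(f"{metric}: {scores.mean():.3f}")
```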
