Original Articles

The TreeRank Tournament algorithm for multipartite ranking

Pages 107-126 | Received 27 Nov 2013, Accepted 12 Sep 2014, Published online: 10 Oct 2014
 

Abstract

Whereas various efficient learning algorithms have recently been proposed to perform bipartite ranking tasks, cast as receiver operating characteristic (ROC) curve optimisation, no method fully tailored to K-partite ranking when K≥3 has yet been documented in the statistical learning literature. The goal is to optimise the ROC manifold, or summary criteria such as its volume, the gold standard for assessing performance in K-partite ranking. The main purpose of this paper is to describe at length an efficient approach to recursive maximisation of the ROC surface, extending the TreeRank methodology originally tailored for the bipartite situation (i.e. when K=2). The main barrier arises from the fact that, in contrast to the bipartite case, the volume under the ROC surface criterion, evaluated at any scoring rule taking K≥3 values, cannot be interpreted as a cost-sensitive misclassification error, so that no method is readily available to perform the recursive optimisation stage. The learning algorithm we propose, called TreeRank Tournament (referred to as ‘TRT’ in the tables), overcomes this barrier and recursively builds an ordered partition of the feature space. It defines a piecewise constant scoring function whose ROC manifold can, remarkably, be interpreted as a statistical version of an adaptive piecewise linear approximant of the optimal ROC manifold. Rate bounds in sup norm describing the generalisation ability of the scoring rule thus built are established, and numerical results illustrating the performance of the TRT approach, compared with that of natural competitors such as aggregation methods, are also displayed.
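To make the construction described above more concrete, here is a minimal sketch (not the paper's TRT implementation; all names are hypothetical) of how an ordered partition of the feature space induces a piecewise constant scoring function: cells listed earlier in the order receive higher scores, so the rank of an observation is determined solely by the cell it falls into.

```python
# Hypothetical sketch, not the paper's code: a piecewise constant scoring rule
# induced by an ordered partition C_1 > C_2 > ... > C_J of the feature space,
# each cell represented here by a boolean membership function.

def make_partition_scorer(cells):
    """cells: membership tests, ordered from highest-ranked cell to lowest."""
    J = len(cells)

    def score(x):
        for j, in_cell in enumerate(cells):
            if in_cell(x):
                return J - j  # earlier cells get larger scores
        return 0  # x outside every cell (cannot happen for a true partition)

    return score

# Toy ordered partition of the real line into three cells.
cells = [lambda x: x > 2.0, lambda x: 0.0 < x <= 2.0, lambda x: x <= 0.0]
s = make_partition_scorer(cells)
print([s(x) for x in (3.1, 1.0, -0.5)])  # -> [3, 2, 1]
```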


Notes

1. Recall that, by definition, a càd-làg function h is such that h(t−) = lim_{s↑t} h(s) exists for all t∈]0, 1] and h(t) = lim_{s↓t} h(s) for all t∈[0, 1[. Its completed graph is obtained by connecting the points (t, h(t−)) and (t, h(t)), when they are not equal, by a vertical line segment and thus forms a continuous curve.
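As a toy illustration of the completed graph just described (an empirical ROC curve, for instance, is a càd-làg step function), the hypothetical snippet below lists the points of the completed graph of a step function given by its breakpoints: at every jump both (t, h(t−)) and (t, h(t)) are kept, which traces the vertical connecting segment.

```python
# Toy illustration (hypothetical helper): the completed graph of a càd-làg step
# function h on [0, 1], constant on each interval [t_j, t_{j+1}).

def completed_graph(breakpoints, values):
    """breakpoints: 0 = t_0 < t_1 < ... < t_m = 1; values[j] = h on [t_j, t_{j+1})."""
    pts = [(breakpoints[0], values[0])]
    for j in range(1, len(values)):
        t = breakpoints[j]
        pts.append((t, values[j - 1]))  # left limit h(t-)
        pts.append((t, values[j]))      # value h(t); vertical segment if different
    pts.append((breakpoints[-1], values[-1]))
    return pts

print(completed_graph([0.0, 0.4, 0.7, 1.0], [0.0, 0.5, 1.0]))
# -> [(0.0, 0.0), (0.4, 0.0), (0.4, 0.5), (0.7, 0.5), (0.7, 1.0), (1.0, 1.0)]
```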

2. Let (X′, Y′) be a random pair, where Y′ takes binary values, in {−1, +1} say, and X′ models some information valued in a space 𝒳′, hopefully useful to predict the label Y′. A classifier is any measurable mapping g: 𝒳′ → {−1, +1}. Let p = P{Y′ = +1}. Given a cost ω∈[0, 1], the cost-sensitive error of g is L_ω(g) = 2(1−ω)·p·P{g(X′) = −1 | Y′ = +1} + 2ω·(1−p)·P{g(X′) = +1 | Y′ = −1}. The quantity L_{1/2}(g) = P{g(X′) ≠ Y′} is generally referred to as the error of classifier g.
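For illustration only (the names are hypothetical and this is not code from the paper), an empirical counterpart of the cost-sensitive error defined above can be computed as follows; with ω = 1/2 it reduces to the usual misclassification rate.

```python
import numpy as np

# Hypothetical sketch: empirical cost-sensitive error for labels in {-1, +1}.
# Joint frequencies are used, since p * P{g(X')=-1 | Y'=+1} = P{g(X')=-1, Y'=+1}.

def cost_sensitive_error(y_true, y_pred, omega):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    false_neg = np.mean((y_pred == -1) & (y_true == +1))  # estimates P{g(X')=-1, Y'=+1}
    false_pos = np.mean((y_pred == +1) & (y_true == -1))  # estimates P{g(X')=+1, Y'=-1}
    return 2 * (1 - omega) * false_neg + 2 * omega * false_pos

y_true = [+1, +1, -1, -1, +1]
y_pred = [+1, -1, -1, +1, +1]
print(cost_sensitive_error(y_true, y_pred, 0.5))  # 0.4 = misclassification rate
```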
