
The TreeRank Tournament algorithm for multipartite ranking

Pages 107-126 | Received 27 Nov 2013, Accepted 12 Sep 2014, Published online: 10 Oct 2014
 

Abstract

Whereas various efficient learning algorithms have recently been proposed to perform bipartite ranking tasks, cast as receiver operating characteristic (ROC) curve optimisation, no method fully tailored to K-partite ranking with K≥3 has yet been documented in the statistical learning literature. The goal there is to optimise the ROC manifold, or summary criteria such as its volume, the gold standard for assessing performance in K-partite ranking. The main purpose of this paper is to describe at length an efficient approach to recursive maximisation of the ROC surface, extending the TreeRank methodology originally tailored for the bipartite situation (i.e. when K=2). The main barrier arises from the fact that, in contrast to the bipartite case, the volume under the ROC surface criterion of a scoring rule taking K≥3 values cannot be interpreted as a cost-sensitive misclassification error, so that no method is readily available to perform the recursive optimisation stage. The learning algorithm we propose, called TreeRank Tournament (referred to as ‘TRT’ in the tables), breaks this barrier and recursively builds an ordered partition of the feature space. It defines a piecewise scoring function whose ROC manifold can remarkably be interpreted as a statistical version of an adaptive piecewise linear approximant of the optimal ROC manifold. Rate bounds in sup norm describing the generalisation ability of the scoring rule thus built are established, and numerical results illustrating the performance of the TRT approach, compared with that of natural competitors such as aggregation methods, are also displayed.
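To make the volume under the ROC surface (VUS) criterion mentioned above concrete, here is a minimal sketch (plain NumPy; the function name empirical_vus_3class and the ordering convention class 1 < class 2 < class 3 are choices made here for illustration, not taken from the paper). It uses the fact that for K=3 the VUS of a scoring rule equals the probability that three independent observations drawn from classes 1, 2 and 3 are scored in increasing order.

```python
import numpy as np

def empirical_vus_3class(scores, labels):
    """Empirical volume under the ROC surface (VUS) for K = 3 ordered classes.

    Estimates P{s(X1) < s(X2) < s(X3)} by brute force over all cross-class
    triples, Xk being drawn from class k.  Ties are counted as errors here,
    which is one possible convention (an assumption of this sketch).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    s1 = scores[labels == 1]          # scores of class-1 (lowest) observations
    s2 = scores[labels == 2]          # scores of class-2 observations
    s3 = scores[labels == 3]          # scores of class-3 (highest) observations
    # Broadcast to compare every (x1, x2, x3) triple at once.
    correct = (s1[:, None, None] < s2[None, :, None]) & \
              (s2[None, :, None] < s3[None, None, :])
    return correct.mean()

# Toy usage: a scoring rule that roughly respects the class ordering.
rng = np.random.default_rng(0)
labels = np.repeat([1, 2, 3], 50)
scores = labels + rng.normal(scale=0.8, size=labels.size)
print(empirical_vus_3class(scores, labels))   # close to 1 means good K-partite ranking
```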

AMS Subject Classification:

Notes

1. Recall that, by definition, a càd-làg function h: [0, 1] → ℝ is such that the left limit h(t−) = lim_{s↑t} h(s) exists for all t∈]0, 1] and h(t+) = lim_{s↓t} h(s) = h(t) for all t∈[0, 1[. Its completed graph is obtained by connecting the points (t, h(t−)) and (t, h(t)), when they are not equal, by a vertical line segment and thus forms a continuous curve.
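For illustration, a minimal sketch (a hypothetical helper, not taken from the paper) of how the completed graph of a piecewise-constant càd-làg function, such as an empirical ROC curve, can be materialised: each jump location is emitted twice, once with the left limit h(t−) and once with h(t), so that drawing the resulting polyline inserts the vertical segments described above.

```python
import numpy as np

def completed_graph(ts, values):
    """Completed graph of a càd-làg step function on [0, 1] (illustrative helper).

    `ts` are the sorted jump locations in ]0, 1[ and `values` the function
    values on the successive intervals [0, t1[, [t1, t2[, ..., [tk, 1], so
    len(values) == len(ts) + 1.  At each jump t both (t, h(t-)) and (t, h(t))
    are emitted, which yields the vertical segment when the polyline is drawn.
    """
    xs, ys = [0.0], [values[0]]
    for t, left, right in zip(ts, values[:-1], values[1:]):
        xs += [t, t]          # same abscissa twice: vertical segment at the jump
        ys += [left, right]   # (t, h(t-)) then (t, h(t))
    xs.append(1.0)
    ys.append(values[-1])
    return np.array(xs), np.array(ys)

# Step function equal to 0 on [0, 0.3[, 0.6 on [0.3, 0.7[ and 1 on [0.7, 1].
xs, ys = completed_graph([0.3, 0.7], [0.0, 0.6, 1.0])
print(list(zip(xs, ys)))
```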

2. Let (X′, Y′) be a random pair, where Y′ takes binary values, in {−1, +1} say, and X′ models some information valued in a space 𝒳′, hopefully useful to predict the label Y′. A classifier is any measurable mapping g: 𝒳′ → {−1, +1}. Let p′ = ℙ{Y′ = +1}. Given a cost ω∈[0, 1], the cost-sensitive error of g is L_ω(g) = 2(1−ω) p′ ℙ{g(X′) = −1 ∣ Y′ = +1} + 2ω (1−p′) ℙ{g(X′) = +1 ∣ Y′ = −1}. The quantity L_{1/2}(g) is generally referred to as the error of classifier g.
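With the 2(1−ω)/2ω normalisation above, the claim that L_{1/2}(g) coincides with the usual misclassification error ℙ{g(X′) ≠ Y′} follows from a one-line computation:

```latex
\begin{align*}
L_{1/2}(g)
  &= p'\,\mathbb{P}\{g(X') = -1 \mid Y' = +1\}
   + (1 - p')\,\mathbb{P}\{g(X') = +1 \mid Y' = -1\} \\
  &= \mathbb{P}\{g(X') = -1,\; Y' = +1\}
   + \mathbb{P}\{g(X') = +1,\; Y' = -1\} \\
  &= \mathbb{P}\{g(X') \neq Y'\}.
\end{align*}
```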
