Theory and Methods

Robust Improper Maximum Likelihood: Tuning, Computation, and a Comparison With Other Methods for Robust Gaussian Clustering

Pietro Coretto & Christian Hennig
Pages 1648-1659 | Received 01 Jul 2014, Published online: 04 Jan 2017

ABSTRACT

The two main topics of this article are the introduction of the “optimally tuned robust improper maximum likelihood estimator” (OTRIMLE) for robust clustering based on the multivariate Gaussian model for clusters, and a comprehensive simulation study comparing the OTRIMLE to maximum likelihood in Gaussian mixtures with and without noise component, mixtures of t-distributions, and the TCLUST approach for trimmed clustering. The OTRIMLE uses an improper constant density for modeling outliers and noise. This can be chosen optimally so that the nonnoise part of the data looks as close to a Gaussian mixture as possible. Some deviation from Gaussianity can be traded in for lowering the estimated noise proportion. Covariance matrix constraints and computation of the OTRIMLE are also treated. In the simulation study, all methods are confronted with setups in which their model assumptions are not exactly fulfilled, and to evaluate the experiments in a standardized way by misclassification rates, a new model-based definition of “true clusters” is introduced that deviates from the usual identification of mixture components with clusters. In the study, every method turns out to be superior for one or more setups, but the OTRIMLE achieves the most satisfactory overall performance. The methods are also applied to two real datasets, one without and one with known “true” clusters. Supplementary materials for this article are available online.

1. Introduction

In this article, we introduce and investigate the “optimally tuned robust improper maximum likelihood estimator” (OTRIMLE), a method for robust clustering with clusters that can be approximated by multivariate Gaussian distributions. Its one-dimensional version was introduced in Coretto and Hennig (Citation2010). We also present a simulation study comparing OTRIMLE and other approaches for (mostly robust) model-based clustering, which is, to our knowledge, the most comprehensive study in the field and involves a careful discussion of the issue of comparing methods based on different model assumptions.

The basic idea of OTRIMLE is to fit an improper density to the data that is made up by a Gaussian mixture density and a “pseudo mixture component” defined by a small constant density, which is meant to capture outliers in low density areas of the data. This is inspired by the addition of a uniform “noise component” to a Gaussian mixture (Banfield and Raftery Citation1993). Hennig (Citation2004) showed that using an improper density improves the breakdown robustness of this approach. The OTRIMLE has been found to work well for one-dimensional data in Coretto and Hennig (Citation2010).

As in many other statistical problems, violations of the model assumptions and particularly outliers may cause problems in cluster analysis. Our general attitude to the use of statistical models in cluster analysis is that the models should not be understood as reflecting some underlying but in practice unobservable “truth,” but rather as thought constructs implying a certain behavior of methods derived from them (e.g., maximizing the likelihood), which may or may not be appropriate in a given application (more details on the general philosophy of clustering can be found in Hennig and Liao Citation2013). Using a model such as a mixture of multivariate Gaussian distributions, interpreting every mixture component as a “cluster,” implies that we look for clusters that are approximately “Gaussian-shaped,” but we do not want to rely on whether the data really were generated iid by a Gaussian mixture. We are interested in the performance of such methods in situations where one may legitimately look for Gaussian-shaped clusters, even if some data points do not belong to such clusters (called “noise” in the following), and even if the clusters are not precisely Gaussian. This reflects the fact that in practice, for example, mixtures of t-distributions are used for clustering the same datasets to which Gaussian mixtures are fitted as well, interpreting the resulting clusters in the same way.

For illustration of the outlier problem in model-based clustering, we use a five-dimensional dataset in which the 170 districts of the German city of Dortmund are characterized by a number of variables, which is discussed in detail in Section 6.1. Fitting a plain Gaussian mixture with G = 4 to all five variables by R’s MCLUST package (Fraley et al. Citation2012), one cluster is a one-point cluster consisting only of an extreme outlier, and two further clusters fit two different varieties of moderate outliers. More than 120 districts are collected in a single cluster. The task of robust clustering is to avoid having many or even most clusters dominated by outliers, and to produce a meaningful clustering structure also among the main bulk of nonextreme observations.

A number of model-based clustering methods that can deal with outliers have been proposed in recent years. An overview of these methods is given in Section 2. The OTRIMLE is introduced and discussed in Section 3, starting from the “RIMLE,” in which the level of the improper constant density is a tuning constant. We then introduce a method for optimal tuning and discuss its computation. Section 5 presents a simulation study that uses a unified approach for defining elliptically shaped clusters with noise/outliers, presented in Section 4. The Dortmund dataset mentioned above is discussed in Section 6 along with a dataset of folk song melodies from two known regions. Some further issues including the estimation of the number of clusters are discussed in Section 7. Additional details about the example dataset (including full scatterplots), the simulation study, and computation of methods are provided in an online supplement (Coretto and Hennig Citationin press b). Theoretical properties of the RIMLE with the tuning constant fixed are investigated in Coretto and Hennig (Citationin press a) and cited here.

2. Methods from the Literature

In the following, assume an observed sample $\underline{x}_n = \{x_1, x_2, \ldots, x_n\}$, where $x_i$ is the realization of a random variable $X_i \in \mathbb{R}^p$ with $p \geq 1$, and $X_1, \ldots, X_n$ are iid. The goal is to cluster the sample points into $G$ distinct groups.

2.1. Maximum Likelihood (ML) for Gaussian Mixtures (gmix)

Let $\phi(x; \mu, \Sigma)$ be the density of a multivariate Gaussian distribution with mean vector $\mu \in \mathbb{R}^p$ and $p \times p$ covariance matrix $\Sigma$. Assume that the observed sample is drawn iid from the finite Gaussian mixture distribution having density
$$ m(x; \theta) = \sum_{j=1}^{G} \pi_j \phi(x; \mu_j, \Sigma_j), \qquad (2.1) $$
where $\pi_j \in [0, 1]$ for all $j = 1, 2, \ldots, G$ and $\sum_{j=1}^{G} \pi_j = 1$; $\theta$ is the parameter vector containing the triplets $(\pi_j, \mu_j, \Sigma_j)$ for all $j = 1, 2, \ldots, G$. Clustering coincides with assigning points to the mixture components based on ML parameter estimates. $\pi_j$ can be interpreted as the expected proportion of points originating from the $j$th component. Let $\theta_n^{\mathrm{ml}}$ be the ML estimator of $\theta$, usually computed by the expectation–maximization (EM) algorithm (Dempster et al. Citation1977). The ML estimator under (2.1) exists only under appropriate constraints on the covariance matrices. These constraints (which are also relevant for the methods introduced below) will be discussed in detail in Section 3. Let $\tau_{ij}^{\mathrm{ml}}$ be the estimated posterior probability that the observed point $x_i$ has been drawn from the $j$th mixture component, that is,
$$ \tau_{ij}^{\mathrm{ml}} = \frac{\pi_{j,n}^{\mathrm{ml}}\, \phi(x_i; \mu_{j,n}^{\mathrm{ml}}, \Sigma_{j,n}^{\mathrm{ml}})}{m(x_i; \theta_n^{\mathrm{ml}})} \quad \text{for all } j = 1, 2, \ldots, G. \qquad (2.2) $$
The point $x_i$ can then be assigned to the $k$th cluster if $k = \arg\max_{j = 1, 2, \ldots, G} \tau_{ij}^{\mathrm{ml}}$. This assignment method is common to all model-based clustering methods introduced here. gmix is implemented in R’s MCLUST package (Fraley et al. Citation2012). As illustrated in Section 1 and proven in Hennig (Citation2004), the method can be strongly affected by outliers and deviations from the model assumptions, and we now turn to approaches that attempt to deal with this problem. For lack of space, we present these in detail in the online supplement and only give a short overview here.
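For concreteness, here is a minimal base-R sketch of this assignment step: it computes the posterior probabilities (2.2) and the resulting hard assignment for given parameter estimates. The parameter layout (a list with components `pro`, `mean`, `sigma`) and the function name `gmix_posterior` are our own illustrative conventions, not part of MCLUST.

```r
# Minimal sketch: posterior probabilities (2.2) and hard cluster assignment
# for a Gaussian mixture with given parameters. theta$pro is the vector of
# pi_j, theta$mean a list of mu_j, theta$sigma a list of Sigma_j.
gmix_posterior <- function(X, theta) {
  p <- ncol(X); G <- length(theta$pro)
  lw <- sapply(1:G, function(j) {              # log(pi_j * phi(x_i; mu_j, Sigma_j))
    log(theta$pro[j]) - 0.5 * p * log(2 * pi) -
      0.5 * as.numeric(determinant(theta$sigma[[j]])$modulus) -
      0.5 * mahalanobis(X, theta$mean[[j]], theta$sigma[[j]])
  })
  lw  <- lw - apply(lw, 1, max)                # stabilize before exponentiating
  tau <- exp(lw) / rowSums(exp(lw))            # tau_ij as in (2.2)
  list(tau = tau, cluster = max.col(tau))      # assign x_i to argmax_j tau_ij
}
```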

2.2. ML-Type Estimator for Gaussian Mixtures with Uniform Noise (gmix.u)

Banfield and Raftery (Citation1993) added a uniform mixture component on the smallest hyperrectangle covering the data to (Equation2.1), calling it “noise component” to accommodate “noise.”

2.3. ML for Mixtures of Student t-Distributions (tmix)

McLachlan and Peel (Citation2000) replaced the Gaussian densities in (Equation2.1) with multivariate Student-t densities, because they have heavier tails and can therefore accommodate outliers in a better way. Observations can be declared “noise” if they lie in a low density area of the t-distribution fitting their cluster. Hennig (Citation2004) showed that neither tmix nor gmix.u is breakdown-robust.

2.4. TCLUST

TCLUST is based on maximizing a trimmed likelihood of a “fixed partition model” with cluster weights $\pi_j$. With $R = \cup_{j=1}^{G} R_j$ the set of nontrimmed points and $\#\{R\} = [n(1-\alpha)]$ their number:
$$ \theta^{\mathrm{tclust}} := \arg\max_{\theta \in \Theta,\ \#\{R\} = [n(1-\alpha)]} \sum_{j=1}^{G} \sum_{i \in R_j} \left( \log \pi_j + \log \phi(x_i; \mu_j, \Sigma_j) \right). \qquad (2.3) $$
For background, see Gallegos (Citation2002), Gallegos and Ritter (Citation2005), and García-Escudero et al. (Citation2008). The TCLUST methodology is implemented in R’s TCLUST package by Fritz, García-Escudero, and Mayo-Iscar (Citation2012). Partition methods with trimming started with the trimmed k-means proposal of Cuesta-Albertos, Gordaliza, and Matrán (Citation1997).
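For fixed parameters, the partition maximizing (2.3) assigns each retained point to its best-scoring component and trims the $[n\alpha]$ points with the lowest best scores. The base-R fragment below sketches the evaluation of this profiled objective only; it is not the TCLUST fitting algorithm of the tclust package, and the argument layout follows the illustrative convention used above.

```r
# Sketch: evaluate the trimmed classification likelihood (2.3) for given
# parameters (pro, mean, sigma lists as above) and trimming level alpha.
tclust_objective <- function(X, theta, alpha) {
  p <- ncol(X); G <- length(theta$pro)
  score <- sapply(1:G, function(j) {           # log pi_j + log phi(x_i; mu_j, Sigma_j)
    log(theta$pro[j]) - 0.5 * p * log(2 * pi) -
      0.5 * as.numeric(determinant(theta$sigma[[j]])$modulus) -
      0.5 * mahalanobis(X, theta$mean[[j]], theta$sigma[[j]])
  })
  best <- apply(score, 1, max)                 # each point's best cluster score
  keep <- floor(nrow(X) * (1 - alpha))         # number of non-trimmed points [n(1-alpha)]
  sum(sort(best, decreasing = TRUE)[1:keep])   # sum over the retained points
}
```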

2.5. Further Existing Work

More approaches to robust model-based clustering can be found in the literature. Neykov et al. (Citation2007) proposed and implemented a trimmed likelihood method. Qin and Priebe (Citation2013) introduced an EM-algorithm adapted to maximum Lq-likelihood estimation and studied its behavior under a gross error model. References to other approaches to robust clustering are given in García-Escudero et al. (Citation2010).

3. Optimally Tuned Robust Improper Maximum Likelihood

3.1. Robust Improper Maximum Likelihood

The robust improper maximum likelihood estimator (RIMLE) is based on the “noise component” idea for robustification (gmix.u). The main idea is to use a pseudo-model in which the noise is represented by an improper constant density over the whole Euclidean space:
$$ \psi_\delta(x, \theta) = \pi_0 \delta + \sum_{j=1}^{G} \pi_j \phi(x; \mu_j, \Sigma_j), \qquad (3.1) $$
with $\pi_0, \pi_j \in [0, 1]$ for $j = 1, 2, \ldots, G$ and $\pi_0 + \sum_{j=1}^{G} \pi_j = 1$. Here $\delta > 0$ is the improper constant density (icd). The parameter vector $\theta$ contains all Gaussian parameters plus all proportion parameters including $\pi_0$, that is, $\theta = (\pi_0, \pi_1, \ldots, \pi_G, \mu_1, \ldots, \mu_G, \Sigma_1, \ldots, \Sigma_G)$. Given the sample improper pseudo-log-likelihood function
$$ l_n(\theta) = \frac{1}{n} \sum_{i=1}^{n} \log \psi_\delta(x_i, \theta), \qquad (3.2) $$
the RIMLE is defined as
$$ \theta_n(\delta) = \arg\max_{\theta \in \Theta} l_n(\theta), \qquad (3.3) $$
where $\Theta$ is an appropriately constrained parameter space discussed below. $\theta_n(\delta)$ is then used to cluster points as for model-based clustering methods. Define pseudo posterior probabilities in analogy with (2.2):
$$ \tau_j(x_i, \theta) := \begin{cases} \dfrac{\pi_0 \delta}{\psi_\delta(x_i, \theta)} & \text{if } j = 0, \\[1ex] \dfrac{\pi_j \phi(x_i; \mu_j, \Sigma_j)}{\psi_\delta(x_i, \theta)} & \text{if } j = 1, 2, \ldots, G, \end{cases} \qquad \text{for } i = 1, 2, \ldots, n, $$
and assign the points based on the following rule:
$$ J(x_i, \theta) := \arg\max_{j \in \{0, 1, 2, \ldots, G\}} \tau_j(x_i, \theta). \qquad (3.4) $$
For fixed $\delta$, (3.1) does not define a proper probability model, but (3.3) yields a useful procedure for data modeled as a proportion $(1 - \pi_0)$ from a mixture of Gaussian distributions plus a proportion $\pi_0$ of points not assigned to any meaningful cluster. Regions of high density are associated with clusters rather than with noise, so the noise regions should be those with the lowest density. This could be achieved by using the uniform density as in gmix.u, but in the presence of gross outliers the dependence of the uniform distribution on the convex hull of the data still causes a robustness problem (Hennig Citation2004).
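The following base-R sketch evaluates the pseudo-density (3.1), the pseudo-log-likelihood (3.2), the pseudo posteriors, and the assignment rule (3.4) for given parameters; the list layout (`pi0`, `pro`, `mean`, `sigma`) and the name `rimle_tau` are our own illustrative conventions, not the package implementation.

```r
# Sketch: psi_delta (3.1), l_n(theta) (3.2), pseudo posteriors, and rule (3.4).
# theta: list with pi0 (noise proportion), pro (pi_j), mean (mu_j), sigma (Sigma_j).
rimle_tau <- function(X, theta, delta) {
  p <- ncol(X); G <- length(theta$pro)
  dens <- sapply(1:G, function(j) {            # phi(x_i; mu_j, Sigma_j) for each point
    exp(-0.5 * p * log(2 * pi) -
        0.5 * as.numeric(determinant(theta$sigma[[j]])$modulus) -
        0.5 * mahalanobis(X, theta$mean[[j]], theta$sigma[[j]]))
  })
  num <- cbind(theta$pi0 * delta,              # j = 0: improper constant density term
               sweep(dens, 2, theta$pro, `*`)) # j = 1..G: pi_j * phi_j
  psi <- rowSums(num)                          # psi_delta(x_i, theta)
  list(loglik  = mean(log(psi)),               # l_n(theta) as in (3.2)
       tau     = num / psi,                    # pseudo posteriors tau_j(x_i, theta)
       cluster = max.col(num) - 1)             # rule (3.4); label 0 marks the noise
}
```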

The optimization problem in (3.3) requires that $\Theta$ is suitably defined, otherwise $\theta_n(\delta)$ may not exist. As discovered by Day (Citation1969), the Gaussian mixture likelihood can degenerate. This problem extends to (3.1) as well. Let $\lambda_{k,j}$ be an eigenvalue of $\Sigma_j$ for some $k = 1, 2, \ldots, p$ and $j = 1, 2, \ldots, G$. Take a sequence $(\theta_m)_{m \in \mathbb{N}}$ such that $\lambda_{k,j,m} \to 0$ and $\mu_{j,m} = x_1$; then $l_n(\theta_m) \to +\infty$. There are various ways to avoid this issue. Let $\lambda_{\max}(\theta)$ and $\lambda_{\min}(\theta)$ be, respectively, the maximum and the minimum eigenvalue of the covariance matrices in $\theta$. Coretto and Hennig (Citationin press a) adopted the “eigenratio constraint”
$$ \lambda_{\max}(\theta) / \lambda_{\min}(\theta) \leq \gamma < +\infty \qquad (3.5) $$
with fixed $\gamma \geq 1$. $\gamma = 1$ constrains all component covariance matrices to be spherical and equal, as in k-means clustering, while $\gamma > 1$ restricts the relative scatter discrepancy among clusters. This type of constraint has been proposed by Dennis (Citation1981) and studied by Hathaway (Citation1985) for one-dimensional Gaussian mixtures. EM-algorithms for computing the ML of multivariate Gaussian mixtures under (3.5) have been studied by Ingrassia (Citation2004) and Ingrassia and Rocci (Citation2007), although asymptotic properties of the corresponding MLE have not been proved. The same constraints are used for TCLUST by García-Escudero et al. (Citation2008). There are a number of alternative constraints; see Ingrassia and Rocci (Citation2011) and Gallegos and Ritter (Citation2009).
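One simple way to force a set of covariance matrices to satisfy an eigenratio bound of the form (3.5) is to truncate their eigenvalues from below, as in the sketch below; this is only an illustrative projection and not necessarily the constraint handling used inside the OTRIMLE, MCLUST, or TCLUST implementations.

```r
# Sketch: modify a list of covariance matrices so that
# lambda_max / lambda_min <= gamma, by lifting small eigenvalues.
enforce_eigenratio <- function(sigmas, gamma) {
  eig  <- lapply(sigmas, function(S) eigen(S, symmetric = TRUE))
  lmax <- max(unlist(lapply(eig, function(e) e$values)))   # largest eigenvalue overall
  lapply(eig, function(e) {
    v <- pmax(e$values, lmax / gamma)                      # lift eigenvalues below lmax/gamma
    e$vectors %*% diag(v, length(v)) %*% t(e$vectors)      # rebuild the covariance matrix
  })
}
```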

Although (3.5) prevents the unboundedness of the likelihood in standard mixture models and TCLUST, for the RIMLE this is not enough. Points not fitted by any of the Gaussian components can still be fitted by the improper uniform component. Therefore, Coretto and Hennig (Citationin press a) proposed an additional “noise proportion constraint,”
$$ \frac{1}{n} \sum_{i=1}^{n} \tau_0(x_i, \theta) \leq \pi_{\max}, \qquad (3.6) $$
for fixed $0 < \pi_{\max} < 1$. The quantity $n^{-1} \sum_{i=1}^{n} \tau_0(x_i, \theta)$ can be interpreted as the estimated proportion of noise points. Setting $\pi_{\max} = 0.5$ just implements a familiar condition in robust statistics that at most half of the data should be classified as “outliers/noise.” The resulting restricted parameter space for the RIMLE is then
$$ \Theta := \left\{ \theta : \pi_j \geq 0 \ \forall j \geq 1;\ \pi_0 + \sum_{j=1}^{G} \pi_j = 1;\ \frac{1}{n} \sum_{i=1}^{n} \tau_0(x_i, \theta) \leq \pi_{\max};\ \frac{\lambda_{\max}(\theta)}{\lambda_{\min}(\theta)} \leq \gamma \right\}. \qquad (3.7) $$
Coretto and Hennig (Citationin press a) showed that $\theta_n(\delta)$ exists for any $\delta \geq 0$ if $\#(\underline{x}_n) > G + n\pi_{\max}$, and that $\theta_n(0)$ exists under the milder condition that $\#(\underline{x}_n) > G$. For $\delta = 0$, the RIMLE reduces to ML for plain Gaussian mixtures. Let $\mathbb{E}_P f(x)$ be the expectation of $f(x)$ under $x \sim P$. The RIMLE functional is defined as
$$ \theta(\delta) = \arg\max_{\theta \in \Theta_G} \mathbb{E}_P \log \psi_\delta(x; \theta). \qquad (3.8) $$
Existence of (3.8), consistency of $\theta_n(\delta)$ in the quotient space topology identifying all log-likelihood maxima, and its breakdown point are shown in Coretto and Hennig (Citationin press a).

3.2. Optimal Improper Density Level

Occasionally, subject matter knowledge may be available aiding the choice of $\delta$, but such situations are rather exceptional. Here we suggest a data-dependent choice of $\delta$. Note that $\delta$ is not treated as a model quantity to be estimated here, but rather as a tuning device to enable a good robust clustering. The aim of the RIMLE is to approximate the density of clustered regions of points when these regions look like those produced by a Gaussian distribution. We define the “optimal” $\delta$ value as the minimizer of a criterion function measuring the discrepancy of the found clusters from the Gaussian prototype. Given $\theta_n(\delta)$, define the clusterwise squared Mahalanobis distances to the clusters’ centers as
$$ d_{i,j,n} = (x_i - \mu_{j,n})' \Sigma_{j,n}^{-1} (x_i - \mu_{j,n}), $$
and the clusterwise weighted empirical distribution of the $d_{i,j,n}$,
$$ M_j(t; \delta) = \frac{1}{\sum_{i=1}^{n} \tau_j(x_i, \theta_n(\delta))} \sum_{i=1}^{n} \tau_j(x_i, \theta_n(\delta))\, 1\{d_{i,j,n} \leq t\}, \quad j = 1, 2, \ldots, G. \qquad (3.9) $$
In $M_j$, the $i$th point’s distance is weighted according to the pseudo posterior probability that the $i$th observation has been generated by the $j$th mixture component. If the $j$th cluster is approximately Gaussian and $\mu_{j,n}$ and $\Sigma_{j,n}$ are good approximations of its location and scatter, we expect that the squared Mahalanobis distances to $\mu_{j,n}$ of the points indeed belonging to mixture component no. $j$ (for which $\tau_j(\cdot)$ indicates the estimated probability) will approximate a $\chi^2_p$ distribution. With $\chi^2_p(a)$ being the value of the cdf of the $\chi^2_p$ distribution at $a$, define the Kolmogorov-type distance for the $j$th cluster
$$ K_j(\delta) = \max_{i = 1, \ldots, n} \left| M_j(d_{i,j,n}; \delta) - \chi^2_p(d_{i,j,n}) \right|. \qquad (3.10) $$
The quality of the overall Gaussian approximation is then evaluated by weighting the $K_j(\cdot)$ with the estimated component proportions $\pi_{j,n}$:
$$ D(\delta) = \frac{1}{\sum_{j=1}^{G} \pi_{j,n}} \sum_{j=1}^{G} \pi_{j,n} K_j(\delta). \qquad (3.11) $$
For a given constant $\beta \geq 0$, define the optimal icd level as
$$ \delta_n = \arg\min_{\delta \in [0, \delta_{\max}]} \left( D(\delta) + \beta \pi_{0,n} \right). \qquad (3.12) $$
The corresponding optimally tuned RIMLE (OTRIMLE) will be denoted as $\theta_n(\delta_n)$. Existence and uniqueness of $\delta_n$ are not trivial; see Section 3.3. Section 5.4 examines how $D(\delta)$ behaves as a function of $\delta$.
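The criterion (3.9)–(3.11) can be computed directly from the fitted Gaussian parameters and the pseudo posteriors. The sketch below assumes a fit in the format returned by the `rimle_tau` sketch above (first column of `tau` corresponding to the noise) and is meant only to illustrate the definitions, not to reproduce the package code.

```r
# Sketch: K_j(delta) from (3.10) and D(delta) from (3.11).
# X: data matrix; theta: fitted Gaussian parameters (pro, mean, sigma);
# tau: n x (G+1) pseudo posteriors, first column = noise.
otrimle_criterion <- function(X, theta, tau) {
  p <- ncol(X); G <- length(theta$pro)
  K <- numeric(G)
  for (j in 1:G) {
    w  <- tau[, j + 1]                                        # weights tau_j(x_i, theta)
    d  <- mahalanobis(X, theta$mean[[j]], theta$sigma[[j]])   # squared Mahalanobis distances
    o  <- order(d)
    Mj <- cumsum(w[o]) / sum(w)                               # weighted ecdf M_j at sorted d
    K[j] <- max(abs(Mj - pchisq(d[o], df = p)))               # Kolmogorov distance to chi^2_p
  }
  pi_n <- colMeans(tau)[-1]                                   # estimated pi_{j,n}
  sum(pi_n * K) / sum(pi_n)                                   # D(delta) as in (3.11)
}
```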

The straightforward choice for β, formalizing the “Gaussian cluster” concept, is β = 0. However, often in practice it is not so important that the clusters are of as precise Gaussian shape as possible. β > 0 (but normally smaller than 1) formalizes that less Gaussian shapes of clusters are tolerated if this brings the estimated noise proportion down. As can be seen in Section 5, choosing β = 1/3 leads to improvements if the true clusters are t-distributed. Section 5.5 gives more details on the effect of choosing β > 0.

3.3. Computation

For a fixed δ, the RIMLE can be computed by an expectation–maximization (EM) algorithm. See Coretto and Hennig (Citationin press a) and the online supplement for details and background of the following. The outcome of the EM-algorithm depends on the initialization. We used an initialization method inspired by the MCLUST software. To avoid spurious clusters, we consider as valid initial partitions only those containing at least min.pr × n observations in each cluster (min.pr = 0.005, say). As a first attempt to find such a valid partition, the nearest neighbor-based clutter/noise detection proposed by Byers and Raftery (Citation1998) is applied to identify an initial noise guess. Agglomerative hierarchical clustering based on ML criteria for Gaussian mixture models as proposed by Banfield and Raftery (Citation1993) is then used for finding initial Gaussian clusters among the nonnoise. See the online supplement for the case that the found partition is not “valid.”
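For illustration, a single EM-type iteration at fixed δ can be sketched as below, reusing the `rimle_tau` sketch above for the E-step; initialization, stopping rules, and enforcement of the constraints (3.5)–(3.6) as described in Coretto and Hennig (Citationin press a) are omitted, so this is not the package implementation.

```r
# Sketch: one EM-type iteration for the RIMLE at fixed delta.
# E-step: pseudo posteriors; M-step: weighted proportions, means, covariances.
rimle_em_step <- function(X, theta, delta) {
  tau <- rimle_tau(X, theta, delta)$tau        # E-step (sketch above)
  G <- length(theta$pro)
  theta$pi0 <- mean(tau[, 1])                  # estimated noise proportion
  for (j in 1:G) {
    w <- tau[, j + 1]
    theta$pro[j]    <- mean(w)
    theta$mean[[j]] <- colSums(w * X) / sum(w)
    Xc <- sweep(X, 2, theta$mean[[j]])         # center at the updated mean
    theta$sigma[[j]] <- crossprod(Xc * sqrt(w)) / sum(w)
  }
  # eigenratio and noise-proportion constraints would be enforced here
  theta
}
```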

The OTRIMLE can be found by computing RIMLEs on a grid of δ values ranging from zero to some large enough δmax. In practice, we solve the program (3.12) by the “golden section search” of Kiefer (Citation1953) over the candidate set δ ∈ [0, δmax]. In most numerical experiments, we found that no more than 30 RIMLE evaluations are required. δmax can be chosen as the highest density value occurring within an initialized cluster, discarding δ-values for which the RIMLE solution ends up at the border of the parameter space.
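A generic golden section search over a scalar interval looks like the following sketch. In our setting, the objective `crit(delta)` would run the RIMLE at δ and return D(δ) + βπ0,n, with the bracket and number of iterations chosen as described above; the function name, arguments, and the hypothetical `crit_fun` in the usage line are illustrative.

```r
# Sketch: golden section search minimizing a unimodal scalar function crit()
# over [lo, hi]; here crit(delta) would be D(delta) + beta * pi0_n(delta).
golden_search <- function(crit, lo, hi, iter = 30) {
  phi <- (sqrt(5) - 1) / 2
  a <- hi - phi * (hi - lo); b <- lo + phi * (hi - lo)     # interior evaluation points
  fa <- crit(a); fb <- crit(b)
  for (k in 1:iter) {
    if (fa < fb) { hi <- b; b <- a; fb <- fa; a <- hi - phi * (hi - lo); fa <- crit(a) }
    else         { lo <- a; a <- b; fa <- fb; b <- lo + phi * (hi - lo); fb <- crit(b) }
  }
  (lo + hi) / 2
}
# Usage (hypothetical objective): delta_opt <- golden_search(crit_fun, 0, delta_max)
```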

4. Definition of “True” Clusters and Misclassification

In most simulation studies in cluster analysis, in which data are generated from mixture (or fixed partition) models, it is assumed that the “true” clusters are identified with the mixture components, and methods can then be evaluated by misclassification rates. But this can be problematic. Consider the comparison of ML estimators for Gaussian mixtures and for mixtures of t-distributions. In most applications, both approaches would be considered as potentially appropriate for doing the same thing, namely, finding clusters that are unimodal and elliptical. In applications in which the clustering is of main interest (as opposed to parameter estimation), researchers would not mind much whether the density around their cluster cores rather looks like a Gaussian or a t-distribution. But implications for which points are considered as “true outliers” versus “truly belonging to a cluster” would be different, because some points generated by a t-distribution with low degrees of freedom are indeed outlying with respect to the core of the t-distribution from which they are generated.

More generally, the identification of clusters and mixture components cannot be taken for granted. Hennig (Citation2010) illustrated that the interpretation of Gaussian mixture components as clusters depends on whether the components are separated enough. And in robust clustering, one would often interpret a group of a few points with low density as “noise” even if they were generated by a Gaussian distribution.

We now define a “reference truth” for the mixture models that are used in our simulation study, from which misclassification rates then can be computed. For motivation consider Figure 1, which shows an artificial dataset drawn from a mixture of two Gaussians and a uniform distribution. Figure 1(a) shows the unlabeled dataset on which cluster analysis operates. Figure 1(b) shows the points labeled by the mixture components that generated them. Observe that there are three red stars (generated from the uniform noise) in the middle of the region where the two Gaussians have most of their mass. Furthermore, there are green points from the left Gaussian component with lower density that fall into the dense blue region. No method can be expected to reconstruct all the cluster memberships in such overlapping regions.

Figure 1. An artificial dataset consisting of 950 points drawn from two two-dimensional Gaussian distributions and 50 points from a uniform distribution on the square [−10, 10] × [−5, 15]. (a) unlabeled points, (b) colored according to the mixture components, (c) colors represent the two $GR_\alpha$ (with $\alpha = 10^{-4}$); red stars belong to $NR_\alpha$.


Figure 1(c) shows what we define as the “reference truth,” which is defined next.

The idea is that we choose probability measures P1, P2, …, PG to correspond to the G “true clusters” (implying that they “cluster,” that is, generate clearly distinguishable, although not necessarily nonoverlapping, data patterns). For each of these, we define a region of points that can be considered as nonoutliers based on a mean and covariance matrix functional, which are Fisher-consistent at the Gaussian distribution, but robust and existing for other distributions, too. This defines a region of nonoutliers of Gaussian shape. We consider all points “noise” that are outliers according to this definition for all P1, P2, …, PG. Quadratic discriminant analysis assigns points to clusters that are nonoutliers with respect to more than one of the Pj. This means that points are assigned to clusters by optimal classification boundaries under the Gaussian assumption, even if the components are in fact not Gaussian. This formalizes using “Gaussian cluster prototypes” without assuming that clusters really have to be Gaussian.

To this end, let $m_j$ and $S_j$ be the minimum covariance determinant (MCD) center and scatter functionals at $P_j$ (Rousseeuw Citation1985). Cator and Lopuhaä (Citation2012) proved existence of the MCD functional for a wide class of probability measures. The $S_j$ can be corrected for achieving consistency at $P_j$ equal to the Gaussian distribution (Croux and Haesbroeck Citation1999; Pison et al. Citation2002), so that when $P_j$ is Gaussian, $m_j$ and $S_j$ are the corresponding mean vector and covariance matrix. Let $\pi_j$ be the expected proportion of points generated from $P_j$. We allow $\sum_{j=1}^{G} \pi_j \leq 1$, so that points could be generated by (noise-)distributions other than $P_1, P_2, \ldots, P_G$. Define the quadratic discriminant score for assigning the point $y \in \mathbb{R}^p$ to the $j$th cluster by maximizing
$$ qs(y; \pi_j, m_j, S_j) := \log(\pi_j) - \tfrac{1}{2} \log(\det(S_j)) - \tfrac{1}{2} (y - m_j)' S_j^{-1} (y - m_j), \quad j = 1, 2, \ldots, G. $$
If clusters are indeed Gaussian, this is equivalent to (3.4). Consider
$$ E_\alpha(m_j, S_j) := \left\{ y : (y - m_j)' S_j^{-1} (y - m_j) \leq \chi^2_p(1 - \alpha) \right\}, $$
where $\chi^2_p(1 - \alpha)$ is the $1 - \alpha$ quantile of the $\chi^2_p$ distribution. For a fixed $\alpha$, the ellipsoid $E_\alpha(m_j, S_j)$ defines the subset of $\mathbb{R}^p$ that hosts the $j$th cluster. The size of this ellipsoid is defined in terms of $\chi^2_p(1 - \alpha)$ because, for Gaussian $P_j$, $P_j(\mathbb{R}^p \setminus E_\alpha(m_j, S_j)) = \alpha$. For a fixed level $\alpha$, the $\alpha$-Gaussian region is defined as the union of these ellipsoids,
$$ GR_\alpha := \bigcup_{j=1}^{G} E_\alpha(m_j, S_j), $$
and the noise region is given by $NR_\alpha := \mathbb{R}^p \setminus GR_\alpha$.

Definition 1

($\alpha$-Gaussian cluster memberships). Given $\alpha \in [0, 1)$, a data-generating process (DGP) with cluster parameters $\theta_C := \{(\pi_j, m_j, S_j),\ j = 1, 2, \ldots, G\}$, and a dataset $\underline{x}_n := \{x_1, x_2, \ldots, x_n\}$, the $\alpha$-Gaussian cluster memberships are given by
$$ AGR_\alpha(x_i; \theta_C) := 1\{x_i \in GR_\alpha\} \times \arg\max_{j = 1, 2, \ldots, G} qs(x_i; \pi_j, m_j, S_j). \qquad (4.1) $$

$AGR_\alpha(x_i; \theta_C) = 0$ means that $x_i \in NR_\alpha$.
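Given the triples $(\pi_j, m_j, S_j)$, the memberships of Definition 1 can be computed directly; here is a minimal base-R sketch in which the argument layout and function name are our own conventions.

```r
# Sketch: alpha-Gaussian cluster memberships (4.1). Inputs: data matrix X,
# cluster proportions pro, centers m (list), scatter matrices S (list).
agr_membership <- function(X, pro, m, S, alpha = 1e-4) {
  p <- ncol(X); G <- length(pro)
  d2 <- sapply(1:G, function(j) mahalanobis(X, m[[j]], S[[j]]))   # (y - m_j)' S_j^{-1} (y - m_j)
  qs <- sapply(1:G, function(j)                                   # quadratic discriminant scores
    log(pro[j]) - 0.5 * as.numeric(determinant(S[[j]])$modulus) - 0.5 * d2[, j])
  in_GR <- apply(d2, 1, min) <= qchisq(1 - alpha, df = p)         # inside some ellipsoid E_alpha?
  ifelse(in_GR, max.col(qs), 0)                                   # 0 codes the noise region NR_alpha
}
```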

This definition is inspired by the definition of outliers with respect to a reference model as in Davies and Gather (Citation1993) and Becker and Gather (Citation1999). A difference is that here the parameter α does not directly control the probability of the noise region. Once α and the triples (πj, mj, Sj) are fixed for all j = 1, 2, …, G, the size of the noise region will depend on the degree of overlap and Gaussianity of the ellipsoids in GRα. α needs to be small because the idea of an outlier implies that under the Gaussian distributions outliers are very rare. We choose $\alpha = 10^{-4}$, which implies that the probability that there is at least one outlier in n = 500 iid Gaussian observations is 0.0488.

The different robust clustering methods have different implicit ways of classifying points as “noise” (noise component, trimming, outlier identification in t-distributions). To make them comparable, we use (4.1) to unify the point assignments of the methods by computing $AGR_\alpha(\cdot)$ based on the parameters estimated by the methods, from which we assume that estimators of the triples $(\pi_j, m_j, S_j)$ (cluster proportion, center, and covariance matrix) can be computed (see Section 5.2). Let the estimated cluster parameters be $\hat{\theta}_{C,n}$. A misclassification rate can then be computed by applying an optimal permutation $\sigma\{\cdot\}$ of the cluster labels:
$$ \mathrm{mcr}(\theta_C, \hat{\theta}_{C,n}) = \min_{\sigma} \frac{1}{n} \sum_{i=1}^{n} 1\left\{ AGR_\alpha(x_i; \theta_C) \neq \sigma\{AGR_\alpha(x_i; \hat{\theta}_{C,n})\} \right\}. \qquad (4.2) $$
See the online supplement for the computation of the MCD functional for nonnormal distributions.
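A minimal sketch of (4.2) in base R follows; it searches over all permutations of the cluster labels 1, …, G while leaving the noise label 0 fixed. The function names are illustrative.

```r
# Sketch: misclassification rate (4.2). true_lab and est_lab are vectors of
# alpha-Gaussian memberships in {0, 1, ..., G}; label 0 (noise) is not permuted.
perms <- function(v) {                        # all permutations of a vector (base R)
  if (length(v) <= 1) return(list(v))
  out <- list()
  for (i in seq_along(v)) out <- c(out, lapply(perms(v[-i]), function(p) c(v[i], p)))
  out
}

mcr <- function(true_lab, est_lab, G) {
  rates <- sapply(perms(1:G), function(s) {
    relab <- est_lab
    relab[est_lab > 0] <- s[est_lab[est_lab > 0]]   # permute cluster labels, keep noise = 0
    mean(true_lab != relab)                         # mismatch rate for this permutation
  })
  min(rates)                                        # minimize over label permutations
}
```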

5. A Comparative Simulation Study

Here, we present a comprehensive simulation study comparing the OTRIMLE with the methods introduced in Section 2.

5.1. Data-Generating Processes

The methods are compared on a total of 24 DGPs with 1000 Monte Carlo replicates each. Half of the DGPs produce two-dimensional datasets. The remaining 12 DGPs are 20-dimensional versions that are constructed by adding 18 independent uncorrelated zero-mean, unit-variance Gaussian and/or Student-t marginals. Therefore, clusters are always defined only on the first two marginals. Note that the aim of the simulation study is not variable selection; we designed the DGPs so that the clustering information is only in the first two dimensions in order to be able to visualize and control the clustering patterns, but we compare clustering methods that use all variables (trying variable selection methods is beyond the scope of this article). We do not think that variable selection or dimension reduction is mandatory in clustering, because the meaning of the clusters is determined by the involved variables; see the discussion rejoinder in Hennig and Liao (Citation2013). We choose n = 1000 for the two-dimensional designs and n = 2000 for the 20-dimensional versions. The DGPs have been designed to test a variety of “noise patterns,” numbers of clusters G, and patterns of separation/overlap between different clusters.

We consider two main classes of DGP, namely, DGPs with a uniform noise component on the first two marginals, and DGPs that do not have a noise component. The first group includes the following setups, all of which have clusters generated from Gaussian distributions, and for p = 20 the 18 uninformative variables are Gaussian: (i) for “WideNoise” DGPs, the uniform noise component produces points that are widespread but overlap with the clustered regions entirely; (ii) for “SideNoise” DGPs the uniform noise component spreads points on a wide region that overlaps slightly with some of the clusters; (iii) in “SunSpot” DGPs there is a uniform component that produces few extremely outlying points. On the other hand, we consider DGPs that do not include a noise component (i.e., π0 = 0). This second group can be divided into three further subgroups: (i)/(ii) in “GaussT” and “TGauss” DGPs, multivariate Student-t distributions with three degrees of freedom are used. In “GaussT” these are used as uninformative distributions for p = 20, whereas the first two clustered dimensions use Gaussians; in “TGauss” the clusters are generated by noncentral multivariate t3-distributions and for p = 20 the 18 uninformative variables are Gaussian; (iii) in “Noiseless” DGPs, all points are drawn from Gaussian distributions.

For each of the six setups, there are variants with a lower and a higher number of clusters G, and with p = 2 (denoted by “l”) and p = 20 (denoted by “h”), adding up to 24 DGPs. The nomenclature used in the following puts these at the end of the setup name, that is, “TGauss.5h” refers to the “TGauss” setup with the higher G = 5 and p = 20. For “WideNoise,” “SideNoise,” and “GaussT,” the lower G was 2 and the higher G was 3. For “SunSpot,” “TGauss,” and “Noiseless,” the lower G was 3 and the higher G was 5. The overlap between clusters as well as the combinations of cluster shapes varied between DGPs. Full details of the definition of the DGPs are given in the online supplement together with exemplary scatterplots of the first two dimensions of a dataset from every setup.
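To fix ideas, the following base-R fragment generates data in the spirit of these designs (Gaussian clusters plus uniform noise on the first two dimensions, padded with uninformative Gaussian dimensions); the specific means, ranges, and proportions are illustrative placeholders and do not reproduce the exact DGPs of the online supplement.

```r
# Illustrative sketch (not one of the exact designs of the online supplement):
# two Gaussian clusters plus uniform noise in the first two dimensions,
# padded with uninformative standard Gaussian marginals up to dimension p.
gen_noise_dgp <- function(n = 1000, p = 2, noise_prop = 0.05) {
  n_noise <- round(n * noise_prop)
  n_clust <- n - n_noise
  lab <- sample(1:2, n_clust, replace = TRUE)
  xy  <- t(sapply(lab, function(j)
    if (j == 1) rnorm(2, mean = c(0, 0)) else rnorm(2, mean = c(4, 4))))
  noise <- cbind(runif(n_noise, -10, 10), runif(n_noise, -10, 10))
  X <- rbind(xy, noise)
  if (p > 2) X <- cbind(X, matrix(rnorm(nrow(X) * (p - 2)), ncol = p - 2))
  list(X = X, label = c(lab, rep(0, n_noise)))   # label 0 marks the noise points
}
```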

5.2. Implementation of Methods

Table 1 summarizes the settings for the compared methods. TCLUST and RIMLE/OTRIMLE are based on eigenratio constraints, but this is not the case for the MCLUST software (Fraley et al. Citation2012) and the available implementation of mixtures of t-distributions (McLachlan and Peel Citation2000). To have full comparability of the solutions, the eigenratio constraints (see Section 3.3) have been implemented by us for OTRIMLE/RIMLE, gmix, tmix, and gmix.u; the latter is computed by the same routine that is used for RIMLE/OTRIMLE. For TCLUST, the constraints are in the original R package.

Table 1. Summary of the main settings for the methods under comparison.

For all methods, the eigenratio constraint has been set equal to 20 for each of the 24 DGPs. This choice is motivated by the fact that 20 is larger than the maximum true eigenratio across the designs involved in the comparison, and it is in general a value that often enables rather smooth optimization (obviously, in reality the true eigenvalue ratios are not known, but for variables with comparable scales and value ranges, 20 gives the covariance matrices enough flexibility for most applications). OTRIMLE has been tested without and with the penalty term β = 1/3 in (3.12), denoted by ot.rimle (β = 0) and ot.rimle.p, respectively. Other values for β, ranging from 0.1 to 0.5, have been tried, and results did not change much. A difficulty with TCLUST is that an automatic data-driven choice of the trimming level is currently not available. In tclust.f we set the trimming level to 10%. This choice is motivated by the fact that the DGPs produced an average proportion of points belonging to the NRα set in the range [0%, 23%] (see Coretto and Hennig Citationin press b). Furthermore, since the trimming level plays a role similar to δ in RIMLE/OTRIMLE, there are two versions of TCLUST for which the trimming level is decided automatically by the same idea as proposed here for OTRIMLE, see Section 3. In ot.tclust and ot.tclust.p, the trimming level has been selected using (3.12) with the trimming level playing the role of δ, and the weights τj( · ) in Mj(·) replaced by the 0–1 crisp weights of TCLUST, again using β = 0 or β = 1/3. For t-mixtures, in the original proposal by McLachlan and Peel (Citation2000), the degrees of freedom are assumed to be equal across the mixture components and are estimated from the data, and covariance matrix constraints are not considered. In tmix, we fix the degrees of freedom to 3 and incorporate eigenratio constraints, as for the other methods. This is motivated as follows: (i) for some of the DGPs (particularly the SunSpot designs) constraints were needed to avoid spurious solutions; (ii) for some of the sampling designs not based on Student-t distributions, estimation of the degrees of freedom of the t-distribution produced an extremely large variability in the misclassification rates; (iii) since a number of designs are based on Student-t with 3 degrees of freedom, the decision to fix this parameter to 3 gives the t-mixture a slight advantage, which seems fair given that the majority of setups rather seem to favor noise component/trimming.

For some of the DGPs, experience suggested that solutions may depend strongly on the initialization. To reduce the bias introduced by different initializations, all methods are initialized from the same partition, see Section 3.3 (an additional set of R functions has been provided by TCLUST’s authors to allow for this).

5.3. Results

The methods are compared using misclassification rates as defined in (4.2). These are more relevant in clustering tasks than parameter estimates. The results are graphically summarized in Figure 2, while average misclassification rates with standard errors are given in Table 2. Each square in the plot is a color-coded representation of the misclassification rate averaged over the 1000 Monte Carlo replicates for a given method-DGP pair. Further details about average misclassification rates are given in the online supplement, which also contains boxplots of the misclassification rates for all method-DGP pairs. Figure 2 shows clear evidence that using robust methods is important. The gmix method only performs well for Noiseless and some DGPs with Gaussians and t-distributions, but most other methods work well (although slightly worse at times) for these DGPs, too. tmix works well for most DGPs involving t-distributions, but for the other DGPs with noise/outliers, it is often seriously worse than gmix.u, OTRIMLE, and TCLUST.

Figure 2. Level plot representing the sample mean of the Monte Carlo distribution of misclassification rates (percentage scale) for each DGP-method pair. Each square of the plot represents the average misclassification according to the bottom gray color scale.


Table 2. Monte Carlo average misclassification rates (%) with their standard errors in brackets. Misclassification rates are computed as in (4.2). Note that both averages and standard errors are reported on the percentage scale.

gmix.u performs relatively well, although for a number of DGPs it suffers strongly from high dimensionality. For WideNoise in two dimensions, gmix.u in many cases equals a proper ML estimator, so the method should be advantageous here, and gmix.u is indeed best for these DGPs. However, for the 20-dimensional WideNoise, taking a uniform distribution over the smallest hyperrectangle containing the data can no longer be the ML estimate for the noise-generating mixture component, and its performance deteriorates strongly.

Regarding the TCLUST methods, even though the automatic trimming level of ot.tclust and ot.tclust.p did not always improve the results, it proved to provide a reasonable choice of the trimming level. In fact, for all situations where the true average noise proportion is about equal to the trimming level of tclust, the performances of tclust, ot.tclust, and ot.tclust.p are very similar, meaning that the OTRIMLE criterion is a good starting point for fixing the trimming rate. Compared to the RIMLE-type methods, the performance of TCLUST suffers in situations where there is a considerable degree of overlap between clusters. For DGPs with overlap (such as WideNoise.2, SunSpot.5, GaussT.2, and Noiseless.5), the misclassification rate of TCLUST is completely dominated by misclassifications between clusters (to see this, consider Tables 2 and 4 in the online supplement). The reason is that the TCLUST parameters are based on a classification-type likelihood, which relies on the separation between clusters. TCLUST also seems not to tolerate the large number of Student-t marginals of GaussT.2h and GaussT.3h. TCLUST performs well for a number of DGPs and is clearly best in WideNoise.2h.

The OTRIMLE methods show a very good overall performance. They produce high misclassification rates only for some 20-dimensional DGPs for which all methods are in trouble (performances for GaussT.2h are generally bad, with even tmix, the best method there, producing an average misclassification rate of more than 30%), and they are best for a number of DGPs, particularly 20-dimensional WideNoise and some TGauss-DGPs. The comparison between ot.rimle and ot.rimle.p is mixed (as between ot.tclust and ot.tclust.p), with β = 1/3 clearly improving matters for TGauss.2l and TGauss.3l (the shape of the t-distribution encourages ot.rimle to assign too many points to the noise; see Tables 2 and 3 in the online supplement), but being significantly worse for WideNoise.3h.

Comparing OTRIMLE with gmix.u, there are a number of DGPs for which gmix.u has a slightly lower misclassification rate than one or both of ot.rimle and ot.rimle.p. In all of these DGPs, all of gmix.u, ot.rimle, and ot.rimle.p basically produce the same clustering structure, with disagreement only about the classification of some borderline points. Differences are more substantial in the setups in which gmix.u is worse. For WideNoise.2h, gmix.u in most cases does not detect any noise, so that one of the clusters consists mainly of noise. For WideNoise.3h, sometimes all or much noise is merged into a cluster, with some impact on the clustering structure. For GaussT.3h, substantial amounts of outliers are integrated into the clusters.

5.4. Behavior of D(δ)

Here, we investigate the behavior of D(δ) as a function of δ via Monte Carlo experiments under various DGPs from Section 5. For each of these DGPs, we produced 100 independent samples. We computed D(δ) for a grid of δ values taken from an interval [2.22 × 10^{−308}, 1], adding δ = 0. In Figure 3, we report Monte Carlo averages ± standard errors of D(δ) for two selected DGPs: WideNoise.3h and GaussT.3h. These are defined in Section 5.1, both with p = 20. The main difference is that noise is produced by a two-dimensional uniform distribution in WideNoise.3h, whereas in GaussT.3h dimensions 3–20 are from centered t3 distributions, generating some outliers. Figure 3 reports the behavior of D(δ) for log(δ) > −200. For smaller values of δ (including δ = 0) the behavior of the curves was basically constant. In both graphs there is a clear minimum, although for GaussT.3h this minimum lies on the border. For WideNoise.3h the OTRIMLE criterion has a nice convex behavior around its minimum. In GaussT.3h, in dimensions 3–20 the distributional shape of the clusters deviates from Gaussianity by heavier tails, although the core of the distribution looks similar to a Gaussian. D( · ) then enforces a Gaussian shape by assigning many points to the noise component. The result is that π0, n becomes large, and the optimal δ occurs at a point where larger values of δ do not produce parameter estimates within the constrained set anymore (unless the constraint is enforced). The latter is a reason for the use of the penalty term in (3.12) (see also Section 5.5). For the remaining 22 DGPs of Section 5.1, we found similar patterns (mostly similar to WideNoise.3h here). Another observation in Figure 3 is that around the minimum D(δ) seems to be quite stable for different datasets from the same design.

Figure 3. Monte Carlo average for OTRIMLE criterion D(δ) (blue solid line) ± standard errors (dotted lines) computed over a grid of values for δ ∈ {0}∪[2.22 × 10− 308, 1]. The two plots refer to DGPs “WideNoise.3h” and “GaussT.3h” in Section 5.


5.5. Effect of β

For all 24 DGPs considered in Section 5.1, we also investigated the behavior of the noise proportion π0, n as a function of β, see (3.12). For each design, we produced 100 independent samples, and for each of these we computed the OTRIMLE solution θn(δn) for various values of β ∈ [0, 1]. Figure 4 reports the Monte Carlo average of the estimated noise proportion π0, n ± its standard error. For both WideNoise.3h and GaussT.3h, an increase in β smoothly reduces the estimated noise proportion. There is some difference, however, in scales. The impact of β is much stronger for GaussT.3h, and the same happens for all those sampling designs in which within-cluster distributions deviate from Gaussianity. Results for other DGPs are quite similar.

Figure 4. Monte Carlo average for the estimated noise proportion π0, n (blue solid line) ± standard errors (dotted lines) computed over a grid of values for β ∈ [0, 1]. The two plots refer to DGPs called “WideNoise.3h” and “GaussT.3h” in Section 5.


The discovery of the overall clustering structure was never affected by changing β between 0 and 0.5 for the DGPs from Section 5. In the majority of cases, there was no change of the clustering at all. The only difference was that larger β sometimes produced a lower percentage of data classified as noise. In the case that t3-distributions were involved, this was good, because with β = 0 only fairly small central cores (60% –70%) of the points generated by t-distributions were assigned to the clusters, whereas for larger β only points were classified as noise that were really quite “outlying.” On the other hand, with a larger β, in data from DGPs with Gaussian clusters and some noise that was not clearly separated from the clusters, more “true” noise points were assigned to the clusters. There is no objective rule for what percentage of points from a t-distribution should be considered as “truly” outlying, and therefore it needs to be decided by the user how “tolerant” OTRIMLE is desired to be toward heavier distributional tails than the Gaussian ones.

Figure 5 shows a situation in which the choice of β affects the clustering a lot. The dataset consists of 100 observations each of N(0,1) and N(3,1) along the x-axis and 12 observations from N(12,25). OTRIMLE was fitted with G = 2. The first two mixture components are not very well separated. β = 0 does not penalize noise, and therefore the observations from the third component are declared “noise” and the first two components are separated. However, β = 1/3 merges the first two components and declares the third one the second cluster. The “switching point” between these two ways of “interpreting” the clustering structure is at about β = 0.3; larger values of β do not change the clustering anymore. Despite the “true” G being 3 here, regarding interpretation, depending on the application it may well make sense to either treat the smallest mixture component as noise/outliers, or to merge the first two components into a single cluster. β tunes the method to rather produce noise, or to rather tolerate not-so-Gaussian components in cases like this.

Figure 5. Clusterings with β = 0 and β = 1/3 on the artificial data example.


6. Applications

In this section, we apply OTRIMLE and the alternative methods mentioned before to two real datasets. The first example does not come with any “ground truth,” whereas the second one has “true” classes.

6.1. Dortmund Data

We here analyze a dataset giving information about 170 districts of the German city of Dortmund, which is described in Sommerer and Weihs (Citation2005). We used five sociological key variables and transformed them in such a way that fitting Gaussian distributions within clusters makes sense. The resulting variables are the logarithm of the unemployment rate (“unemployment”), the birth/death balance divided by number of inhabitants (“birth.death”), the migration balance divided by number of inhabitants (“moves.in.out”), the logarithm of the rate of employees paying social insurance (“soc.ins.emp”), and the square root of the number of inhabitants (“inhabitants”).

All variables were centered and standardized by the median absolute deviation. Figure 6 shows a scatterplot of birth.death and moves.in.out. To deal with overplotting, an existing extreme outlier with values ≈ (−200, 50) is not shown. Figure 7 shows a scatterplot of unemployment and soc.ins.emp; it shows some more moderate outliers. The left side of Figure 6 shows a clustering from fitting a plain Gaussian mixture with G = 4 to all five variables by R’s MCLUST package. Cluster 4 is a one-point cluster made of the extreme outlier. Clusters 1 and 3 basically fit two different varieties of moderate outliers, whereas all the more than 120 districts that are not extreme regarding these two variables are put together in a single cluster. Clearly, it would be more desirable to have a clustering that is not dominated so much by a few odd districts, given that there is some meaningful structure among the other districts. Such a clustering is produced by the OTRIMLE method, shown on the right side of Figure 6 and on the left side of Figure 7. The clustering is nicely interpretable, with cluster no. 3 collecting a group of districts with higher migration balance and very scattered birth/death rate, cluster no. 1 being a high variation cluster characterized by high unemployment or high number of employees paying social insurance, cluster no. 2 being a homogenous group with medium number of employees paying social insurance and rather high but not very high unemployment, and cluster no. 4 collecting most districts with low values on both of these variables. For this dataset, values between β = 0 and β = 0.5 yield the same clustering (with eigenratio constraint γ = 20), despite the fact that β = 0 leads to log(δ) = −11.5 and π0, n = 0.055, whereas for β = 0.5, (3.12) produces log(δ) = −11.9 and π0, n = 0.054.

Figure 6. Scatterplot of birth.death and moves.in.out from Dortmund dataset with MCLUST clustering (left) and OTRIMLE clustering (right) with G = 4.


Figure 7. Scatterplot of soc.ins.emp and unemployment from Dortmund dataset with OTRIMLE clustering (left) and TCLUST clustering with trimming rate 7% (right) with G = 4.


Looking at the methods introduced in Section 2, it turns out that substantial disagreement may exist between different robust clustering methods. Applying the methods implemented in tclust and discussed in García-Escudero et al. (Citation2011), α = 0.07 was found to be a good trimming rate for TCLUST with G = 4 here. The resulting clustering is compared with OTRIMLE’s in Figure 7. Although what was trimmed is almost identical to OTRIMLE’s “noise,” the clustering is somewhat different, with TCLUST’s clusters no. 1 and 4 ranging into the area of “high variation characterized by high unemployment or high number of employees paying social insurance” as mainly represented by cluster no. 3 of TCLUST and cluster no. 1 of OTRIMLE. It is hard to interpret OTRIMLE’s cluster no. 4 using any pair of variables. This can be seen in the online supplement (Coretto and Hennig Citationin press b), as well as the solutions of the other methods.

6.2. Folk Song Data

The second dataset was provided by Daniel Müllensiefen. The observations are 776 folk song melodies, 586 of which are from Luxembourg and the remaining 190 are from Warmia in Poland. These are the “true” classes. The melodies are originally from the ESAC melody database (Schaffrath Citation1992). The 18 features (see the online supplement (Coretto and Hennig Citationin press b) for a list) were computed by the software “FANTASTIC” (Müllensiefen Citation2009).

Visual inspection reveals that there are many unusual melodies, that is, outliers in the dataset. The main bulks of melodies from Luxembourg and Warmia differ systematically from each other, although there is much overlap and no strong separation. For measuring to what extent clusterings computed with G = 2 coincided with the two regions, we used the adjusted Rand index (ARI; Hubert and Arabie Citation1985) with an expected value of 0 for two random clusterings and a maximum of 1 for perfect agreement. OTRIMLE with settings as above (β = 0, γ = 20) classified 36.9% of the observations as “noise.” The ARI between the OTRIMLE solution and the original regions is 0.155. For this (as for the other clustering methods), the OTRIMLE solution was interpreted as a three-cluster solution with “noise” as third cluster. Default MCLUST yields ARI = −0.045, MCLUST with noise yields ARI = −0.017, ot.tclust yields ARI = 0.016 (the original TCLUST function with trimming rate 0.369 as suggested by OTRIMLE above achieves ARI = 0.089), and tmix yields ARI = 0.083. OTRIMLE’s ARI value, though clearly better than that of the other methods, is not particularly high, but computing the ARI only on the observations that were not classified as noise gives ARI = 0.392 (the solution with β = 1/3 is slightly worse here but slightly better above regarding the ARI including the noise points), which suggests that there is a clear correspondence between OTRIMLE’s clustering and melodies that are typical for the regions.
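For reference, the ARI can be computed from the contingency table of the two labelings following Hubert and Arabie (Citation1985); here is a minimal base-R sketch (standard implementations exist in R packages, and the function name is illustrative).

```r
# Sketch: adjusted Rand index between two labelings a and b (vectors of labels).
ari <- function(a, b) {
  tab   <- table(a, b)
  comb2 <- function(x) x * (x - 1) / 2          # "n choose 2", elementwise
  sum_ij <- sum(comb2(tab))                     # sum over all cells of the table
  sum_a  <- sum(comb2(rowSums(tab)))            # row marginals
  sum_b  <- sum(comb2(colSums(tab)))            # column marginals
  expct  <- sum_a * sum_b / comb2(length(a))    # expected index under independence
  (sum_ij - expct) / (0.5 * (sum_a + sum_b) - expct)
}
```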

7. Concluding Remarks

Despite our effort to make the simulation study fair, ultimately it would be good to have comparisons of methods run by researchers who did not have their hand in the design of any of the methods. Every method was best for certain DGPs in the simulation study, and simulation studies could be designed that make any method “win.” Readers need to make up their own mind about to what extent our study covered situations that are important to them. One of our major aims was to confront all methods with DGPs that do not exactly match their model assumptions, but for which the methods nevertheless could be legitimately used. In fact, we incorporated crucial ideas from both MCLUST (initialization) and TCLUST (eigenvalue ratio constraints), and the combination of these ideas used here could actually be beneficial for all methods. The methods presented in this article will soon be available in the new R-package OTRIMLE.

The problem of determining the number of clusters G is very important in practice. Here are two possible approaches for the RIMLE. First, the most popular approach for fitting plain mixture models, namely the Bayesian Information Criterion (BIC), can be used, treating the RIMLE/OTRIMLE improper constant density as a proper one; see Fraley and Raftery (Citation1998). Second, G could be decided in an exploratory way by monitoring the changes of the pseudo-likelihood over different values of G, similarly to what is done for TCLUST in García-Escudero et al. (Citation2011). Investigating these approaches in depth is beyond the scope of this article.
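For the first approach, a BIC-type value can be obtained from the maximized pseudo-log-likelihood in the usual way; the sketch below treats the improper constant density as a proper component and uses a plain parameter count, which is an assumption of ours rather than a prescription from the article.

```r
# Sketch: BIC-type criterion for comparing values of G, treating the icd as a
# proper density. loglik is n * l_n(theta_n(delta)); larger values are better
# with this sign convention.
rimle_bic <- function(loglik, n, p, G) {
  k <- G * p + G * p * (p + 1) / 2 + G   # means, covariances, free proportions (pi_0,...,pi_G sum to 1)
  2 * loglik - k * log(n)
}
```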

Supplemental material


Supplementary materials contain: (i) additional details about methods and algorithms; (ii) detailed definitions of the sampling designs for the simulation study, along with boxplots and summary tables for misclassification rates; (iii) additional plots for the real data applications; (iv) R software with instructions to reproduce the comparative simulation study and the applications to real data.

Funding

The authors gratefully acknowledge financial support from MIUR research grant PRIN 2010J3LZEN, and the use of the high-performance computing infrastructure funded by University of Salerno (research program ASSA098434). The authors gratefully acknowledge the support from the EPSRC grant EP/K033972/1.

References

  • Banfield, J. D., and Raftery, A. E. (1993), “Model-based Gaussian and Non-Gaussian Clustering,” Biometrics, 49, 803–821.
  • Becker, C., and Gather, U. (1999), “The Masking Breakdown Point of Multivariate Outlier Identification Rules,” Journal of the American Statistical Association, 94, 947–955.
  • Byers, S., and Raftery, A. E. (1998), “Nearest-Neighbor Clutter Removal for Estimating Features in Spatial Point Processes,” Journal of the American Statistical Association, 93, 577–584.
  • Cator, E. A., and Lopuhaä, H. P. (2012), “Central Limit Theorem and Influence Function for the MCD Estimators at General Multivariate Distributions,” Bernoulli, 18, 520–551.
  • Coretto, P., and Hennig, C. (2010), “A Simulation Study to Compare Robust Clustering Methods Based on Mixtures,” Advances in Data Analysis and Classification, 4, 111–135.
  • Coretto, P., and Hennig, C. (in press a), “A Consistent and Breakdown Robust Model-based Clustering Method,” available at http://arxiv.org/pdf/1309.6895.
  • Coretto, P., and Hennig, C. (in press b), Supplement to: “Robust Improper Maximum Likelihood: Tuning, Computation, and a Comparison With Other Methods for Robust Gaussian Clustering,” available at http://arxiv.org/src/1406.0808/anc/supplement.pdf.
  • Cuesta-Albertos, J. A., Gordaliza, A., and Matrán, C. (1997), “Trimmed k-means: An Attempt to Robustify Quantizers,” Annals of Statistics, 25, 553–576.
  • Croux, C., and Haesbroeck, G. (1999), “Influence Function and Efficiency of the Minimum Covariance Determinant Scatter Matrix Estimator,” Journal of Multivariate Analysis, 71, 161–190.
  • Davies, L., and Gather, U. (1993), “The Identification of Multiple Outliers,” Journal of the American Statistical Association, 88, 782–792.
  • Day, N. E. (1969), “Estimating the Components of a Mixture of Normal Distributions,” Biometrika, 56, 463–474.
  • Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977), “Maximum Likelihood From Incomplete Data via the EM Algorithm,” Journal of the Royal Statistical Society, Series B, 39, 1–47.
  • Dennis, J. E. J. (ed.) (1981), Algorithms for Nonlinear Fitting, (NATO Advanced Research Symposium), Cambridge, UK: Cambridge University Press.
  • Fraley, C., and Raftery, A. E. (1998), “How Many Clusters? Which Clustering Method? Answers via Model-based Cluster Analysis,” The Computer Journal, 41, 578–588.
  • Fraley, C., Raftery, A. E., Murphy, T. B., and Scrucca, L. (2012), mclust Version 4 for R: Normal Mixture Modeling for Model-Based Clustering, Classification, and Density Estimation, Technical Report no. 597, Department of Statistics, University of Washington, Seattle, WA.
  • Fritz, H., García-Escudero, L. A., and Mayo-Iscar, A. (2012), “tclust: An R Package for a Trimming Approach to Cluster Analysis,” Journal of Statistical Software, 47, 1–26.
  • Gallegos, M. T. (2002), “Maximum Likelihood Clustering With Outliers,” in Classification, Clustering, and Data Analysis, eds. K. Jajuga, A. Sokolowski, and H.-H. Bock, Berlin: Springer, pp. 247–255.
  • Gallegos, M. T., and Ritter, G. (2005), “A Robust Method for Cluster Analysis,” Annals of Statistics, 33, 347–380.
  • Gallegos, M. T., and Ritter, G. (2009), “Trimmed ML Estimation of Contaminated Mixtures,” Sankhya (Series A), 71, 164–220.
  • García-Escudero, L. A., Gordaliza, A., Matrán, C., and Mayo-Iscar, A. (2008), “A General Trimming Approach to Robust Cluster Analysis,” Annals of Statistics, 36, 1324–1345.
  • García-Escudero, L. A., Gordaliza, A., Matrán, C., and Mayo-Iscar, A. (2010), “A Review of Robust Clustering Methods,” Advances in Data Analysis and Classification, 4, 89–109.
  • García-Escudero, L. A., Gordaliza, A., Matrán, C., and Mayo-Iscar, A. (2011), “Exploring the Number of Groups in Robust Model-Based Clustering,” Statistics and Computing, 21, 585–599.
  • Hathaway, R. J. (1985), “A Constrained Formulation of Maximum-Likelihood Estimation for Normal Mixture Distributions,” The Annals of Statistics, 13, 795–800.
  • Hennig, C. (2004), “Breakdown Points for Maximum Likelihood Estimators of Location-Scale Mixtures,” The Annals of Statistics, 32, 1313–1340.
  • Hennig, C. (2010), “Methods for Merging Gaussian Mixture Components,” Advances in Data Analysis and Classification, 4, 3–34.
  • Hennig, C., and Liao, T. F. (2013), “How to Find an Appropriate Clustering for Mixed Type Variables With Application to Socioeconomic Stratification” (with discussion), Journal of the Royal Statistical Society, Series C, 62, 309–369.
  • Hubert, L., and Arabie, P. (1985), “Comparing Partitions,” Journal of Classification, 2, 193–218.
  • Ingrassia, S. (2004), “A Likelihood-based Constrained Algorithm for Multivariate Normal Mixture Models,” Statistical Methods & Applications, 13, 151–166.
  • Ingrassia, S., and Rocci, R. (2007), “Constrained Monotone EM Algorithms for Finite Mixture of Multivariate Gaussians,” Computational Statistics and Data Analysis, 51, 5339–5351.
  • Ingrassia, S., and Rocci, R. (2011), “Degeneracy of the EM Algorithm for the MLE of Multivariate Gaussian Mixtures and Dynamic Constraints,” Computational Statistics & Data Analysis, 55, 1715–1725.
  • Kiefer, J. (1953), “Sequential Minimax Search for a Maximum,” Proceedings of the American Mathematical Society, 4, 502–506.
  • McLachlan, G. J., and Peel, D. (2000), “Robust Mixture Modelling Using the t–distribution,” Statistics and Computing, 10, 339–348.
  • Müllensiefen, D. (2009), Fantastic: Feature ANalysis Technology Accessing STatistics (In a Corpus): Technical Report v1.5, London: Goldsmiths University of London.
  • Neykov, N., Filzmoser, P., Dimova, R., and Neytchev, P. (2007), “Robust Fitting of Mixtures Using the Trimmed Likelihood Estimator,” Computational Statistics and Data Analysis, 17, 299–308.
  • Pison, G., Aelst, S. V., and Willems, G. (2002), “Small Sample Corrections for LTS and MCD,” Metrika, 55, 111–123.
  • Qin, Y., and Priebe, C. E. (2013), “Maximum lq-likelihood Estimation via the Expectation-Maximization Algorithm: A Robust Estimation of Mixture Models,” Journal of the American Statistical Association, 108, 914–928.
  • Rousseeuw, P. J. (1985), “Multivariate Estimation With High Breakdown Point,” Mathematical Statistics and Applications, 8, 283–297.
  • Schaffrath, H. (1992), “The esac Databases and Mappet Software,” Computing in Musicology, 8, 66.
  • Sommerer, E.-O., and Weihs, C. (2005), “Introduction to the Contest “Social Milieus in Dortmund,” in Classification - the Ubiquitous Challenge, Berlin: Springer, pp. 667–673.