Abstract
Clustering is a hard problem in general, and the difficulty is compounded for massive datasets. This article develops, under Gaussian assumptions, a multistage algorithm that clusters an initial sample, filters out observations that can be reasonably classified by these clusters, and iterates the preceding procedure on the remainder. A final step uses the estimated class probabilities and dispersions to classify each observation in the dataset. Experiments on test datasets indicate good performance. Application to datasets from software metrics and positron emission tomography required no more than five stages each, suggesting that the procedure is practical to implement.
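The multistage scheme described above can be sketched in simplified form. The code below is a hypothetical illustration, not the paper's algorithm: it substitutes plain k-means with Euclidean distances for the Gaussian-based estimation, and the function names, the `sample_size` and `threshold` parameters, and the filtering rule are all assumptions introduced for illustration.

```python
import numpy as np


def kmeans(X, k, rng, iters=20):
    """Minimal k-means used as a stand-in for the per-stage clustering."""
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers


def multistage_cluster(X, k, sample_size=200, max_stages=5,
                       threshold=2.0, seed=0):
    """Hypothetical sketch of the multistage idea: cluster a sample,
    filter out observations close enough to the found centers, and
    iterate on the remainder; a final pass classifies everything."""
    rng = np.random.default_rng(seed)
    remaining = np.arange(len(X))
    all_centers = []
    for _ in range(max_stages):
        if len(remaining) < k:
            break
        # Stage: cluster a random sample of the unclassified observations.
        idx = rng.choice(remaining,
                         size=min(sample_size, len(remaining)),
                         replace=False)
        centers = kmeans(X[idx], k, rng)
        all_centers.append(centers)
        # Filter: drop observations within `threshold` of any new center
        # (a crude proxy for "reasonably classified" in the abstract).
        d = np.min(np.linalg.norm(
            X[remaining][:, None] - centers[None], axis=2), axis=1)
        remaining = remaining[d > threshold]
    all_centers = np.vstack(all_centers)
    # Final step: classify every observation by its nearest center.
    labels = np.argmin(
        np.linalg.norm(X[:, None] - all_centers[None], axis=2), axis=1)
    return labels, all_centers


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two well-separated synthetic blobs as a toy massive-data stand-in.
    X = np.vstack([rng.normal(0.0, 0.3, size=(100, 2)),
                   rng.normal(5.0, 0.3, size=(100, 2))])
    labels, centers = multistage_cluster(X, k=2, sample_size=50,
                                         threshold=1.0)
    print(len(labels), centers.shape[1])
```

In this toy form, well-separated data are usually absorbed in the first stage, so later stages see few or no remaining observations, consistent with the abstract's report that real applications needed at most five stages.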