Validating clusters in hierarchical cluster analysis

In a previous post, I showed how to run HCA with the base-R hclust() function. Here, I introduce a package that provides a way of validating the resulting clusters: pvclust. This package allows the user to attach confidence estimates to clusters through multiscale bootstrap resampling.

The motivation for this post is a comment I received after I recommended hclust() on Twitter.

Admittedly, HCA finds clusters even when we expect there to be none. Another related issue, one that I have come up against many times, is making sense of clusters that are at odds with the intuition that the research question builds upon. In other words, you expect certain clusters to appear, but other clusters appear instead, and they do not seem to make sense.

So, HCA is designed to find clusters, and it will find some no matter what. The researcher should at least be able to decide, based on some metric, whether these clusters are valid.

This possibility is implemented in the pvclust() function from the eponymous package. I used it in Desagulier (2014). It should be noted that pvclust builds on hclust(): the former augments the latter with p-values.

pvclust provides p-values for hierarchical clustering based on multiscale bootstrap resampling. Let us see how this works. We load the same data set as the one we used here (prepositions) and we select the Brown corpus data.

rm(list=ls(all=TRUE))   # clear the workspace
df <- read.table(file="https://www.nakala.fr/nakala/data/11280/64a85ca1", header=TRUE, row.names=1, sep="\t")
df.brown <- df[which(df$corpus=="brown1"), ]   # keep the Brown corpus observations
df.brown$corpus <- NULL                        # drop the now redundant corpus column

Again, we create a cross tabulation that displays the frequency distribution of text categories per preposition and we convert the output back into a data frame.

tab <- table(df.brown$category, df.brown$preposition)   # text categories by prepositions
mat <- as.data.frame.matrix(tab)                        # convert the table back into a data frame

With na.omit, we make sure NAs are removed.

mat <- na.omit(mat)   # remove rows containing NAs

Unlike hclust(), pvclust() clusters columns, not rows. The data must be transposed.

mat <- t(mat)

With hclust(), the two complementary steps typical of HCA are separated:

  1. creating a distance matrix with a specified distance measure;
  2. amalgamating the clusters with a specified amalgamation method (e.g. Ward's method).

With pvclust(), both steps are handled within a single function call, via the method.dist and method.hclust arguments.
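For comparison, here is a minimal sketch of what the two separate steps look like in base R, with the same distance measure and amalgamation method as below (the object names d and hc are just illustrative):

# step 1: distance matrix with the Canberra measure
# hclust() clusters rows, so we transpose mat back to cluster the text categories
d <- dist(t(mat), method="canberra")
# step 2: amalgamation with Ward's method
hc <- hclust(d, method="ward.D")
plot(hc)

The equivalent with pvclust() is a single call: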

library(pvclust)
fit <- pvclust(mat, method.hclust="ward.D", method.dist="canberra")

R outputs the following:

Bootstrap (r = 0.5)... Done.
Bootstrap (r = 0.6)... Done.
Bootstrap (r = 0.7)... Done.
Bootstrap (r = 0.8)... Done.
Bootstrap (r = 0.9)... Done.
Bootstrap (r = 1.0)... Done.
Bootstrap (r = 1.1)... Done.
Bootstrap (r = 1.2)... Done.
Bootstrap (r = 1.3)... Done.
Bootstrap (r = 1.4)... Done.
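The r values correspond to the relative sizes of the bootstrap samples used by the multiscale procedure. Both this sequence and the number of bootstrap replicates per step can be set explicitly; the sketch below simply spells out what I take to be the package defaults (r from 0.5 to 1.4, nboot=1000):

fit <- pvclust(mat, method.hclust="ward.D", method.dist="canberra", r=seq(0.5, 1.4, by=0.1), nboot=1000)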

The fit object can be summarized as follows:

fit

Cluster method: ward.D
Distance      : canberra

Estimates on edges:

      au    bp se.au se.bp      v      c  pchi
1  0.536 0.329 0.032 0.005  0.176  0.267 0.555
2  0.771 0.497 0.023 0.005 -0.367  0.374 0.475
3  0.762 0.618 0.024 0.005 -0.505  0.206 0.009
4  0.708 0.712 0.028 0.005 -0.554 -0.006 0.041
5  0.990 0.996 0.008 0.001 -2.500 -0.154 0.819
6  0.837 0.616 0.019 0.005 -0.639  0.344 0.760
7  0.672 0.410 0.028 0.005 -0.109  0.336 0.432
8  0.802 0.425 0.022 0.005 -0.330  0.520 0.972
9  0.705 0.395 0.027 0.005 -0.136  0.402 0.918
10 0.553 0.389 0.031 0.005  0.074  0.207 0.123
11 0.828 0.701 0.020 0.005 -0.737  0.211 0.198
12 0.846 0.694 0.019 0.005 -0.764  0.257 0.299
13 0.763 0.383 0.024 0.005 -0.209  0.506 0.201
14 1.000 1.000 0.000 0.000  0.000  0.000 0.000

The fit object is in fact a list that contains lots of numeric information (enter str(fit)).
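For instance, the estimates printed above are stored as a data frame inside the object, assuming (as in the version of pvclust I am describing) that they live in the edges component:

str(fit, max.level=1)   # overview of the components of the pvclust object
head(fit$edges)         # the AU/BP estimates as a data frame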

It is now time to plot the dendrogram with plot().

plot(fit)
Figure: a cluster dendrogram of text categories in the Brown corpus based on the distribution of prepositions, obtained with pvclust.

The plot should be read from bottom to top. There are three numbers around each node. The number below each node specifies the rank of the cluster (here, from 1 to 13, i.e. from the 1st generated cluster at the bottom to the 13th at the top). The two numbers above each node indicate two types of p-values, which are calculated via two different bootstrapping algorithms: AU and BP.[1]

The number on the left indicates an ‘approximately unbiased’ p-value (AU) and is computed by multiscale bootstrap resampling. The number on the right indicates a ‘bootstrap probability’ p-value (BP) and is computed by normal bootstrap resampling. The number on the left is a much better assessment of how strongly the cluster is supported by the data.

Indeed, according to the package documentation,

pvclust provides two types of p-values: AU (Approximately Unbiased) p-value and BP (Bootstrap Probability) value. AU p-value, which is computed by multiscale bootstrap resampling, is a better approximation to unbiased p-value than BP value computed by normal bootstrap resampling.

http://stat.sys.i.kyoto-u.ac.jp/prog/pvclust/

In either case, the closer the number is to 100 (i.e. the closer the p-value is to 1), the more valid the cluster. For example, an AU p-value of, say, 90 implies that the hypothesis that the cluster is invalid is rejected with a significance level of 0.1.

Here, we see that not all clusters represent the data fairly accurately. Indeed, the mean AU score is 76.93 (sd = 13.5). The standard deviation is relatively high because AU p-values range from 54 to 100. We want to select only those clusters that are valid.
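These summary figures can be recomputed directly from the fit object; the results may differ marginally from the rounded values printed above (again assuming the estimates sit in fit$edges):

au <- fit$edges$au * 100   # AU p-values as percentages
mean(au)                   # about 77
sd(au)                     # about 13.5
range(au)                  # from about 54 to 100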

Like rect.hclust(), pvclust allows the user to single out clusters that meet a user-defined criterion. This is done with the pvrect() function.

The code below finds clusters with AU p-values (pv="au") greater than or equal to (type="geq") the threshold given by the alpha argument (here alpha=.80) and draws red rectangles around the branches that meet the condition. Because pvrect() adds to the current plot, it should be called right after plot(fit).

pvrect(fit, alpha=.80, pv="au", type="geq")
Figure: a cluster dendrogram of text categories in the Brown corpus based on the distribution of prepositions, obtained with pvclust (clusters with AU greater than or equal to 0.8 are highlighted).

Three clusters meet the condition. The issue that HCA will find clusters no matter what remains (other exploratory methods such as correspondence analysis do not suffer from this shortcoming), but at least the user can select a significance level above which the clusters can be taken into consideration.
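If you want to retrieve the members of the selected clusters rather than just see them highlighted on the dendrogram, the package also provides pvpick(), which takes the same threshold arguments as pvrect(). A quick sketch:

picked <- pvpick(fit, alpha=.80, pv="au", type="geq")
picked$clusters   # the text categories grouped in each selected cluster
picked$edges      # the edge numbers of the selected clusters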

Package documentation

References

Desagulier, Guillaume. 2014. Visualizing distances in a set of near synonyms: rather, quite, fairly, and pretty. In D. Glynn & J. Robinson (eds.), Corpus Methods for Semantics: Quantitative Studies in Polysemy and Synonymy, 145–178. Amsterdam: John Benjamins. doi: 10.1075/hcp.43.06des

Suzuki, Ryota & Hidetoshi Shimodaira. 2006. Pvclust: An R package for assessing the uncertainty in hierarchical clustering. Bioinformatics 22(12). 1540–1542. https://doi.org/10.1093/bioinformatics/btl117

Cite this article as: Guillaume Desagulier, "Validating clusters in hierarchical cluster analysis," in Around the word, 21/10/2019, https://corpling.hypotheses.org/2675.
[1] The term “p-value” is the one that the authors of the pvclust package have adopted. Yet it seems that these p-values, transformed into percentages, are confidence estimates.

Guillaume Desagulier

UMR 7114 MoDyCo — Université Paris 8, CNRS, Université Paris Nanterre, Institut Universitaire de France.
