Clustering corpus data with multidimensional scaling
Multidimensional scaling (MDS) is a very popular multivariate exploratory approach because it is relatively old, versatile, and easy to understand and implement. It is used to visualize dissimilarities between items as distances in a low-dimensional map (in general, a two-dimensional plot).
I hardly ever use MDS because I was trained in the French school of data analysis, which means that I favor equivalent multivariate exploratory approaches such as (multiple) correspondence analysis or hierarchical cluster analysis. However, this preference tends to puzzle non-French reviewers, which is why I advise you to keep MDS in mind if you are aiming for a top-tier journal.
MDS comes in different flavors:
- vanilla/classical MDS (metric MDS);
- Kruskal’s non-metric multidimensional scaling;
- Sammon’s non-linear mapping.
I focus on classical MDS, which is also known as principal coordinates analysis (Gower 1966).
MDS takes as input a matrix of dissimilarities and returns a set of points such that the distances between the points are approximately equal to the dissimilarities. A strong selling point of MDS is that, given a table with n dimensions, it returns an optimal solution for representing the data in a space whose dimensionality is (much) lower than n.
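To make this idea concrete, here is a minimal sketch with R's built-in eurodist data set (road distances between 21 European cities), which has nothing to do with the case study below: cmdscale() turns the distance object into two-dimensional coordinates, and the distances between those coordinates approximate the original road distances.
coords <- cmdscale(eurodist, k = 2)        # 2D coordinates derived from a distance object
round(coords[1:3, ], 1)                    # one row of coordinates per city
round(as.matrix(dist(coords))[1:3, 1:3])   # distances between the points...
round(as.matrix(eurodist)[1:3, 1:3])       # ...approximate the original dissimilarities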
Case study
The data are from Desagulier (2014). The data set was compiled to see how 23 English intensifiers cluster on the basis of their most associated adjectives. For each of the 23 adverbs, I first extracted all adjectival collocates from the Corpus of Contemporary American English (Davies 2008–2012), amounting to 432 adjective types and 316,159 co-occurrence tokens. Then, I conducted a collexeme analysis for each of the 23 degree modifiers. To reduce the data set to manageable proportions, the 35 most attracted adjectives were selected on the basis of their respective collostruction strengths, yielding a 23-by-432 contingency table containing the frequency of adverb-adjective pair types.
The contingency table is available from a secure server:
intensifiers <- readRDS(url("https://tinyurl.com/7k378zcd"))
Here is what the first ten rows and the first ten columns look like:
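One way to reproduce this snapshot, assuming the object loads as a data frame or matrix of counts, is to subset it directly:
intensifiers[1:10, 1:10] # display the first ten rows and the first ten columns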

The dissimilarity matrix
The contingency table must be converted into a distance object. Technically, this distance object is a dissimilarity matrix. Because the matrix is symmetric, it is divided into two parts (two triangles) on either side of the diagonal of null distances, i.e. the distances between each intensifier and itself. Only one triangle is needed.
You obtain the dissimilarity matrix by converting the contingency table into a table of distances with a user-defined distance measure. When the variables are ratio-scaled, you can choose from several distance measures: Euclidean, City-Block/Manhattan, correlation, Pearson, Canberra, etc. In my experience, the Canberra distance metric copes best with the relatively large number of zero frequencies that we typically obtain in linguistic data (i.e. when we have a sparse matrix).
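To see why the choice of metric matters, here is a minimal sketch with made-up counts (not the intensifier data) contrasting Euclidean and Canberra distances on a small sparse matrix: each Canberra term is scaled by the sum of the two values being compared, so columns with small or zero counts are not drowned out by the large ones.
toy <- matrix(c(100, 0, 2, 0,
                 98, 1, 0, 3,
                  5, 0, 1, 0),
              nrow = 3, byrow = TRUE,
              dimnames = list(c("a", "b", "c"), paste0("adj", 1:4)))
dist(toy, method = "euclidean") # dominated by the large counts in the first column
dist(toy, method = "canberra")  # small and zero counts weigh in on a comparable scale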
We use the dist() function:
- the first argument is the data table;
- the second argument is the distance metric (method="canberra");
- the third argument (diag) lets you decide if you want R to print the diagonal of the distance object;
- the fourth argument (upper) lets you decide if you want R to print the upper triangle of the distance object.
dist.object <- dist(intensifiers, method="canberra", diag=T, upper=T)
The distance object is quite large. To see a snapshot, enter the following:
dist.matrix <- as.matrix(dist.object) # convert the distance object to a matrix
dist.matrix[1:5, 1:5] # first 5 rows, first 5 columns

The diagonal of 0 values separates the upper and lower triangles, as expected from a distance matrix.
Running MDS with cmdscale()
The distance matrix serves as input to the base-R cmdscale() function, which performs a 'vanilla' version of MDS. We specify k=2, meaning that the maximum dimension of the space in which the data are to be represented is 2.
mds <- cmdscale(dist.matrix, eig=TRUE, k=2)
mds

Because we set eig=TRUE, the result is a list whose points component (mds$points) is a matrix with 2 columns and 23 rows. The function has done a good job of outputting the coordinates of the intensifiers in the reduced two-dimensional space that we requested. Note that cmdscale() returns the best-fitting k-dimensional representation, where k may be less than the argument k.
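Because we called cmdscale() with eig=TRUE, the output also contains the eigenvalues, which give a rough idea of how much of the variation the retained dimensions capture. A minimal sketch (with a non-Euclidean metric such as Canberra, some eigenvalues may be negative, hence the absolute values):
mds$GOF                                     # goodness-of-fit values reported by cmdscale()
round(mds$eig[1:5] / sum(abs(mds$eig)), 2)  # share of the first five dimensions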
To plot the results, we first retrieve the coordinates for the two dimensions (x and y).
x <- mds$points[,1]
y <- mds$points[,2]
Second, we plot the two axes and add information about the intensifiers (Fig. 1).
plot(x, y, xlab="Dim.1", ylab="Dim.2", type="n") text(x, y, labels = row.names(intensifiers), cex=.7)

The question we are addressing is whether these dimensions reflect differences in the semantics of the intensifiers. Existing typologies of intensifiers tend to group them as follows:
- diminishers (slightly, a little, a bit, somewhat)
- moderators (quite, rather, pretty, fairly)
- boosters (most, very, extremely, highly, awfully, terribly, frightfully, jolly)
- maximizers (completely, totally, perfectly, absolutely, entirely, utterly)
Maximizers and boosters stretch horizontally across the middle of the plot. Moderators are in the upper left corner, and diminishers in the lower left corner. Note the surprising position of almost.
Combining MDS and k-means clustering
We can improve the MDS plot in Fig. 1 by grouping and coloring the individuals by means of k-means clustering. K-means clustering partitions the data points into k classes: each point is assigned to the cluster with the nearest mean (centroid).
We download and load one extra package built on top of ggplot2, namely ggpubr.
install.packages("ggpubr") library(ggpubr)
We convert the coordinates obtained above into a data frame.
mds.df <- as.data.frame(mds$points) # convert the coordinates
colnames(mds.df) <- c("Dim.1", "Dim.2") # assign column names
mds.df # inspect

We proceed to k-means clustering on the data frame with the kmeans() function.
kmclusters <- kmeans(mds.df, 5) # k-means clustering with 5 groups
kmclusters <- as.factor(kmclusters$cluster) # convert to a factor
mds.df$groups <- kmclusters # join to the existing data frame
mds.df # inspect
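The choice of five groups is a judgment call, and kmeans() starts from random initial centers, so the assignment can vary between runs. A minimal sketch of two common precautions, setting a seed and inspecting the within-cluster sum of squares over a range of k (the 'elbow' heuristic):
set.seed(42) # arbitrary seed to make the cluster assignments reproducible
wss <- sapply(1:8, function(k) {
  kmeans(mds.df[, c("Dim.1", "Dim.2")], centers = k, nstart = 25)$tot.withinss
})
plot(1:8, wss, type = "b", xlab = "number of clusters k", ylab = "total within-cluster sum of squares")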

We are ready to launch the plot with ggscatter() (Fig. 2). Each group will be assigned a color.
ggscatter(mds.df, x = "Dim.1", y = "Dim.2", label = rownames(intensifiers), color = "groups", palette = "jco", size = 1, ellipse = TRUE, ellipse.type = "convex", repel = TRUE)

A comparison with HCA
The distance matrix can also serve as input for another multivariate exploratory method: hierarchical cluster analysis.
We use the hclust() function to apply an amalgamation rule that specifies how the elements in the matrix are clustered. We amalgamate the clusters with Ward's method, which evaluates the distances between clusters using an analysis of variance approach. Ward's method is the most widely used amalgamation rule because it has the advantage of generating clusters of moderate size. We specify method="ward.D".
clusters <- hclust(dist.object, method="ward.D")
We plot the dendrogram (Fig. 3) as follows:
plot(clusters, sub="(Canberra, Ward)")

Although it is based on the same distance matrix, the dendrogram groups the intensifiers slightly differently from the MDS plot.
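One way of quantifying this difference (a sketch, assuming the kmclusters factor from the k-means step is still in memory) is to cut the dendrogram into the same number of groups with cutree() and to cross-tabulate the two partitions.
hca.groups <- cutree(clusters, k = 5)         # cut the dendrogram into five groups
table(hca = hca.groups, kmeans = kmclusters)  # cross-tabulate the two partitions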
References
Gower, John C. 1966. “Some Distance Properties of Latent Root and Vector Methods Used in Multivariate Analysis.” Biometrika 53 (3-4): 325–38.