Mapping lexical variation in the BNC 2014 with R

(updated Sept. 27th, 2019)

This post provides an introduction to doing regional dialectology in the UK with R. More specifically, I focus on mapping lexical variables from the spoken component of the British National Corpus 2014. The goal is to see if we observe patterns of regional variation with respect to pre-identified lexical alternations.

I was inspired by two colleagues: Jack Grieve and Mathieu Avanzi. Jack is Professor of Corpus Linguistics in the Department of English Language and Linguistics at Birmingham University. I was happy to meet him in person at the Corpus Linguistics Summer School 2019, where I taught a course on exploratory statistics. Jack pointed me to a tutorial he wrote on mapping regional variation in the US based on Twitter data. Mathieu is Associate Professor in (French) Linguistics at Paris Sorbonne University. In France, he has made a name for himself with Le Français de nos Régions, a large-scale project aimed at visualising regional variation from lexical and phonetic variables collected via online surveys.

My goal is somewhat different because I do not use “live” data from online surveys or social networks. For the purposes of a course on corpus-based sociolinguistics, I want to plot lexical alternations on a map of the United Kingdom based on data from a corpus of spoken English.

Choropleth maps

The literature I have read on the topic has convinced me that I should go for a choropleth map.1 The map below displays the levels of popular education for each French “département”. White denotes the highest level of education, black the lowest, and shades of gray intermediate levels. In other words, the colors are indexed to a measurement.

A “tinted map” by Charles Dupin (1826). (source)

Like a heatmap, a choropleth map visualizes the variation of a measurement across a geographic area. Unlike a heatmap, it displays measurements within pre-assigned geographic boundaries.2

One argument in favor of choropleths is that they visualize data in a simple way within easily recognizable geographic entities. One argument against them is that these geographic entities are coarse-grained and somewhat artificial.

Shapefiles

The first step is to obtain a shapefile. Technically speaking, a shapefile is a collection of files that allows a GIS (Geographic Information System) to store and display data related to positions on the surface of the Earth.

The next step is to decide what resolution you want (cities, counties, districts, etc.). This decision depends on (a) what you consider relevant for the purpose of your study, and (b) the geographic breakdown of the country of interest. Most shapefiles for the UK break down into five levels:

  • NUTS 1 (regions),
  • NUTS 2 (counties, but see below),
  • NUTS 3 (districts),
  • LAU Level 1 (local authority districts),
  • LAU Level 2 (local authority wards). 

For the purpose of this tutorial, we shall be working at the level of counties (NUTS 2).3 To get the corresponding shapefile, visit the Open Geography portal from the Office for National Statistics, click on download, and choose “shapefile” (the folder occupies 20.6 MB on your disk). Once the download is over, unzip the folder.
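If you prefer to script the download and extraction, something like the following should work (the URL below is a placeholder: copy the actual link from the portal's download button):

url <- "https://path/to/NUTS_Level_2_January_2018_Boundaries.zip" # placeholder URL
download.file(url, destfile="nuts2_shapefile.zip", mode="wb") # download the archive
unzip("nuts2_shapefile.zip", exdir="~/shapefiles") # extract it to a folder of your choice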

R Packages

Before proceeding further, load the required packages.

rm(list=ls(all=TRUE)) # clear the workspace
library(rgdal) # to read and process the shapefile
library(ggplot2) # to plot the maps
library(dplyr) # for data manipulation

The rgdal package loads and processes the shapefile, ggplot2 produces the maps, and dplyr handles data manipulation.
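If any of these packages is missing from your installation, install it first (a one-off step):

install.packages(c("rgdal", "ggplot2", "dplyr")) # one-off installation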

Loading the shapefile

To load the shapefile, use the readOGR() function from the rgdal package. The dsn argument should be the path to the shapefile folder. The layer argument is the name shared by the files in that folder, without their extensions.

uk.nuts2.shp <- readOGR(dsn="~path_to/shapefiles",layer="NUTS_Level_2_January_2018_Full_Clipped_Boundaries_in_the_United_Kingdom")

Convert the whole thing into a data frame. On this occasion, we use fortify(). We specify the desired level of granularity with region = "nuts218nm". Mind you, this line of code will keep R busy for quite some time!

uk.nuts2.shp.df <- fortify(uk.nuts2.shp, region = "nuts218nm")

Inspect the data frame.

head(uk.nuts2.shp.df)

We are ready to plot a map of the UK with ggplot2. First, run the ggplot() function and save the map.

map <- ggplot(data = uk.nuts2.shp.df, aes(x = long, y = lat, group = group))

Next, plot the map. Again, this will take some time as it goes over a large data frame and generates a heavy file.

map + geom_path()

Here are two tips:

  • RStudio's plot pane is not good at displaying heavy maps. The base R graphics device works much better.
  • the maps you obtain are heavy (at least 20 MB). I recommend saving them as encapsulated PostScript files with postscript() (see the sketch after the figure).
A NUTS-2 map of the UK with geom_path()
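Saving the geom_path() map to an EPS file might look like this (a sketch; the file name and dimensions are placeholders):

postscript("uk_nuts2_map.eps", width=7, height=9, horizontal=FALSE, onefile=FALSE, paper="special") # open the EPS device
print(map + geom_path()) # draw the map into the file
dev.off() # close the graphics device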

For polygon plotting, use the following code:

map +
   geom_polygon(aes(fill = id)) +
   coord_fixed(1.3) +
   guides(fill = FALSE)
A NUTS-2 map of the UK with geom_polygon()

One color is assigned per geographical unit. 

Counties or regions?

The NUTS-2 divisions are the following:

Bedfordshire and Hertfordshire
Berkshire, Buckinghamshire and Oxfordshire
Cheshire
Cornwall and Isles of Scilly
Cumbria
Derbyshire and Nottinghamshire
Devon
Dorset and Somerset
East Anglia
East Wales
East Yorkshire and Northern Lincolnshire
Eastern Scotland
Essex
Gloucestershire, Wiltshire and Bath/Bristol area
Greater Manchester
Hampshire and Isle of Wight
Herefordshire, Worcestershire and Warwickshire
Highlands and Islands
Inner London – East
Inner London – West
Kent
Lancashire
Leicestershire, Rutland and Northamptonshire
Lincolnshire
Merseyside
North Eastern Scotland
North Yorkshire
Northern Ireland
Northumberland and Tyne and Wear
Outer London – East and North East
Outer London – South
Outer London – West and North West
Shropshire and Staffordshire
South Yorkshire
Southern Scotland
Surrey, East and West Sussex
Tees Valley and Durham
West Central Scotland
West Midlands
West Wales
West Yorkshire

In R, the above list can be accessed by entering:

levels(as.factor(uk.nuts2.shp.df$id))

In NUTS-2, some counties are grouped, which can be misleading. For example, Cambridgeshire is a county, but it does not appear in the list. You have to know that it has been included as part of East Anglia, which is a region.

It is time to populate the map with sociolinguistic data.

Data

To replicate one of the case studies in Grieve, Montgomery, Nini, Murakami & Guo (2019) on lexical variation and social media in British English (see References below), I extracted all instances of sofa, couch, and settee produced by speakers located in the UK from the BNC 2014. I used my BNC.2014.query() script, described in a previous post, to harvest the data.

The dataset is available from my Nakala repository. To load it into R, enter:

data <- read.table("https://nakala.fr/nakala/data/11280/5da8f47f", header=T, sep="\t") # load
head(data, 10) # display the first 10 lines

The dataset is now loaded as a data frame. It cannot be directly used as input for a map. Here is what needs to be done:

  • code each element in the city column for its NUTS-2 equivalent,
  • compute some metric with respect to the variable (word),
  • combine the above with the loaded shapefile (uk.nuts2.shp.df).

The code below loads a file matching cities and their corresponding NUTS-2 divisions.

city.counties <- read.table("https://nakala.fr/nakala/data/11280/16e4ff68", header=T, sep="\t")

The NUTS-2 divisions are part of the id column. 

We select the two columns that we need in data: data$word and data$city.

data.2 <- data[,c(2,5)]
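Because column positions depend on how the file is structured, selecting the columns by name is a safer equivalent (assuming, as stated above, that they are named word and city):

data.2 <- data[, c("city", "word")] # same two columns, selected by name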

We merge city.counties and data.2, using city as the common column.

data.counties <- merge(data.2, city.counties, by="city")

I have used this data frame (more specifically the columns word and id) to perform a multinomial test. The purpose of this multinomial test is to see which words are specific to which counties. Run the next line of code to load the output of the test.

distinction <- read.table("https://nakala.fr/nakala/data/11280/83ecd42b", header=T, sep="\t")

The measurements are the log-transformed p-values of the associations between each word (couch, settee, and sofa) and NUTS-2 divisions. Positive values indicate attraction and negative values indicate repulsion. For example, we see that settee is distinctive of Bedfordshire and Hertfordshire (log-transformed p-value = 0.67), whereas sofa is not (log-transformed p-value = -0.51). We might say that sofa is “anti-distinctive” of Bedfordshire and Hertfordshire. Nothing much can be said about couch with respect to Bedfordshire and Hertfordshire as the log-transformed p-value is close to 0. 
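For readers who want a feel for how such a measure can be computed, here is a minimal sketch. It is illustrative only, not the exact multinomial test behind distinction: for each word-division pair, a binomial test compares the word's share within the division to its overall share, and the p-value is log-transformed and signed so that over-representation is positive and under-representation is negative.

# k = count of the word in the division, n = total count of the three words in the division,
# K = count of the word in the whole dataset, N = overall total (illustrative helper)
signed.log.p <- function(k, n, K, N) {
  p.global <- K / N # overall share of the word
  p.val <- binom.test(k, n, p = p.global)$p.value
  direction <- ifelse(k / n >= p.global, 1, -1) # attraction vs. repulsion
  direction * -log10(p.val) # positive = attraction, negative = repulsion
}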

One issue that we need to address now is this: not all NUTS-2 divisions are attested in the data. This is bound to be a problem when we plot the map, as some parts of the UK will be missing and the map will look strange.

We load the full list of NUTS-2 divisions.

districts <- read.csv("https://nakala.fr/nakala/data/11280/b2ed82ae")

We join distinction and districts with the full_join() function from the dplyr package.

full.join <- full_join(distinction, districts, by="id")

Although there are no measurements for the counties at the bottom of the table (see all the NAs), this will guarantee that they are plotted on the map. It may seem strange that all three words are attested (or unattested) in exactly the same counties; this artificial effect is due to the kind of multinomial test that I have run.

It is now time to combine the shapefile and the measurements. Remember: the shapefile is heavy. This process will take some time.

df.for.map <- merge(uk.nuts2.shp.df, full.join, by="id", all=T)
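If merge() feels too slow on a data frame of this size, a possible alternative is dplyr's left_join(), which keeps every row of the shapefile and attaches the measurements where they exist (a sketch, not the code used for the maps below):

df.for.map <- left_join(uk.nuts2.shp.df, full.join, by="id") # NAs where no measurement exists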

Plotting the maps

We now have a file that contains measurements for all three words. Let us start by plotting a map for couch. First, we create the map object.

couch.map <- ggplot(data = df.for.map[,1:8], aes(x = long, y = lat, group = group))
variable <- "couch"

And we plot it. The code below has two parts. The first part is the graphic setup with ggplot2-specific parameters.

theme_bare <- theme(
  axis.line = element_blank(),
  axis.text.x = element_blank(),
  axis.text.y = element_blank(),
  axis.ticks = element_blank(),
  axis.title.x = element_blank(),
  axis.title.y = element_blank(),
  legend.text = element_text(size=6),
  legend.title = element_text(size=6),
  panel.background = element_blank(),
  panel.border = element_rect(colour = "gray", fill=NA, size=0.5)
)

The second part launches the plot.

couch.map +
  geom_polygon(aes(fill = couch), color = 'white', size = 0.1) +
  ggtitle(paste(variable, " in the BNC 2014", sep="")) +
  scale_fill_gradient(high = "#012c66", low = "#c8dffe", na.value = "gray", guide = "colorbar") +
  coord_fixed(1.3) +
  guides(fill=guide_colorbar(title="association (> 0)\nrepulsion (< 0)")) +
  theme(legend.justification=c(0,0), legend.position=c(0.02,0.05)) +
  theme_bare
A choropleth map of couch in the BNC 2014

The plots for settee and sofa are obtained in a similar fashion. Here is the code for settee:

settee.map <- ggplot(data = df.for.map[,c(1:7, 9)], aes(x = long, y = lat, group = group))
variable <- "settee"

settee.map +
  geom_polygon(aes(fill = settee), color = 'white', size = 0.1) +
  ggtitle(paste(variable, " in the BNC 2014", sep="")) +
  scale_fill_gradient(na.value = "gray", guide = "colorbar") +
  #scale_fill_gradient(high = "#e34a33", low = "#fee8c8", na.value = "gray", guide = "colorbar") +
  coord_fixed(1.3) +
  guides(fill=guide_colorbar(title="association (> 0)\nrepulsion (< 0)")) +
  theme(legend.justification=c(0,0), legend.position=c(0.02,0.05)) +
  theme_bare
A choropleth map of settee in the BNC 2014

And here is the code for sofa:

sofa.map <- ggplot(data = df.for.map[,c(1:7, 10)], aes(x = long, y = lat, group = group))
variable <- "sofa"

sofa.map +
  geom_polygon(aes(fill = sofa), color = 'white', size = 0.1) +
  ggtitle(paste(variable, " in the BNC 2014", sep="")) +
  scale_fill_gradient(na.value = "gray", guide = "colorbar") +
  #scale_fill_gradient(high = "#e34a33", low = "#fee8c8", na.value = "gray", guide = "colorbar") +
  coord_fixed(1.3) +
  guides(fill=guide_colorbar(title="association (> 0)\nrepulsion (< 0)")) +
  theme(legend.justification=c(0,0), legend.position=c(0.02,0.05)) +
  theme_bare
A choropleth map of sofa in the BNC 2014

Discussion

Results

It is hard to observe distinctive patterns of variation from these maps. Some local tendencies emerge, however. Kent and East Anglia display opposite preferences and dispreferences with respect to settee and sofa. The preference of Cheshire speakers for couch is all the more noteworthy as the neighbouring divisions display a dispreference.

Measurement

The question of which measurement to map is worth raising. Map experts do not consider raw frequencies a viable option; at the very least, they should be converted into percentages. Here, I have used log-transformed p-values from a test that only covers those divisions where the three words are attested.
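For example, converting the raw counts into per-division percentages could be done with dplyr (a minimal sketch, assuming the data.counties data frame created above):

word.shares <- data.counties %>%
  count(id, word) %>% # frequency of each word per NUTS-2 division
  group_by(id) %>%
  mutate(percent = 100 * n / sum(n)) %>% # share of each word within the division
  ungroup()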

Data

The data do not cover all NUTS-2 divisions. By comparison, maps made from Twitter data rely on huge datasets and are therefore more comprehensive, geographically speaking.

The BNC 2014 is much smaller, and it was not designed for sociolinguistic analysis at this level of geographical specificity.

Dr Robbie Love is part of the research team that was responsible for the compilation of the Spoken BNC 2014. I asked him what level of geographic specificity he recommended. He replied: 

Judging from the above table, maps made from corpora like the BNC 2014 are bound to be partial and skewed. 

As Jack Grieve puts it:

Indeed, when comparing BBC Voices and Twitter data, we should observe a neat distribution, like the one below.

In my opinion, the bias comes from the fact that not all areas are represented, and from the uneven number of contributions across the areas that are. In all honesty, I expected this, but thought it would be nice to give it a try. In a future post, I will illustrate choropleths with more robust datasets.

References

Grieve, Jack, Chris Montgomery, Andrea Nini, Akira Murakami & Diansheng Guo (2019). Mapping Lexical Dialect Variation in British English Using Twitter. Frontiers in Artificial Intelligence 2.

The links below point to online tutorials that have proved decisively helpful to me for preparing this post. I wish to thank their authors for making these resources available to the public. 

UK Twitter and BBC Voices Lexical Alternation Map Comparison. Jack Grieve.

Step-by-Step Choropleth Map in R: A case of mapping Nepal. Anjesh Tuladhar.

Geocoding with R. Using ggmap to geocode and map historical data. Jesse Sadler.

Creating Maps in R (2019). Data Tricks.

Cite this article as: Guillaume Desagulier, "Mapping lexical variation in the BNC 2014 with R," in Around the word, 10/09/2019, https://corpling.hypotheses.org/2714.
  1. From Ancient Greek khôros “region, place” and plêthos “multitude, quantity”. Choropleth maps were invented by Charles Dupin in the 1820s and were called cartes teintées “tinted maps”.
  2. See this blog post to know more about the distinction between a heatmap and a choropleth map.
  3. NUTS means Nomenclature of Territorial Units for Statistics.

Guillaume Desagulier

UMR 7114 MoDyCo — Université Paris 8, CNRS, Université Paris Nanterre, Institut Universitaire de France.
