Simple linear regression for linguists

A regression towards mediocrity

Originally, the term regression means “going back”. It gained currency when Sir Francis Galton related the heights of children to the average height of their parents. Galton (1886) found that children whose parents were short were likely to be shorter than average, whereas children whose parents were tall tended to be taller than average. Galton also found that when the parents were “taller than mediocrity”, the children were in general shorter than their parents, while when the parents were “shorter than mediocrity”, the children tended to be slightly taller than their parents. Galton concluded that there was “a regression towards mediocrity” with regards to height.

Regression methods

The inventory of regression methods is vast. Its most popular members are simple linear regression, multiple regression, partial least squares regression, log-linear regression, and logistic regression. This post is the first of a series that covers simple, multiple, and logistic regression, the three kinds most commonly found in linguistics papers.

The very basics

Linear regression is the simplest form of regression. It models the linear relationship between two quantitative variables, X and Y. X is the variable you are using to make the prediction and Y the variable you want to predict. In statistical parlance, X is the explanatory variable and Y the response variable. You use a linear relationship to predict the value of Y for a given value of X by means of a straight line called the regression line. A regression line is the single line that best fits the data. If you know the slope and the y-intercept of the regression line [1], then you can input a value for X and predict the value for Y.

A condition for the prediction to work is that there must be a statistically significant correlation and a linear relationship between X and Y. The stronger the relationship between X and Y, the more accurate your predictions are likely to be.
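
In R, the strength and significance of the correlation can be checked with cor.test() before any model is fitted. The sketch below uses two made-up numeric vectors, x and y (hypothetical data, not the case study discussed later in this post):

x <- c(1, 2, 3, 4, 5, 6, 7, 8)                     # hypothetical explanatory variable
y <- c(2.1, 3.9, 6.2, 8.1, 9.8, 12.3, 13.9, 16.2)  # hypothetical response variable
cor.test(x, y)                                     # Pearson's r, its t-test, and a 95% confidence interval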

The basics

The purpose of linear regression is twofold:

  1. it can explain one variable by means of another, and
  2. it can predict the value of one variable based on the value of the other variable.

Let X be the predicting variable and Y the variable to explain. The linear model is defined by:

Y = \beta_0 + \beta_{1} X + \epsilon

where \beta_0 is the intercept (the value at which the fitted line crosses the y-axis), \beta_1 is the slope, and \epsilon the error component. I explain these concepts below.
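
To make these symbols concrete, here is a minimal simulated example in R (the values chosen for \beta_0, \beta_1, and the standard deviation of \epsilon are arbitrary, for illustration only):

set.seed(42)                          # for reproducibility
x <- 1:100                            # explanatory variable X
epsilon <- rnorm(100, mean=0, sd=5)   # random error component
y <- 2 + 0.5*x + epsilon              # beta_0 = 2, beta_1 = 0.5 (arbitrary values)
plot(x, y)                            # the points scatter around the line y = 2 + 0.5*x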

Case study

The following simple linear regression is based on Desagulier (2016). In this paper, I adapt productivity measures implemented in morphology to a multiple-slot construction A as NP in the BNC-XML. The construction is exemplified in (1)–(3).

  1.  Oh you’ll be, be as right as rain by that time. (BNC G42)
  2. He was a poor, meagre creature (…), thin as rail from long starving in the woods (…) (BNC HGG)
  3. She was good as gold; I don’t know what he would’ve done without her! (BNC GWB)

We want to compare two measures. The first measure is what Baayen (2009) calls realized productivity, V(C,N): the type count V of a linguistic category C in a corpus of N tokens. Here, C is restricted to adjectives and N is the number of A as NP constructions. The second measure is \mathscr{P}, which measures “potential productivity”. It is based on V(1,C,N), the number of hapax legomena of a linguistic category C in a corpus of N tokens. \mathscr{P} is the ratio of the number of hapax legomena for a given category to the sum of all tokens that display the category:

\mathscr{P}=\frac{V(1,C,N)}{N(C)}.

\mathscr{P} corresponds to the probability of encountering new types. The larger the number of hapax legomena, the larger \mathscr{P} and the more productive the category. One known issue with V is that it does not discriminate between established forms and new forms. Indeed, types are not distributed uniformly in a corpus, and the larger the corpus, the harder it is to find innovations. \mathscr{P} is therefore considered a far more reliable measure. Out of curiosity, let us see to what extent V predicts \mathscr{P}.
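
Before moving on to the regression, here is a minimal sketch of how V, V(1,C,N), and \mathscr{P} can be computed in R from a hypothetical vector of adjective tokens (toy data, not the BNC data used below):

# hypothetical adjective tokens extracted from an A as NP construction
adjectives <- c("good", "good", "good", "right", "right", "cool", "thin", "flat")
freqs <- table(adjectives)   # frequency list
V <- length(freqs)           # realized productivity: number of types (here 5)
V1 <- sum(freqs == 1)        # number of hapax legomena (here 3)
N <- length(adjectives)      # number of construction tokens (here 8)
P <- V1/N                    # potential productivity (here 0.375)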

A simple linear regression in R

What follows is an implementation inspired by Cornillon et al. (2012), a very clear textbook on statistics with R (in French). We use the data set linearreg_adjectives.rds, which is available for download from Nakala.

rm(list=ls(all=TRUE))                        # clear the workspace
df <- readRDS("/linearreg_adjectives.rds")   # adjust the path to where you saved the file
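
To check what has been loaded, you can inspect the object first (a quick sketch; str() and head() are standard R functions, and this step is not part of the original script):

str(df)    # structure of the data frame: number of observations and variables
head(df)   # first six rows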

The data frame contains 402 observations (one for each type of adjective) of three variables: V, V1, and P (i.e. \mathscr{P}). We ignore V1 for the time being. We plot the data.

plot(P~V, data=df, cex=0.6)

Potential productivity as a function of realized productivity

We observe that the higher V, the higher \mathscr{P}, even though the correlation is not perfect. To create a linear model of \mathscr{P} as a function of V, we use the lm() function and the following standard notation: Y ~ X.

reg.model <- lm(P~V, data=df)
summary(reg.model)

R outputs the following:

Call: 
lm(formula = P ~ V, data = df)

Residuals: 
     Min         1Q       Median      3Q      Max
-0.0116975 -0.0001844 -0.0001844 0.0003152 0.0096277

Coefficients:
              Estimate  Std. Error t value Pr(>|t|)
 (Intercept)  6.840e-04 8.988e-05   7.61    1.98e-13 ***
 V            2.829e-04 9.566e-06  29.57     < 2e-16 *** 
--- 
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.001579 on 400 degrees of freedom
Multiple R-squared: 0.6861, Adjusted R-squared: 0.6854
F-statistic: 874.5 on 1 and 400 DF, p-value: < 2.2e-16

summary(reg.model) displays a description of the specified model. First, Residuals shows how the residuals are distributed. Second, the Coefficients matrix displays the two regression coefficients (also known as parameters), one per line. Each coefficient is described by four values:

  • the estimate,
  • the estimated standard error [2],
  • the associated t-value (knowing that under H_{0}, \beta_{i} = 0, whereas under H_{1}, \beta_{i} \neq 0) [3],
  • the significance level (p-value) associated with the t-value (the more stars there are, the more significant).

The coefficients \beta_0 (the intercept) and \beta_1 (the slope associated with V) are estimated at 0.000684 and 0.0002829 respectively. The associated significance levels are smaller than 5%, and are in fact very close to zero, which indicates that for each of those tests H_{0} can be rejected. This means that the intercept should appear in the model, and that the relationship between \mathscr{P} and V is significant.

The last three lines of the output display the residual standard error, the multiple R^2, the R^2 adjusted for the degrees of freedom, the F statistic for the model, and its associated p-value. In a linear regression, only the multiple and adjusted R^2 values are relevant. The value of R^2 is 0.6861, which means that about 68.6% of the variability of \mathscr{P} is explained by V. There is indeed a linear relationship between these two variables.
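
If you prefer to extract these quantities programmatically rather than read them off the printed summary, the model object gives access to them (a quick sketch using standard accessors):

coef(reg.model)                    # estimated intercept and slope
confint(reg.model)                 # 95% confidence intervals for the two coefficients
summary(reg.model)$r.squared       # multiple R-squared
summary(reg.model)$adj.r.squared   # adjusted R-squared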

The fitted regression line

To visualize this linear relationship, we plot the fitted regression line with abline(). When abline() takes a linear model as an argument, the function extracts the intercept and the slope to plot the fitted line (that information is otherwise available by entering reg.model$coef).

plot(P~V, data=df, cex=0.6)
abline(reg.model)

Potential productivity as a function of realized productivity (with a fitted regression line)
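
Incidentally, since abline() only needs an intercept and a slope, the same line can be drawn by supplying the coefficients explicitly (a sketch equivalent to abline(reg.model)):

coefs <- coef(reg.model)          # same values as reg.model$coef
abline(a=coefs[1], b=coefs[2])    # a = intercept, b = slope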

The residuals

The next step consists in examining the residuals. If we want to use the model predictively, we need to make sure that there is a limited number of outliers and that 95% of the residuals fall inside a reasonable interval. You normally extract the residuals with the residuals() function. However, because the variances of the residuals at different values of the input variable often differ (a phenomenon known as heteroscedasticity), statisticians recommend the use of so-called “studentized” residuals (Cornillon and Matzner-Løber 2010). Such residuals are obtained with the rstudent() function.

studentized.res <- rstudent(reg.model)

We plot the studentized residuals for inspection.

plot(studentized.res, pch=18, cex=0.6, ylab="studentized residuals", ylim=c(-3,3))
abline(h=c(-2,0,2), lty=c(2,1,2), col=c("brown", "red", "brown"), lwd=2)

Studentized residuals

A quick glance at the plot shows that 12 out of 402 residuals (3%) are beyond the [−2, 2] range. Therefore, 97% of the residuals are within a [−2, 2] range, which is good.
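
Instead of counting points by eye, you can let R do the counting (a small sketch based on the studentized.res vector computed above):

outside <- abs(studentized.res) > 2   # TRUE for residuals beyond the [-2, 2] range
sum(outside)                          # how many there are (12 here)
mean(outside)                         # their proportion (about 0.03)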

Prediction

We can now predict a value of \mathscr{P} based on a new value of V. Suppose we find an adjective whose type frequency in the construction is 32. First, we vectorize this value of V. Next, we convert the numeric vector into a data frame that consists of one observation and one variable. This is because prediction is done with the predict() function, which takes a data frame as input. Note that the name of the variable must match the name of the corresponding variable in our original data frame. Finally, we run predict(), whose arguments are the linear model, the data frame containing the new value, and a specification that the interval must be a prediction interval (as opposed to a confidence interval).

newV <- 32                                        # new value of V
newV <- as.data.frame(newV)                       # convert to a one-observation data frame
colnames(newV) <- "V"                             # the column must be named like the predictor
predict(reg.model, newV, interval="prediction")   # prediction with a prediction interval

R outputs the following:

      fit          lwr        upr
1 0.009735763 0.006584295 0.01288723

We see that if V = 32, the predicted value of \mathscr{P} is 0.0097. The 95% prediction interval is [0.0066, 0.013].
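
As a sanity check, the fitted value can be recovered by hand from the estimated coefficients (a sketch):

coefs <- coef(reg.model)
coefs[1] + coefs[2] * 32   # beta_0 + beta_1 * 32, i.e. about 0.0097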

Rather than focus on a single value of V or the observed discrete values in our data set, we can predict \mathscr{P} over the full range of V. First, create a data frame that consists of a sequence of 100 equally spaced values of V, ranging from its minimum to its maximum.

range.V.pred <- data.frame(V = seq(min(df[,"V"]), max(df[,"V"]), length=100))

With predict(), generate the prediction and confidence values based on the new range of values of V obtained above.

CI.pred <- predict(reg.model, interval = "prediction", newdata = range.V.pred)
CI.confid <- predict(reg.model, interval = "confidence", newdata = range.V.pred)
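
Both objects are matrices with one row per value in range.V.pred and three columns: fit (the fitted value), lwr, and upr (the lower and upper limits of the interval). You can check this, for instance, with:

head(CI.pred)   # columns: fit, lwr, upr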

Replicate the first plot from above, making sure the y-axis can accommodate the full range of the prediction interval thanks to the ylim argument.

plot(df$V, 
     df$P, 
     xlab = "realized productivity (V)", 
     ylab = "potential productivity (P)",
     ylim = range(df$P, CI.pred),
     cex=0.6)

With matlines(), plot six lines: the lower and upper limits of the prediction and confidence intervals and the fitted line (which is plotted twice). The limits of the prediction interval appear as dotted red lines. The limits of the confidence interval appear as dashed blue lines. The fitted line appears as a solid red line.

# plot the prediction interval
matlines(range.V.pred$V, CI.pred, lty = c(1,3,3), lwd=2, col = "red")
# plot the confidence interval
matlines(range.V.pred$V, CI.confid, lty = c(1,2,2), lwd=2, col = c("red", "blue", "blue"))

Finally, we add a legend.

legend("topleft", lty=1:3, lwd=2, col=c("red", "blue", "red"), c("fitted line", "confidence interval", "prediction interval"))

We obtain the plot below, which allows us to evaluate both the quality of the linear model and its predictive value.

Potential productivity as a function of realized productivity (with confidence and prediction intervals)

Let us focus first on the fitted line and the confidence interval. The observations are distributed evenly on either side of the fitted line. The higher V, the further the observations are from the fitted line. The confidence interval is more reliable for adjectives with lower V values: the vast majority of these observations are found within the confidence interval. This is hardly surprising because there are fewer adjectives with a high type frequency. Consequently, the model is better for lower V values.

The same conclusion can be drawn with respect to predictions. The higher V, the higher the number of errors. Most adjectives with high V values are plotted outside the prediction interval. If you make predictions of \mathscr{P} based on the current linear model, the higher V, the more careful you should be when interpreting the values resulting from the fit.

There are two reasons why this is the case. Firstly, although the model displays a fairly good fit (as evidenced by the output of summary(reg.model)), \mathscr{P} is based on V1, not V. As opposed to V1, V does not discriminate between entrenched forms and new forms. Just because an adjective has a high type frequency does not mean that it is productive (i.e. used in novel constructions). It should therefore come as no surprise that the higher V, the worse the fit. Secondly, adjectives with a high type frequency are rare when compared to adjectives with a lower type frequency. The fewer the observations, the less reliable the predictions.

References

Baayen, Rolf Harald (2009). “Corpus linguistics in morphology: Morphological productivity.” In: Corpus Linguistics. An International Handbook. Ed. by Anke Lüdeling and Merja Kytö. Berlin: Mouton de Gruyter, pp. 899–919.

Cornillon, Pierre-André and Éric Matzner-Løber (2010). Régression avec R. Paris, Berlin, Heidelberg, New York: Springer.

Cornillon, Pierre-André, Arnaud Guyader, François Husson, Nicolas Jégou, Julie Josse, Maela Kloareg, Éric Matzner-Løber and Laurent Rouvière (2012). Statistiques avec R. Rennes: Presses Universitaires de Rennes.

Desagulier, Guillaume (2016). “A lesson from associative learning: asymmetry and productivity in multiple-slot constructions.” In: Corpus Linguistics and Linguistic Theory 12.1.

Desagulier, Guillaume (2017). Corpus Linguistics and Statistics with R. Introduction to Quantitative Methods in Linguistics. New York: Springer.

Cite this article as: Guillaume Desagulier, "Simple linear regression for linguists," in Around the word, 18/06/2018, https://corpling.hypotheses.org/584.
  1. The intercept is the parameter that corresponds to the expected value of the response variable when all the explanatory variables are zero.
  2. It is a single summary number that tells you how accurate your predictions are going to be.
  3. Before you collect corpus data, you must ask a theory-informed question, which you convert into a research hypothesis. Basically, you make a statement about something that you suppose to be the case and then collect corpus evidence based on this statement. (…) A hypothesis breaks down into two parts: the alternative hypothesis and the null hypothesis. The alternative hypothesis (H_{1}) posits a relation of dependence between the response variable and the explanatory variables. The null hypothesis (H_{0}) posits a relation of independence between them. (Desagulier 2017, 160)
