Noam Chomsky’s colorless green idea: “corpus linguistics doesn’t mean anything”
From Berkeley to Paris
In Spring 2002, I attended a lecture on linguistics by American linguist Noam Chomsky at UC Berkeley. At the time, I was working as a graduate student instructor in the French department while taking courses in the linguistics department. Chomsky presented the latest developments of his Minimalist Program.1 To my great surprise, he conceded that elements of language variation were certainly relevant to the study of language, although he quickly asserted that they were irrelevant to Universal Grammar. I left the lecture hall wondering whether this minor concession would have implications for his theory. I soon realized that it wouldn’t.
On November 24, 2016, Noam Chomsky gave a lecture on “The Galilean Challenge: Architecture and Evolution of Language” at the invitation of Paris Lumières University. The event took place in the Grand Auditorium of the Bibliothèque Nationale de France. I was curious to hear what Chomsky had to say almost fifteen years after I first saw him. Also, I have a lot of respect for him as an intellectual figure, so I went. It was worth it: I ended up asking Noam Chomsky a question on corpus linguistics that I had had in store for him for several years. To watch the video of the lecture, including my question and Chomsky’s reply, please scroll down to the bottom of the post.
Generative grammar 101
Chomsky claims that the core of grammar consists of a finite set of abstract, algebraic rules. Because this core is assumed to be common to all the natural languages in the world, it is considered a universal grammar. The lexicon, context, elements of inter-speaker variation, cultural connotations, mannerisms, non-standard usage, etc. are considered idiosyncrasies of language. For this reason, they are relegated to the periphery of grammar and promptly tossed away into the waste paper basket of oblivion.
In generative grammar, pride of place is given to syntax, i.e. the way in which words are combined to form larger constituents such as phrases, clauses, and sentences. Syntax hinges on an opposition between deep structure and surface structure. The deep structure is the abstract syntactic representation of a sentence, whereas the surface structure is the syntactic realization of the sentence as a string of words in speech. For example, the sentence
Noam saw the man with a telescope.
has one surface structure (i.e. one realization as an ordered string of words) but two alternative interpretations at the level of the deep structure:
- ‘Noam saw the man thanks to a telescope’, and
- ‘Noam saw the man; this man was using a telescope’.
In other words, a sentence is generated from the deep structure down to the surface structure.
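To make the opposition concrete, here is a minimal sketch in Python using NLTK’s chart parser (assuming nltk is installed). The toy grammar is my own invention for illustration, not Chomsky’s formalism: the parser returns two distinct trees for the single surface string, one per deep-structure reading.

```python
import nltk

# A toy context-free grammar, written for this example only. The PP
# 'with a telescope' can attach either to the VP or to the object NP.
grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> 'Noam' | Det N | Det N PP
    VP  -> V NP | V NP PP
    PP  -> P NP
    Det -> 'the' | 'a'
    N   -> 'man' | 'telescope'
    V   -> 'saw'
    P   -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "Noam saw the man with a telescope".split()

# One surface string, two analyses: the PP attaches to the VP
# ('saw with a telescope') or to the NP ('the man with a telescope').
for tree in parser.parse(sentence):
    tree.pretty_print()
```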
Generative grammar is a “top-down” approach to language: a limited set of abstract rules “at the top” is enough to generate and account for an infinite number of sentences “at the bottom”. If you are a generative linguist, your job is therefore to look for the finite set of rules that core grammar consists of. This core grammar is the speakers’ competence (as opposed to performance). The search comes at the expense of idiosyncrasies of all kinds.
Conversely, theories such as functional linguistics, cognitive linguistics, and contemporary typology advocate a “bottom-up” approach to language. It is usage that shapes the structure of language. Grammar is therefore derivative, not generative. There is no point in separating competence and performance anymore because competence builds on performance. In the same vein, grammar has neither core nor periphery: it is a structured inventory of symbolic units. Every linguistic unit deserves equal attention, including ritualized or formulaic expressions (break a leg!), idioms (he snuffed it), non-canonical phrasal expressions (sight unseen), semi-schematic expressions (e.g. just because X doesn’t mean Y), and fully schematic expressions (e.g. the ditransitive construction).
The problem with pure introspection
As a cognitive linguist, I am supposed to embrace the idea that generative grammar is evil. I don’t. There are, of course, many claims that I reject, mostly because they are impossible to operationalize and therefore impossible to prove or disprove. Two claims sit at the top of my list: the poverty of the stimulus (the linguistic input that children receive is not enough to activate the grammar of a natural language) and the rule/list distinction (grammar is a set of rules and is worth studying; words and idioms are learned as a list because of their idiosyncrasies and are not worth studying). Having said that, the top-down approach and the bottom-up approach study the same object from two complementary angles, so why reject either? No matter what, generative grammar is an interesting theoretical abstraction.
As a corpus linguist, my stance is more radical. Generative linguists are known to rely on introspective judgments as their primary source of data. The method can be deemed faulty for at least two reasons. First, there is no guarantee that linguists’ introspective acceptability judgments always match what systematically collected data would reveal. Second, for an intuition of well-formedness to be valid, it should at least be formulated by a linguist who is a native speaker of the language under study. This radical position is hardly sustainable in practice.
Because of its emphasis on language use in all its complexity, the “bottom-up” approach provides fertile ground for corpus-informed judgments. It is among its ranks that linguists like me, dissatisfied with the practice of using themselves as informants, have turned to corpora, judging them to be a far better source than introspection to test their hypotheses.
Admittedly, corpora have their limitations. The most frequent criticism that generative linguists level against corpus linguists is that no corpus can ever provide negative evidence. In other words, no corpus can indicate whether a given sentence is impossible. Corpus linguists have two replies. The first is that grammar rules are generalizations over actual usage, so negative evidence is of little import. The second is that there are statistical methods that can either handle the non-occurrence of a form in a corpus or estimate the probability of occurrence of a yet unseen unit (see the sketch below). A more serious criticism is the following: no corpus, however large and balanced, can ever hope to be representative of a speaker, let alone of a language.
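As an illustration of that second reply, here is a minimal sketch of the Good-Turing estimate, one classic way of reserving probability mass for unseen units. The toy corpus is invented for illustration; real work would of course use an actual corpus.

```python
from collections import Counter

# A toy 'corpus', invented for illustration only.
tokens = "the cat sat on the mat the dog sat on the rug".split()
freqs = Counter(tokens)

n_tokens = sum(freqs.values())                      # corpus size N
hapaxes = sum(1 for f in freqs.values() if f == 1)  # types seen exactly once

# Good-Turing: the probability that the next token belongs to a
# hitherto unseen type is estimated as N1 / N, where N1 is the
# number of hapax legomena.
p_unseen = hapaxes / n_tokens
print(f"Estimated probability mass of unseen units: {p_unseen:.3f}")
```

The point is that non-occurrence in a corpus is not a dead end: it can be modeled.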
Video tapes of things happening in the world
In an interview (Andor, 2004, p. 97), Chomsky rejects corpus linguistics on the grounds that it is methodologically bogus:
Corpus linguistics doesn’t mean anything. It’s like saying suppose a physicist decides, suppose physics and chemistry decide that instead of relying on experiments, what they’re going to do is take videotapes of things happening in the world and they’ll collect huge videotapes of everything that’s happening and from that maybe they’ll come up with some generalizations or insights. Well, you know, sciences don’t do this.
To understand the parallel with physics and chemistry, let us remember that Chomsky’s object of study is I-language, the set of algebraic rules that governs competence, as opposed to E-language (where ‘I’ means ‘internal’ and ‘E’ means ‘external’). If we reformulate the above criticism, collecting samples of E-language, regardless of their size, will never allow the linguist to access the universal laws of I-language. This holds only if one believes that competence and performance are separate. But that is a matter of theoretical faith.
Chomsky’s first mistake is to think that corpus linguistics fails to account for competence. Traditional corpus linguistics does not claim to account for a language, let alone for the language faculty, but it does account for a certain form of competence. It is just not the same kind of competence (see above).
Chomsky’s second mistake is to consider only one aspect of physics and chemistry: experimentation. Corpus linguistics is not about experimentation; it is about observation. Observation is an essential component of the hard sciences. The universe is essentially infinite, and only an infinitesimal portion of it can be observed. The strength of the hard sciences is their ability to extend conclusions based on the observation of phenomena in a finite sample to a much larger, possibly infinite set of phenomena.
First-generation corpus linguistics has to refrain from extending conclusions beyond the investigated corpus. Suppose a linguist compares two near-synonymous expressions such as sort of and kind of in a corpus of British English that is balanced in terms of genre and register. Even so, the linguist would be well advised to refrain from drawing conclusions about British English as a whole because, as a sample, the corpus remains biased and cannot be fully representative. The linguist cannot extend her corpus-based findings to the language unless she backs them up with solid inferential statistics (see this paper I wrote with Antoine Chambaz). This is where second-generation corpus linguistics, augmented with inferential statistics, steps in. It allows us to extend the conclusions obtained from a finite sample (a corpus) to a larger population (a language) in a controlled fashion. In this way, corpus linguistics moves closer to physics and chemistry: it is not just about collecting videotapes of things happening in language.
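For concreteness, here is a minimal sketch of the kind of inferential step involved, assuming scipy is installed. The counts are invented for illustration (they come neither from a real corpus nor from the paper cited above): a chi-squared test of independence asks whether the choice between sort of and kind of depends on register.

```python
from scipy.stats import chi2_contingency

# Invented contingency table: rows are the two variants, columns are
# two registers (spoken vs. written).
observed = [
    [410, 180],  # 'sort of'
    [260, 310],  # 'kind of'
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.4g}")

# A small p-value licenses a controlled generalization from the sample
# (the corpus) to the population (the language), which is what
# second-generation corpus linguistics adds to raw frequency counts.
```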
I finally got to ask Noam Chomsky a question!
Chomsky’s Paris lecture took place two weeks before I defended my Habilitation à Diriger des Recherches. I had spent the previous months writing a dissertation on the validity of corpus-linguistics methods within the program of cognitive linguistics. Click here to open the PDF and read pp. 54–55 (it’s in French).
Initially, I hadn’t planned to confront Chomsky on corpus linguistics, guessing in advance what he would say about the split between competence and performance, but since there was room for a last question, I grabbed the microphone. If you want to know what happened next, click on play.
Chomsky starts by changing the subject slightly, focusing on children and the poverty-of-stimulus hypothesis (“the child has no data”). Most of my colleagues in psycholinguistics would argue otherwise.2
Chomsky then moves on to lexicalized concepts (‘lexicalized’ meaning encoded in single words). Elegantly, Chomsky connects his reply to some examples he provided earlier in his lecture.
Anywhere you look, it turns out that what’s known is not accessible by analysis, by pure analysis of data. Of course, data does something. Like data tells you that in English it’s river but not in French. The Saussurean arbitrariness as it’s called, yeah, that’s pretty arb, totally arbitrary, almost totally arbitrary.
So some things are clearly learned. And more is learned than that. So take river again. There are, it’s not that when people say it’s cultural product, it’s not totally false. Like in English, there is a difference between river, stream, creek, you know, a couple of other things. Other languages may not have exactly those differences, they have some other set of differences. So in what used to be called, should be called, semantic fields, you know, domains in which a group of words roughly fit, the study made by a German linguist years ago, within the semantic fields you get the semantic organization of the field. So the words like, say, know, believe, think, and so on won’t be exactly the same, they’re not exactly the same in French and English, and certainly not in more remotely, languages that are more remote. So there’ll be some differences in the way the fields are cut up, and some other small differences which are learned. You have to learn in English that a certain thing is a river and another thing is a stream, not a river. Some other language may not make that distinction, or may make some other distinction.
Similarly, you have to, in simple cases like say color words there are some universals, but there are languages which just don’t have say the same, even the same richness of color words that we do and other languages have even richer ones, ok? But it turns out that the people who don’t have the words nevertheless have the conceptual structure. Now this was shown by Ken Hale pretty strikingly in some papers back in the 1970s in studying a wide range of Australian Aboriginal languages. Sure there are many languages in which you don’t have number words beyond say two, three, four, but it turns out that the people have all the concepts, that they may not have the word five but they can show things like this, ok? They may not have the word red but they can say blook-like, you know, so they have other ways of doing it. And in fact it turns out, he argues, these are what he calls cultural gaps, that the cultural gaps are exhibited in choice of words, but they apparently don’t show up in the conceptual systems.
However interesting, Chomsky’s reply misses my point, or sees it but deems it irrelevant. According to him, words do not provide access to conceptual structure because some concepts are known but not necessarily encoded in the form of a dedicated word. In other words, usage cannot provide access to competence. My counterpoint is that just because a concept is not lexicalized does not mean that it cannot be expressed verbally. Even when a concept is not encoded in a single word, corpus linguists can still hope to capture it indirectly.
References
Andor, József (2004). “The master and his performance: An interview with Noam Chomsky”. Intercultural Pragmatics 1(1), 93–111.
Chambaz, Antoine & Guillaume Desagulier (2016). “Predicting Is Not Explaining: Targeted Learning of the Dative Alternation”. Journal of Causal Inference 4(1), 1–30.
Chomsky, Noam (1957). Syntactic Structures. The Hague: Mouton.
Chomsky, Noam (1995). The Minimalist Program. Cambridge, MA: MIT Press.
- As an aside, Chomsky was scheduled to give a lecture on politics a few hours later in downtown Berkeley. A friend who attended reported the following to me: Chomsky confessed he felt relieved that his linguistics lecture was over so he could finally talk about the stuff that really matters to him: politics. [↩]
- I will address Noam Chomsky’s claim that children know structure dependency from age 3, as well as Charles Yang’s claim that children acquire inflectional paradigms quickly without hearing the full paradigm, in a dedicated post on distributional semantics. [↩]