Towards a distributional construction grammar
In 2017, I was appointed as a Junior Research Fellow at the Institut Universitaire de France for five years (2017–2022). The goal of this post is twofold. I am now halfway through my five-year research project, and I would like to take this opportunity to present it and to invite colleagues and prospective PhD students to collaborate on it. Although I cannot offer fully-funded PhD positions, I can nevertheless provide substantial funding for specific missions related to the project. If you are interested, please send me a cover letter and a CV (prospective PhD students, read this post to the end!). The list of available work packages is appended to this post (see below).
Summary
Vector-based distributional semantics holds that words occurring within similar contexts are semantically close and that meaning can be represented by means of distributed vectors, which record lexical distribution across linguistic contexts. So far, vector-based models have focused on representing words in isolation, to the detriment of complex expressions. I extend word-centered vector-based models to the representation of complex constructions. Using state-of-the-art distributional semantics techniques, I develop models that compute syntactically contextualized semantic representations. Following the claim that constructions acquire their meaning from their prototypical constituents, I hypothesize that the meaning of constructions can be derived from the distributional preferences of their constituents.
The project (longer version)
The project description below is as I wrote it in early 2017. Some advances have been made since!
1. Multiword expressions (MWEs)
Multiword expressions (MWEs) are strings of two or more lexemes that are idiosyncratic in some respect. Such complex strings are frequent. Sag et al. (2002) estimate that 41% of the entries in WordNet 1.7 are MWEs. MWEs assume a wide range of forms such as institutionalized phrases and clichés (love conquers all, no money down), idioms (kick the bucket, sweep under the rug), fixed phrases (by and large), compound nouns (black and white film, frequent-flyer program), verb-particle constructions (eat/look/write up), light verbs (have a drink/*an eat, make/*do a mistake), named entities (San Francisco), lexical collocations (telephone box/booth/*cabin, emotional baggage/*luggage), etc. MWEs are easily mastered by native speakers. Yet, their linguistic status is still problematic and their interpretation still poses a major challenge for NLP techniques due to their heterogeneous nature.
The grammatical status of MWEs has been an issue at least since the “rules vs. the lexicon” debate (Langacker 1987; Pinker 1999; Pinker and Prince 1988; Rumelhart and McClelland 1986). If rules capture all the regularities in language, MWEs have no place in the grammar proper, because they are lexical; and if the lexicon consists of words or morphemes, it cannot accommodate MWEs either, because they are phrasal. Jackendoff (1997, chapter 7) advocates the inclusion of “phrasal lexical items” (i.e. “lexical items larger than X0”) in the lexicon. An alternative, although related, solution proposed by construction grammar approaches delegates MWEs to a “constructicon” (Goldberg 2006, p. 64). Grammar consists of a large inventory of constructions, varying in size and complexity, and ranging from morphemes to fully abstract phrasal patterns (Goldberg 2003). Yet, not all constructionist theories agree as to how grammatical information is stored in the constructicon. Four taxonomic models are commonly distinguished. In the full-entry model, information is stored redundantly at various levels of the taxonomy. In the usage-based model, grammatical knowledge is acquired inductively, speakers generalizing over recurring experiences of use. In the normal-inheritance model, constructions with related forms and meanings are part of the same network. In the complete-inheritance model, grammatical knowledge is stored only once, at the most superordinate level of the taxonomy. At this stage, these taxonomies are mostly theoretical constructs.
2. Goal #1 – model the constructicon
My first goal is to propose a corpus-based framework to test the validity of the constructicon and ultimately decide which of these models is the most empirically plausible. Ideally, constructions should be detected in large corpora and assembled into a network based on their forms and their contextual meanings. This is no easy task. What distinguishes MWEs from other complex expressions is that, even though they consist of existing words with standard syntax, they are idiosyncratic at the lexical, syntactic, semantic, pragmatic, and/or collocational levels. Constructions of adjectival intensification in English are a good case in point (Desagulier 2014, 2015a,b,c). For example, quite is likely to be interpreted as a maximizer when it modifies an extreme/absolutive adjective (this novel is quite excellent) or a telic/limit/liminal adjective (quite sufficient), but as a moderator when it modifies a scalar adjective (quite big) (Paradis 1997). Yet, context dependency is not always decisive. For example, quite is ambiguous between a maximizer and a moderator when it modifies the adjective different (Allerton 1987, p. 25). This kind of issue is what still makes the automatic, large-scale interpretation of MWEs a formidable challenge for state-of-the-art machine learning techniques (Sag et al. 2002).
However, recent advances in deep learning and neural networks leave room for hope in the field of corpus-based models of semantic representation. These models are known as distributional semantic models (DSMs). They are computational implementations of the distributional hypothesis: semantically similar words tend to have similar contextual distributions (Harris 1954; Miller and Charles 1991). In DSMs, the meaning of a word is computed from the distribution of its co-occurring neighbors. Words are generally represented as vectors, i.e. numeric arrays that keep track of the contexts in which target terms appear in a large training corpus. These vectors serve as proxies for meaning representations. However, even the best distributional-vector representations are limited by their current inability to detect MWEs and to represent the non-compositional meanings of phrases.
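To make the idea concrete, here is a minimal sketch in base R of the simplest kind of DSM: each word is represented by a vector of co-occurrence counts collected within a symmetric window. The toy corpus and the window size are purely illustrative.

```r
# Toy corpus: one sentence per element (illustrative, not project data).
corpus <- c("the cat chased the mouse",
            "the dog chased the cat",
            "the mouse ate the cheese")

tokens <- strsplit(corpus, " ")
vocab  <- sort(unique(unlist(tokens)))
m      <- matrix(0, nrow = length(vocab), ncol = length(vocab),
                 dimnames = list(vocab, vocab))

window <- 2  # symmetric context window
for (sent in tokens) {
  for (i in seq_along(sent)) {
    ctx <- seq(max(1, i - window), min(length(sent), i + window))
    ctx <- setdiff(ctx, i)  # exclude the target word itself
    for (j in ctx) m[sent[i], sent[j]] <- m[sent[i], sent[j]] + 1
  }
}

m["cat", ]  # the distributional vector of 'cat'
```

Trained over millions of sentences rather than three, such count vectors (or their neural refinements) are what allow semantically similar words to end up close to one another in the vector space.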
3. Goal #2 – improve existing DSMs
My second goal is therefore to combine my expertise as a linguist and my programming experience in corpus linguistics to improve existing DSMs so that they can learn better semantic representations of constructions. In the sections that follow, I address specific issues and ways of solving them.
4. Multiword constructions
I have worked extensively on MWEs as constructions, i.e. multiword constructions. In Desagulier (2014), I use techniques from quantitative corpus linguistics to cluster quite, rather, fairly, and pretty based on their statistical associations with the adjectives that they modify. In Desagulier (2015b), I extend the methodology to the study of the predeterminer vs. preadjectival alternation with respect to quite and rather. Functional similarities and differences between these four intensifiers are approximated via their selectional preferences. Co-occurrence counts are submitted to association measures, whose scores are then explored with exploratory multifactorial techniques such as (multiple) correspondence analysis. In these studies, I bypass the issue of non-compositionality by inferring meaning from clusters of significant intensifier-adjective collocations. Ideally, the context-dependent meaning of these constructions should be assessed directly.
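As a rough illustration of what submitting co-occurrence counts to an association measure looks like, here is a minimal sketch in base R. The counts are made up, and the Fisher exact test stands in for the many association measures available; the point is simply the 2-by-2 logic underlying such measures.

```r
# Made-up contingency table for the attraction between the intensifier 'quite'
# and the adjective 'different' in some corpus:
#                      ADJ = "different"   other adjectives
# quite                        a                  b
# other intensifiers           c                  d
a <- 120; b <- 4880; c <- 300; d <- 94700
tab <- matrix(c(a, b, c, d), nrow = 2, byrow = TRUE,
              dimnames = list(c("quite", "other_intensifier"),
                              c("different", "other_adj")))

fisher.test(tab)$p.value       # significance of the association
(a / (a + b)) / (c / (c + d))  # relative risk as a simple effect size
```

Scores of this kind, computed for every intensifier-adjective pair, are what feed the correspondence analyses mentioned above.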
In Desagulier (2015a), I focus on A as NP (thin as a rake, black as pitch, white as snow, etc.). Despite the pairing of an identifiable syntax (adjective + as + NP) and a specific reading (“very A”), Kay (2013) considers A as NP “non-constructional” and “non-productive” because (a) knowing the pairing is not enough to license and interpret existing tokens (especially when there is no obvious semantic link between the adjective and the NP, as in easy as pie), and (b) speakers cannot use the pattern freely to coin new expressions. Two idiosyncrasies further block A as NP from qualifying as a construction according to Kay. First, some expressions are motivated by a literal association between the adjective and the NP (tall as a tree, white as snow), whereas others hinge on figurative associations between A and NP, including possible puns (safe as houses), and yet others are the sign that the NP has grammaticalized to intensifying functions (jealous as hell, sure as death). Second, some expressions are compatible with a than-comparative (flat as a pancake > flatter than a pancake) whereas others are not (happy as a lark > ??happier than a lark). Such idiosyncrasies provide evidence that the construction’s tokens are not generated by a rule, which makes their automatic extraction from a corpus difficult. However, the same idiosyncrasies do not prevent A as NP from being productive and from forming a consistent network.
5. Construction networks
The idea that constructions are stored in a network fashion is present in landmark works in Construction Grammar (e.g. Goldberg 1995, p. 67). Recent advances in the application of graph theory have made it possible to plot network graphs of constructions.
A graph consists of vertices (nodes) and edges (links) whose attributes may be assigned linguistically relevant features. From a usage-based perspective, frequency is recognized to be one of the most central factors in the construction of linguistic representations. Such representations are influenced by how often speakers are exposed to language events. The more often we experience an event, the stronger its entrenchment in memory and the faster its mental accessibility. The frequency of a constituent has a correlate in the importance of the node (frequent nodes are more important than infrequent nodes). The co-occurrence frequency of at least two nodes has a correlate in the number of edges between nodes (frequent co-occurrence is visualized by means of either multiple edges or one edge whose thickness is indexed on frequency). Recent research in co-occurrence has shown that collocations may be asymmetric (Ellis 2006; Gries 2013). The same can be said about the relation that holds between form and meaning, or between one or several construction slots and the whole construction (Desagulier 2015a). Likewise, the edges of a graph may have a direction associated with them and be asymmetric.
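Here is a minimal sketch, using the igraph package, of how such a directed, weighted graph can be built from an edge list. The edge list, weights, and colour coding below are illustrative, not the actual data behind the study.

```r
library(igraph)

# Made-up excerpt of an adjective -> NP edge list; 'weight' stands for an
# association score between the two slots.
edges <- data.frame(
  from   = c("white", "sure", "sure",  "thin"),
  to     = c("snow",  "hell", "death", "a rake"),
  weight = c(12.3,     8.7,    6.1,     9.4)
)

g <- graph_from_data_frame(edges, directed = TRUE)  # direction = attraction
V(g)$type  <- ifelse(V(g)$name %in% edges$from, "adjective", "NP")
V(g)$color <- ifelse(V(g)$type == "adjective", "red", "lightblue")

# Edge thickness is indexed on the association score; node size (here left at
# its default) could likewise be indexed on constituent frequency.
plot(g, edge.width = E(g)$weight / 3, edge.arrow.size = 0.4,
     vertex.label.color = "black")
```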
Fig. 1 is based on the study of the A as NP construction (Desagulier 2015a).

Figure 1: A graph of asymmetric collostructions between adjectives and NPs in A as NP (adjectives in red, NPs in blue)
It is a graph of the asymmetric collostructions between the adjective slot and the NP slot (Desagulier 2017, Section 10.7.2.2). Among other things, the graph shows that some types are rather fixed with respect to the adjectives or NPs that they collocate with (large as life, honest as the day is long, gauche as a schoolgirl, thin as a rake, taut as a bowstring, etc.). Other types are more productive: they enter more complex combinatorial constellations of adjectives and NPs (towards the center of the graph). These constellations are organized around hubs, i.e. constituents that are connected to several other nodes, e.g. the adjectives white, cold, clear, or smooth, or the NP hell. We can also see that some attractive constituents are themselves attracted by other constituents. For example, the adjective sure attracts the NPs death and night follows day; at the same time, it is attracted by the NP hell.
One problem with the above network is that it is a post-hoc visualization based on observed co-occurrence counts. As such, it has no predictive value. Another problem is that it contains no semantic information: whatever semantics we read into it is inferred from the linguist’s interpretation. Ideally, the counts should be weighted for context informativeness. In concrete terms, the adjectives and the NPs should be semantically annotated, as well as their specific combinations. This is where semantic vectors provide added value.
6. Distributional semantics models
DSMs are not new to linguists, at least on the NLP side (Baroni et al. 2014; Padó and Lapata 2007). Vector space models of word co-occurrence have been applied to tasks such as synonymy detection, concept categorization, verb selectional preferences, argument alternations, etc. What is new is the dramatic improvement they have undergone thanks to deep learning and neural networks.
6.1. Principles and applications
Two methods have proved very successful in learning high-quality vector representations of words from large corpora: word2vec (Mikolov, Chen, et al. 2013; Mikolov, Yih, et al. 2013) and GloVe (Pennington et al. 2014). Based on neural networks, they (a) learn word embeddings that capture the semantics of words by incorporating both local and global corpus context, and (b) account for homonymy and polysemy by learning multiple embeddings per word. Once trained on a very large corpus, these algorithms produce distributed representations for words in the form of vectors.
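As an illustration of what such training looks like in R, here is a minimal sketch using the text2vec package as one possible implementation of GloVe. This is only an assumption of the sketch, not a description of my actual pipeline; the constructor arguments are those of text2vec ≥ 0.6 (earlier versions used different argument names), and corpus stands for a character vector of sentences such as the toy one above, only much larger.

```r
library(text2vec)

# Tokenize the corpus and build a vocabulary and a term-co-occurrence matrix.
it    <- itoken(corpus, preprocessor = tolower,
                tokenizer = word_tokenizer, progressbar = FALSE)
vocab <- prune_vocabulary(create_vocabulary(it), term_count_min = 5)
tcm   <- create_tcm(it, vocab_vectorizer(vocab), skip_grams_window = 5L)

# Fit GloVe: 50-dimensional vectors, as in the BNC experiment reported below.
glove  <- GlobalVectors$new(rank = 50, x_max = 10)
w_main <- glove$fit_transform(tcm, n_iter = 20, convergence_tol = 0.01)
w_vecs <- w_main + t(glove$components)  # sum of main and context vectors
```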
As the principal investigator of a Partenariat Hubert Curien (#32168XF, 2014–2015) between Paris Nanterre University and Ajou University (Suwon, Korea), I trained GloVe on the British National Corpus to detect lexical proximities in the context of sentiment analysis (Desagulier 2016). Fig. 2 shows the 10 nearest neighbors of the adjective ironic, using cosine similarity between vectors as a proximity measure.

Figure 2: Nearest neighbors to ironic in the BNC
These neighbors include synonyms and antonyms across several categories (adjectives, adverbs, nouns, and verbs). Although the corpus is relatively modest in size (100 million word tokens) and the vectors are low-dimensional (50 dimensions), the neighbors are surprisingly consistent, implying that the “meaning” of ironic has been captured satisfactorily.
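For readers who want to reproduce this kind of query, here is a minimal nearest-neighbor routine in base R. It assumes a matrix of word vectors with one row per word, such as the w_vecs matrix from the training sketch above; the function name is mine.

```r
# Retrieve the n nearest neighbors of a word by cosine similarity.
nearest_neighbors <- function(word, vectors, n = 10) {
  v    <- vectors[word, ]
  sims <- vectors %*% v / (sqrt(rowSums(vectors^2)) * sqrt(sum(v^2)))
  head(sort(sims[, 1], decreasing = TRUE)[-1], n)  # drop the word itself
}

nearest_neighbors("ironic", w_vecs)
```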
As a follow-up to Chambaz and Desagulier (2015) and Desagulier (2015b), I used GloVe to tag the adjectives intensified by quite and rather in the BNC. Because at the time I did not have access to a server powerful enough to train the algorithm on a very large corpus, I extracted the vectors corresponding to the adjectives in my data frame from a set of vectors pre-trained with GloVe on the Common Crawl database (http://commoncrawl.org/). Table 1 is a snapshot of the resulting data frame.

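As a rough sketch of that workflow, the pre-trained vectors can be read into R and subset to the adjectives under study. The file name and the adjective list below are illustrative, and for the full Common Crawl file a streaming reader (e.g. data.table::fread) is preferable to read.table.

```r
# Read a whitespace-separated GloVe file (no header; first column = word).
glove_raw <- read.table("glove.42B.300d.txt", sep = " ", quote = "",
                        comment.char = "", row.names = 1)
glove_mat <- as.matrix(glove_raw)

# Keep only the vectors of the adjectives found in the intensifier data frame.
adjectives  <- c("different", "big", "excellent", "sufficient")
adj_vectors <- glove_mat[intersect(adjectives, rownames(glove_mat)), ]
```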
Word vectors prove efficient at detecting semantic proximities between adjectives, as Fig. 3 shows.

6.2. Remaining challenges
The first challenge is the detection of multiword constructions. Suppose you investigate quite and rather constructions. The typical solution is to treat the MWE as “words-with-spaces” (Sag et al. 2002) and concatenate the words in the syntactic pattern in which they are found, e.g. quite/rather_a vs. a_quite/rather. Then, you determine the vector profile of the whole phrase. Although this might work for intensifiers, it will fail to detect light verbs (have a drink, have a go, *have an eat, etc.), for example. The erratic selectional preferences of light-verb constructions cause a lexical proliferation problem in their detection by dramatically skewing the balance towards recall to the detriment of precision. To accommodate phrases in a vector-space model, Mikolov, Sutskever, et al. (2013) propose a detection technique that involves subsampling frequent words. For example, closed-class words such as determiners and prepositions easily occur millions of times in any large corpus. Such words are generally considered uninformative compared with rarer open-class words. This subsampling technique should be handled with care so as not to discard the closed-class words that are often part of idiomatic constructions (e.g. at in congressman/editor at large, or the in kick the bucket), as the sketch below illustrates.
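The sketch below implements the subsampling heuristic described by Mikolov, Sutskever, et al. (2013): each occurrence of a word w is discarded with probability 1 − √(t / f(w)), where f(w) is the word's relative frequency and t a small threshold. The counts are made up for a 100-million-word corpus; they only serve to show why closed-class words need to be whitelisted rather than blindly discarded.

```r
# Probability of discarding each occurrence of a word under subsampling.
subsample_prob <- function(counts, corpus_size, t = 1e-5) {
  freq <- counts / corpus_size
  pmax(0, 1 - sqrt(t / freq))
}

counts <- c(the = 6000000, at = 900000, kick = 3400, bucket = 1200)
round(subsample_prob(counts, corpus_size = 1e8), 3)
# 'the' and 'at' are discarded almost every time, while 'bucket' is almost
# always kept: applied blindly, subsampling would dismember 'kick the bucket'.
```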
The second challenge has to do with context, which, even in recent word-vector algorithms, is defined as a small window of words surrounding the target word. It is assumed that all context words contribute to the target word, irrespective of syntax and long-distance dependencies. However, the assumption that contextual information contributes indiscriminately to the meaning of a phrase is linguistically limited. To enrich vector-based models with morpho-syntactic information, I suggest handling the syntactic templates of multiword constructions by first targeting supervised learning on a thesaurus of pre-identified constructions.
Once a satisfactory operationalization of context has been found, and provided that context resolves ambiguities, there remains a third, most important issue: (non-)compositionality (Padó and Lapata 2007). Let W_1 and W_2 (e.g. red and tape) be the two lexical constituents of a nominal compound N (red tape). The syntax-dependent composition function yielding a nominal compound, adapted from Mitchell and Lapata (2010) and Dinu and Baroni (2014), should be:
\vec{N} = f_{comp} (\vec{w_1}, \vec{w_2}),
where \vec{w_1} and \vec{w_2} are the vector representations associated with W_1 and W_2.
Dinu and Baroni (2014) and Mikolov, Sutskever, et al. (2013) have found that composition can be approximated by a simple linear operation, namely summing the constituent vectors:
f_{comp} (\vec{w_1}, \vec{w_2}) = \vec{w_1} + \vec{w_2}.
I intend to test the above formula by applying it to constructions. The syntax-dependent composition function yielding a multiword construction \vec{C} becomes:
\vec{C} = \vec{c_1} + \vec{c_2},
where \vec{c_1} and \vec{c_2} are the vector representations associated with two constituents of C.
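Here is a minimal sketch of this additive composition in R, reusing the w_vecs matrix and the nearest_neighbors() routine sketched above. red tape is used as a toy example; whether the sum actually captures its non-compositional meaning is precisely what needs to be tested.

```r
# Additive composition: the construction vector is the sum of its constituents.
compose_additive <- function(w1, w2, vectors) vectors[w1, ] + vectors[w2, ]

red_tape <- compose_additive("red", "tape", w_vecs)

# Inspect the neighborhood of the composed vector by cosine similarity.
sims <- w_vecs %*% red_tape /
        (sqrt(rowSums(w_vecs^2)) * sqrt(sum(red_tape^2)))
head(sort(sims[, 1], decreasing = TRUE), 10)
```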
Of course, I do not believe that the issue of (non-)compositionality can be resolved with a single equation. A significant share of the project will therefore be dedicated to joint work with mathematicians and linguists on how best to capture the semantic subtleties of multiword constructions, notably in one of the monthly seminars at my lab.
7. Interdisciplinary goals
My project is interdisciplinary. It combines expertise in linguistics, mathematics, and computational engineering. It has applications in a wide variety of fields such as theoretical linguistics, lexicography, the digital humanities (especially text mining), machine translation, and data analysis. After benchmarking DSMs on a set of pre-identified constructions (specifically intensifying constructions), the models will be applied to the detection of more complex, as yet unseen constructions (including but not limited to light-verb constructions and argument-structure constructions).
Want to join me?
Let me know if you are willing to work with me on one of the following work packages. I will update the list regularly.
a. R packages
I am working on two R packages. The first package, constR2vec, is an R interface to the detection and vectorization of multiword constructions from a corpus. The second package, constRucticon, is meant to make network graphs of multiword units based on frequency counts, association measures, and vectors. The goal is to make the two packages work together.
b. Visualization
I plan to visualize construction networks by means of tools from graph theory. The data consist of edge lists, vertex attributes, and word vectors.
c. Supervised vector estimation
The first step consists in framing the vector estimation problem as a supervised task. This is done by targeting the machine learning on repositories of pre-identified constructions such as those proposed by Pattern Grammar (Francis et al. 1996; Hunston and Francis 2000).
d. Unsupervised vector exploration
Once vector-based machine learning on a database of pre-identified multiword constructions has proved satisfactory, the methodology can be applied to detect these constructions in very large corpora of English. This second step is unsupervised.
The algorithm that I intend to write in R will use the outcome of supervised training as a basis for construction detection. Whether a new multiword sequence counts as a construction will be decided by means of a semi-parametric method from biostatistics known as targeted learning (van der Laan and Rose 2011).
e. Supervisions
If you work on any of the above, or any related topic, I will be happy to consider your application for a PhD supervision (including a joint supervision).
References
Allerton, D. J. (1987). “English Intensifiers and their Idiosyncrasies.” In: Language Topics: Essays in Honour of Michael Halliday. Ed. by Ross Steele and Terry Threadgold. Vol. 2. Amsterdam: John Benjamins, pp. 15–31.
Baroni, Marco, Georgiana Dinu, and Germán Kruszewski (2014). “Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors.” In: ACL (1), pp. 238–247.
Chambaz, Antoine and Guillaume Desagulier (2015). “Predicting is not explaining: Targeted learning of the dative alternation.” In: Journal of Causal Inference 4.1, pp. 1–30. DOI: 10.1515/jci-2014-0037.
Desagulier, Guillaume (2017). Corpus Linguistics and Statistics with R. New York: Springer.
Desagulier, Guillaume (2014). “Visualizing distances in a set of near synonyms: rather, quite, fairly, and pretty.” In: Corpus Methods for Semantics: Quantitative Studies in Polysemy and Synonymy. Ed. by Dylan Glynn and Justyna Robinson. Amsterdam: John Benjamins.
Desagulier, Guillaume (2015a). “A lesson from associative learning: asymmetry and productivity in multiple-slot constructions.” In: Corpus Linguistics and Linguistic Theory. DOI: 10.1515/cllt-2015-0012.
Desagulier, Guillaume (2015b). “Forms and meanings of intensification: a multifactorial comparison of quite and rather.” In: Anglophonia 20.2. DOI: 10.4000/anglophonia.558. URL: http://anglophonia.revues.org/558.
Desagulier, Guillaume (2015c). “Le statut de la fréquence dans les grammaires de constructions : simple comme bonjour?” In: Langages 197.1, pp. 99–128. DOI: 10.3917/lang.197.0099.
Desagulier, Guillaume (2016). “Deep learning and word vectors for sentiment analysis.” In: Journée d’études “Mots de sentiments en français et en coréen”. 13 janvier 2016. Université Ajou, Suwon, Korea.
Dinu, Georgiana and Marco Baroni (2014). “How to make words with vectors: Phrase generation in distributional semantics.” In: ACL (1), pp. 624–633.
Ellis, Nick C. (2006). “Language acquisition as rational contingency learning.” In: Applied Linguistics 27.1, pp. 1–24.
Francis, Gill, Susan Hunston, and Elizabeth Manning (1996). Grammar Patterns. 1, Verbs. London: Harper Collins.
Goldberg, Adele E. (1995). Constructions: A Construction Grammar Approach to Argument Structure. Cognitive theory of language and culture. Chicago: University of Chicago Press.
Goldberg, Adele E. (2003). “Constructions: A new theoretical approach to language.” In: Trends in Cognitive Sciences 7.5, pp. 219–224.
Goldberg, Adele E. (2006). Constructions at Work: The Nature of Generalization in Language. Oxford & New York: Oxford University Press.
Gries, Stefan Thomas (2013). “50-something years of work on collocations: what is or should be next…” In: International Journal of Corpus Linguistics 18.1, pp. 137–166.
Harris, Zellig S. (1954). “Distributional structure.” In: Word 10.2-3, pp. 146–162.
Hunston, Susan and Gill Francis (2000). Pattern Grammar: A Corpus-Driven Approach to the Lexical Grammar of English. Amsterdam: John Benjamins.
Jackendoff, Ray (1997). The Architecture of the Language Faculty. Cambridge, Mass. ; London: MIT Press.
Kay, Paul (2013). “The Limits of (Construction) Grammar.” In: The Oxford Handbook of Construction Grammar. Ed. by Thomas Hoffmann and Graeme Trousdale. Oxford: Oxford University Press, pp. 32–48.
Langacker, Ronald W. (1987). Foundations of Cognitive Grammar. Vol. 1. Stanford: Stanford University Press.
Mikolov, Tomas, Kai Chen, et al. (2013). “Efficient Estimation of Word Representations in Vector Space.” In: CoRR abs/1301.3781. URL: http://arxiv.org/abs/1301.3781.
Mikolov, Tomas, Ilya Sutskever, et al. (2013). “Distributed representations of words and phrases and their compositionality.” In: Advances in Neural Information Processing Systems, pp. 3111–3119.
Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig (2013). “Linguistic regularities in continuous space word representations.” In: Proceedings of NAACL-HLT, pp. 746–751. URL: http://www.aclweb.org/anthology/N/N13/N13-1090.pdf.
Miller, George A. and Walter G. Charles (1991). “Contextual correlates of semantic similarity.” In: Language and Cognitive Processes 6.1, pp. 1–28.
Mitchell, Jeff and Mirella Lapata (2010). “Composition in distributional models of semantics.” In: Cognitive science 34.8, pp. 1388–1429.
Padó, Sebastian and Mirella Lapata (2007). “Dependency-based construction of semantic space models.” In: Computational Linguistics 33.2, pp. 161–199.
Paradis, Carita (1997). Degree Modifiers of Adjectives in Spoken British English. Lund: Lund University Press.
Pennington, Jeffrey, Richard Socher, and Christopher D. Manning (2014). “GloVe: Global Vectors for Word Representation.” In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pp. 1532–1543. URL: http://www.aclweb.org/anthology/D14-1162.
Pinker, Steven (1999). Words and Rules: The Ingredients of Language. New York: Basic Books.
Pinker, Steven and Alan Prince (1988). “On language and connectionism: Analysis of a parallel distributed processing model of language acquisition.” In: Cognition 28.1-2, pp. 73–193.
R Core Team (2016). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. Vienna, Austria. URL: https://www.R-project.org/.
Rumelhart, D. E. and J. L. McClelland (1986). “On Learning the Past Tenses of English Verbs.” In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2. Ed. by David E. Rumelhart, James L. McClelland, and the PDP Research Group. Cambridge, MA: MIT Press, pp. 216–271. ISBN: 0-262-13218-4. URL: http://dl.acm.org/citation.cfm?id=21935.42475.
Sag, Ivan A. et al. (2002). “Multiword expressions: A pain in the neck for NLP.” In: International Conference on Intelligent Text Processing and Computational Linguistics. Springer, pp. 1–15.
van der Laan, Mark J. and Sherri Rose (2011). Targeted Learning. New York: Springer.