Can word vectors help corpus linguists?

Last month, my paper “Can word vectors help corpus linguists?” was published in Studia Neophilologica. It is part of a special issue following a conference on corpus linguistics held at Avignon University on June 9-10, 2016: Nouvelles approches du corpus en linguistique anglaise (New approaches to corpora in English linguistics, NACLA1). The event was organized by my esteemed colleague Prof. Graham Ranger, who also edited the volume along with Prof. Dr. Sebastian Hoffmann (Universität Trier, Germany). Both Graham and Sebastian put a great deal of work into this thematic issue.

What is it about?

The paper is summed up in the abstract. See the front page below.

The front page

I wanted to embark on NLP technology while remaining faithful to my corpus linguist’s spirit. My curiosity was piqued by the big buzz around deep neural networks and the promise they held for tasks such as image and sound recognition, and, potentially, for linguistics. In the media, deep neural networks strike me as a tool primarily geared at killing the spirit of games such as chess, poker, or Go, but there is so much more that they can do.

The project started with me wondering how word2vec and GloVe could be put to use in the kinds of semantic-annotation tasks required in corpus linguistics. The kind of task I have in mind is the following. I started with a large data set like this one (except that the actual data set was far larger):

A dataset ready for annotation

My goal was to annotate the adjectives to see whether the intensifiers quite and rather have distinctive semantic preferences with respect to the kinds of adjectives that they modify. Option 1 is manual annotation. You get the best results because a human annotator is sensitive to context and polysemy, and is therefore very good at disambiguating. But it is excruciatingly slow. Option 2 is automatic annotation with a semantic tagger such as USAS. Now, unless taggers are probabilistic and well trained, they are notorious for assigning incorrect tags to highly polysemous items (take a look at the meanings of “hot” in the above table to gauge the difficulty of the task at hand).
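As a glimpse of what a third, vector-based option might look like, here is a minimal sketch (mine, not the paper’s) in which pre-trained GloVe vectors are used to assign each adjective to the closest of a few hand-picked seed categories. The seed lists, the adjective sample, and the choice of the "glove-wiki-gigaword-100" model are all illustrative assumptions; gensim is assumed to be installed.

```python
# A toy sketch of vector-assisted annotation: assign each adjective to the
# seed category whose members it is most similar to, on average, using
# pre-trained GloVe vectors. Seed lists and adjectives are made up.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pre-trained GloVe vectors

seeds = {"TEMPERATURE": ["hot", "cold"], "EVALUATION": ["good", "bad"]}
adjectives = ["warm", "chilly", "excellent", "awful"]

for adj in adjectives:
    best = max(
        seeds,
        key=lambda cat: sum(vectors.similarity(adj, s) for s in seeds[cat]) / len(seeds[cat]),
    )
    print(adj, "->", best)  # e.g. "warm -> TEMPERATURE"
```

A human annotator would still need to check the output, but a similarity-based first pass of this kind illustrates the middle ground between Options 1 and 2 that word vectors could offer.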

Just like any other distributional-semantic model (DSM), word2vec and GloVe take a text corpus as input and output one vector for each word found in the corpus, based on the contexts in which that word appears. Unlike traditional DSMs, these methods are said to be more powerful because they are inspired by deep neural networks.

SGNS and GloVe are considered neural, prediction-based embeddings. All of these methods are essentially bag-of-words models, in which the representation of each word reflects a weighted bag of context words that co-occur with it (weighting and context are two important concepts here).1
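To make the “weighted bag of context-words” idea concrete, here is a small, self-contained illustration of the older, count-based route: co-occurrence counts collected in a symmetric window, then reweighted with PPMI. The two-sentence corpus is obviously a toy, and nothing here reproduces the paper’s setup.

```python
# Toy count-based DSM: co-occurrence counts in a symmetric window,
# reweighted with Positive Pointwise Mutual Information (PPMI).
import numpy as np

corpus = [["the", "coffee", "is", "quite", "hot"],
          ["the", "weather", "is", "rather", "cold"]]
window = 2

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[idx[w], idx[sent[j]]] += 1  # word/context co-occurrence

# PPMI: keep only the positive part of log( p(w,c) / (p(w) * p(c)) )
total = counts.sum()
pw = counts.sum(axis=1, keepdims=True) / total
pc = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts / total) / (pw * pc))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
```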

I often read that word2vec is an example of deep learning, but this is not the case: it is a two-layer (and therefore shallow) neural network. The details of the word2vec/GloVe implementations are in the paper. Note that I focused on GloVe because I found it more intuitive, and was less suspicious of it than of word2vec at the time. In fact, in both cases, the underlying computations are hidden.
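For readers who want to see just how shallow the architecture is, here is a minimal training sketch using gensim’s Word2Vec class (assuming gensim ≥ 4; the two-sentence corpus is a placeholder). The sg=1 and negative=5 settings correspond to the SGNS configuration discussed below.

```python
# Minimal SGNS training sketch with gensim (placeholder corpus).
from gensim.models import Word2Vec

sentences = [["the", "coffee", "is", "quite", "hot"],
             ["the", "weather", "is", "rather", "cold"]]

model = Word2Vec(
    sentences,
    vector_size=100,  # dimensionality of the single (shallow) hidden layer
    window=5,         # symmetric context window
    sg=1,             # skip-gram architecture
    negative=5,       # negative sampling: the "NS" in SGNS
    min_count=1,      # keep every word of the toy corpus
)
print(model.wv.most_similar("hot", topn=3))  # meaningless on a toy corpus
```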

Writing about fast-changing technology

Here are a few elements of context regarding my paper. I wrote it three years ago, which, in the field of word vectors, is equivalent to a century in real life. When addressing the reviewers’ comments, I must admit that I left the draft largely untouched. There are two reasons for this. First, I knew that in a field as fast-changing as neural networks, no matter how many updates I made, the paper would unavoidably miss the latest developments in neural word embeddings by the time it was published. Heraclitus once said, “No one ever steps in the same river twice”. Well, this is what it feels like to work on neural-network-flavored word vectors. You know you are working in a clearly identified field with clearly identified algorithms, but new findings on how best to tune the parameters of said algorithms appear monthly (not to say daily).

Second, I wanted the paper to be faithful to my state of mind as a linguist when I embarked on neural networks and their applications to meaning capture in corpora. My opinion on word2vec and GloVe has definitely changed over the last three years, but the doubts I have with respect to their paradigm-changing aspirations remain.

Shared concerns about neural networks

Echoing my doubts, recent research on distributional semantics and machine learning tends to show that state-of-the-art deep learning techniques do not necessarily perform better than older alternatives (Dacrema et al., 2019). Levy et al. (2015) compare older methods, such as Positive Pointwise Mutual Information (PPMI) and Singular Value Decomposition (SVD), to word2vec (more precisely SGNS: skip-gram with negative sampling) and GloVe. They find that performance depends on the task and on how the hyperparameters are tuned, and tuning these hyperparameters is not straightforward.
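For the record, the “older” pipeline they benchmark boils down to a truncated SVD of a PPMI-weighted co-occurrence matrix, something along the following lines (the matrix is a made-up stand-in; k, like the window size, is one of the hyperparameters whose tuning they discuss).

```python
# Count-based embeddings à la Levy et al.: truncated SVD of a PPMI matrix.
import numpy as np

ppmi = np.array([[0.0, 1.2, 0.3],     # made-up PPMI-weighted
                 [1.2, 0.0, 0.8],     # co-occurrence matrix
                 [0.3, 0.8, 0.0]])    # (rows = words, columns = contexts)

k = 2                                  # number of latent dimensions
U, S, Vt = np.linalg.svd(ppmi, full_matrices=False)
word_vectors = U[:, :k] * S[:k]        # one dense k-dimensional vector per word
```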

About SGNS, they write:

SGNS is a robust baseline. While it might not be the best method for every task, it does not significantly underperform in any scenario. Moreover, SGNS is the fastest method to train, and cheapest (by far) in terms of disk space and memory consumption. (p. 222)

This suggests that word2vec is good, but not as revolutionary as anticipated when it came out. Earlier in their paper, they write:

It is commonly believed that modern prediction-based embeddings perform better than traditional count-based methods. This claim was recently supported by a series of systematic evaluations by Baroni et al. (2014). However, our results suggest a different trend. (…) in word similarity tasks, the average score of SGNS is actually lower than SVD’s when win = 2, 5, and it never outperforms SVD by more than 1.7 points in those cases. In Google’s analogies SGNS and GloVe indeed perform better than PPMI, but only by a margin of 3.7 points (compare PPMI with win= 2 and SGNS with win= 5). MSR’s analogy dataset is the only case where SGNS and GloVe substantially outperform PPMI and SVD. Overall, there does not seem to be a consistent significant advantage to one approach over the other, thus refuting the claim that prediction-based methods are superior to count-based approaches. (p. 220)

According to Levy et al., GloVe, which I assumed would perform best because of its greater flexibility, does not fare better than SGNS in their experiments.

Downloading the paper

I am allowed to share 50 free online e-copies of this article with friends and colleagues via this link. After 50 downloads, the link will expire, so the offer is available while stocks last. If the link no longer works, a draft version is available for download from my HAL-SHS repository: just click here to download a PDF copy.

References

Baroni, Marco, Georgiana Dinu & German Kruszewski (2014). Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238–247, Baltimore, Maryland. Association for Computational Linguistics.

Dacrema, Maurizio Ferrari, Paolo Cremonesi & Dietmar Jannach (2019). Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches. arXiv preprint.

Desagulier, Guillaume (2018). Word embeddings: the (very) basics. Around the word, 25/04/2018, https://corpling.hypotheses.org/495.

Levy, Omer, Yoav Goldberg & Ido Dagan (2015). Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3, 211–225.

Cite this article as: Guillaume Desagulier, "Can word vectors help corpus linguists?," in Around the word, 23/08/2019, https://corpling.hypotheses.org/2682.
  1. For a clear introduction to word2vec, see this blog post.

Guillaume Desagulier

UMR 7114 MoDyCo — Université Paris 8, CNRS, Université Paris Nanterre, Institut Universitaire de France.
