POS-tagging in R with UDPipe
POS stands for “part of speech” (i.e. the grammatical category of a word). POS tagging is the process of assigning a part of speech (such as noun, verb, adverb, adjective, determiner, etc.) to each word in a given text. It is a common task in corpus linguistics, NLP, and the digital humanities because it helps understand the structure of a text or collection of texts: once each word is labelled with its part of speech, it becomes easier to identify the roles that words play in a sentence and to grasp its overall meaning.
There are several R packages that can be used for POS tagging. Some of the most popular are tidytext, openNLP, and udpipe. The tidytext package provides tools for text mining and analysis, including functions for POS tagging. The openNLP package is a machine-learning toolkit that includes functions for POS tagging and other NLP tasks. The more recent udpipe package is designed to use the UDPipe library (a Universal Dependencies parser), which includes functions for POS tagging and other NLP tasks such as tokenizing, lemmatizing, and parsing (Straka & Straková 2017). In this post, I will focus on udpipe.
UDPipe
Universal Dependencies (UD) is a framework for annotating grammar (syntax and morphological features). UD is extremely popular in NLP, perhaps slightly less so in corpus linguistics. The goal of UD is to provide a consistent, language-independent representation of the syntactic structure of sentences. This representation is called a dependency tree: it shows the relationships between the words in a sentence, including which words function as subject, object, and so on.
Language Models
Load the necessary packages:
library(dplyr)
library(stringr)
library(udpipe)
library(lattice)
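Note that these packages must be installed before they can be loaded. If any of them are missing, a one-off installation from CRAN looks like this:
# Install the packages used in this post (only needed once)
install.packages(c("udpipe", "dplyr", "stringr", "lattice"))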
The udpipe package includes a number of pre-trained language models for various languages. These models are trained on UD treebanks: 101 pre-trained models are available for 65+ languages (view the full list here). Four models are available for English: english-ewt, english-gum, english-lines, and english-partut. Let us download all four of them with the udpipe_download_model() function.
# english-ewt
m_eng_ewt <- udpipe_download_model(language = "english-ewt")
#english-gum
m_eng_gum <- udpipe_download_model(language = "english-gum")
#english-lines
m_eng_lines <- udpipe_download_model(language = "english-lines")
#english-partut
m_eng_partut <- udpipe_download_model(language = "english-partut")
Once you have downloaded these models, they will be stored permanently on your computer. To avoid having to download them again, it is a good idea to know the path to each of them and save it into a character vector. Here is how to do it:
m_eng_ewt_path <- m_eng_ewt$file_model
m_eng_gum_path <- m_eng_gum$file_model
m_eng_lines_path <- m_eng_lines$file_model
m_eng_partut_path <- m_eng_partut$file_model
To load a model, use the udpipe_load_model() function:
m_eng_ewt_loaded <- udpipe_load_model(file = m_eng_ewt_path)
m_eng_gum_loaded <- udpipe_load_model(file = m_eng_gum_path)
m_eng_lines_loaded <- udpipe_load_model(file = m_eng_lines_path)
m_eng_partut_loaded <- udpipe_load_model(file = m_eng_partut_path)
Of course, you only need one of these models. We are using english-ewt.
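Since the models are stored on disk, there is no need to download them again in later sessions. Here is a minimal sketch for reloading a previously downloaded english-ewt model, assuming the .udpipe file sits in your working directory (the default download location) and allowing for version-dependent file names:
# Locate a previously downloaded english-ewt model file
# (the exact file name depends on the model version, hence the pattern matching)
ewt_files <- list.files(pattern = "^english-ewt.*\\.udpipe$", full.names = TRUE)
if (length(ewt_files) > 0) {
  m_eng_ewt_loaded <- udpipe_load_model(file = ewt_files[1])
}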
Load and pre-process the text
For the following demo, I am going to use a short text in English, available here. It is an excerpt from the preamble to the GNU General Public License.
Load the text:
text <- readLines(url("https://tinyurl.com/gnutxt"), skipNul = TRUE)
And clean it with the stringr package:
text <- text %>% str_squish()
FYI, str_squish() removes whitespace at the start and end of a string, and replaces all internal whitespace with a single space. This is what the text should look like:
[1] "The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things."
Annotate the text
The text is tokenised, tagged, and dependency-parsed in one go with the udpipe_annotate() function:
text_annotated <- udpipe_annotate(m_eng_ewt_loaded, x = text) %>%
as.data.frame() %>%
select(-sentence)
The output is a data frame with one row per token and, among other columns, the token, its lemma, and its upos and xpos tags.
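A quick way to check this is to inspect the data frame in the console; the column names in the comment below are those that udpipe typically returns:
# Inspect the structure of the annotation: one row per token
# (expect columns such as doc_id, sentence_id, token_id, token, lemma,
# upos, xpos, feats, head_token_id, and dep_rel)
str(text_annotated)
head(text_annotated[, c("token", "lemma", "upos", "xpos")])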
Two kinds of POS tags are available: upos and xpos. upos tags are independent of the specific language being used (they are ‘universal’). The list of upos tags is therefore limited:
ADJ: adjective
ADP: adposition
ADV: adverb
AUX: auxiliary
CCONJ: coordinating conjunction
DET: determiner
INTJ: interjection
NOUN: noun
NUM: numeral
PART: particle
PRON: pronoun
PROPN: proper noun
PUNCT: punctuation
SCONJ: subordinating conjunction
SYM: symbol
VERB: verb
X: other
xpos tags, on the other hand, are language-specific. For example, in English, the upos tag for a verb might be VERB, while the corresponding xpos tag might be VB (for a base-form verb) or VBD (for a past-tense verb). In French, the upos tag for a verb might still be VERB, but the xpos tag might be VER:cond (for a conditional verb).
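Because both tag sets are stored in the annotated data frame, you can cross-tabulate them to see how the language-specific xpos tags map onto the universal upos tags in this particular text. A quick sketch with dplyr:
# Cross-tabulate upos and xpos tags (most frequent pairs first)
text_annotated %>%
  count(upos, xpos, sort = TRUE)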
To append a upos tag to each word in the text, use the paste() function:
text_postagged <- paste(text_annotated$token, "_", text_annotated$upos, collapse = " ", sep = "")
This is what you obtain:
[1] "The_DET GNU_PROPN General_PROPN Public_PROPN License_PROPN is_AUX a_DET free_ADJ ,_PUNCT copyleft_ADJ license_NOUN for_ADP software_NOUN and_CCONJ other_ADJ kinds_NOUN of_ADP works_NOUN ._PUNCT The_DET licenses_NOUN for_ADP most_ADJ software_NOUN and_CCONJ other_ADJ practical_ADJ works_NOUN are_AUX designed_VERB to_PART take_VERB away_ADP your_PRON freedom_NOUN to_PART share_VERB and_CCONJ change_VERB the_DET works_NOUN ._PUNCT By_ADP contrast_NOUN ,_PUNCT the_DET GNU_PROPN General_PROPN Public_PROPN License_PROPN is_AUX intended_VERB to_PART guarantee_VERB your_PRON freedom_NOUN to_PART share_VERB and_CCONJ change_VERB all_DET versions_NOUN of_ADP a_DET program_NOUN --_PUNCT to_PART make_VERB sure_ADJ it_PRON remains_VERB free_ADJ software_NOUN for_ADP all_DET its_PRON users_NOUN ._PUNCT We_PRON ,_PUNCT the_DET Free_ADJ Software_NOUN Foundation_NOUN ,_PUNCT use_VERB the_DET GNU_PROPN General_PROPN Public_PROPN License_PROPN for_ADP most_ADJ of_ADP our_PRON software_NOUN ;_PUNCT it_PRON applies_VERB also_ADV to_ADP any_DET other_ADJ work_NOUN released_VERB this_DET way_NOUN by_ADP its_PRON authors_NOUN ._PUNCT You_PRON can_AUX apply_VERB it_PRON to_ADP your_PRON programs_NOUN ,_PUNCT too_ADV ._PUNCT When_ADV we_PRON speak_VERB of_ADP free_ADJ software_NOUN ,_PUNCT we_PRON are_AUX referring_VERB to_ADP freedom_NOUN ,_PUNCT not_ADV price_NOUN ._PUNCT Our_PRON General_ADJ Public_NOUN Licenses_NOUN are_AUX designed_VERB to_PART make_VERB sure_ADJ that_SCONJ you_PRON have_VERB the_DET freedom_NOUN to_PART distribute_VERB copies_NOUN of_ADP free_ADJ software_NOUN (_PUNCT and_CCONJ charge_VERB for_ADP them_PRON if_SCONJ you_PRON wish_VERB )_PUNCT ,_PUNCT that_SCONJ you_PRON receive_VERB source_NOUN code_NOUN or_CCONJ can_AUX get_VERB it_PRON if_SCONJ you_PRON want_VERB it_PRON ,_PUNCT that_SCONJ you_PRON can_AUX change_VERB the_DET software_NOUN or_CCONJ use_VERB pieces_NOUN of_ADP it_PRON in_ADP new_ADJ free_ADJ programs_NOUN ,_PUNCT and_CCONJ that_SCONJ you_PRON know_VERB you_PRON can_AUX do_VERB these_DET things_NOUN ._PUNCT"
We can do the same with xpos tags:
text_postagged <- paste(text_annotated$token, "_", text_annotated$xpos, collapse = " ", sep = "")
This time, when you inspect text_postagged, this is what the text looks like:
[1] "The_DT GNU_NNP General_NNP Public_NNP License_NNP is_VBZ a_DT free_JJ ,_, copyleft_JJ license_NN for_IN software_NN and_CC other_JJ kinds_NNS of_IN works_NNS ._. The_DT licenses_NNS for_IN most_JJS software_NN and_CC other_JJ practical_JJ works_NNS are_VBP designed_VBN to_TO take_VB away_RP your_PRP$ freedom_NN to_TO share_VB and_CC change_VB the_DT works_NNS ._. By_IN contrast_NN ,_, the_DT GNU_NNP General_NNP Public_NNP License_NNP is_VBZ intended_VBN to_TO guarantee_VB your_PRP$ freedom_NN to_TO share_VB and_CC change_VB all_DT versions_NNS of_IN a_DT program_NN --_, to_TO make_VB sure_JJ it_PRP remains_VBZ free_JJ software_NN for_IN all_DT its_PRP$ users_NNS ._. We_PRP ,_, the_DT Free_JJ Software_NN Foundation_NN ,_, use_VB the_DT GNU_NNP General_NNP Public_NNP License_NNP for_IN most_JJS of_IN our_PRP$ software_NN ;_, it_PRP applies_VBZ also_RB to_IN any_DT other_JJ work_NN released_VBN this_DT way_NN by_IN its_PRP$ authors_NNS ._. You_PRP can_MD apply_VB it_PRP to_IN your_PRP$ programs_NNS ,_, too_RB ._. When_WRB we_PRP speak_VBP of_IN free_JJ software_NN ,_, we_PRP are_VBP referring_VBG to_IN freedom_NN ,_, not_RB price_NN ._. Our_PRP$ General_JJ Public_NN Licenses_NNS are_VBP designed_VBN to_TO make_VB sure_JJ that_IN you_PRP have_VBP the_DT freedom_NN to_TO distribute_VB copies_NNS of_IN free_JJ software_NN (_-LRB- and_CC charge_VB for_IN them_PRP if_IN you_PRP wish_VBP )_-RRB- ,_, that_IN you_PRP receive_VBP source_NN code_NN or_CC can_MD get_VB it_PRP if_IN you_PRP want_VBP it_PRP ,_, that_IN you_PRP can_MD change_VB the_DT software_NN or_CC use_VB pieces_NNS of_IN it_PRP in_IN new_JJ free_JJ programs_NNS ,_, and_CC that_IN you_PRP know_VBP you_PRP can_MD do_VB these_DT things_NNS ._."
As expected, the level of granularity is higher with xpos. The choice between upos and xpos tags therefore depends on the kind of study that you are conducting.
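In practice, these tags are often used to subset the annotation. Here is a minimal sketch that keeps only the rows tagged as nouns:
# Extract the tokens (and their lemmas) tagged as NOUN
nouns <- text_annotated %>%
  filter(upos == "NOUN") %>%
  select(token, lemma)
head(nouns)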
Plotting frequency distributions
To obtain the frequency distribution of POS tags, use the txt_freq() function from the udpipe package. We do it for upos tags…
> txt_freq(text_annotated$upos)
key freq freq_pct
1 NOUN 36 17.391304
2 VERB 29 14.009662
3 PRON 25 12.077295
4 PUNCT 21 10.144928
5 ADP 18 8.695652
6 ADJ 17 8.212560
7 DET 15 7.246377
8 PROPN 12 5.797101
9 AUX 9 4.347826
10 CCONJ 8 3.864734
11 PART 7 3.381643
12 SCONJ 6 2.898551
13 ADV 4 1.932367
…and xpos tags:
> txt_freq(text_annotated$xpos)
key freq freq_pct
1 IN 23 11.1111111
2 NN 22 10.6280193
3 PRP 18 8.6956522
4 VB 16 7.7294686
5 DT 15 7.2463768
6 JJ 15 7.2463768
7 NNS 14 6.7632850
8 NNP 12 5.7971014
9 , 12 5.7971014
10 VBP 9 4.3478261
11 CC 8 3.8647343
12 . 7 3.3816425
13 TO 7 3.3816425
14 PRP$ 7 3.3816425
15 VBZ 4 1.9323671
16 VBN 4 1.9323671
17 MD 4 1.9323671
18 RB 3 1.4492754
19 JJS 2 0.9661836
20 RP 1 0.4830918
21 WRB 1 0.4830918
22 VBG 1 0.4830918
23 -LRB- 1 0.4830918
24 -RRB- 1 0.4830918
The barchart() function from the lattice package is now used to create a bar chart displaying the distribution of POS tags in the text. We start with upos tags:
freq.distribution.upos <- txt_freq(text_annotated$upos)
freq.distribution.upos$key <- factor(freq.distribution.upos$key, levels = rev(freq.distribution.upos$key))
barchart(key ~ freq, data = freq.distribution.upos, col = "dodgerblue",
main = "UPOS frequencies",
xlab = "Freq")
[Bar chart: UPOS frequencies]
and do the same for xpos tags:
freq.distribution.xpos <- txt_freq(text_annotated$xpos)
freq.distribution.xpos$key <- factor(freq.distribution.xpos$key, levels = rev(freq.distribution.xpos$key))
barchart(key ~ freq, data = freq.distribution.xpos, col = "cadetblue",
main = "XPOS frequencies",
xlab = "Freq")
[Bar chart: XPOS frequencies]
References
Straka, M., & Straková, J. (2017). Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 shared task: Multilingual Parsing from raw text to universal dependencies (pp. 88-99).
Cover image generated with DALL-E (https://labs.openai.com/)