Corpus linguistics in the LLM era – the changing nature of language data
The emergence of generative AIs, such as ChatGPT, Google Gemini, Microsoft Copilot, Anthropic Claude, Meta AI, or Mistral, has brought both opportunities and challenges to the field of corpus linguistics. These systems generate vast amounts of language output that often appears natural, but is it genuinely authentic? This raises important questions for corpus linguists about the nature of linguistic data and the methods used to study it. Key issues include understanding the nature of the language generated by AI and evaluating the implications of AI-augmented tools for analyzing linguistic data. This post is the first in a new series on AI-assisted corpus linguistics. Here, I will focus on the changing nature of language data.
What kind of AI are we talking about?
If you have been keeping up with the latest in AI, you have probably heard about two main types of AI that are making waves: generative AI and predictive AI. While both types rely on big datasets and machine learning, they serve different purposes. Generative AI creates new content and is often used in creative fields, while predictive AI forecasts what comes next based on what has happened before and is geared more towards analysis and decision-making. Large Language Models (LLMs), which are the foundations on which ChatGPT-like systems are built, belong to the first category.
Why are LLMs so powerful now?
Transformers are the driving force behind the advancements in Large Language Models (LLMs) and have been a game-changer in natural language processing (NLP).
Transformers are effective because they bring together several mechanisms, which I will briefly describe and attempt to explain based on what I have read about them. These mechanisms include self-attention, parallel data processing, the encoder-decoder architecture, positional encoding, and multi-head attention. Summaries of how they work and related tutorials can be found in many places online—for example, here, here, or here. If you are not interested in these technical details, feel free to skip this part and jump to the next section.
The first mechanism is self-attention, which allows the model to weigh the importance of different words in a sentence relative to each other. Imagine you are in a busy room trying to listen to someone talk. Your brain automatically focuses on their voice and what they are saying while tuning out the less important noises. Transformers use self-attention for roughly the same purpose: they filter out the noise to understand the conversation better. The way that I understand it is that, unlike previous models that processed data sequentially, transformers can look at the entire sentence at once and understand how each word influences the others.
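For readers who like to see the arithmetic, here is a minimal sketch of scaled dot-product self-attention in plain Python/NumPy. It is a toy illustration of the mechanism described above, not the implementation of any particular model: the matrices are random and untrained, and the variable names are mine.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sequence at once.

    X          : (seq_len, d_model) matrix of token embeddings
    Wq, Wk, Wv : projections mapping embeddings to queries, keys, values
    """
    Q = X @ Wq                      # what each token is "looking for"
    K = X @ Wk                      # what each token "offers"
    V = X @ Wv                      # the content to be mixed
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: weights sum to 1 per token
    return weights @ V              # each output is a weighted blend of all value vectors

# Toy example: a 4-token "sentence" with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8): one contextualized vector per token
```

Note that the whole sentence is handled in a few matrix operations, which is also what makes the parallel processing discussed next possible.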
Self-attention is combined with the ability to process data in parallel, which makes it possible to handle massive datasets efficiently. Traditional neural networks, like RNNs and LSTMs, process data one step at a time, a bit like reading a book page by page. Transformers, however, can process the entire input sequence in parallel, similar to how you can glance at a whole paragraph and understand its context instantly. This parallel processing significantly speeds up training and inference.
The encoder-decoder architecture of transformers is also a great asset. Think of the transformer as a translator. The encoder is like the person who reads and understands the original text, summarizing its essence. The decoder is like the person who takes this summary and translates it into another language, generating the output sequence. Because the encoder-decoder setup is highly flexible, it can be adapted for various tasks like text generation, translation, and summarization.
Because transformers do not process data sequentially, they need a way to keep track of the order of words. This is achieved through positional encoding, which is like adding a timestamp to each word so the model knows where it fits in the sentence. It is similar to how you use chapter numbers and page numbers to navigate a book.
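As an illustration, here is a small sketch of the sinusoidal positional encoding used in the original transformer paper. Other models learn their position embeddings instead, so treat this as one common recipe rather than the only one.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: each position gets a unique pattern
    of sine/cosine values, added to the token embeddings so the model can
    tell word order apart."""
    positions = np.arange(seq_len)[:, None]            # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                  # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])                # odd dimensions: cosine
    return pe

# The "timestamp" for a 10-word sentence with 16-dimensional embeddings
print(positional_encoding(10, 16).shape)  # (10, 16)
```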
The multi-head attention mechanism is like having multiple pairs of eyes looking at the same scene from different angles. Each head focuses on different aspects of the input, allowing the model to capture a more comprehensive understanding of the context. This is needed for tasks that require contextual “understanding” (I am extremely uncomfortable using terms referring to human abilities when discussing LLMs, hence the scare quotes), such as teasing apart bank as a financial institution and bank as the side of a river.
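To make the "multiple pairs of eyes" image concrete, the toy sketch below splits the embedding space into two heads, lets each compute its own attention pattern, and concatenates the results. Again, the projections are random and untrained; this only illustrates the shape of the computation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, n_heads=2, seed=42):
    """Toy multi-head attention: each head attends over the same sentence
    with its own projections, then the head outputs are concatenated."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        weights = softmax(Q @ K.T / np.sqrt(d_head))   # each head has its own attention pattern
        outputs.append(weights @ V)
    return np.concatenate(outputs, axis=-1)             # back to (seq_len, d_model)

X = np.random.default_rng(0).normal(size=(4, 8))
print(multi_head_attention(X).shape)  # (4, 8)
```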
Finally, transformers are effective because they are pre-trained on vast amounts of text data. Not only that, they are also fine-tuned for specific tasks, similar to how a general practitioner might specialize in a particular field of medicine. This approach allows each LLM to draw on its general “knowledge” (scare quotes, again) of language and adapt quickly to new tasks without starting from scratch.
Why should we be concerned about corpora?
By definition, corpora consist of naturally occurring language produced in authentic contexts, without the speakers or writers being aware that their language output would be used one day for linguistic analysis. This spontaneous and unself-conscious production of language is a necessary condition for linguists, who base their studies on materials that are free from the potential biases or alterations that might occur if participants knew their language was being scrutinized. The resulting data thus represents a more accurate snapshot of genuine language use in various communicative situations.1
As the boundaries between human-produced and machine-generated language become increasingly blurred, the traditional conception of corpora as carefully curated collections of authentic language use is being challenged by the growth of AI-generated text. This development raises questions about the nature of linguistic authenticity and representation.
As said before, LLMs are trained on a massive amount of text data: we are talking billions of words and phrases. This training allows them to mimic how we speak and write, generating text that appears coherent and relevant. Whether it is answering questions, writing articles, or even having a chat, LLMs can produce text that sounds a lot like it was written by a human.
Let us discuss the Turing test for a moment. This concept, introduced by Alan Turing in 1950, serves as a benchmark for evaluating AI. Imagine you are conversing with someone via text and cannot discern whether you are speaking to a human or a highly sophisticated computer program. That is essentially what the Turing test measures: if an AI can convincingly imitate human conversation to the extent that a person cannot reliably tell it apart from a human, it passes the test. ChatGPT has been reported to pass versions of the Turing test. However, passing the test does not imply that the AI truly understands the language it generates. Rather, it demonstrates an impressive ability to mimic human-like responses, like a parrot (see below).
The role of corpus linguists, from my perspective, is to ensure that the texts we study are produced by humans. This is vital because linguistics belongs to the human sciences, and the grammatical phenomena we study are inherently human. Grammar and linguistics are not exact sciences. We are as interested in the regularities that govern language practices as we are in the peculiarities that disrupt them.
LLMs like ChatGPT produce an “average language” – expressions abstracted from millions of ways of expressing oneself – as a direct result of their training process. This process begins with a vast corpus of text data that brings together a wide range of linguistic styles, topics, and contexts. As the model iterates through multiple training epochs, it refines its “understanding” of language but inevitably smooths out the idiosyncrasies that make human language so diverse. The model learns to generate text by predicting the most probable sequences based on statistical patterns, rather than preserving what makes each individual voice unique.
While ChatGPT may be adept at imitating accents and dialects, what value is there in studying an imitation if linguists cannot confidently and precisely associate this linguistic marking with genuine human experiences? Furthermore, what purpose does it serve to make generalizations about an ersatz language that is itself the product of a dehumanized generalization process?
Because the grammatical phenomena observed in LLM outputs are not the product of genuine human cognitive processes or social interactions, but rather the result of complex statistical computations abstracted from massive datasets, this “average language” is of limited interest to linguists. It merely represents a form of linguistic expression fundamentally detached from the human experiences and social contexts that traditionally inform linguistic study.
Stochastic parrots?
Generative AI, which includes LLMs, is primarily focused on producing new content, where “new” does not necessarily mean “original”. This type of AI can generate everything from text and images to videos, music, and even software code (in fact, it is very good at coding, as we shall see in a future post). The goal is not to develop creativity but to bring productivity to various creative tasks such as content creation, art, music, and fashion. However, whether we can truly consider this output as “real” creativity is debatable given the parrot-like nature of generative AI.
Before ChatGPT was released in late 2022, renowned NLP figure Emily Bender and her colleagues raised concerns about the implications of these technologies. They popularized the term “stochastic parrot” in their 2021 paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” to address the limitations of LLMs.
“Stochastic parrot” is a catchy phrase that conveys the idea that LLMs are essentially advanced mimics. Picture a parrot that has listened to every conversation ever held; this parrot can piece together sentences that sound remarkably human-like but lacks any true understanding of what it is saying. That is similar to how LLMs operate but on a much larger scale. The term “stochastic” refers to the element of controlled randomness in how these models select their words. They do not simply repeat exact phrases from their training data; instead, they mix and match in ways that can appear creative or insightful. However, despite their impressive outputs, these models do not possess genuine understanding or reasoning capabilities. They are biased, cannot fact-check themselves or apply common sense, and may confidently present misinformation if it aligns with their training data.
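To see what “stochastic” means in practice, here is a deliberately simplified sketch of next-word selection. The candidate words and their scores are invented for the example; real models work over vocabularies of tens of thousands of tokens, but the principle, a softmax over scores plus temperature-controlled sampling, is the same.

```python
import numpy as np

# Hypothetical next-word scores after "The cat sat on the ..."
# (made-up numbers for illustration, not taken from any real model)
vocab = ["mat", "sofa", "roof", "keyboard", "moon"]
logits = np.array([3.0, 2.2, 1.5, 0.8, -1.0])

def sample_next_word(logits, temperature=1.0, rng=None):
    """Sample the next word: softmax turns scores into probabilities,
    temperature controls how much randomness is injected."""
    if rng is None:
        rng = np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    idx = rng.choice(len(vocab), p=probs)
    return vocab[idx]

rng = np.random.default_rng(1)
print([sample_next_word(logits, temperature=0.7, rng=rng) for _ in range(5)])
# Mostly the likeliest word, with occasional variation:
# controlled randomness, not understanding.
```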
In the face of worldwide admiration for LLMs, Bender and her colleagues take a step back to ask: How big is too big? They caution against over-relying on language models that can produce human-like text without any real comprehension of truth or ethics. They explore the possible risks associated with developing larger models and propose paths for mitigating those risks. Bender et al.’s recommendations include weighing environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything from the web,2 conducting pre-development exercises to evaluate how planned approaches fit into research goals, and encouraging research directions beyond merely increasing model size.
Traditional corpora are safe…
Carefully curated corpora like the British National Corpus, BNC2014, the Corpus of Contemporary American English, the Brown Corpus and its family members, the Lancaster-Oslo/Bergen Corpus (LOB), the International Corpus of English, etc.3 are holding their ground. For now, they remain unaffected by the challenges facing other types of linguistic data. Because these corpora are built from carefully chosen, verified samples of human-produced language, often focusing on specific time periods, genres, or varieties, they are the gold standard in linguistic research. Their creation involves meticulous quality control: texts are checked and cleaned manually to ensure they truly represent the linguistic phenomena they aim to capture. Historical corpora, like the Helsinki Corpus or COHA, have an added layer of security. Their texts come from before the rise of AI-generated content, meaning they are undeniably human-authored. What is more, these corpora typically have fixed timeframes for data collection, so they avoid including material created after a certain date, AI-generated or otherwise.
Another big strength is their thorough documentation. Researchers can see exactly how the corpora were built, what sources were used, and how the data was prepared4. This transparency makes it clear what kind of language data is being analyzed. These corpora also aim for balance and representativeness, with each source carefully processed to be as free from distributional skew as possible and ready for linguistic study. Such resources continue to stand strong amidst the shifting landscape of language data. I believe they are safe.
…but what about web-based corpora?
However, the contamination of the web with AI-generated texts could affect corpora based on web scraping, such as Sketch Engine’s enTenTen, frTenTen, deTenTen, or ruTenTen, as well as other web-based resources like frWaC or ukWaC, and various social-media corpora derived from platforms like X/Twitter and Reddit. As long as such corpora continue to be compiled from present-day scrapings, they are increasingly at risk of contamination by AI-generated content.
The primary concern is that, in a matter of months or years, the proportion of natural language data on the web could decrease to the point where it is overshadowed by AI-generated text. Such contamination will skew linguistic analyses, leading to misrepresentations of actual human language patterns and usage. At best, it will add an extra layer of complexity to the composition of language samples that linguists will have to disentangle. While linguists may be prepared and equipped for this challenge, it will undoubtedly lengthen the workflow.
This issue is not specific to corpus linguistics, of course. It also impacts journalism, fiction and non-fiction publishing, educational institutions, and other sectors that rely on authentic human-generated content. For instance, news organizations are increasingly struggling to differentiate between genuine user-generated content and AI-fabricated stories. Literary agents and publishers will soon find it challenging to identify original works amid a flood of AI-generated manuscripts. Similarly, academic publishers are facing difficulties in verifying the authenticity of submissions, while educational institutions contend with issues of academic integrity as AI-generated essays become more prevalent. Market research firms relying on web-scraped data for consumer insights may find their analyses skewed by AI-generated opinions and reviews. Ethical social media platforms (such as Mastodon) may face challenges in maintaining genuine user engagement metrics due to the rising prevalence of AI interactions. This issue is particularly evident on controversial platforms like X (formerly Twitter), where AI-driven bots produce fake interactions that artificially inflate engagement statistics.
As AI-generated content becomes more sophisticated and pervasive, these sectors, just like corpus linguistics, will need to develop new strategies and tools to authenticate and validate human-authored content. Otherwise, there will be no way of ensuring the integrity and reliability of their work. As an avid fiction reader, I cannot help but imagine that, in the near future, publishers might introduce some kind of authenticity seal on book covers to distinguish human-authored fiction from AI-generated fiction. Perhaps web-based corpora will feature such a seal.
The ouroboros menace
The impending contamination of linguistic datasets with AI-generated content poses significant methodological challenges. The most insidious danger in this scenario is the emergence of a linguistic ouroboros (Fig. 1): a self-consuming cycle where AI models are trained on data increasingly polluted by AI-generated content, only to produce more AI content that further contaminates the datasets.
[Fig. 1. The linguistic ouroboros: AI-generated text feeding back into the data used to train new AI models.]
This self-reinforcing loop could lead to a progressive distortion of what we consider natural language, as each generation of AI models learns from and amplifies the artifacts and biases of its predecessors. The result could be a gradual drift away from authentic human language patterns, creating a sort of linguistic “uncanny valley” where AI-generated text becomes simultaneously more prevalent and less representative of genuine human communication (Radivojevic et al. 2024).5
Moreover, this contamination is not limited to just skewing language models. It could also impact a wide range of NLP tasks, from sentiment analysis and topic modeling to machine translation and text summarization. As these models inadvertently incorporate AI-generated patterns, their outputs may become less aligned with human linguistic intuitions and communicative norms.
Other issues beyond corpus linguistics
The stakes are high because they extend beyond just preserving the validity of language studies. As AI takes on an increasingly significant role in content creation, we should also consider three additional concerns: the carbon footprint of generating content with AI, the traceability of text sources used in corpus composition, and copyright issues.
The environmental impact of training and running LLMs is huge, with some estimates suggesting that training a single large AI model can emit as much carbon as five cars over their lifetimes. Additionally, AI systems rely on vast and often uncredited data sources, which frequently involves copyright infringement and makes it increasingly difficult to verify the origin, authenticity, and potential biases of the text used to train these models. Even when no copyright applies, the unauthorized use of data can still be considered theft of intellectual property. As evidenced by ongoing legal battles and discussions around these topics, clearer regulations and ethical guidelines are needed to ensure that AI development respects intellectual property rights while preserving innovation. In other words, innovation is good, but it must be pursued responsibly and fairly.
Breaking the cycle
This blog post does not offer solutions but aligns with the general blueprint that breaking the cycle requires researchers and developers to continue devising robust methods for detecting, flagging, and filtering AI-generated content. We need to make sure that AI-free datasets are created for training and evaluation in NLP and that no AI-generated text contaminates natural language corpora in corpus linguistics. This task is becoming increasingly difficult as AI models become more sophisticated and AI-generated content becomes harder to detect.
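Purely as an illustration of what such a filtering step might look like in a corpus-compilation pipeline, here is a sketch in Python. The `looks_ai_generated` function is a hypothetical placeholder built on a crude keyword heuristic; reliable detection is an open research problem, and no off-the-shelf detector is assumed here.

```python
def looks_ai_generated(text: str) -> float:
    """Hypothetical placeholder: return a score between 0 and 1 estimating
    how likely a text is to be AI-generated. A real pipeline would plug in
    a trained classifier or watermark detector here."""
    boilerplate_markers = [
        "as an ai language model",
        "in conclusion, it is important to note",
    ]
    hits = sum(marker in text.lower() for marker in boilerplate_markers)
    return min(1.0, 0.5 * hits)  # crude heuristic, for illustration only

def filter_corpus(documents, threshold=0.5):
    """Keep documents below the AI-likelihood threshold and flag the rest
    for manual inspection rather than silently discarding them."""
    kept, flagged = [], []
    for doc in documents:
        (flagged if looks_ai_generated(doc) >= threshold else kept).append(doc)
    return kept, flagged

docs = [
    "Right, so I was down the pub and you'll never guess who walked in.",
    "As an AI language model, I can provide a balanced overview of this topic.",
]
kept, flagged = filter_corpus(docs)
print(len(kept), "kept,", len(flagged), "flagged for manual review")
```

The point of the sketch is not the heuristic itself, which is obviously too naive, but the workflow: score, flag, and review, so that documentation of what was excluded becomes part of the corpus metadata.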
Going further
To go further, I invite you to listen to this episode of Lingthusiasm, “Helping computers decode sentences – Interview with Emily M. Bender”, which was released just as I finished writing this post, and in which Lauren Gawne interviews Emily Bender. In this episode, Bender talks about the complexity of language processing and explains how much computers struggle to understand language in the same way humans do. She also mentions her involvement in the Mystery AI Hype Theater 3000 podcast and her research on the societal impacts of language technologies. As you may have guessed, she advocates a critical approach to computational linguistics and artificial intelligence.
References
Bender, E. M., & Friedman, B. (2018). Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6, 587–604.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.
Mori, M. (1970). The uncanny valley. Energy, 7(4), 33–35.
Radivojevic, K., Chou, M., Badillo-Urquiola, K., & Brenner, P. (2024). Human perception of LLM-generated text content in social media environments. arXiv preprint.
1. The only exception to this principle is corpora of elicited texts, but those are designed for specific purposes, such as studying particular linguistic phenomena that may be rare in natural discourse or analyzing language use in highly specialized domains or professions, to give just two examples. In these cases, the controlled nature of elicitation allows linguists to focus on specific aspects of a given language. By doing so, linguists agree to sacrifice some degree of naturalness in exchange for the ability to investigate targeted language phenomena. [↩]
2. Data statements include details such as curation rationale and data sources (Bender & Friedman 2018). They make it possible to understand how experimental results might generalize and what biases might be reflected in systems built on a given dataset. Data statements also address harms caused by bias in datasets. While initially developed for language data, data statements could be adapted for a wide range of data types, including corpora, with adjustments to account for their unique characteristics. Practices involving corpora should likewise support better transparency in the compilation and documentation of natural language data. [↩]
3. Because I am a professor of English linguistics, I have chosen only corpora of English as examples. Of course, corpora are not limited to English. [↩]
4. By way of illustration, this link takes you to a spreadsheet that explains the composition of the COCA. [↩]
5. The term “uncanny valley” was originally coined by roboticist Masahiro Mori in 1970 to describe the unsettling feeling people experience when encountering robots or digital representations that closely resemble humans but are not quite convincing (Mori 1970). [↩]