AI and Language

How AI is changing language

The vocabulary AI has spawned, the words it has changed, and the language we may still need to invent.

New technologies always produce new words. AI is doing it faster than any technology before it, weeks from coinage to mainstream use, months from mainstream use to dictionary. Some are old words with hijacked meanings. Some are entirely new. And some are gaps: concepts that exist inside AI systems that we haven't yet found words for.

In dictionaries Formally added
Emerging Widely used, not yet formalised
Still needed The concept exists, the word doesn't
Old words with new meanings
Hallucinate In dictionaries
A word that existed for centuries to describe human perception of things that aren't there. Added to Merriam-Webster in 2023 with an AI definition: when a model produces confident, plausible-sounding information that is simply wrong. One of the fastest-adopted new word senses in lexicographic history.
Jailbreak In dictionaries
Originally: escaping prison. Then: bypassing software restrictions on a device. Now: manipulating an AI model into ignoring its safety guidelines through prompting, roleplay, or instructional tricks. Each meaning is a metaphorical extension of the last.
Alignment Emerging
In engineering and management, things being in agreement. In AI, it has a precise and loaded meaning: the problem of making a system pursue goals humans actually want, rather than a literal or distorted version of them. The field of AI safety is substantially the field of alignment research.
Token Emerging
Existed in economics, computing, and grammar. In LLMs, the basic unit of text a model processes, roughly three to four characters. Everything you pay for is measured in tokens. The word has been repurposed so completely that "tokens" now defaults to the AI meaning in most technology conversations.
Slop In dictionaries
An old word for waste or low-quality food. In 2024 it acquired a specific use: AI-generated content that is generic, low quality, or produced purely for volume, fake articles, synthetic social media posts, fabricated images. "AI slop" entered mainstream use and dictionary consideration within months of the term appearing.
Agent Emerging
In philosophy, something capable of acting in the world. In AI, it now means something specific: an LLM connected to tools that can take real actions, browsing, coding, sending messages. The word has migrated from the abstract to the operational.
Grounding Emerging
In linguistics, how speakers establish shared reference. In AI, it means connecting a model's outputs to verifiable real-world sources, reducing hallucination by anchoring responses in evidence rather than statistical prediction. The linguistic origin is apt: it's the same problem.
Words coined for AI
Vibe Coding In dictionaries
Coined by Andrej Karpathy in February 2025. Writing software by describing what you want in plain language and letting AI generate the code, iterating by feel. It spread from a single social media post to mainstream usage within weeks, one of the fastest entries into the tech lexicon on record.
Deepfake In dictionaries
Coined in 2017 from "deep learning" and "fake." A synthetic image, video, or audio in which someone's likeness has been replaced or fabricated using AI. One of the first AI-era words to enter the dictionary. It arrived before the technology was widely deployed, unusual, as most words name things that already exist.
Prompt Injection Emerging
A cyberattack unique to LLMs. Malicious instructions hidden in content an AI reads, a webpage, document, or email, that hijack the model's behaviour. Borrowed structurally from SQL injection. The concept is entirely new: it exists because AI can now read and act on text, which means text can be weaponised against it.
Model Collapse Emerging
What happens when AI models are trained on data generated by other AI models. Each generation degrades, rare and unusual outputs disappear as the model regresses toward the statistical mean. Coined by researchers in 2023. The concern: if the internet fills with AI content and future models train on it, they may progressively lose the diversity that made them useful.
Stochastic Parrot Emerging
Coined by researchers Emily Bender and Timnit Gebru in a 2021 paper. An LLM described as a system that generates statistically plausible sequences of text without understanding any of it, a parrot repeating patterns without comprehension. Contested: defenders argue something more than pattern matching is happening. The term entered the debate and stayed.
Lexical Acceleration Emerging
The phenomenon of AI-era vocabulary spreading and formalising faster than any previous technology has produced. "Spam" took years. "Selfie" took a decade. "Hallucinate" (AI meaning) took months. "Vibe coding" took weeks. The acceleration is itself a named observation among linguists tracking how AI is compressing the normal timeline of language change.
Words we still need
AI Neologism Still needed
The specific type of new word required to describe AI-native behaviours, neither a borrowed human term nor a technical acronym, but a precise coinage for a concept that exists inside AI systems and has no human equivalent. Google DeepMind researcher Been Kim coined this usage in a 2025 position paper arguing that without such words, we cannot understand, control, or reason about AI. AlphaZero's chess strategies had no names until researchers invented them. Once named, humans could learn from the AI. The vocabulary created the understanding.
"We spend our lives turning 'the weird thing this person does' into 'the thing this person does.' That's what we need to do with machines.". Been Kim, DeepMind, 2025
Machine Confidence Still needed
The thing a model has that resembles certainty but isn't. When an LLM produces a token, it has a probability, a statistical weight. This is not the same as human certainty or doubt. A model can be computationally certain about something factually wrong. It can hedge linguistically while having high internal probability. We currently use words like "confident" and "sure" that carry human connotations of self-knowledge. We need a word for what models actually have.
Synthetic Reasoning Still needed
What models do when they produce chain-of-thought outputs, intermediate steps that resemble logical deduction. Whether this constitutes reasoning in any meaningful sense, or is a sophisticated pattern match that looks like reasoning, is genuinely unresolved. We call it "reasoning" because we have no better word. The word imports assumptions about cognition that may not apply. This is the kind of gap Been Kim is pointing at: the concept exists, the vocabulary doesn't fit.
Emergent Dialect Still needed
In experiments where AI agents were allowed to communicate with each other without human-readable constraints, they developed compressed, efficient codes optimised for machine-to-machine transfer. Whether this becomes significant at scale, and whether it would constitute a form of language, has no settled vocabulary. The concept needs naming before it can be governed.
Sapir-Whorf (Applied to AI) Still needed
The Sapir-Whorf hypothesis holds that language shapes what we can think. Applied to AI: if we lack words for what AI systems are doing, we cannot reason about them clearly, regulate them precisely, or control them deliberately. This isn't a metaphor, it's the practical argument Been Kim is making. The vocabulary gap is a governance gap. Every previous technology produced the language needed to manage it. For AI, that vocabulary is arriving now, and arriving fast, but parts of it haven't been written yet.