The words
Delve "Let's delve into this topic..."
The most documented AI tell. ChatGPT uses "delve" at a rate far higher than human writers do, and its appearance in academic and medical papers spiked sharply after ChatGPT's public release in 2022. Why? The honest answer is that nobody knows for certain. The leading theory, reported by The Guardian in April 2024, is that OpenAI's RLHF process, in which human contractors rate model outputs to shape its behaviour, was partly carried out by workers in Nigeria, where "delve" is more common in formal business English than in British or American usage. The word may have been reinforced as a marker of thoroughness. OpenAI has not confirmed this. It remains a plausible, widely circulated hypothesis, not a proven cause.
"Delve" is now so associated with AI that writers who used it legitimately have started avoiding it to avoid being mistaken for a machine.
Tapestry "a rich tapestry of..."
Almost always appears with "rich" in front of it. A signal phrase that models reach for when summarising something complex or diverse. The word itself is fine. The pattern, using it to wrap up a description that doesn't have a cleaner ending, is a statistical habit, not a stylistic choice.
Nuanced "a nuanced understanding of..."
One of the most overused words in AI output. Often appears where no nuance is then demonstrated. Models have learned that calling something nuanced signals sophistication. The irony is that genuine nuance rarely needs announcing.
Leverage "leverage this capability to..."
Used as a verb meaning "use." Models absorbed it from business and tech writing where it was already overused by humans. AI has concentrated and amplified it. If a sentence uses "leverage" as a verb and you can replace it with "use" without losing anything, it's filler.
Navigate "navigate the complexities of..."
A metaphor that models reach for when describing any difficult situation. "Navigate the challenges." "Navigate this transition." "Navigate an uncertain landscape." It sounds purposeful. It says very little. Usually a sign that the underlying thought hasn't been completed.
Landscape "the evolving landscape of AI..."
Almost never means an actual landscape. In AI writing it means "the general state of things." Almost always paired with "evolving," "competitive," or "complex." The word does no work; it's a filler noun that signals breadth without providing it.
Underscore "This underscores the importance of..."
Used as a verb meaning "emphasise" or "highlight." Common in formal writing, which is probably where models learned it. Appears most often at the end of a paragraph as a way of landing on a conclusion without fully earning it. "This underscores the need for careful consideration" is almost always a place where a sharper sentence should be.
Robust "a robust framework for..."
Originally an engineering term meaning structurally sound. In AI writing it means "good" or "thorough." Appears in contexts where no robustness has been demonstrated or defined. Like "nuanced," it signals quality rather than providing it.
Holistic "a holistic approach to..."
Means "considering the whole." In practice, AI uses it to make something sound more comprehensive than it is. If the response then addresses only one or two aspects of a topic, "holistic" was doing marketing, not description.
Testament "a testament to human ingenuity"
A phrase models reach for when making something sound meaningful or significant. Usually a sign that the actual significance hasn't been articulated. "It is a testament to" can almost always be deleted and replaced with a sentence that actually says what makes the thing significant.
Foster "to foster collaboration and innovation"
A soft, aspirational verb. Usually appears in output about organisations, communities, or environments. "Foster a culture of..." is a phrase that has passed through so many strategy documents and AI outputs that it now carries almost no information.
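The word list above lends itself to a rough frequency check. This is an illustrative sketch, not a real detector: the function name, the crude suffix-stripping stemming, and the idea of flagging raw counts are all assumptions of this example, and a high count proves nothing on its own.

```python
import re
from collections import Counter

# Tell words from the list above (illustrative, not exhaustive).
TELL_WORDS = [
    "delve", "tapestry", "nuanced", "leverage", "navigate",
    "landscape", "underscore", "robust", "holistic", "testament", "foster",
]

def tell_word_counts(text: str) -> Counter:
    """Count case-insensitive whole-word hits for each tell word,
    catching simple inflections like 'delves' or 'delving'."""
    counts = Counter()
    lowered = text.lower()
    for word in TELL_WORDS:
        # Crude stemming: drop a trailing 'e' so 'delve' also matches 'delving'.
        stem = word[:-1] if word.endswith("e") else word
        counts[word] = len(re.findall(rf"\b{stem}\w*\b", lowered))
    return counts

sample = "Let's delve into the rich tapestry of this evolving landscape."
print(tell_word_counts(sample).most_common(3))
```

Counts like these only become meaningful against a baseline of comparable human writing; the tells in this article are tells because of their rate, not their mere presence.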
The phrases
Certainly! / Absolutely! / Of course!
The eager opener. Models learned from human feedback that beginning with enthusiastic agreement felt helpful. It's the verbal equivalent of a customer service smile. Real writers don't start responses with "Certainly!" People noticed. Most labs have tuned it down. It still leaks through.
It's worth noting that...
A hedge phrase that signals an upcoming caveat or qualification. The thing being noted usually is worth noting, but the phrase itself adds nothing. If something is worth noting, note it. The introduction is filler.
It's important to remember that...
Same structure as "it's worth noting." The phrase is doing work that the content should do. If something is important to remember, the writing should make it memorable, not announce that it should be remembered.
Feel free to...
"Feel free to ask if you have more questions." Almost always the last line. A sign-off that implies generosity but is cost-free for the model. Every AI outputs it. Nobody says it in real conversation. It's a trained politeness gesture that has become wallpaper.
In conclusion... / In summary...
Models learned from formal essays that conclusions exist. They reproduce the structure even when no conclusion is needed, when the response was already short, or when a summary just repeats what was said. The tell is a summary that adds no new synthesis, only repetition with a label on it.
Additionally... / Furthermore... / Moreover...
Connective tissue words from formal writing that models overuse. Correct in academic prose. Stilted in conversation. The giveaway is when they appear in a list that could simply use commas, or when three points are each introduced with a different one of these words in sequence.
I'd be happy to help with that.
A warmth phrase that training reinforced because human raters liked it. It adds nothing to the response. The help is either there or it isn't. Saying you'd be happy to provide it before providing it is a performance of helpfulness, not helpfulness itself.
The punctuation
The Em Dash, used everywhere
The most studied AI punctuation tell. Models use em dashes at a rate that stands out clearly against human writing. Why is genuinely unclear; the Nigerian RLHF contractor theory has been applied here too, but the em dash habit predates any documented connection. What is documented: models use them constantly, and the habit appears to be deep in the weights rather than a surface behaviour. OpenAI forum threads record users' unsuccessful attempts to instruct models to stop; the habit resists prompting. Real writers who liked em dashes have started avoiding them to avoid being mistaken for AI.
Bullet Point Everything
Models default to bullet points for almost any multi-part answer. Human raters rewarded structured, scannable output. Models learned that bullets signal thoroughness. The result: information that would flow naturally as prose gets broken into fragments that are harder to read and lose the connections between ideas. A bulleted list where prose was needed is a structural tell.
Bold Headers in Conversational Replies
A response to a simple question that arrives formatted like a Wikipedia article, bold section headers, sub-bullets, a summary at the end. Models learned that structured output felt professional. Applied to conversational exchanges it feels clinical, and signals that the model is performing thoroughness rather than actually thinking about what format serves the answer.
The patterns
Regression to the Mean
The deepest structural tell. Because models predict statistically likely text, they drift toward the most generic version of any statement. The specific, unusual, or genuinely surprising gets smoothed out. A person becomes "a visionary leader." A product becomes "a comprehensive solution." The writing is polished and says less than a rougher, more specific sentence would. If everything sounds like marketing copy, it probably came from a model.
The Balanced Take
On controversial topics, models were trained to present multiple perspectives. "On one hand..., on the other hand..." The balance is technically correct and often tells you nothing. A model trained to avoid giving offence produces output that avoids commitment. Real writers take positions. AI hedges.
Excessive Caveats
Statements followed by qualifications that partially or fully retract them. "X is generally true, though it's important to note that there may be exceptions depending on context, and individual results may vary." The caveats are individually defensible. Together they erode any usefulness the original statement had. A trained response to liability, not a thinking response to complexity.
The Unnecessary Summary
A response that ends by summarising what it just said. "In summary, we've explored X, Y, and Z, and seen how they relate to each other." If the response was short enough to read, the summary was not needed. If it was long enough to need a summary, the summary rarely adds synthesis. It's a learned essay structure applied indiscriminately.
Positive Inflation
The tendency to describe things as more significant, more impressive, or more important than the evidence supports. Famous people become "revolutionary." Products become "groundbreaking." Ordinary developments become "paradigm shifts." Models learned from internet data where positive, important-sounding language is the norm. Specific praise got smoothed into generic celebration.