Language That Kills Voice
The patterns we’ve covered so far are relatively easy to spot — manufactured enthusiasm and template structures jump out once you know what to look for. But there’s a subtler category of problems that leave AI writing technically correct but completely soulless. These are the language patterns that strip personality from text, leaving something that reads like it was written by a very polite, very careful committee.
The Hedging Problem
AI hedges everything. It can’t help itself. “It’s worth noting that” introduces points that should be direct. “You might find that” softens claims unnecessarily. “In some cases” qualifies statements that don’t need qualification. “Generally speaking” appears before specific assertions, undermining their specificity.
The model doesn’t want to make incorrect claims, so it softens everything for safety. Academic training data and enterprise content teach it that qualification equals rigor. The result reads like someone who’s afraid to have an opinion — or worse, someone who doesn’t trust you to understand that context matters without being explicitly told.
❌ “It’s worth noting that, in many cases, you might find that AI generally works better when you provide specific context.”
✅ “AI works better with specific context.”
The first version takes twenty words to say what the second says in six. The hedging doesn’t add nuance — it adds noise.
The fix: make direct claims when you can support them. If you’re uncertain, say so — “I don’t know” is more honest than dressing up uncertainty in academic hedging.
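If you edit a lot of AI output, you can surface the worst offenders before you even start reading. Here’s a minimal sketch in Python, assuming your draft is plain text; the hedge list is just the four phrases named above, so treat it as a starting point rather than a linter:

```python
# Hedge phrases called out above. Add your own repeat offenders.
HEDGES = [
    "it's worth noting that",
    "you might find that",
    "in some cases",
    "generally speaking",
]

def flag_hedges(text: str) -> list[str]:
    """Return the hedge phrases that appear in the draft."""
    lowered = text.lower()
    return [hedge for hedge in HEDGES if hedge in lowered]

draft = "It's worth noting that AI generally works better with specific context."
print(flag_hedges(draft))
# -> ["it's worth noting that"]
```

A hit doesn’t mean the sentence is wrong; it means the claim deserves a second look before the qualifier stays.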
The Formality Problem
AI defaults to clinical language when plain language works better. “Individuals with diabetes” instead of “people with diabetes.” “Utilize” instead of “use.” “In order to” instead of “to.”
This happens because training data includes massive amounts of academic papers and clinical research. AI learns that formality correlates with authority. What it doesn’t learn is that formality also creates distance — it makes writing feel like it’s being delivered from behind a podium instead of across a table.
❌ “Individuals seeking to optimize glucose regulation should implement systematic dietary interventions.”
✅ “People with diabetes should change what they eat to manage their blood sugar.”
If you wouldn’t say it in conversation, don’t write it.
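The same idea works as a rough find-and-replace for the formal-to-plain swaps above. A sketch, again assuming a plain-text draft; it only knows the three substitutions from this section, and it still needs a human pass, because sometimes the formal word really is the right one:

```python
import re

# Plain-language swaps from this section. Whole words only, case-insensitive.
SWAPS = {
    r"\bindividuals with\b": "people with",
    r"\butilize\b": "use",
    r"\bin order to\b": "to",
}

def plain_language(text: str) -> str:
    """Apply the formal-to-plain swaps. Review the result by hand."""
    for pattern, replacement in SWAPS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(plain_language("Individuals with diabetes should utilize this in order to plan meals."))
# -> "people with diabetes should use this to plan meals."
# (Note the lost capital letter: exactly why the human pass still matters.)
```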
The Paired Adjective Problem
AI is obsessed with paired adjectives: “Unique and intense.” “Comprehensive and thorough.” “Simple and straightforward.” “Complex and nuanced.”
When AI thinks something needs description, it piles on adjectives under the assumption that more equals better. It doesn’t. Paired adjectives are almost always redundant — if something is thorough, it’s already comprehensive. If something is unique, adding “intense” doesn’t clarify anything. The second adjective rarely adds information; it just adds words.
❌ “The approach is comprehensive and thorough.”
✅ “It’s a thorough approach.”
Pick one — usually the stronger one.
False Specificity
False specificity is particularly insidious because it sounds authoritative. “Studies show…” with no citation. “Research indicates…” with no source. “Experts agree…” without naming which experts. “X% of people…” with made-up statistics.
When AI lacks actual data, it fakes specificity because that’s what appears in authoritative writing. The fix: provide real citations or acknowledge limitations. “In my experience” is more honest than “studies show” when you don’t have studies.
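You can also flag the claims that are begging for a source. A rough sketch along the same lines; the “citation” check here is deliberately crude (a URL or a parenthetical year), so a hit is a prompt to go find the source, not proof that one is missing:

```python
import re

# Authority phrases from this section that should come with a source.
CLAIMS = ["studies show", "research indicates", "experts agree"]

# Crude stand-in for "there's a citation nearby": a URL or a (year).
CITATION = re.compile(r"https?://|\(\d{4}\)")

def unsourced_claims(text: str) -> list[str]:
    """Return sentences that lean on authority but show no visible source."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        if any(claim in lowered for claim in CLAIMS) and not CITATION.search(sentence):
            flagged.append(sentence.strip())
    return flagged

print(unsourced_claims("Studies show hedging weakens prose. Smith (2021) found the opposite."))
# -> ['Studies show hedging weakens prose.']
```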
The Common Thread
These language patterns — the hedging, the formality, the adjective piling, the false specificity — all stem from the same misunderstanding: AI thinks “professional” writing means using more words, more qualification, and more formal vocabulary.
What professional writing actually means is clarity, precision, and respect for the reader’s time and intelligence.
When you catch yourself hedging every claim, using clinical language for common concepts, or piling on adjectives, you’re letting AI push you toward a version of “professional” that’s actually just distant and wordy. Your voice is in the directness, the plain language, and the confidence to make claims you can support.