Module 3: Teaching AI Your Voice
The difference between AI-assisted writing that readers can detect and AI-assisted writing that reads like you is a voice profile.
Without one, every AI draft sounds the same: hedging language, passive constructions, a kind of enthusiastic neutrality that could have been written by anyone. Readers can’t always articulate why it feels off, but they feel it. Open rates drop. Replies stop.
With a good voice profile, the draft agent is constrained to patterns that actually sound like you — your sentence rhythms, your preferred phrases, your characteristic way of introducing a problem or landing a point. The agent still generates, but it generates within your range rather than some average of all the text it trained on.
VOICE.md
The voice profile lives in a file called VOICE.md. This is not a style guide in the marketing sense — it’s a reference document that Claude Code agents read before drafting. It needs to be specific enough that an agent following it produces noticeably different output from an agent working without it.
What goes in VOICE.md:
Voice characteristics. Not abstract adjectives (“conversational,” “authentic”) but observable patterns. For mine: I front-load the point, I use short sentences for emphasis at natural breaks, I write “Here’s the thing” before a reality check, I avoid passive constructions, I acknowledge limitations explicitly rather than hedging with qualifiers.
Banned phrases. Words and constructions that are common in AI output but not in your writing. My list includes “delve,” “navigate,” “crucial,” “leverage,” “dive in,” and “it’s worth noting.” Any sentence with those in a draft is automatically suspicious.
Positive examples. This is the part most people skip and it’s the most important. Two or three paragraphs from your actual published writing that represent your voice at its best. Rules tell the agent what to avoid; examples show it what to hit. The agent can pattern-match against published prose in a way it cannot against a list of adjectives.
Anti-examples. Paragraphs that are plausible but wrong — the kinds of sentences an AI produces that sound vaguely like you but aren’t. These help the agent identify the gap between generic AI output and your actual voice.
Reading level target. For my newsletters, this is grade 8 to 10. Not because my readers aren’t intelligent — they are — but because accessible writing at that level is faster to read and doesn’t require effort to parse. Complexity should live in the ideas, not the sentences.
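Putting those five sections together, a minimal VOICE.md might be laid out like this. The entries below are illustrative placeholders, not the author's actual file:

```markdown
# VOICE.md

## Voice characteristics
- Front-load the point: the first sentence carries the claim.
- Short sentences for emphasis at natural breaks.
- No passive constructions; acknowledge limitations directly instead of hedging.

## Banned phrases
delve, navigate, crucial, leverage, dive in, it's worth noting

## Positive examples
> [Two or three verbatim paragraphs from your published writing.]

## Anti-examples
> [A plausible-but-wrong paragraph: sounds vaguely like you, isn't you.]

## Reading level
Grade 8 to 10.
```

The section names are arbitrary; what matters is that every section is concrete enough for an agent to check a draft against it.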
Building Your Voice Profile
You cannot write a good VOICE.md from memory. You have to build it from your actual published writing.
The process I used:
Step 1: Gather a sample set. Pull ten to fifteen pieces of your best work — the issues that got the most replies, the posts you still think are good when you reread them. If you’re starting a new newsletter and don’t have much yet, use anything you’ve written: emails, Slack messages, LinkedIn posts. Voice is consistent across formats.
Step 2: Run the voice-analyzer. This is a Claude Code skill that reads a set of files and extracts voice patterns: characteristic sentence lengths, opening constructions, transition phrases, vocabulary preferences, what you do when making a strong point versus hedging. It doesn’t tell you what your voice should be — it tells you what your voice actually is.
# In Claude Code
/voice-analyzer --files "path/to/sample/*.md"
The output is a draft VOICE.md you then edit. The analyzer gets most of the patterns right but misses nuance — you’ll remove some things it includes (patterns that are in your writing but aren’t intentional or good) and add things it misses (distinctive choices you make that don’t appear in the sample frequently enough to register).
Step 3: Refine with examples. Pull two or three paragraphs from your sample set that represent your voice best and add them directly to the VOICE.md as positive examples. These are not paraphrased or summarised — they’re verbatim. The agent reads them and uses them as anchors.
Step 4: Add the banned phrases. Review your samples for what’s notably absent — phrases you’ve never used, constructions you actively dislike. Add them to the banned list. Start small; you can always add more.
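A banned-phrase list is easy to enforce mechanically. Here is a minimal sketch (not part of the actual pipeline) that scans a draft for listed phrases and reports each hit; the list below is illustrative, yours comes from your own VOICE.md:

```python
import re

# Illustrative banned list; replace with the list from your VOICE.md.
BANNED = ["delve", "navigate", "crucial", "leverage", "dive in", "it's worth noting"]

def find_banned_phrases(draft: str, banned=BANNED) -> list[tuple[str, int]]:
    """Return (phrase, count) pairs for every banned phrase found in the draft."""
    hits = []
    for phrase in banned:
        # Word-boundary, case-insensitive match so "navigate" doesn't hit "unnavigable".
        pattern = r"\b" + re.escape(phrase) + r"\b"
        count = len(re.findall(pattern, draft, flags=re.IGNORECASE))
        if count:
            hits.append((phrase, count))
    return hits
```

Any non-empty result means the draft needs another pass before it goes anywhere near review.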
Step 5: Test it. Give the newsletter-writer agent a brief and your VOICE.md and ask it to draft a section. Read the output. Does it sound like you? If not, where specifically does it diverge? Adjust the relevant section of VOICE.md and test again. Two or three rounds is usually enough to get a profile that produces usable drafts.
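One objective check worth running during testing is reading level. The standard Flesch-Kincaid grade formula can be sketched in a few lines; the syllable counter here is a crude vowel-group heuristic, so treat the number as approximate rather than authoritative:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, drop a trailing silent 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```

If a test draft comes back well above grade 10, that usually means long sentences rather than hard vocabulary, and the voice characteristics section is the place to fix it.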
The draft-reviewer as a Quality Gate
VOICE.md constrains what the newsletter-writer agent produces. The draft-reviewer is the gate that catches what slips through.
The draft-reviewer agent runs after every draft. It’s a different agent from the writer — the same context that produced the draft will rubber-stamp its own problems, so the review needs to be a separate step. The reviewer reads VOICE.md, reads the draft, and applies fixes directly to the file. It doesn’t produce a list of suggestions for you to implement. It edits.
What it fixes:
Tier 1 slop. Phrases that are immediate credibility killers in AI-assisted writing. These get replaced without question.
Staccato fragments. A run of three or more short declarative sentences is a pattern AI produces constantly and humans rarely do. The reviewer identifies these clusters and reconstructs them.
Weak openings. The first sentence of a newsletter is the most important one. If it’s hedging or generic, readers stop. The reviewer flags openings that fail this test and rewrites them.
Voice mismatches. Paragraphs that pass the slop test but don’t sound like you. The reviewer compares against the voice examples in VOICE.md and flags the specific gap.
The reviewer reports what it changed, with before/after for each edit. You’re not flying blind — you can see exactly what was adjusted and disagree if the edit was wrong.
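Of those checks, the staccato one is mechanical enough to sketch as a first pass. This is a rough heuristic, not the reviewer's actual implementation: split on sentence-ending punctuation and flag any run of three or more short sentences.

```python
import re

def staccato_runs(text: str, max_words: int = 8, run_length: int = 3) -> list[list[str]]:
    """Find runs of run_length+ consecutive short sentences (crude punctuation split)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    runs, current = [], []
    for sentence in sentences:
        if len(sentence.split()) <= max_words:
            current.append(sentence)
        else:
            if len(current) >= run_length:
                runs.append(current)
            current = []
    if len(current) >= run_length:  # a run can end the text
        runs.append(current)
    return runs
```

The thresholds are judgment calls: eight words and runs of three match the pattern described above, but tune them against your own samples.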
The voice-editor Skill
There’s a lighter-weight option for voice checking: the voice-editor skill. Where the draft-reviewer is a full quality agent that runs the entire review pipeline, the voice-editor is a focused skill for voice alignment specifically. It’s faster and cheaper when you’ve already reviewed for slop and craft and just want a final voice pass.
Both the draft-reviewer and voice-editor read VOICE.md. The difference is scope: voice-editor does voice checks, draft-reviewer does voice checks plus everything else.
For the newsletter pipeline, I use draft-reviewer as the post-draft gate and voice-editor for spot checks when I’ve edited the draft manually and want to make sure my edits haven’t introduced inconsistencies.
What Voice Profiles Don’t Fix
A voice profile doesn’t compensate for weak research or a poor structure. If the brief is thin and the outline is vague, the draft will be vague regardless of how accurate the voice profile is.
Voice is the top layer. It makes a solid draft sound like you. It cannot make a weak draft good.
This is worth repeating because it’s where the automation fails when people implement it too quickly: they skip the research and outline stages and try to have the voice profile carry the whole output. It doesn’t work. The pipeline needs each stage to do its job.
Maintaining the Profile
Your voice evolves. The VOICE.md you build today from a sample of last year’s writing will be slightly wrong in a year.
The fix is periodic maintenance rather than constant updates. Every three to four months, run the voice-analyzer again on a new sample of your most recent writing. Compare the output against your current VOICE.md. Update the banned phrases list as you notice new patterns you dislike. Replace the voice examples with newer work if your older examples no longer feel representative.
It’s a twenty-minute maintenance task, not a rebuild.
Module 4 covers the distribution end: Kit.com, the Kit CLI, and how to manage everything about your newsletter without touching the web dashboard.
Check Your Understanding
Answer all questions correctly to complete this module.
1. What is more important in VOICE.md than abstract adjectives like “conversational”?
2. Why are positive examples the most important part of VOICE.md?
3. How often should you update VOICE.md?