Module 5: The Full Workflow
Everything in this course so far has been individual components. Module 5 shows how they work together.
A typical Signal Over Noise issue, from “I know what I want to write about” to “it’s scheduled in Kit.com,” now takes about fifty minutes. Research, draft, review, queue. Most of that time is reading outputs and making editorial decisions, not producing anything.
Here’s the actual workflow.
Step 1: Pick the Topic (5 minutes)
Open the topic backlog in Obsidian. It’s a single markdown file with a running list of ideas organised by rough priority — things I’ve been thinking about, news hooks with a short window, evergreen topics that will still be relevant in three months.
Pick one. Confirm it hasn’t been covered recently. If the pipeline check surfaces a related issue from three months ago, decide whether the new angle is different enough to warrant another issue or whether I should reference the old one and go deeper.
This step doesn’t automate well because the judgement involved is genuinely mine. What does the audience need this week? What am I most curious about right now? What topic connects to something in the news that will increase relevance? These are editorial decisions, and the topic selection is the most consequential one in the whole process.
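The one mechanical part of this step — the "has this been covered recently?" check — doesn't need an agent. A minimal sketch, assuming published issues live as markdown files in an `issues/` directory (the directory layout and keyword threshold are my assumptions, not part of the course):

```python
from pathlib import Path

def related_issues(topic: str, issue_dir: str = "issues") -> list[str]:
    """Return filenames of past issues that mention any keyword from the topic."""
    # Skip short words like "AI" or "the"; keep the substantive keywords.
    keywords = [w.lower() for w in topic.split() if len(w) > 3]
    hits = []
    for path in sorted(Path(issue_dir).glob("*.md")):
        text = path.read_text(encoding="utf-8").lower()
        if any(k in text for k in keywords):
            hits.append(path.name)
    return hits
```

A hit doesn't kill the topic — it just forces the decision the step describes: is the new angle different enough, or should the old issue be referenced instead?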
Step 2: Research (15 minutes waiting, 5 reviewing)
Kick off the newsletter-researcher agent with the topic:
/newsletter-researcher "topic: AI pipeline architecture"
The agent goes off and builds a research brief. While it runs, I do something else. When it comes back — usually ten to fifteen minutes for a thorough brief — I spend five minutes reviewing what it found.
The review is not passive. I check:
- Are the sources credible and current?
- Did it find the angle I was thinking about, or a different one that’s actually better?
- What’s missing that I know matters?
I often add two or three points manually — things I know from experience that the research didn’t surface, or a counterargument the brief glossed over. The brief is a starting point, not a final document.
Step 3: Outline (10 minutes)
With the brief in front of me, I sketch the structure. This is the most manual step and I don’t try to automate it.
Typical structure for a Signal Over Noise issue:
- Opening hook — a specific story, observation, or problem that pulls readers in
- The main argument or framework — the thing the issue is actually about
- Practical application — what to do with it, specifically
- A wrinkle — the complication, the exception, the thing most people get wrong
- Close — the takeaway and a forward-looking line
Not every issue follows that structure. Analysis pieces have a different shape than how-to pieces. But having a skeleton before the draft agent starts produces much better output than letting the agent decide the structure itself.
Step 4: Draft (15 minutes waiting, 10 reviewing)
Hand the brief and outline to the newsletter-writer agent:
/newsletter-writer brief="path/to/brief.md" outline="[paste outline]"
The agent runs through the multi-draft process described in Module 2 — the ugly first draft, the trimming pass, the empathy pass, the voice and style pass — before producing the final draft. With the Opus model, this takes ten to fifteen minutes.
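The multi-pass shape can be sketched as plain function composition — each pass takes the current draft and returns the next. The pass names come from Module 2; the functions themselves are inert placeholders for what are really agent invocations:

```python
def run_draft_pipeline(brief: str, outline: str, passes) -> str:
    """Run the drafting passes in order over an initial rough draft."""
    draft = f"{outline}\n\n{brief}"  # stand-in for the ugly first draft
    for name, fn in passes:
        draft = fn(draft)
    return draft

# Placeholder passes; in the real workflow each is a separate agent call.
PASSES = [
    ("trim", lambda d: d.strip()),   # trimming pass
    ("empathy", lambda d: d),        # empathy pass
    ("voice", lambda d: d),          # voice and style pass
]
```

The point of the shape is that each pass sees only the previous pass's output, which is why a weak early pass degrades everything downstream.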
When it comes back, I read the whole thing. Not a skim — the full read. I’m looking for:
- Does the opening work?
- Is the main argument actually made, or just implied?
- Are there sections that feel like filler?
- Are there specific details from my experience that the agent couldn’t know but would make the issue better?
I typically make five to ten edits at this point. Small ones — a sentence rewritten, a section reordered, a specific detail added that only I could add. The draft is usually about 80% of the way there. The edits get it to 95%.
Step 5: Review (5 minutes)
Send the edited draft to the draft-reviewer agent:
/draft-reviewer path/to/draft.md
The reviewer checks for the things I might have reintroduced during my edits: slop phrases, voice drift, structural issues. It applies fixes directly to the file and reports what changed.
This step takes about five minutes and I’ve stopped second-guessing it. The reviewer catches things I miss — usually a phrase that crept in during editing that sounds vaguely AI-generated, or a sentence that works grammatically but reads like a corporate memo. The before/after report means I can override specific edits if they were wrong, but that happens rarely.
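A reduced version of what the reviewer does mechanically — scanning for banned phrases and reporting where they appear — fits in a few lines. The phrase list here is purely illustrative; the real list lives in the voice profile:

```python
# Illustrative slop phrases; the actual banned list comes from VOICE.md.
BANNED = ["delve into", "in today's fast-paced world", "game-changer", "leverage"]

def find_slop(draft: str, banned=BANNED) -> list[tuple[int, str]]:
    """Return (line_number, phrase) pairs for every banned phrase found."""
    hits = []
    for i, line in enumerate(draft.splitlines(), start=1):
        lowered = line.lower()
        for phrase in banned:
            if phrase in lowered:
                hits.append((i, phrase))
    return hits
```

The real reviewer goes further — it rewrites rather than just flags — but the flagging half is this simple, which is why it's worth running on every draft.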
Step 6: Final Read and Schedule (10 minutes)
One more read. This is the “would I be proud to send this?” check.
If yes: open Kit.com, paste in the HTML, set the subject line and preview text, and schedule. If no: identify the specific problem and fix it.
The subject line usually takes five minutes on its own. I test two or three options — the one I started with, a version that leads with the benefit rather than the topic, a version that uses a question. Kit.com’s broadcast editor doesn’t offer split testing for subject lines in the tier I’m on, but I still write options and pick the one I’d most likely open if it arrived in my inbox.
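For anyone who wants to script the paste-and-schedule mechanics rather than use the editor, here is a minimal sketch. It assumes Kit's v3 `broadcasts` endpoint and these field names — verify both against the current API docs before relying on it:

```python
import json
import urllib.request

# Assumed endpoint; confirm against Kit's current API reference.
API_URL = "https://api.convertkit.com/v3/broadcasts"

def build_broadcast(subject: str, html: str, send_at: str, api_secret: str) -> dict:
    """Assemble the broadcast payload; send_at is an ISO 8601 timestamp."""
    return {
        "api_secret": api_secret,
        "subject": subject,
        "content": html,
        "send_at": send_at,
    }

def schedule_broadcast(payload: dict) -> None:
    """POST the broadcast; urlopen raises on non-2xx responses."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

Scripting the mechanics doesn't conflict with keeping the send decision manual — the decision is whether to call this at all.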
The Human-in-the-Loop Principle
I keep three things fully manual and I don’t plan to automate them:
Topic selection. The editorial judgement about what to write is the core of what makes a newsletter worth reading. Automating it would produce a newsletter that covers whatever the AI thinks is popular, which is not the same as what I think my readers need.
The outline. Structure is argument. How you organise information is a position. An AI-generated outline reflects a statistical sense of what essay structures look like, not an editorial sense of what this particular argument needs.
The final send decision. I review every issue in Kit.com before it goes. Not because I expect to find problems at this point — I usually don’t — but because the once-over is a commitment. I’m saying “yes, this represents me and my standards.” Fully automated sends would remove that accountability.
Everything else is automatable with the right setup, but I’d rather keep these three checkpoints than optimise them away.
What Takes the Most Time to Get Right
The part of this pipeline that took longest to get working well was the voice profile, not the tooling.
The newsletter-writer agent produced usable drafts from day one. The Kit CLI worked immediately. The draft-reviewer caught the obvious problems quickly.
But the voice — the quality that makes a draft sound genuinely like you rather than like a competent approximation — took three months of refinement. Adding examples to VOICE.md, noticing where drafts were still diverging, adjusting the banned phrases list, updating the positive examples with newer published writing.
The profile I’m running now produces drafts that, when I read them back, feel like my writing. Early versions felt like my writing with the personality filed off. That gap closed through iteration, not through getting the initial setup perfect.
If you’re starting this pipeline and wondering why the drafts feel almost-right but not quite, that’s the likely cause. The VOICE.md isn’t wrong — it’s just not finished yet.
Running Both Newsletters
I publish Signal Over Noise weekly and Second Brain Chronicles on a slower schedule. Both run through the same pipeline with different VOICE.md profiles.
They’re different audiences and different tones. SoN is direct and technical. SBC is more narrative, more personal, more about the process of building the system than about the system itself. The pipeline handles both because the voice profile does the heavy lifting of making them distinct.
The topic backlog is separate for each newsletter — two files, two queues. The research, drafting, and review steps are the same agents running against different inputs.
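The two-newsletter setup reduces to a small mapping: same agents, different inputs. The file paths and keys here are illustrative, not the actual layout:

```python
# Hypothetical layout: one voice profile and one backlog per newsletter.
NEWSLETTERS = {
    "signal-over-noise": {
        "voice": "profiles/son/VOICE.md",
        "backlog": "backlogs/son-topics.md",
        "cadence": "weekly",
    },
    "second-brain-chronicles": {
        "voice": "profiles/sbc/VOICE.md",
        "backlog": "backlogs/sbc-topics.md",
        "cadence": "slower",
    },
}

def inputs_for(newsletter: str) -> tuple[str, str]:
    """Resolve which voice profile and topic backlog the shared agents use."""
    cfg = NEWSLETTERS[newsletter]
    return cfg["voice"], cfg["backlog"]
```

Everything downstream of this lookup is identical, which is what makes running a second newsletter cheap once the first pipeline works.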
Starting Simple
If this feels like too much to set up at once, start with one thing: the newsletter-writer agent with a basic VOICE.md.
Write a brief by hand — just your notes on the topic, the key points, the sources you found yourself. Give that to the agent. See how close the draft is to what you’d have written. Edit it. Send it.
Do that three or four times before adding more automation. You’ll develop a sense of where the agent is reliable and where it needs the most guidance, and that understanding will make every subsequent step easier to configure correctly.
The goal was never to remove you from the process. It was to remove the production overhead that makes publishing feel like a burden rather than a craft. When the automation is working well, you spend your time on the decisions that matter — what to say, how to say it, what to cut — and almost no time on the machinery around those decisions.
That’s the pipeline. Now go build yours.
Check Your Understanding
Answer all questions correctly to complete this module.
1. How long does a typical Signal Over Noise issue take from topic to scheduled?
2. Which three things should remain fully manual and never automated?
3. What part of the pipeline took longest to get working well?