Module 3: Building Your Automation Layer
There’s a useful distinction between automation that requires you to invoke it and automation that runs on its own. Both are valuable. They serve different purposes and get built differently.
The first kind — invocable automation — is what skills handle. You decide when to run the draft-reviewer or the mail-triage. You start it, it executes, you review the output. The automation compresses a multi-step process into a single command, but you’re still the one choosing when that happens.
The second kind — autonomous automation — runs whether you think about it or not. Your morning briefing arrives in your daily note every day. Your site health check pings every six hours. Your content pipeline starts processing when a file lands in the right folder. You set it up once and it runs until you turn it off.
Both are worth having. The question is which tasks belong in which category.
Starting With Skills
A skill is just a markdown file in ~/.claude/skills/ with a clear name and instructions. That’s it. No code required unless the task needs it.
Here’s the rough shape of a useful skill:
```markdown
# draft-reviewer

You are reviewing a draft for quality and voice consistency.

1. Check for AI slop phrases (see VOICE.md for the full list)
2. Check that the reading level falls within the grade 8-10 target
3. Flag any sentences over 35 words
4. Identify weak reasoning or unsupported claims
5. Check that the piece has a clear point and delivers on it

Report issues by category with line references. Then give an overall verdict: ready, needs work, or rewrite.
```
The specificity matters. A vague skill produces vague output. The more precisely you’ve described what the skill should check — and what good looks like — the more useful it is.
A few principles from building and running skills in production:
Name them for what they do, not what they are. draft-reviewer beats writing-helper. log-to-daily beats note-taker. The name should tell you exactly when to reach for it.
Build skills from pain, not inspiration. The best skills come from noticing you’ve done the same multi-step thing three times this week and thinking “that could be a skill.” Building skills speculatively — because they seem like a good idea — usually produces things you never actually use.
Iterate before you trust. Run a new skill on five or six real examples before you rely on it. The first version will almost always have edge cases you didn’t anticipate. Refinement takes maybe twenty minutes of watching it run and noting what it missed or got wrong.
The Build-Once Automation Pattern
Once a skill is working reliably, you can wrap it in automation so it runs on a schedule or trigger.
The tools for this are:
macOS launchd — for scheduled local scripts. This is how my morning-brief runs: a launchd plist triggers a shell script at 7am that starts a Claude Code session, invokes the morning-brief skill, and writes the output to my Obsidian daily note. No input from me required.
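For reference, a scheduled launchd job of this shape can be sketched as a minimal plist. The label, script path, and log path below are all illustrative, not the author's actual setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.morning-brief</string>
  <key>ProgramArguments</key>
  <array>
    <string>/Users/you/scripts/morning-brief.sh</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>7</integer>
    <key>Minute</key>
    <integer>0</integer>
  </dict>
  <key>StandardErrorPath</key>
  <string>/tmp/morning-brief.err</string>
</dict>
</plist>
```

Saved under ~/Library/LaunchAgents/ and loaded with launchctl, this fires the script at 7am every day, whether or not you remember it exists.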
n8n — for event-driven workflows. Something happens (a form is submitted, a file arrives in a folder, a webhook fires), and n8n triggers a sequence of actions. I use this for contact form notifications, site health alerts, and a handful of content processing workflows. If you’ve gone through the n8n + AI module, you’ll recognise the pattern.
GitHub Actions / cron jobs — for server-side automation. Site health checks, scheduled builds, anything that needs to run reliably on infrastructure rather than your laptop.
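A scheduled server-side check of this kind might look like the following GitHub Actions workflow. The schedule, job name, and URL are illustrative placeholders:

```yaml
# Hypothetical scheduled health check, running on GitHub's infrastructure
# rather than your laptop.
name: site-health
on:
  schedule:
    - cron: "0 */6 * * *"   # every six hours
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Ping the site
        run: curl --fail --silent --show-error https://example.com/health
```

The `--fail` flag makes curl exit non-zero on an HTTP error, which fails the job and surfaces in your notifications.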
File watchers — for folder-based triggers. Drop a file in the right place and automation picks it up. I use this for the content pipeline: raw notes go into an inbox/ folder, a watcher detects the new file, and the pipeline processes it.
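One way to sketch a folder-based trigger is a small polling script, run once per invocation and scheduled via cron or launchd. This assumes a polling approach rather than any specific watcher tool, and the pipeline command is a hypothetical placeholder:

```shell
#!/usr/bin/env bash
# Sketch of a folder-based trigger: process any new files in an inbox,
# then move them out so they are only handled once.
process_inbox() {
  local inbox="$1" done_dir="$2"
  mkdir -p "$inbox" "$done_dir"
  for f in "$inbox"/*.md; do
    [ -e "$f" ] || continue            # glob matched nothing
    echo "processing: $f"
    # ./run-pipeline.sh "$f"           # hypothetical pipeline entry point
    mv "$f" "$done_dir/"
  done
}

# Example invocation, scheduled externally:
# process_inbox content/inbox content/processed
```

Moving processed files to a separate folder is the simplest way to make the trigger idempotent: a re-run sees an empty inbox and does nothing.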
The pattern in each case is the same: take a skill that works manually, add a trigger, and remove yourself from the invocation step.
The Content Pipeline Example
This one gets explained in detail because it illustrates the full build-once pattern.
The problem: I was spending too much time moving content from raw idea to publishable draft. The steps were consistent — research, structure, draft, review — but I was doing them manually each time, which meant context-switching between tools and forgetting which stage I was at.
The solution: a skill that handles each stage in sequence, with a subagent for each step.
The pipeline works like this:
- Raw note lands in content/inbox/ — could be a title and a few bullet points, or a full rough draft
- The pipeline detects the file and reads it
- Stage 1: a research subagent looks up any claims that need verification and appends a sources section
- Stage 2: a structure subagent assesses whether the piece has a clear point and logical flow, and adds an outline if it doesn’t
- Stage 3: a draft subagent expands the piece to target length, following voice guidelines from VOICE.md
- Stage 4: the draft-reviewer skill runs and flags any issues
- Output goes into content/review/ with a summary of what was changed and what the reviewer flagged
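The stage sequence above can be sketched as a thin shell wrapper. This assumes the Claude Code CLI's non-interactive -p flag; the prompts, the CLAUDE override, and the exact paths are illustrative, not the author's actual implementation:

```shell
#!/usr/bin/env bash
# Illustrative wrapper for the four pipeline stages. CLAUDE defaults to the
# claude CLI but can be overridden, e.g. for dry runs. Prompts are sketches.
run_pipeline() {
  local f="$1" cc="${CLAUDE:-claude}"
  "$cc" -p "Stage 1 (research): verify claims in $f and append a sources section"
  "$cc" -p "Stage 2 (structure): assess point and flow in $f; outline if missing"
  "$cc" -p "Stage 3 (draft): expand $f to target length, following VOICE.md"
  "$cc" -p "Stage 4 (review): run the draft-reviewer skill on $f"
  mkdir -p content/review
  mv "$f" content/review/
}
```

Keeping each stage as a separate invocation means a failure stops the sequence where it broke, rather than producing a half-processed draft.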
I review what’s in content/review/ and either approve it as-is, make edits, or send it back through with additional instructions.
The build time was roughly two hours. The skill has processed dozens of pieces since. The consistent quality floor it enforces — same voice, same reading level, same review criteria every time — is something I couldn’t maintain manually without significant effort.
What Not to Automate
It’s easy to get caught up in the building and start automating things that shouldn’t be automated. A few categories to avoid:
Anything that requires your specific relationship. Automated responses to clients or readers sound automated. People can tell. The warm email that makes someone feel seen takes thirty seconds to write and isn’t worth automating.
Decisions with high stakes and low frequency. If you’re making a decision once a month and it matters significantly, that’s not automation territory — that’s a judgment call that deserves your full attention.
Things you haven’t done manually enough to understand. Automating a process you’ve never run by hand means you don’t know what good looks like. You’ll build automation for the wrong thing, or build it wrong, and won’t recognise it. Always manual first.
Processes that are still changing. Automation crystallises a process as it is now. If you’re still experimenting with how something should work, automation gets in the way. Stabilise first, then automate.
The Automation Audit
Once a quarter, I do a quick audit of my automation layer:
- Which automations are running? (Check cron jobs, launchd entries, n8n workflows)
- Which haven’t fired in the last month? (Probably dead weight — disable or delete)
- Which are firing but being ignored? (The output isn’t useful — fix or remove)
- Which are firing and being used? (These are working — leave them alone)
- What am I still doing manually that has the right profile for automation? (New candidates)
The audit takes about thirty minutes. It keeps the automation layer from accumulating things that run but don’t help, which is its own kind of cognitive overhead.
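The "hasn't fired in the last month" check lends itself to a tiny script. A toy sketch, assuming you can export each automation's name and last-fired time as epoch seconds (how you obtain those timestamps depends on the tool):

```shell
#!/usr/bin/env bash
# Reads "name last_fired_epoch" lines on stdin and prints automations that
# have not fired in the last 30 days.
flag_stale() {
  local now cutoff name last
  now=$(date +%s)
  cutoff=$((now - 30 * 24 * 3600))
  while read -r name last; do
    if [ "$last" -lt "$cutoff" ]; then
      echo "stale: $name"
    fi
  done
}

# Example:
# printf 'morning-brief 1700000000\n' | flag_stale
```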
The next module looks at how prompt, delegate, and automate work together as a system, and how to assess where each of your current workflows sits.
Check Your Understanding
1. What is the distinction between the two kinds of automation described in this module?
2. Which of these is NOT listed as a reason to avoid automating something?
3. What does the quarterly automation audit check for?