SHAPE Implementation Card
Most AI projects fail because of poor execution, not bad strategy. SHAPE gives you five phases that move you from “this could work” to “this is working” — without the usual drift into complexity.
This card is a condensed reference. For the full worked examples and decision frameworks, see the SHAPE Method chapter.
Situation — Where are you now?
Get honest about your current state before changing anything.
There is almost always a gap between how you think you work and how you actually work. Call it the Reality Gap. It shows up between what you are paying for and what you actually use — and between your intended workflow and the shortcuts you have quietly adopted.
Ask yourself:
- What tools do you pay for vs. actually use every day?
- Where have you built workarounds because the proper tool was too clunky?
- What are you doing manually that feels like it should be faster?
- Which repetitive tasks eat the most time each week?
Without an honest Situation assessment, your Hypothesis will be based on guesswork. You will fix the wrong thing.
Hypothesis — What does success look like?
A hypothesis is a testable statement with a number attached.
Before you start using a tool, write down what you expect it to do for you. Be specific.
Bad: “I think this will save time.”
Good: “This will cut research time from 90 minutes to 30 minutes per client.”
If you cannot put a number on it, you are not ready to test. Writing a vague hypothesis gives you nothing to check against in a month. A specific one tells you immediately whether it worked.
What to measure:
- Speed — how much faster is this specific task?
- Quality — are outputs more consistent, needing less editing?
- Adoption — are you still using it after two weeks, or did it drift?
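The "number attached" idea above can be sketched in a few lines. This is an illustrative sketch only, not part of the SHAPE method itself; the field names and example figures are assumptions.

```python
# A minimal sketch of a written-down hypothesis: a task, a baseline,
# and a target number to check against later. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    task: str
    baseline_minutes: float   # time per task before the tool
    target_minutes: float     # the specific number you wrote down up front

    def met(self, measured_minutes: float) -> bool:
        """True only if the measured time meets or beats the target."""
        return measured_minutes <= self.target_minutes

h = Hypothesis(task="client research", baseline_minutes=90, target_minutes=30)
print(h.met(25))  # True — met the target
print(h.met(60))  # False — faster than baseline, but missed the number
```

The point of the structure is the second case: a vague "save time" hypothesis would call 60 minutes a win; a specific target calls it a miss.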
Action — Test it on real work
A hypothesis without a test is just an opinion.
Run your test on real work for one week — not a hypothetical project, not a trial run. Pick something you actually need to do this week and use the AI tool to do it.
The Takers/Shapers/Makers decision:
How much you customise your AI tools has a large impact on whether they succeed.
- Takers (67% success): Use tools out of the box — ChatGPT, Claude, Copilot — with no custom setup. This is where most people should start, and where most should stay.
- Shapers (45% success): Customise the tools for your specific workflow — custom prompts, templates, simple automations.
- Makers (33% success): Build from scratch — custom code, bespoke integrations. Unless you are technical and have a clear competitive reason to build, this is almost always the wrong choice.
The pattern is clear: simple tools that work beat complex setups that need maintenance. Most of us overestimate how unique our needs are.
If your test saves time and the output is usable, keep going for a month. If it is not saving time, try a different approach before adding complexity.
Process — Apply what works more broadly
Your test worked. Now take what worked and apply it to adjacent work.
If AI-assisted research saved you time, ask: what other research-heavy tasks could use the same approach? If a prompt template worked well for proposals, could you adapt it for case studies or project briefs?
The simplicity principle: Resist adding integrations, automations, or custom tooling until the basic workflow is running smoothly across multiple use cases. Every additional layer reduces the chance it keeps working.
If you have a small team or a VA: teach one other person the workflow before building documentation. Start with the person most comfortable with the tools. Keep the same tool and approach — do not let each person pick their own.
Evaluation — Is this actually working?
Without measuring, you will drift from “this is working” to “I think this is working” to “I am not sure anymore.”
Set a recurring reminder — first of every month, spend 15 minutes answering these questions:
- Am I still using this regularly, or has it drifted into the subscriptions I ignore?
- What is my time-per-task compared to my baseline? (Check your hypothesis.)
- Am I adding unnecessary complexity — extra steps or integrations that are not pulling their weight?
- Could I get similar results with a simpler approach?
- What is the next workflow I should apply this to?
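The time-per-task question above is the one that benefits most from an actual calculation. A hedged sketch, assuming you log rough task times during the month (function name and output fields are illustrative, not prescribed by the method):

```python
# Monthly check: average the logged task times and compare against the
# baseline from your hypothesis. All names here are illustrative.
def monthly_check(baseline_minutes: float, logged_minutes: list[float]) -> dict:
    avg = sum(logged_minutes) / len(logged_minutes)
    return {
        "average_minutes": round(avg, 1),
        "saved_per_task": round(baseline_minutes - avg, 1),
        "still_worth_it": avg < baseline_minutes,
    }

# Baseline was 90 minutes; four logged sessions this month.
print(monthly_check(90, [35, 40, 30, 45]))
# → {'average_minutes': 37.5, 'saved_per_task': 52.5, 'still_worth_it': True}
```

Fifteen minutes on the first of the month is enough to run this by hand; the value is in keeping the baseline and the logged numbers in the same place so drift is visible.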
The simplicity check: Every month, ask yourself: is this getting more complicated than it needs to be? Simple tools people actually use beat sophisticated tools they abandon.
Evaluation closes the loop — and opens the next one. SHAPE is a cycle, not a checklist. Each evaluation feeds the next situation assessment.
Quick Reference
| Phase | Question | Time |
|---|---|---|
| Situation | Where am I honestly? | Day 1 |
| Hypothesis | What number will change? | Day 1 |
| Action | Does it work on real tasks? | Week 1–2 |
| Process | What else can I apply this to? | Week 3–4 |
| Evaluation | Is it still worth it? | Monthly |
Ready to put it into practice? The SHAPE Assessment turns this method into your implementation blueprint — walking you through each phase for a specific workflow you want to improve.