A Real AI Workflow: All Five Skills in Action
Module 7 · Section 2 of 3
The scenario: you have a client meeting in two days and they’ve asked you to bring a competitive analysis. You’re going to use AI to help. Let’s work through it.
Decomposition: what does “competitive analysis” actually mean?
Your first instinct might be to open a chat window and type: Analyse my competitors in the project management software space.
Don’t. That prompt is a single blob of work dressed up as an instruction. AI will produce something in response, but it’ll be its version of a competitive analysis — shaped by whatever examples fill its training data — not the specific thing your client needs.
Before you write a single prompt, decompose the task. What does a competitive analysis actually contain?
- Market positioning: how do competitors describe themselves and who are they targeting?
- Pricing: what are the tiers, what’s included at each level, how do they structure annual vs monthly?
- Features: what does each product do, and what’s conspicuously absent?
- Customer segments: who’s actually buying this, and is that the same as who the company claims to target?
- Strengths and weaknesses: where does each competitor have a genuine advantage, and where are they vulnerable?
Now you have five separate research tasks. Each one can be a focused AI conversation. Each one produces a section of the report. Decomposition turned one vague request into a process you can actually manage.
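The decomposition above can be sketched as data: each subtask becomes one focused, self-contained prompt. A minimal illustration in Python — the subtask names and guiding questions come from the list above, but the `build_prompt` helper and its wording are hypothetical:

```python
# Each research task from the decomposition becomes one focused prompt.
SUBTASKS = {
    "market_positioning": "How do competitors describe themselves and who are they targeting?",
    "pricing": "What are the tiers, what's included at each level, and how is annual vs monthly structured?",
    "features": "What does each product do, and what's conspicuously absent?",
    "customer_segments": "Who's actually buying this, and does that match the claimed target market?",
    "strengths_weaknesses": "Where does each competitor have a genuine advantage, and where are they vulnerable?",
}

def build_prompt(subtask: str, competitors: list[str]) -> str:
    """Turn one subtask into a focused, self-contained prompt."""
    question = SUBTASKS[subtask]
    return (
        f"For each of these products: {', '.join(competitors)}.\n"
        f"{question}\n"
        "Answer one product at a time, and say explicitly when you are unsure."
    )

# Five separate conversations, not one blob.
prompts = [build_prompt(name, ["Asana", "Trello", "Monday"]) for name in SUBTASKS]
```

The point isn’t the code — it’s that decomposition gives you five small prompts you can run, check, and refine independently.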
Algorithms: structure the prompts before you write them
Here’s where most people skip a step. They decompose the task, then jump straight into asking AI the questions.
But a prompt is a process instruction. If you want consistent, usable output, you need to think about the sequence before you start.
For the pricing section, a single “what are the pricing tiers for these five competitors?” prompt will get you a table. Maybe a useful one. But the AI doesn’t know what matters to your client — are they a 50-person team? Are they price-sensitive? Do they need specific integrations?
A better approach structures the prompt as a step-by-step process:
First, list the pricing tiers for each of these five products: [list]. Then, for each tier, note what’s included and what’s excluded. Then flag any pricing structures that are unusual — usage-based, per-seat caps, or hidden per-feature costs. Assume the buyer is evaluating options for a 40-50 person team with a limited budget.
That’s an algorithm. It tells the AI what to do, in what order, with what constraints. The output will be more useful — not because the AI got smarter, but because you gave it a process instead of a question.
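One way to keep that structure repeatable is to assemble the prompt from an ordered list of steps plus explicit constraints. A hedged sketch — the function and the ordinal wording are illustrative, not a fixed recipe:

```python
def algorithmic_prompt(steps: list[str], constraints: list[str]) -> str:
    """Compose a step-by-step process instruction from ordered steps and constraints."""
    ordinals = ["First", "Then", "Then", "Finally"]
    lines = []
    for i, step in enumerate(steps):
        prefix = ordinals[min(i, len(ordinals) - 1)]
        lines.append(f"{prefix}, {step}")
    lines.extend(f"Assume: {c}" for c in constraints)
    return " ".join(lines)

prompt = algorithmic_prompt(
    steps=[
        "list the pricing tiers for each of these five products: [list].",
        "for each tier, note what's included and what's excluded.",
        "flag any unusual pricing structures: usage-based, per-seat caps, or hidden per-feature costs.",
    ],
    constraints=["the buyer is evaluating options for a 40-50 person team with a limited budget."],
)
```

Same process instruction as above, but now the steps and constraints are explicit arguments you can swap per section.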
Pattern Recognition: spotting the hallucination tells
You’ve got your first set of outputs. Now read them carefully, because this is where pattern recognition earns its place.
AI output has hallucination signatures. They’re not random — they follow predictable patterns once you know what you’re looking for.
Watch for suspiciously specific numbers. “Asana holds 23.4% of the project management software market.” That kind of precise market share figure should immediately raise a flag. Actual market share data is expensive, contested, and rarely that clean. Where did that number come from? In most cases: nowhere. The AI generated something that sounds like a real statistic.
Watch for citations that look too clean. A reference formatted perfectly — author, publication, year, page number — is not evidence it’s real. AI generates plausible-looking citations fluently. If it matters, verify it.
Watch for consensus language around contested claims. “Most analysts agree…” or “It’s widely accepted that…” are often covering for the absence of a specific source. Real analysts disagree constantly. If something is described as universally agreed upon, be curious about why.
None of this means the output is wrong. It means these are the spots worth checking before you put them in a client document.
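These tells are regular enough to sketch a rough first-pass scanner for them. This is an assumption-laden illustration, not a substitute for reading the output yourself — the patterns below cover only the three signatures described above, and will miss plenty:

```python
import re

# Rough heuristics for the three hallucination tells described above.
TELLS = {
    "suspiciously_precise_stat": re.compile(r"\b\d{1,2}\.\d%"),           # e.g. "23.4%"
    "consensus_language": re.compile(
        r"most analysts agree|it'?s widely accepted|experts agree", re.IGNORECASE
    ),
    "too_clean_citation": re.compile(r"\(\w+,?\s+\d{4},\s*p\.\s*\d+\)"),  # e.g. "(Smith, 2021, p. 42)"
}

def flag_tells(text: str) -> list[str]:
    """Return the names of any hallucination signatures found in the text."""
    return [name for name, pattern in TELLS.items() if pattern.search(text)]

flags = flag_tells("Asana holds 23.4% of the market. Most analysts agree it leads.")
```

A flag doesn’t mean the claim is false — it marks the sentence as one to verify before it reaches the client.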
Logical Reasoning: do the conclusions follow?
You’re reviewing the competitive strengths section. The AI has written: “Competitor X is the clear market leader, making them the default choice for enterprise teams.”
Slow down here. Market leader based on what? Revenue? Number of customers? Brand recognition? Search volume? These can all point to different products. “Market leader” is not a single fact — it’s a conclusion that depends entirely on which metric you’re using.
Ask the AI directly: On what basis are you describing Competitor X as the market leader? What evidence supports that claim?
Sometimes it’ll point to something real. Sometimes it’ll admit the claim was a generalisation. Either way, you now know whether that sentence belongs in your report.
This is logical reasoning in practice — not accepting a conclusion because it sounds authoritative, but tracing back to ask: what would need to be true for this to be correct?
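The trace-it-back move can itself be templated: when the output contains an unsupported superlative, generate the follow-up question automatically. A sketch with a hypothetical pattern list — you’d grow it from the superlatives you actually encounter:

```python
import re

# Superlatives that smuggle in a conclusion without naming the metric.
SUPERLATIVES = re.compile(
    r"\b(market leader|best-in-class|default choice)\b", re.IGNORECASE
)

def trace_back_questions(output: str) -> list[str]:
    """For each unsupported superlative, produce a follow-up asking for its basis."""
    return [
        f"On what basis are you describing something as '{m.group(0)}'? "
        "Which metric (revenue, customer count, brand recognition) supports that claim?"
        for m in SUPERLATIVES.finditer(output)
    ]

questions = trace_back_questions(
    "Competitor X is the clear market leader, making them the default choice for enterprise teams."
)
```

Each generated question goes back to the AI as a follow-up prompt; the answers tell you which sentences survive into the report.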
Debugging: when the output misses the mark
You run the features section and the output is generic. It reads like a comparison you could find on any review site. There’s nothing specific to your client’s context, nothing that helps them make a decision.
Don’t just regenerate. Debug.
Trace backwards: why did this happen? What was in the prompt that would have produced this output? Usually the answer is obvious once you look — the prompt didn’t specify enough context. In this case: you didn’t tell the AI what industry the client is in, what their current workflow looks like, or what problem they’re actually trying to solve.
That’s the constraint that was missing. Add it:
The client is a professional services firm with 45 staff. Their current process is largely email-based with some use of spreadsheets for project tracking. They’ve tried two project management tools before and abandoned both due to adoption problems. Focus the feature comparison on ease of adoption, not feature depth.
Regenerate with that context and you’ll get something usable. The debugging wasn’t about the AI — it was about finding the gap in your own instructions.
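The missing-context fix can be made systematic: before sending a prompt, check that the client context you know matters is actually present. A minimal sketch, assuming field names you’d choose yourself:

```python
# Context fields whose absence produced the generic output.
REQUIRED_CONTEXT = ["industry", "team_size", "current_process", "history", "decision_criterion"]

def debug_prompt(base_prompt: str, context: dict[str, str]) -> str:
    """Fail loudly if context is missing; otherwise prepend it to the prompt."""
    missing = [field for field in REQUIRED_CONTEXT if field not in context]
    if missing:
        raise ValueError(f"Prompt is missing context: {missing}")
    preamble = " ".join(f"{k.replace('_', ' ')}: {v}." for k, v in context.items())
    return f"{preamble}\n{base_prompt}"

prompt = debug_prompt(
    "Compare the features of these five tools.",
    {
        "industry": "professional services firm",
        "team_size": "45 staff",
        "current_process": "email-based with spreadsheets for project tracking",
        "history": "tried and abandoned two project management tools due to adoption problems",
        "decision_criterion": "ease of adoption, not feature depth",
    },
)
```

The `ValueError` is the debugging step made mechanical: it catches the gap in your instructions before the AI papers over it with generic output.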
That’s all five skills working together on one real task. None of them appeared as a formal step in a checklist. They appeared as natural thinking moves at the moment each was needed.
That’s what integration looks like.