Human Intuition vs Machine Calculation
Module 2 · Section 3 of 3
On 11 May 1997, Garry Kasparov resigned Game 6 of his rematch against Deep Blue. Final score: 3.5 to 2.5 in the computer’s favour. It was the first time a computer had beaten a reigning world chess champion in a regulation match.
The result made headlines. The story behind it is more useful.
What each side actually brought
Kasparov had been world champion for over a decade. What made him exceptional wasn’t calculation speed — any strong player can calculate. It was pattern recognition built from thousands of games, and what chess players call positional understanding: the ability to sense when a position was good or bad, even without calculating every variation.
This is intuition in a technical sense. Not a vague feeling, but a fast, compressed judgement based on deeply internalised patterns. Kasparov could look at a position and immediately sense which pieces were well-placed, which weaknesses mattered, which long-term plans were available — without consciously walking through each possibility. Years of experience had made that analysis automatic.
He could also adapt. Kasparov changed style mid-game, mid-series, based on what he observed. When a line stopped working, he found a different one. He was reading his opponent as much as the board.
Deep Blue brought something entirely different. It evaluated 200 million positions per second and had no opinions about any of them. It didn’t sense that a position was promising. It calculated a numerical value for it from rules its designers had encoded, picked the move with the best expected outcome, and repeated that process for every single move, at the same level of performance, without fatigue or emotion.
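That loop can be sketched in miniature. The following is a toy one-ply illustration, not Deep Blue’s actual code: the position representation, move names, and weights are invented for the example. It shows the core idea that a fixed, designer-written evaluation function turns each position into a number, and the engine simply picks the move whose resulting position scores highest.

```python
# Toy sketch of evaluation-based move selection (illustrative only,
# not Deep Blue's implementation). A "position" here is just a dict
# of piece counts for each side.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(position):
    """Score a position with fixed, hand-chosen weights.
    Positive favours us, negative favours the opponent."""
    ours = sum(PIECE_VALUES[p] * n for p, n in position["ours"].items())
    theirs = sum(PIECE_VALUES[p] * n for p, n in position["theirs"].items())
    return ours - theirs

def best_move(moves):
    """Pick the move leading to the highest-scoring position.
    No intuition: the same numeric rule, applied identically every time."""
    return max(moves, key=lambda m: evaluate(m["resulting_position"]))

# Hypothetical choice: a quiet move leaves the opponent a rook up in
# material terms; the capture removes that rook. The engine prefers
# whichever scores better, nothing more.
quiet = {"name": "Nf3", "resulting_position":
         {"ours": {"queen": 1}, "theirs": {"queen": 1, "rook": 1}}}
capture = {"name": "Qxd8", "resulting_position":
           {"ours": {"queen": 1}, "theirs": {"queen": 1}}}

print(best_move([quiet, capture])["name"])  # -> Qxd8
```

Deep Blue searched many moves deeper than this one-ply sketch, and its real evaluation weighed far more than material, but the principle is the same: a number, computed by a fixed rule, with no sense of what the position means.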
It could not adapt between games. It could not read Kasparov. But the IBM team could — and between games, they updated Deep Blue’s evaluation functions based on what they observed. The machine itself didn’t learn. The humans running it did, and they fed that learning back in.
The moment that changed the match
Game 2 is the pivot point, but the seed was planted at the end of Game 1. There, in a position it had already lost, Deep Blue played a move that chess experts called “uncomputer-like”: one that appeared to show strategic understanding rather than tactical calculation.
Kasparov won Game 1, but the move unsettled him. He and his team analysed it at length, concluded the machine could calculate far deeper than he had assumed, and he began second-guessing his read of it. In Game 2 he resigned a position that later analysis suggested he could have drawn, and the psychological effect carried through the rest of the match.
The twist: that move wasn’t strategic genius. Due to a software bug, Deep Blue had failed to settle on a move and played one essentially at random. The move looked deep because Kasparov assumed it must be. He attributed meaning to something that had none.
This is worth sitting with. Kasparov’s instinct — that a move this unusual must reflect something he wasn’t seeing — was reasonable. It’s how you’d interpret an unusual move from a human opponent. Applied to a machine, it led him wrong. The machine’s opacity became an advantage.
What this tells us about working with AI today
The Kasparov match is a clean illustration of a problem that comes up constantly in professional AI use: knowing when to trust the output and when to interrogate it.
AI tools are impressive in ways that can make their failures hard to spot. A language model produces fluent, confident text even when the underlying reasoning is poor. A recommendation algorithm suggests options with no indication of how well calibrated its confidence actually is. The surface quality of the output doesn’t tell you much about whether the output is right.
Kasparov’s mistake wasn’t irrationality. It was applying the right heuristic — unusual moves from strong opponents usually mean something — in a context where it didn’t apply.
The equivalent mistake with AI tools is assuming that plausible-sounding output reflects sound reasoning. It often does. But the cases where it doesn’t tend to be invisible from the output alone.
A few principles that follow from this:
Leverage your domain knowledge. Kasparov’s intuition was an asset — it let him create positions Deep Blue found genuinely difficult to evaluate. The same principle applies when you’re working with AI on things you know well. Your subject matter expertise lets you spot when output looks plausible but is subtly wrong. This is one of the places where human judgement adds the most value.
Be more careful where you know less. Kasparov was at risk precisely because he understood chess so well that he could project meaning onto Deep Blue’s moves. In areas where you lack deep expertise, you’re more vulnerable to the opposite problem: you can’t easily tell when AI output is wrong because you don’t have enough context to check it. That’s when verification matters most.
Consistency isn’t the same as correctness. Deep Blue performed at the same level for every move, with no variation. This consistency looked like a strength, and usually was. But it also meant its errors were systematic — when its evaluation function was wrong, it was confidently wrong, every time. Modern AI tools have this property too. A model that consistently makes a particular type of error will do so reliably, without signalling that anything is off.
The story didn’t end with the match
After 1997, chess computers didn’t replace human players. They became training tools. Grandmasters now use engines to analyse their games, explore positions, and prepare for opponents. The collaboration raised the overall level of play significantly.
This is the pattern that tends to emerge when a specific capability shifts from exclusively human to machine-assisted. The competitive landscape changes. The skills that matter shift. But the people who adapt — who learn to work with the new tools rather than against them — find that their own capabilities expand.
The 1997 match is sometimes told as a story about machines defeating humans. It’s more usefully read as the beginning of a new working relationship between two types of intelligence, each with strengths the other lacks.
That relationship is what you’re navigating every time you use an AI tool today.