Grace Hopper’s Legendary Bug
On 9 September 1947, Grace Hopper and her team at Harvard were troubleshooting a malfunctioning computer: the Mark II, a room-sized machine built from electromechanical relays. The relays were switching incorrectly and producing wrong results. After careful investigation they found the cause: a moth had become lodged between the contacts of one relay, preventing it from closing properly.
Hopper removed the moth and taped it into the computer’s logbook with a note: “First actual case of bug being found.”
The word “bug” had been used to describe mechanical faults since the 1870s: Thomas Edison used it in his notebooks, and telegraph operators used it to describe line interference. But Hopper’s moth gave the term a concrete, literal moment in computing history, and the word stuck. Software errors have been called bugs ever since.
What the moth actually teaches
The moth story is usually told as a quirky origin story. The more useful reading is what it demonstrates about the nature of system failures.
The Mark II was not broken. It was doing exactly what it was mechanically configured to do. A relay that could not close because of a physical obstruction behaved precisely as expected for a relay that cannot close. The machine followed its instructions perfectly; the instructions simply did not account for the conditions the machine was operating in.
Grace Hopper understood this. Her approach to debugging — and her later work designing more intuitive programming languages — was built on one core insight: computers do exactly what you tell them. Not what you mean. Not what you intended. Exactly what you specified. The gap between instruction and intent is where all bugs live.
Applying this to AI
Every bad AI output is a bug in this same sense. The model did what you told it. Not what you meant.
This reframe changes where you look when something goes wrong. If the AI gave you a formal report when you wanted a casual summary, it is not because the AI misunderstood — it is because nothing in your input specified the register you wanted. If it produced three paragraphs when you needed a bullet list, check whether you said “list” or whether you assumed “list” was obvious from context.
The moth was not a malfunction. It was a condition the system was never designed to handle. Bad AI outputs are almost always the same: you gave instructions that were clear to you but contained an ambiguity or gap that the model resolved differently than you expected.
Hopper’s habit of documentation is also worth noting. She did not just fix the problem — she recorded what had happened, with the physical evidence taped to the page. That log became a resource when similar problems arose later.
The equivalent practice with AI is keeping a record of failures and what fixed them. Not every failure needs formal documentation. But for recurring patterns — types of requests that consistently produce the wrong output — a short note about what adjustment worked is far more useful than relying on memory. It turns individual debugging sessions into a growing body of knowledge about how to work with the system effectively.