You have a plan. It feels solid. You have thought it through.
The problem: the same brain that built the plan is evaluating it. That brain has blind spots. It has assumptions baked in that feel like facts. It is motivated to find the plan reasonable because you already spent time on it.
This is not a character flaw. It is how cognition works. The solution is not to think harder. It is to use a different lens — one that is not invested in the outcome.
That is what Issue #9 is about: a 10-minute reasoning audit you can run on any plan, decision, or argument before you act on it.
What the Reasoning Audit Is Not
It is not asking AI whether your plan is good. That produces agreement and flattery far more often than useful pushback.
It is not a brainstorm. You are not generating more ideas.
It is a structured adversarial review. You are asking the AI to find the holes — specifically, the assumptions you have not examined, the alternatives you have not considered, and the ways your reasoning could be circular.
The goal is to surface what you missed before reality surfaces it for you.
The Steel Man Test (examine your assumptions)
Run this before any significant decision — hiring, investment, strategy, launch. It does one thing well: it separates what you know from what you think you know.
Here is my plan / decision / argument: [paste your thinking]

I want you to do three things:

1. List every assumption embedded in this that I am treating as a fact but have not verified.
2. For each assumption, rate it: (a) almost certainly true, (b) plausible but unconfirmed, (c) actually questionable.
3. For the (b) and (c) assumptions, tell me what would have to be true for my plan to still work if the assumption is wrong.

Do not soften this. I want the uncomfortable version.
Why this works: Most plans fail not because the logic is wrong but because a foundational assumption was never tested. This prompt forces that test before you are committed.
The instruction that matters: “Do not soften this.” Without it, you get the diplomatic version. You want the honest one.
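If you run this audit often, it can help to keep the prompt as a fill-in template rather than retyping it. A minimal Python sketch; the plan text passed in at the end is a made-up example:

```python
# A reusable fill-in version of the Steel Man prompt.
STEEL_MAN_TEMPLATE = """Here is my plan / decision / argument:
{plan}

I want you to do three things:
1. List every assumption embedded in this that I am treating as a fact but have not verified.
2. For each assumption, rate it: (a) almost certainly true, (b) plausible but unconfirmed, (c) actually questionable.
3. For the (b) and (c) assumptions, tell me what would have to be true for my plan to still work if the assumption is wrong.

Do not soften this. I want the uncomfortable version."""

# The plan below is a hypothetical example, not a recommendation.
prompt = STEEL_MAN_TEMPLATE.format(
    plan="Hire two contractors instead of one full-time engineer."
)
print(prompt)
```

Paste the result into whatever AI chat you use; the template just guarantees the "do not soften this" instruction never gets dropped.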
The Pre-Mortem (assume it already failed)
This technique comes from research on prospective hindsight. It is dramatically more effective than asking “what could go wrong” because it bypasses optimism bias entirely.
Assume it is 12 months from now and this plan failed completely. Not a partial failure — a full failure. The outcome was the opposite of what I wanted.

Write a 200-word post-mortem from that future. What happened? What were the two or three specific things that went wrong? What did I miss or underestimate?

Then: what early warning signs would have been visible at month 3 that we were headed for failure?
Why this works: You are not asking whether the plan might fail. You are asking how it already did. That framing produces different — and more useful — answers.
The early warning question: This is the most underused part. Month 3 signals give you something to watch for right now.
The Devil’s Advocate (find the strongest counterargument)
Most people ask AI for pushback and get a lukewarm “on the other hand…” The specificity of this prompt forces a real counter — the one that actually tests your reasoning.
I am about to [decision / action]. Here is my reasoning: [paste your reasoning]

Your job: construct the strongest possible argument against this. Not a list of weak objections — the single most compelling case that I should not do this.

Then tell me: if you were an advisor who had seen this situation before and it went badly, what is the one thing you would tell me to do differently?
Why this works: The key instruction is “strongest possible argument” not “some objections.” Specificity forces a real counter. The advisor framing at the end produces actionable advice, not analysis.
The test: If the counterargument does not make you pause, your plan survived a real challenge. If it does make you pause, you needed to hear it. Either way, the prompt did its job.
How to Use These Together
You do not need all three every time. A rule of thumb:
Low stakes, reversible → Skip the audit. Decide fast. Learn from the outcome.
Medium stakes, moderate reversibility → Run the Steel Man Test only. 5 minutes.
High stakes, hard to reverse → Run all three. 10–15 minutes. Worth it.
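That rule of thumb is simple enough to write down as a lookup. A sketch in Python; the labels and the fallback for mixed cases (e.g. high stakes but easily reversible) are my own reading of the rule, not a formal framework:

```python
def audit_level(stakes: str, reversible: bool) -> str:
    """Map a decision's stakes and reversibility to an audit depth.

    Labels ("low"/"medium"/"high") are illustrative; the middle
    branch is an assumption about how to treat mixed cases.
    """
    if stakes == "low" and reversible:
        return "skip the audit: decide fast, learn from the outcome"
    if stakes == "high" and not reversible:
        return "full audit: all three prompts, 10-15 minutes"
    # Everything in between gets the 5-minute Steel Man Test.
    return "light audit: Steel Man Test only, 5 minutes"

print(audit_level("high", False))
```

The point of encoding it is consistency: you decide the scrutiny level from the decision's properties, not from how confident you happen to feel that day.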
The mistake most people make is applying the same level of scrutiny to every decision. The reasoning audit is not a daily habit — it is a tool for moments when the cost of being wrong is high.
One Pattern to Watch For
After you run the audit a few times, you will notice something: you have recurring blind spots.
Maybe you consistently underestimate implementation complexity. Maybe you overweight recent data. Maybe you have one assumption category — about people, timelines, or market behavior — that is wrong more often than the others.
Keep a log. After 10 audits, look for the pattern. That pattern is the most valuable thing the audit produces — not the individual decision, but the map of how your thinking goes wrong.
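The log does not need to be fancy. A minimal sketch of tallying it, where every entry below is a made-up example and the category names are illustrative, not a fixed taxonomy:

```python
from collections import Counter

# Each tuple: (decision audited, category of the assumption the audit caught).
# All entries here are hypothetical examples.
audit_log = [
    ("vendor choice", "timeline"),
    ("feature launch", "implementation complexity"),
    ("new hire", "people"),
    ("pricing change", "market behavior"),
    ("platform migration", "implementation complexity"),
    ("roadmap bet", "implementation complexity"),
]

counts = Counter(category for _, category in audit_log)
blind_spot, hits = counts.most_common(1)[0]
print(f"Most frequent blind spot: {blind_spot} ({hits} of {len(audit_log)} audits)")
```

A spreadsheet with the same two columns works just as well; what matters is that the categories stay consistent enough to count.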
Try It This Week
Pick one decision you are currently sitting on — a hire, a vendor, a strategy call, anything. Run all three prompts. Budget 15 minutes.
Notice the difference between what you knew before and what the audit surfaces. That gap is what you were about to act on without examining.
Then reply to this email and tell me what you found. I read every response.
Next issue — The Overnight Run: How to build an AI loop that works while you sleep, so you wake up to results instead of a to-do list.