You have had this experience. You spend 20 minutes briefing an AI on your project — your goals, your constraints, what has already been tried, what matters. It gives you a great answer. You close the tab.
Next week you open a new conversation. You start typing the same context you typed last time. The AI has no idea who you are or what you were working on.
This is not a bug. It is how most AI tools work by default. Every conversation starts fresh. You are not building a research assistant — you are starting over every single day.
Issue #7 is about fixing that.
The Core Problem: Context Amnesia
Large language models do not have persistent memory across conversations. Each session is a blank slate. This is actually a feature for privacy — your conversations are not being stored and cross-contaminated. But it is a workflow problem for anyone doing serious research.
The good news: you can build the memory layer yourself. It is simpler than it sounds.
The Context File (set up once, use forever)
The simplest fix is a document you paste at the start of every conversation. Keep it in a place you can copy from in 5 seconds. Update it when a project status changes or a question gets answered.
Before we start, here is my context:

ABOUT ME: [Your role, expertise level, what you care about]

CURRENT PROJECTS: [2-3 active projects with their current status]

ESTABLISHED FACTS: [Things you have already confirmed that should not be re-litigated]
[e.g., “My target customer is X” or “We decided not to do Y because Z”]

OPEN QUESTIONS: [What you are still figuring out]

HOW I LIKE TO WORK: [Your preferences: bullet points or prose? More detail or less? Direct recommendations or options to evaluate?]
Why this works: The act of maintaining this file forces clarity. When you cannot summarize a project in 2 sentences for the context file, that tells you something.
The difference: Starting from zero vs. starting from where you left off.
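If you keep the context file on disk, the 5-second paste can itself be scripted. Here is a minimal sketch in Python — the file name `context.md` and the `build_prompt` helper are illustrative assumptions, not a required format:

```python
from pathlib import Path

def build_prompt(context_path, question):
    """Prepend the standing context file to a fresh question,
    so every new conversation starts from the same brief."""
    context = Path(context_path).read_text().strip()
    return (
        "Before we start, here is my context:\n\n"
        f"{context}\n\n"
        f"{question}"
    )
```

Print the result and paste it into the chat, or pipe it straight into a command-line chat tool if you use one.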
The Research Log (capture as you go)
Context files work for stable background. But research is live — you are learning things in real time. The research log captures what you discovered and why it matters. Run this at the end of every research session.
At the end of this conversation, summarize:
1. The 3 most important things we established or confirmed
2. Any assumptions we made that should be validated later
3. What the next question is
Format it so I can paste it into my research log.
The result: Within a month you have a structured record of what you know, how you know it, and what still needs answering. Paste the relevant entries into your next session — your AI picks up where it left off.
The compounding: Each session builds on the last instead of starting from scratch.
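The log itself can be a plain markdown file maintained by a small helper. A sketch under assumed conventions — the `## Session` heading and the three-part layout mirror the prompt above, but any structure you will actually keep works:

```python
from datetime import date
from pathlib import Path

def append_log_entry(log_path, established, assumptions, next_question):
    """Append one session's summary to the research log, in the
    three-part format the end-of-session prompt asks the AI for."""
    entry = [
        f"## Session {date.today().isoformat()}",
        "Established:",
        *[f"- {item}" for item in established],
        "Assumptions to validate:",
        *[f"- {item}" for item in assumptions],
        f"Next question: {next_question}",
        "",
    ]
    path = Path(log_path)
    existing = path.read_text() if path.exists() else ""
    path.write_text(existing + "\n".join(entry) + "\n")

def recent_entries(log_path, n=3):
    """Return the last n session entries, ready to paste into a new chat."""
    sections = Path(log_path).read_text().split("## Session ")
    return ["## Session " + s.strip() for s in sections[1:]][-n:]
```

`recent_entries` is what feeds the next session: grab the last few entries and paste them in before your first question.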
The Standing Brief (the version that actually works)
The two prompts above work for manual workflows. At scale, a persistent research assistant looks like this: a single document your AI reads at the start of every session.
# My Research Context — [Topic / Project Name]
Last updated: [date]

## What I am trying to figure out
[The actual question you are researching — specific]

## What I have confirmed so far
- [Fact 1] — Source: [where you learned it]
- [Fact 2] — Source: [where you learned it]

## What I tried that did not work
- [Approach 1] — Why it failed: [brief reason]

## Current best hypothesis
[Your working answer right now, subject to revision]

## Open questions (ranked by priority)
1. [Most important unanswered question]
2. [Second most important]
3. [Third]

## Constraints and non-negotiables
[What cannot change regardless of what you find]
How to use it: Each session — paste this brief, do your research, update the brief, save it. This is not magic. It is structured note-taking that happens to feed an AI.
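The "update the brief" step can also be scripted. A hedged sketch — the `record_fact` name and the section heading it targets are assumptions based on the template above:

```python
import re
from datetime import date
from pathlib import Path

def record_fact(brief_path, fact, source):
    """After a session, add a newly confirmed fact under
    '## What I have confirmed so far' and refresh the date stamp.
    New facts are prepended directly below the heading."""
    path = Path(brief_path)
    text = path.read_text()
    text = re.sub(r"Last updated: \S*",
                  f"Last updated: {date.today().isoformat()}", text, count=1)
    heading = "## What I have confirmed so far"
    text = text.replace(heading, f"{heading}\n- {fact} — Source: {source}", 1)
    path.write_text(text)
    return text
```

The point is not automation for its own sake: a one-line command lowers the friction of keeping the record current, which is the step most people skip.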
The key insight: The AI’s job is to help you think. Your job is to maintain the record of what you have already thought.
What This Looks Like in Practice
Here is a real example. We maintain a memory file that tracks what our AI system has learned across hundreds of sessions. Each entry has a type: user (who we are, how we work), feedback (confirmed approaches), project (current state), or reference (where to find things).
An entry looks like this:
---
name: Research approach for market signals
type: feedback
---
Use yfinance as tertiary data source. Primary: pipeline data.
Why: established and approved March 2026.
How to apply: whenever pulling price data, prefer existing pipeline cache.
You do not need software to do this. A folder of markdown files works. The architecture matters more than the tool.
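A minimal loader for such a folder of markdown entries might look like this — a sketch assuming the `---`-delimited header shown above, with hypothetical function names:

```python
import re
from pathlib import Path

FRONTMATTER = re.compile(r"^---\s*\n(.*?)\n---\s*\n(.*)$", re.DOTALL)

def parse_entry(text):
    """Split a memory entry into its header fields plus body.

    Entries follow the layout shown above: a ----delimited header
    with name: and type: lines, then free-form notes."""
    match = FRONTMATTER.match(text)
    if not match:
        return None
    header, body = match.groups()
    fields = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    fields["body"] = body.strip()
    return fields

def load_memory(folder, entry_type=None):
    """Load every .md entry in the folder, optionally filtered by type
    (user, feedback, project, or reference)."""
    entries = []
    for path in sorted(Path(folder).glob("*.md")):
        entry = parse_entry(path.read_text())
        if entry and (entry_type is None or entry.get("type") == entry_type):
            entries.append(entry)
    return entries
```

Filtering by `type` is the useful part: at the start of a session you can load only the `project` entries for the thing you are working on, instead of pasting the entire memory.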
The Compounding Effect
After 3 months of maintaining a research context, something happens. Your AI conversations get dramatically better — not because the AI got smarter, but because you got better at briefing it.
You stop re-explaining basics. You start building on previous sessions. Your questions get more specific. Your AI gets more useful.
The constraint was never the AI’s capability. It was the quality of your context.
What NOT to Do
Do not try to build a “perfect” memory system before you start. Start with a text file with your current project context. Update it once a week. Add structure when you need it. A system you actually use is infinitely better than a perfect system you abandon.
The goal is not to give the AI perfect context. The goal is to reduce the amount of time you spend re-establishing what you already know.
20 minutes of re-explaining context each session, replaced with a 5-second paste.
AI picks up where it left off — every time.
Try It Today
Pick one project you are actively working on. Create a context file — just the basics: who you are, what you are working on, what you have already decided. Paste it at the start of your next AI conversation. Notice how much less time you spend explaining.
Then reply to this email and tell me how it went. I read every response.
Next issue — The Decision Log: how to use AI to track why you made a decision, so future-you can understand it and your AI can learn from it.