Memory Management Pattern
The Problem
Every API call starts as an empty session. Your agent doesn't know what happened yesterday. Memory Management solves this through structured persistence.
The Three-Tier Memory System in Practice
A productive memory system solves the forgetting problem with three tiers that work together: core knowledge (always loaded), daily logs (session history), and optional vector memory (semantic search).
Tier 1: MEMORY.md – The Core Brain
The most important file in the system. Think of it as the essential briefing loaded at every single start. What goes in? Active projects and their status, important decisions, current business priorities, links to important resources. Rule: Keep it under two hundred lines – a bloated memory file wastes tokens and dilutes important information.
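The two-hundred-line rule is easy to enforce mechanically. A minimal sketch in Python (the file name `MEMORY.md` follows the article; the exact check is an assumption, not a prescribed implementation):

```python
from pathlib import Path

MAX_LINES = 200  # the article's rule of thumb for MEMORY.md

def within_budget(path: Path, max_lines: int = MAX_LINES) -> bool:
    """True if the core memory file stays under the line budget."""
    if not path.exists():
        return True  # nothing stored yet
    return len(path.read_text(encoding="utf-8").splitlines()) <= max_lines

# Example: a 250-line memory file breaks the rule.
memory = Path("MEMORY.md")
memory.write_text("\n".join(f"- note {i}" for i in range(250)), encoding="utf-8")
print(within_budget(memory))
```

A check like this can run at session start and flag the file for cleanup before it starts diluting the context.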
Tier 2: Daily Logs – The Activity Record
While MEMORY.md stores "what matters now", daily logs store "what happened each day". A work journal that writes itself: completed tasks, decisions made, content created. Daily logs enable searchable history, feed weekly review skills, and ensure continuity – the next session can pick up exactly where the last one left off.
Tier 3: Vector Memory – Semantic Search
Optional but powerful. Vector memory enables semantic search across accumulated knowledge – based on meaning, not just keywords. You can ask "What was our approach to customer onboarding?" and the system finds relevant information even when the exact words do not match.
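The mechanics are: embed every note as a vector, embed the query, return the nearest note by cosine similarity. A structure-only sketch; the bag-of-words `embed` below is a toy stand-in (a real system would call an embedding model so that synonyms land near each other, which is what makes the search truly semantic):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

notes = [
    "customer onboarding happens in three guided steps",
    "invoice layout uses the standard template",
]
query = "what was our approach to customer onboarding"
best = max(notes, key=lambda n: cosine(embed(query), embed(n)))
print(best)  # the onboarding note wins
```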
Context Files: Permanent Foundation Knowledge
Memory stores what happens over time. Context files store the permanent foundations – the things about your business that do not change day to day:
- my-business.md – Company profile, mission, target customers, revenue model
- my-voice.md – Communication style, tone, example content
- my-products.md – Product catalog, pricing, features, differentiation
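At session start these files are simply concatenated into the prompt preamble. A minimal sketch, assuming the three file names from the list above sit in the project root (the loader itself is illustrative, not a prescribed API):

```python
from pathlib import Path

CONTEXT_FILES = ["my-business.md", "my-voice.md", "my-products.md"]

def load_context(root: Path = Path(".")) -> str:
    """Concatenate the permanent context files into one preamble."""
    parts = []
    for name in CONTEXT_FILES:
        path = root / name
        if path.exists():  # missing files are simply skipped
            parts.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

Path("my-business.md").write_text("SME consulting in Austria.", encoding="utf-8")
print(load_context())
```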
Technical Implementations
1. CLAUDE.md / PROJECT.md
The simplest method: A Markdown file in the project root containing all important info. Loaded automatically on every run.
```markdown
# Project Context
- Stack: Ollama + n8n + PostgreSQL
- Target audience: SMEs in Austria
- Current Tasks: ...

# Decisions
- Docker Compose for Deployment
- PostgreSQL for data
```
2. Topic Files
For more complex projects: Multiple Markdown files in a /docs directory. Each file covers one topic (Architecture, API, Deployment, etc.).
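Loading topic files is a one-liner over the directory: one file, one topic. A sketch, assuming a flat `/docs` directory of markdown files (the directory layout is the only assumption):

```python
from pathlib import Path

def load_topics(docs: Path = Path("docs")) -> dict[str, str]:
    """Map each topic file in /docs to its content, keyed by file stem."""
    return {
        p.stem: p.read_text(encoding="utf-8")
        for p in sorted(docs.glob("*.md"))
    }

docs = Path("docs")
docs.mkdir(exist_ok=True)
(docs / "architecture.md").write_text("Three-tier memory design.", encoding="utf-8")
(docs / "deployment.md").write_text("Docker Compose setup.", encoding="utf-8")
print(sorted(load_topics()))  # ['architecture', 'deployment']
```

Unlike a single CLAUDE.md, this lets you load only the topics relevant to the current task instead of the whole project context.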
3. Knowledge Graphs
For knowledge bases with millions of tokens: Vector databases like pgvector or ChromaDB. Store documents, code, logs as embeddings and find them with natural language.
When to Use What?
| Method | Tokens | Latency | Complexity |
|---|---|---|---|
| CLAUDE.md | ~8K | 0ms | Minimal |
| Topic Files | ~50K | ~50ms | Low |
| Knowledge Graph | Unlimited | ~200ms | High |
Practice Tip
Start with CLAUDE.md; it covers 90% of projects. You only need a vector database once your knowledge base genuinely grows beyond 50K tokens.
Keeping Memory Clean
Memory maintenance is like keeping a tidy desk. Without regular cleanup, important things get buried under piles of outdated information.
- The two-hundred-line rule for MEMORY.md is the most important: move older entries to the daily logs archive
- Keep MEMORY.md focused on current priorities, active projects, and essential reference information
- Daily logs are automatically organized by date – archive older logs once a month
- Never store secrets, credentials, or API keys in memory files
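The monthly archive step from the list above can be sketched as a small script. Assumed here: logs are named `YYYY-MM-DD.md` as in the daily-log tier, and "archive" means moving them into an `archive/YYYY-MM/` subfolder (both conventions are hypothetical):

```python
from datetime import date
from pathlib import Path

def archive_old_logs(log_dir: Path, today: date) -> list[Path]:
    """Move daily logs from previous months into archive/YYYY-MM/."""
    moved = []
    for log in sorted(log_dir.glob("????-??-??.md")):
        log_month = log.stem[:7]  # "YYYY-MM" from the file name
        if log_month < today.strftime("%Y-%m"):
            target_dir = log_dir / "archive" / log_month
            target_dir.mkdir(parents=True, exist_ok=True)
            target = target_dir / log.name
            log.rename(target)
            moved.append(target)
    return moved

logs = Path("daily")
logs.mkdir(exist_ok=True)
(logs / "2025-01-05.md").write_text("- old entry\n", encoding="utf-8")
(logs / "2025-02-10.md").write_text("- current entry\n", encoding="utf-8")
moved = archive_old_logs(logs, date(2025, 2, 15))
print([p.name for p in moved])  # only the January log is archived
```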
The Real Benefit: Continuity
This is what memory management looks like in practice: Monday you tell the system about a new client project – the details are stored in the daily log and the project is marked active in MEMORY.md. Tuesday you ask for a proposal, and the system already knows the details and writes in your brand voice. Wednesday client feedback arrives; Thursday you ask for the project status and get a complete summary across all days.
Next step: move from knowledge to implementation
If you want more than theory: setups, workflows and templates from real operations for teams that want local, documented AI systems.
- Local and self-hosted by default
- Documented and auditable
- Built from our own runtime
- Made in Austria