CONSOLE

Memory Is Not a Database: How OpenClaw Remembers Everything with 3 Text Files

12 min read · Expert · February 2026

Out of the box, AI agents are amnesiac. Every conversation starts with a blank slate. This is the "Stateless" problem.

To solve this, most engineers reach for the heavy artillery: vector databases (Pinecone, Weaviate), complex RAG pipelines, expensive embeddings.

They're wrong.

OpenClaw solved the infinite memory problem without a single line of SQL and without vectors. Peter Steinberger used the oldest and most boring technology in the world: Markdown files.

If you think you need complex infrastructure to give an AI long-term memory, you haven't looked under the hood. Here's how OpenClaw turns simple text files into a working brain, and how you can copy this architecture tonight.

Source: Google Research — "Context Engineering: Sessions and Memory" (November 2025) — The white paper that theorized this approach by defining the three types of agent memory: episodic, semantic, and procedural.

I. The "Context Window" Lie

Let's start by busting a myth. Your conversation with ChatGPT is not "memory." It's a sliding context.

Imagine you're writing a novel, but you can only see the last 50 pages. Every time you write a new page, page 1 vanishes into the void. That's the Context Window.

When the window is full, the guillotine drops. This is Compaction. The system has to take your history, chop it up, and keep only the essentials to continue.

There are three ways to trigger this guillotine:

  • Count-based: "You've exceeded 8,000 tokens. I'm cutting." The brute-force method.
  • Time-based: "You haven't said anything for an hour. I'm archiving."
  • Semantic: "We're done talking about this topic. I'm cleaning up." The smart method, but hard to code.

The problem? If you cut, you lose the details.

OpenClaw doesn't cut. It moves.
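The three triggers above can be sketched as a single check on a toy context buffer. This is a minimal illustration, not OpenClaw's actual code: the class name, the 8,000-token budget, and the 4-characters-per-token estimate are all assumptions for the example.

```python
import time
from dataclasses import dataclass, field


@dataclass
class ContextBuffer:
    """Toy context buffer illustrating the three compaction triggers."""
    max_tokens: int = 8000        # count-based budget (illustrative)
    idle_seconds: float = 3600.0  # time-based cutoff: one silent hour

    messages: list = field(default_factory=list)
    last_activity: float = field(default_factory=time.monotonic)

    def add(self, text: str) -> None:
        self.messages.append(text)
        self.last_activity = time.monotonic()

    def token_count(self) -> int:
        # Crude estimate: roughly 1 token per 4 characters.
        return sum(len(m) // 4 for m in self.messages)

    def should_compact(self, topic_closed: bool = False):
        if self.token_count() > self.max_tokens:
            return "count"     # brute force: budget exceeded
        if time.monotonic() - self.last_activity > self.idle_seconds:
            return "time"      # idle too long: archive
        if topic_closed:
            return "semantic"  # topic finished: clean up
        return None
```

The semantic trigger is the hard one in practice: detecting "we're done with this topic" usually requires asking the model itself, which is why it's flagged above as a plain boolean the caller must supply.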

II. The Architecture: The Desk and the Cabinet

To understand OpenClaw's memory, forget about computer science. Think of a physical desk.

[Diagram: OpenClaw memory architecture. Session = The Desk (RAM, ephemeral); .md files = The Cabinet (disk, persistent).]
  • The Session (The Desk) — This is the current mess. Notes everywhere, sticky notes, the project in progress. This is your RAM. It's ephemeral.
  • Long-Term Memory (The Cabinet) — This is where you file things away once the project is done. It's organized, it's clean.

The genius of OpenClaw isn't in the cabinet itself (those are just files), but in the moving mechanisms that transport information from the desk to the cabinet at the right time.

III. The 3 Files That Make Up the Brain

No database. Just three Markdown files that the agent reads and writes like a human keeping a diary.

1. memory.md — Identity

This is the user's ID card. It stores stable facts and preferences.

memory.md

# memory.md
Name: Manny
OS: Debian Stable
Preference: No light mode, Windsurf IDE.
Project: Glorics

This file is injected into every prompt. It must be short — recommended under 200 lines. This is what Google calls "Semantic Memory."

2. daily_logs.md — The Logbook

This is short-term memory, organized by day.

Contents: "Today we debugged the Auth API. We fixed bug #402."

It's an append-only file. You never delete, you add to the end. The agent reads today's and yesterday's logs to know what happened recently.

3. session_snapshots/ — The Snapshot

This is raw memory. The last 15 significant messages from a session, saved before a reset.

OpenClaw doesn't save noise. Tool calls, system errors, and /slash commands are stripped out. Only the "User ↔ Assistant" conversation is kept.
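Filtering the noise before a snapshot is a one-pass scan over the message history. The sketch below assumes a common chat-message shape (`role` and `content` keys); the field names are an assumption, not OpenClaw's actual schema.

```python
def snapshot_messages(history: list, keep: int = 15) -> list:
    """Keep only the last `keep` meaningful User <-> Assistant turns."""
    def is_signal(msg: dict) -> bool:
        if msg["role"] not in ("user", "assistant"):
            return False              # drop tool calls and system errors
        if msg["content"].startswith("/"):
            return False              # drop /slash commands
        return True

    return [m for m in history if is_signal(m)][-keep:]
```

Note the order: filter first, then slice. Slicing first would waste snapshot slots on tool chatter that gets discarded anyway.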

IV. The 4 "Flush" Mechanisms

Having files is useless if nobody writes to them. OpenClaw uses 4 precise triggers to save memory before it's lost to compaction.

01

Bootstrap Loading

At the start of every new conversation, the system injects memory.md into the system prompt. The agent is instructed to also read today's and yesterday's daily logs.

The agent starts "warm." It knows who you are and what you did yesterday, without you having to remind it.

02

Pre-Compaction Flush

This is the most elegant mechanism. When the context window is nearly full, OpenClaw injects a system message invisible to the user:

"Warning — I'm about to forget this conversation soon. Save everything important to the daily_log now."

The agent sees this, panics (figuratively), and writes a summary to daily_logs.md. It works like a Write-Ahead Log: you rescue the furniture before the fire reaches it.
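The flush trigger itself is just a threshold check on context usage. A minimal sketch, where the 90% threshold, the warning wording, and the `hidden` flag are all assumptions for illustration, not OpenClaw's exact values:

```python
FLUSH_WARNING = (
    "System: context is nearly full and will be compacted. "
    "Write everything worth keeping to daily_logs.md now."
)


def maybe_inject_flush(messages: list, used: int, budget: int,
                       threshold: float = 0.9) -> bool:
    """Inject a user-invisible system warning when the window is ~90% full."""
    if used >= budget * threshold:
        messages.append({
            "role": "system",
            "content": FLUSH_WARNING,
            "hidden": True,  # shown to the model, never to the user
        })
        return True
    return False
```

The key design choice is firing *before* compaction rather than during it: once the guillotine drops, the details the agent needed to summarize are already gone.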

03

Session Snapshot Hook

When you type /reset or /new, you kill the current session. Before dying, a hook fires:

  • It grabs the last 15 messages.
  • It asks a small LLM to generate a descriptive filename (e.g., debug-api-auth.md).
  • It saves the file to the snapshots folder.

The session is dead, but its ghost is neatly archived.

04

The Explicit Instruction

If you say: "Remember that I prefer Windsurf over VS Code."

The agent doesn't need magic. It has the file_writer tool. It detects this is a preference (Semantic) and writes the line IDE: Windsurf directly to memory.md. Basic routing, driven by the system prompt.

Conclusion: Complexity Is a Trap

OpenClaw proves you don't need vectors to have memory.

  • Claude Code copied it — they also use Markdown files.
  • Google theorized it — White paper "Context Engineering", Nov 2025.
  • Peter Steinberger coded it.

The architecture boils down to three questions:

  1. What? — What's worth noting?
  2. Where? — In memory.md for facts, or daily_logs for events?
  3. When? — At startup, before the cutoff, or on demand?

If you can answer that, you don't need Pinecone. You need a text file.

Build Your Own Persistent Memory

We build autonomous agent systems with persistent memory for businesses.

Talk to an Architect →