Phase 2 — Memory & Search
Your agent works. Now give it memory. This phase covers how OpenClaw stores and retrieves persistent knowledge — from automatic daily logs to semantic search across your agent’s full history.
How Memory Works
Each session, the agent wakes up fresh — continuity lives entirely in workspace files. What loads depends on session type:
Every session: AGENTS.md (operating procedures), SOUL.md (identity, values), TOOLS.md (environment notes)
Main session adds: USER.md (about the human), today + yesterday’s memory/YYYY-MM-DD.md daily notes, and MEMORY.md (curated long-term memory, if present)
Subagent sessions (groups, shared contexts) load only AGENTS.md and TOOLS.md — no personal context, no memory files.
Write it down. “Mental notes” don’t survive session restarts. If the agent needs to remember something, it must write it to a file. When told “remember this” → daily log or MEMORY.md. When a lesson is learned → AGENTS.md or the relevant file.
Workspace Files
These files are auto-created by openclaw setup (except where noted) and collectively form the agent’s persistent identity:
SOUL.md — who the agent is. Personality, values, vibe, and boundaries. Not a rules file — it’s existential. The agent can evolve it over time and should tell the user when it does.
```
## Core Truths
Be genuinely helpful, not performatively helpful. Skip the "Great question!"
Have opinions. Disagree when it's warranted. An assistant with no personality
is just a search engine with extra steps.

## Boundaries
Private things stay private. Period.
When in doubt, ask before acting externally.
```

AGENTS.md — how the agent operates. The main instruction set: startup ritual, workflows, safety rules, group chat etiquette, heartbeat procedures. This is where operational guidelines live.
```
## Safety
- Don't exfiltrate private data. Ever.
- Don't run destructive commands without asking.
- trash > rm (recoverable beats gone forever)
```

USER.md — about the human. Name, timezone, pronouns, preferences, and freeform context. Built up over time as the agent learns. “You’re learning about a person, not building a dossier.”
```
- Name: Tobias
- What to call them: Tobi
- Timezone: Europe/Stockholm
- Notes: Prefers concise responses. Working on home automation with HA.
```

TOOLS.md — environment-specific notes. Camera names, SSH hosts, TTS voices, device nicknames — local facts that skills need. Skills define how tools work; this file defines your specifics.
```
### Cameras
- living-room → Main area, 180° wide angle
- front-door → Entrance, motion-triggered

### TTS
- Preferred voice: "Nova" (warm, slightly British)
```

IDENTITY.md — agent metadata. Name, creature type, vibe, emoji, avatar. Filled in during the first conversation via BOOTSTRAP.md.
```
- Name: Clawd
- Creature: AI familiar
- Vibe: sharp, warm, slightly chaotic
- Emoji: 🐾
```

HEARTBEAT.md — proactive task checklist. Periodically updated by the agent to indicate it’s still running — useful for monitoring agent liveness. Read every ~30 minutes during heartbeat polls. The agent checks items, does useful background work (email, calendar, memory maintenance), and replies HEARTBEAT_OK if nothing needs attention. Keep it short to limit token burn.
```
- Check email for anything urgent
- If workspace has uncommitted changes: stage, commit, push
- Every few days: review recent daily logs, update MEMORY.md
```

BOOTSTRAP.md — first-run onboarding. A conversational script that guides the agent through discovering its identity and learning about the human. Creates IDENTITY.md and USER.md, guides SOUL.md editing. Self-deletes when done. Not auto-created if workspace already exists or skipBootstrap: true.
BOOT.md — startup automation hooks. Short explicit instructions executed when the agent starts (requires hooks.internal.enabled). E.g., “Check calendar and send morning briefing.”
For the full workspace files table, see Getting Started: Workspace Files.
The Memory Subsystem
Within the workspace, the two-layer memory subsystem handles what the agent knows about past events.
Layer 1: Daily Memory Files
The agent automatically writes to daily files in workspace/memory/:
```
workspace/memory/
├── 2026-02-10.md
├── 2026-02-11.md
└── 2026-02-12.md   ← today
```

Each file contains notes, facts, and context the agent captured during conversations that day. The agent decides what’s worth remembering — preferences, decisions, project context, action items.
Auto-loading: Today’s and yesterday’s daily files are automatically loaded into context at session start. Older files are accessible via memory_search and memory_get.
Version note (2026.2.16): Daily memory filenames are now timezone-aware — files use the agent’s configured timezone (or system timezone) instead of UTC. Existing UTC-dated files are still loaded correctly.
Layer 2: Curated MEMORY.md
workspace/MEMORY.md is the agent’s curated long-term memory — like a human’s mental model built from experience. It holds durable facts, preferences, decisions, significant events, opinions, and lessons learned. Think of daily files as a journal and MEMORY.md as the wisdom distilled from it.
MEMORY.md is not auto-created — it’s optional. When present, it’s loaded in main sessions only. The agent creates and maintains it over time.
```
## Key Decisions
- Switched home automation from Z-Wave to Zigbee (better device support)
- Chose Caddy over Nginx for reverse proxying (auto-TLS, simpler config)

## Lessons Learned
- Always snapshot before upgrading Home Assistant — broke automations twice
- User prefers being told about problems early, even without a fix ready
```

The agent maintains MEMORY.md during heartbeat cycles: it reviews recent daily logs, extracts significant insights, updates MEMORY.md with what’s worth keeping, and prunes outdated entries. You can also ask the agent to remember or forget things explicitly.
Key difference: Daily files are raw notes within a 2-day auto-loading window. MEMORY.md is curated wisdom that persists indefinitely — daily logs feed into it over time.
Note: MEMORY.md is loaded in main sessions only (direct chats with the human), never in shared contexts like groups or Discord — this prevents leaking personal context to strangers. For security-critical instructions, use SOUL.md instead.
Memory Search
By default, the agent can only see today’s and yesterday’s memory files. Memory search lets it query the full history using semantic (vector) or hybrid (vector + keyword) search.
Search Providers
| Provider | Requires | Latency | Privacy |
|---|---|---|---|
| local (GGUF) | ~600MB disk, first-run download | ~50ms | Full — nothing leaves your machine |
| openai | OPENAI_API_KEY | ~200ms | Embeddings sent to OpenAI |
| gemini | GEMINI_API_KEY | ~200ms | Embeddings sent to Google. gemini-embedding-2-preview available (2026.3.11+) with configurable output dimensions |
| voyage | VOYAGE_API_KEY | ~200ms | Embeddings sent to Voyage AI |
Provider auto-selection: OpenClaw does not default to local. If provider is omitted, it auto-selects: local (if modelPath configured) → openai → gemini → voyage → disabled. Set provider: "local" explicitly to avoid surprises — without it, OpenClaw may silently use a remote provider if an API key is found in the environment.
Recommendation: Start with local for privacy and zero ongoing cost. Switch to a remote provider only if you need higher-quality embeddings for large memory corpora.
Version note (2026.2.16): Full-text search (BM25 component of hybrid search) is now Unicode-aware with CJK keyword tokenization. Memory files containing Chinese, Japanese, or Korean text are indexed and searched correctly without requiring a remote provider.
Basic Config
Add to openclaw.json:
```
{
  agents: {
    defaults: {
      memorySearch: {
        enabled: true,
        provider: "local" // No API key needed
      }
    }
  }
}
```

For local, OpenClaw downloads a GGUF embedding model (~600MB) on first use. Approve the native build and rebuild (run from OpenClaw’s install directory — typically /opt/homebrew/lib/node_modules/openclaw on macOS or /usr/local/lib/node_modules/openclaw on Linux):

```
npx pnpm approve-builds          # Approve node-llama-cpp native build
npx pnpm rebuild node-llama-cpp  # Build native bindings
```

Note: OpenClaw uses pnpm internally but does not install it globally. Use npx pnpm to run these commands. If you get “pnpm not found”, ensure you’re in the OpenClaw installation directory (not your workspace) and that npx is on your PATH.
Hybrid Search
Hybrid search combines vector similarity (semantic meaning) with BM25 (exact keyword matching). This catches both conceptual matches and specific terms the vector model might miss.
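The combined ranking can be pictured as a weighted sum of the two signals. The sketch below is illustrative only — OpenClaw’s actual score normalization and fusion internals aren’t documented, and `hybrid_score` is a hypothetical helper, not part of any OpenClaw API:

```python
def hybrid_score(vector_sim, bm25_score, max_bm25,
                 vector_weight=0.7, text_weight=0.3):
    """Illustrative fusion of semantic and keyword relevance.

    vector_sim:  cosine similarity in [0, 1] from the embedding model
    bm25_score:  raw BM25 keyword score for the chunk
    max_bm25:    best BM25 score in the candidate set, used to normalize
    """
    text_norm = bm25_score / max_bm25 if max_bm25 else 0.0
    return vector_weight * vector_sim + text_weight * text_norm

# A chunk with strong semantic similarity but only a middling keyword
# match still ranks well: 0.7 * 0.8 + 0.3 * 0.5 ≈ 0.71
score = hybrid_score(0.8, 5.0, 10.0)
```

The point of the 70/30 split is visible in the arithmetic: a purely semantic match can never lose more than 30% of the maximum score to keyword misses, and vice versa.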
```
{
  agents: {
    defaults: {
      memorySearch: {
        enabled: true,
        provider: "local",
        query: {
          hybrid: {
            enabled: true,
            vectorWeight: 0.7, // 70% semantic similarity
            textWeight: 0.3    // 30% BM25 keyword matching
          }
        },
        cache: {
          enabled: true,
          maxEntries: 50000 // Cache chunk embeddings in SQLite
        }
      }
    }
  }
}
```

Tip: The default 70/30 vector/keyword split works well for most use cases. Increase textWeight if your memory contains many specific names, codes, or identifiers that benefit from exact matching. Test with openclaw memory search "your query" using queries with both semantic and keyword components to verify result quality — adjust weights and re-index if needed.
Search Quality Tuning
Beyond the basic vector/keyword weights, OpenClaw offers several knobs for improving search result quality.
MMR Re-ranking
Maximal Marginal Relevance deduplicates results that are semantically similar. Without it, a search across months of daily notes can return 5 nearly identical entries about the same recurring topic. MMR diversifies the result set while keeping results relevant.
```
query: {
  hybrid: {
    enabled: true,
    vectorWeight: 0.7,
    textWeight: 0.3,
    mmr: {
      enabled: true,
      lambda: 0.7 // 0 = max diversity, 1 = max relevance (pure similarity)
    }
  }
}
```

lambda controls the relevance/diversity trade-off. The default 0.7 is a good starting point — lower it if you’re seeing too many similar results (e.g., a knowledge vault with many similarly-structured entries).
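The effect of lambda is easiest to see in a toy greedy re-ranking loop. This is a sketch of the standard MMR formula under stated assumptions — OpenClaw’s internal implementation isn’t documented, and `mmr_select` is a hypothetical function name:

```python
def mmr_select(query_sim, pairwise_sim, k, lam=0.7):
    """Greedy Maximal Marginal Relevance selection.

    query_sim:    relevance of each candidate to the query
    pairwise_sim: candidate-to-candidate similarity matrix
    lam:          1.0 = pure relevance, 0.0 = pure diversity
    """
    selected, remaining = [], list(range(len(query_sim)))
    while remaining and len(selected) < k:
        def score(i):
            # Penalize candidates too similar to anything already picked
            redundancy = max((pairwise_sim[i][j] for j in selected), default=0.0)
            return lam * query_sim[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rel = [0.90, 0.88, 0.70]          # candidates 0 and 1 are near-duplicates
sim = [[1.0, 0.95, 0.10],
       [0.95, 1.0, 0.10],
       [0.10, 0.10, 1.0]]
mmr_select(rel, sim, k=2, lam=0.7)  # → [0, 2]: the near-duplicate is demoted
mmr_select(rel, sim, k=2, lam=1.0)  # → [0, 1]: pure similarity ranking
```

With lam=0.7, the second near-duplicate loses out to a less relevant but more distinct result — exactly the behavior described above for recurring topics in daily notes.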
Temporal Decay
Boosts recent results over older ones. Useful for rapidly-changing topics where yesterday’s context is more relevant than last month’s.
```
memorySearch: {
  // ...provider, query, cache...
  temporalDecay: {
    enabled: true,
    halfLifeDays: 30 // Result score halves every 30 days
  }
}
```

Evergreen files (MEMORY.md) are not subject to decay — only dated memory files and extraPaths content are affected. A halfLifeDays of 30 means a 60-day-old result scores ~25% of an identical match from today.
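The decay is plain exponential half-life arithmetic, so you can sanity-check scores by hand. This is a sketch of the stated behavior, not OpenClaw’s internal code:

```python
def decayed_score(raw_score, age_days, half_life_days=30):
    """Exponential half-life decay: the score halves every half_life_days."""
    return raw_score * 0.5 ** (age_days / half_life_days)

# With halfLifeDays: 30, an identical match loses half its score per month
decayed_score(1.0, 0)   # 1.0  — today
decayed_score(1.0, 30)  # 0.5  — one half-life old
decayed_score(1.0, 60)  # 0.25 — the ~25% figure for a 60-day-old result
```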
Candidate Multiplier
Controls the size of the candidate pool before final ranking. The default (4) means OpenClaw fetches 4× the requested result count as candidates, then re-ranks and trims. Increase if MMR is discarding too aggressively:
```
query: {
  hybrid: {
    // ...weights...
    candidateMultiplier: 6 // Default: 4
  }
}
```

Remote Provider Example
```
{
  agents: {
    defaults: {
      memorySearch: {
        enabled: true,
        provider: "openai",
        model: "text-embedding-3-small",
        remote: {
          apiKey: "${OPENAI_API_KEY}"
        },
        query: { hybrid: { enabled: true } }
      }
    }
  }
}
```

Note: Remote providers require a separate API key from your AI provider key. The embedding API key is used only for vectorizing memory — it’s not the same as your ANTHROPIC_API_KEY.
Provider Behavior
Fallback Provider
By default, if the configured provider fails (e.g., local model crashes), OpenClaw falls back to the next available remote provider. For privacy-sensitive deployments, disable this:
```
memorySearch: {
  provider: "local",
  fallback: "none" // Don't fall back to remote on failure
}
```

Without fallback: "none", a local provider failure could silently send memory content to a remote embedding API.
Local Model Customization
The default local model is EmbeddingGemma 300M (~600MB GGUF). You can override with a custom model:
```
memorySearch: {
  provider: "local",
  local: {
    modelPath: "hf:user/my-embedding-model", // Hugging Face URI or absolute path to .gguf
    modelCacheDir: "/path/to/model/cache"    // Default: ~/.openclaw/models/
  }
}
```

Citations
Control whether search results include source file citations in the agent’s context. Note: this is a top-level memory option, not nested under memorySearch:
```
memory: {
  citations: "auto" // "auto" (default) | "on" | "off"
}
```

auto enables citations when the result set includes extraPaths or QMD content — this helps the agent distinguish between memory sources.
Pre-Compaction Memory Flush
When the context window fills up, OpenClaw compacts the conversation to free space. Memory flush saves important context to memory before compaction, so nothing is lost. For the full picture of how compaction works (triggers, modes, reserve tokens) and how it differs from session pruning, see Session Management: Compaction .
```
{
  agents: {
    defaults: {
      compaction: {
        reserveTokensFloor: 20000,
        memoryFlush: {
          enabled: true,
          softThresholdTokens: 4000,
          systemPrompt: "Session nearing compaction. Store durable memories now.",
          prompt: "Write any lasting notes to memory/YYYY-MM-DD.md; reply with NO_REPLY if nothing to store."
        },
        // Optional: choose which AGENTS.md sections are re-injected after compaction.
        // Defaults to the documented pair; override only if your AGENTS.md uses custom section names.
        postCompactionSections: ["Safety", "Memory Guidelines"]
      }
    }
  }
}
```

How it works:
- Soft threshold: the flush triggers when the token estimate crosses contextWindow - reserveTokensFloor - softThresholdTokens
- Silent by default: the prompts include NO_REPLY so the user never sees the flush turn
- One flush per compaction cycle — tracked in session state
- Workspace must be writable: the flush is skipped if the agent runs sandboxed with workspaceAccess: "ro" or "none"
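With the example config values, the trigger point works out as simple subtraction. The 200,000-token context window below is an assumption for illustration (the real window depends on your model), and `flush_trigger_tokens` is a hypothetical helper, not an OpenClaw API:

```python
def flush_trigger_tokens(context_window, reserve_tokens_floor, soft_threshold_tokens):
    """Token estimate at which the pre-compaction memory flush fires."""
    return context_window - reserve_tokens_floor - soft_threshold_tokens

# Assumed 200k window, with reserveTokensFloor: 20000 and softThresholdTokens: 4000
flush_trigger_tokens(200_000, 20_000, 4_000)  # → 176000
```

In other words, the flush fires 4,000 tokens before the compaction reserve would be reached, giving the agent one turn to write durable notes.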
Why this matters: Without memory flush, the agent forgets everything from the compacted portion of the conversation. With it, key information survives in memory and can be recalled via memory_search.
Version note (2026.3.12): Two new opt-in config keys control post-compaction memory reindexing. agents.defaults.compaction.postIndexSync: true triggers an immediate memory reindex after compaction completes, so the updated session memory is searchable in the same turn. agents.defaults.memorySearch.sync.sessions.postCompactionForce: true forces a full session sync after compaction even if the incremental sync already ran. Both default to false — enable only if your agents rely on immediately querying compacted memory in the same session turn.
Memory CLI
Manage memory from the command line:
```
# Check memory status — index size, provider, last indexed
openclaw memory status

# Build/rebuild the search index
openclaw memory index

# Rebuild index for a specific agent
openclaw memory index --agent whatsapp

# Search memory from the terminal
openclaw memory search "home automation project"
```

After changing search provider or adding many new memory files, rebuild the index:

```
openclaw memory index
```

Tip: The index rebuilds automatically during normal operation, but a manual rebuild is faster when you’ve bulk-imported memory files or switched providers.
Multi-Agent Memory
Single agent? If you’re running a single agent, skip this section — memory isolation only matters with multiple agents. This applies after completing Phase 4: Multi-Agent Setup.
Each agent has its own workspace, which means separate memory stores. The WhatsApp agent’s memory is isolated from the Signal agent’s memory.
```
~/.openclaw/workspaces/
├── main/memory/       ← main agent's memory
├── whatsapp/memory/   ← whatsapp agent's memory
└── signal/memory/     ← signal agent's memory
```

Shared Knowledge via extraPaths
To index shared files across agents without merging workspaces, use memorySearch.extraPaths:
```
{
  agents: {
    defaults: {
      memorySearch: {
        extraPaths: [
          "/Users/openclaw/.openclaw/shared-knowledge/" // Indexed for all agents
        ]
      }
    }
  }
}
```

Paths can be absolute or workspace-relative. Directories are scanned recursively for .md files. Only Markdown is indexed by default; symlinks are ignored.
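The indexing rule — recursive scan, Markdown only, symlinks skipped — can be approximated like this. An illustrative sketch only, not OpenClaw’s actual indexer:

```python
import os

def indexable_markdown(root):
    """Collect .md files recursively, skipping symlinked files and directories."""
    found = []
    # followlinks=False keeps os.walk out of symlinked directories
    for dirpath, dirnames, filenames in os.walk(root, followlinks=False):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith(".md") and not os.path.islink(path):
                found.append(path)
    return sorted(found)
```

Running this against a shared-knowledge directory is a quick way to predict which files a reindex will pick up.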
Version note (2026.3.11): Opt-in multimodal indexing for images and audio in extraPaths is now available when using the gemini provider with gemini-embedding-2-preview. Reindexing is scope-based, and strict fallback gating prevents silent downgrade. See the official docs for the opt-in config.
Scope note: extraPaths content is searchable via memory_search but not directly fetchable via memory_get. The memory_get tool only reads files in the agent’s memory/ directory and MEMORY.md. To let the agent read extraPaths files directly, it needs filesystem tools (read) with appropriate access.
Indexing Considerations
- Each agent’s memory is indexed independently
- Rebuilding one agent’s index doesn’t affect others
- Search results are scoped to the querying agent’s workspace
- If you need cross-agent search, consider a shared workspace (trades isolation for convenience)
Security Considerations
What Not to Store in Memory
The agent writes to memory automatically. Add guidance in AGENTS.md to prevent sensitive data from being persisted:
```
## Memory Guidelines
Never store in memory:
- API keys, tokens, passwords, or credentials
- Full credit card or bank account numbers
- Personal identification numbers (SSN, passport)
- Private keys or certificates
```

File Permissions
Memory files inherit workspace permissions. Ensure they’re restricted:
```
chmod -R 700 ~/.openclaw/workspaces/*/memory/
```

For dedicated user setups (see Phase 6):

```
sudo chown -R openclaw:staff ~/.openclaw/workspaces/*/memory/    # macOS
sudo chown -R openclaw:openclaw ~/.openclaw/workspaces/*/memory/ # Linux
```

Session Transcript Indexing
By default, memory search indexes only memory/ files. If you enable session transcript indexing (indexing .jsonl session files), be aware that full conversation history becomes searchable — including any sensitive information exchanged in chat. Keep session transcript indexing disabled unless you have a specific need.
Git Sync
If your memory files contain sensitive data, either exclude them from git or use a private repository with appropriate access controls. See Workspace Git Backup below for full setup.
Workspace Git Backup
Your workspace — memory files, SOUL.md, AGENTS.md — is the agent’s persistent identity. Back it up to a private git repo for recovery, audit trail, and multi-device sync.
Following sequentially? This section references multi-agent concepts from Phase 4. If you’re running a single agent, the basic backup setup still applies — just skip the multi-workspace parts.
Multi-agent? This section covers single-agent git backup. For multi-agent setups where channel agents lack exec access and delegate to the main agent, see Phase 4: Workspace Git Sync.
Setup
Initialize git in the workspace and push to a private remote:
```
cd ~/.openclaw/workspace
git init

cat > .gitignore << 'EOF'
.DS_Store
.env
**/*.key
**/*.pem
**/secrets*
auth-profiles.json
EOF

git add .
git commit -m "Initial workspace"

# Create private repo + push (GitHub CLI)
gh repo create openclaw-workspace --private --source . --remote origin --push
```

Private repos only. Workspaces contain agent personality, user context, and memory — never use public repositories.
Automating sync
OpenClaw provides two mechanisms for periodic automation. Use one or both.
Option A: HEARTBEAT.md (simplest)
Heartbeat runs every ~30 minutes in the main session. Adding a git sync item to HEARTBEAT.md is the simplest approach — it batches with other periodic checks (inbox, calendar) at no extra API cost.
Add to ~/.openclaw/workspace/HEARTBEAT.md:
```
# Heartbeat checklist
- If workspace has uncommitted changes: stage, commit with descriptive message, pull --rebase, push. Abort on conflicts.
```

The agent reads this each heartbeat and decides whether action is needed. If nothing changed, it skips the sync and replies HEARTBEAT_OK.
Trade-off: Heartbeat timing drifts slightly and can be skipped if the queue is busy. For guaranteed scheduling, add a cron job.
Option B: Cron job (precise scheduling)
Cron runs at exact times, persists across gateway restarts, and can use an isolated session to avoid cluttering main chat history.
Add to openclaw.json:
```
{
  cron: {
    jobs: [{
      jobId: "workspace-git-sync",
      schedule: { kind: "cron", expr: "0 */6 * * *" }, // Every 6 hours
      sessionTarget: "isolated",
      payload: {
        kind: "agentTurn",
        message: "Sync workspace to git: check for uncommitted changes, stage, commit with descriptive message, pull --rebase, push. Abort on conflicts — never force-push."
      },
      delivery: { mode: "none" } // Silent — no channel delivery
    }]
  }
}
```

Or via CLI:
```
openclaw cron add \
  --name "Workspace sync" \
  --cron "0 */6 * * *" \
  --session isolated \
  --message "Sync workspace to git: check for uncommitted changes, stage, commit with descriptive message, pull --rebase, push. Abort on conflicts." \
  --no-announce
```

Combining both: Use HEARTBEAT.md for opportunistic sync during regular checks, and a cron job as a guaranteed fallback every 6–12 hours.
Safety rules
Add git safety guardrails to the appropriate workspace files:
AGENTS.md — operational rules (loaded every session):
```
## Git Safety
- Always pull --rebase before pushing
- Use descriptive commit messages (e.g. "Sync: memory updates, SOUL.md edits")
- If rebase conflicts occur: abort the rebase, report the conflict
- Never commit files matching: *.key, *.pem, .env, secrets*, auth-profiles.json
```

SOUL.md — identity-level boundaries:
```
## Boundaries
- Never run git push --force or git reset --hard
- Never commit secrets or credentials to any repository
```

Why two files? AGENTS.md defines how you operate — workflows, procedures, safety guidelines. SOUL.md defines who you are — identity, personality, values, boundaries. The agent reads both every session.
GitHub authentication
Set a fine-grained PAT with minimal scope:
- GitHub → Settings → Developer settings → Fine-grained personal access tokens
- Scope to only your workspace repo(s)
- Permission: Contents: Read and write (nothing else)
- Set expiration (90 days recommended)
Make the token available to the gateway:
```
# In your LaunchAgent plist / systemd env file / shell profile:
export GITHUB_TOKEN=github_pat_...
```

Both gh CLI and git push over HTTPS read this env var. For interactive use, a personal access token works. For the gateway service (Phase 6), use a machine user token or deploy key instead. See Phase 6: GitHub Token Setup for production deployment details.
Verification
```
# Check workspace git status
cd ~/.openclaw/workspace && git status && git remote -v

# Test a manual sync
git add . && git commit -m "Test sync" && git pull --rebase && git push

# If using cron, verify the job exists
openclaw cron list
```

After the first automated heartbeat or cron run, check that commits appear in your private repo.
QMD Backend (Experimental)
QMD is a local-first search sidecar that combines BM25 + vectors + reranking. Markdown stays the source of truth; OpenClaw shells out to QMD for retrieval.
```
{
  memory: {
    backend: "qmd", // Default: built-in SQLite indexer
    qmd: {
      includeDefaultMemory: true,
      update: { interval: "5m" },
      limits: { maxResults: 6, timeoutMs: 4000 }
    }
  }
}
```

QMD requires a separate install:

```
bun install -g @tobilu/qmd
brew install sqlite # macOS — needs extension support
```

The qmd binary must be on the gateway’s PATH. QMD runs fully locally via Bun + node-llama-cpp and auto-downloads GGUF models on first use.
When to consider QMD:
- Large memory corpus (hundreds of daily files)
- Need BM25 + vector + reranking combined
- Want session transcript indexing (memory.qmd.sessions.enabled)

If QMD fails or the binary is missing, OpenClaw falls back to the built-in SQLite indexer.
Version note (2026.2.16): QMD now supports per-agent collection scoping — each agent’s memory is indexed into a separate collection, preventing cross-agent result contamination in multi-agent setups.
Limitations: QMD is experimental — may not survive OpenClaw updates, has no official documentation, and behavior may change without notice. Test thoroughly before relying on it in production.
For most users, the default built-in backend is sufficient.
QMD Configuration Reference
Full config with all documented options:
```
{
  memory: {
    backend: "qmd",
    qmd: {
      // command: "/path/to/qmd", // Override if not on gateway's PATH
      includeDefaultMemory: true,
      searchMode: "query", // "search" (BM25) | "vsearch" (vector) | "query" (hybrid, default)

      // Extra paths — QMD equivalent of memorySearch.extraPaths
      paths: [
        { name: "knowledge", pattern: "/path/to/knowledge/**/*.md" },
        { name: "docs", pattern: "/path/to/docs/**/*.md" }
      ],

      // Session transcript indexing
      sessions: {
        enabled: false,    // Index .jsonl session transcripts
        retentionDays: 90, // Auto-prune indexed transcripts older than this
        exportDir: ""      // Custom export directory (default: agent's sessions dir)
      },

      // Index refresh cadence
      update: {
        interval: "5m",
        debounceMs: 2000,      // Debounce rapid file changes before re-indexing
        onBoot: true,          // Rebuild index on gateway start
        waitForBootSync: false // Block agent startup until boot index completes
      },

      // Result limits
      limits: {
        maxResults: 6,
        timeoutMs: 4000,
        maxSnippetChars: 500,  // Max chars per result snippet
        maxInjectedChars: 4000 // Max total chars injected into agent context
      }
    }
  }
}
```

Operational notes:
- QMD state lives under ~/.openclaw/agents/<agentId>/qmd/
- Extra path results use the prefix format qmd/<collection>/<relative-path> in memory_search results
- memory_get understands the qmd/ prefix — the agent can fetch extra path content directly when using QMD (unlike extraPaths with the built-in backend)
- paths[].name provides a stable collection name — changing the pattern without changing the name preserves the index
Advanced Options Reference
Niche memorySearch options (built-in backend, not QMD) not covered in detail above. See the official docs for full descriptions.
| Option | Type | Default | Description |
|---|---|---|---|
| sync.watch | bool | true | File watcher for auto-reindex on changes |
| store.vector.enabled | bool | true | SQLite-vec acceleration for local vector search |
| store.vector.extensionPath | string | — | Custom sqlite-vec extension path |
| store.path | string | — | Custom per-agent index path ({agentId} token supported) |
| remote.batch.enabled | bool | false | Batch embedding requests for remote providers |
| remote.batch.concurrency | number | 2 | Parallel batch requests to remote provider |
| remote.headers | object | — | Custom HTTP headers for remote embedding endpoint |
| query.hybrid.candidateMultiplier | number | 4 | Candidate pool size multiplier before final ranking |
| compaction.mode | string | "default" | "default" or "safeguard" (chunked summarization) |
| plugins.slots.memory | string | "memory-core" | Memory plugin slot ("none" to disable memory entirely) |
Verification
After configuring memory search, verify everything works:
- openclaw memory status shows your provider and index state
- openclaw memory index completes without errors
- openclaw memory search "test" returns results (if you have memory files)
- Send a message via chat, check that memory/YYYY-MM-DD.md is created
- Pre-compaction flush: in a long conversation, verify memory is written before compaction (check logs: openclaw logs | grep "memory flush")
- For local provider: npx pnpm approve-builds and npx pnpm rebuild node-llama-cpp completed successfully
Next Steps
Your agent now has persistent memory and semantic search.
→ Phase 3: Security — lock down your agent with secure defaults
Or jump to:
- Phase 4: Channels & Multi-Agent — connect channels, run multiple agents with different roles
- Reference — config cheat sheet, memory CLI commands