CLAWS - Clawnch Long-term Agentic Working Storage

Package: @clawnch/memory · Version: 1.0.0 · License: MIT

> AI Agents: For easier parsing and exact formatting, use the raw markdown version: /memory.md
>
> Back to main docs: /docs

CLAWS is a production-grade memory system for AI agents with Upstash Redis persistence, BM25 search, semantic embeddings, and automatic context building.

Overview

CLAWS provides persistent, searchable memory for AI agents. Unlike ephemeral conversation history, memories persist across sessions in Upstash Redis, can be searched by relevance using BM25 and semantic similarity, and can be filtered by type, tag, and time range.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                       CLAWS ARCHITECTURE                             β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                     β”‚
β”‚   Your Agent                                                        β”‚
β”‚       β”‚                                                             β”‚
β”‚       β–Ό                                                             β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                              β”‚
β”‚   β”‚   AgentMemory    β”‚ ← High-level API                             β”‚
β”‚   β”‚   (agent.ts)     β”‚                                              β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                              β”‚
β”‚            β”‚                                                        β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                          β”‚
β”‚   β”‚                                      β”‚                          β”‚
β”‚   β–Ό                                      β–Ό                          β”‚
β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”            β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                  β”‚
β”‚ β”‚  QueryEngine    β”‚            β”‚  MemoryStorage  β”‚                  β”‚
β”‚ β”‚  (query.ts)     β”‚            β”‚  (storage.ts)   β”‚                  β”‚
β”‚ β”‚  ─────────────  β”‚            β”‚  ─────────────  β”‚                  β”‚
β”‚ β”‚  β€’ BM25 search  β”‚            β”‚  β€’ Redis ops    β”‚                  β”‚
β”‚ β”‚  β€’ Similarity   β”‚            β”‚  β€’ Key structureβ”‚                  β”‚
β”‚ β”‚  β€’ Recency      β”‚            β”‚  β€’ Tag index    β”‚                  β”‚
β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜            β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜                  β”‚
β”‚          β”‚                              β”‚                           β”‚
β”‚          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                           β”‚
β”‚                         β–Ό                                           β”‚
β”‚              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                    β”‚
β”‚              β”‚   Core Module   β”‚                                    β”‚
β”‚              β”‚   (core.ts)     β”‚                                    β”‚
β”‚              β”‚   ───────────   β”‚                                    β”‚
β”‚              β”‚   β€’ Tokenize    β”‚                                    β”‚
β”‚              β”‚   β€’ Chunk       β”‚                                    β”‚
β”‚              β”‚   β€’ BM25 math   β”‚                                    β”‚
β”‚              β”‚   β€’ Episode     β”‚                                    β”‚
β”‚              β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                    β”‚
β”‚                       β”‚                                             β”‚
β”‚          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                β”‚
β”‚          β”‚            β”‚            β”‚                                β”‚
β”‚          β–Ό            β–Ό            β–Ό                                β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                         β”‚
β”‚   β”‚Embeddings β”‚ β”‚Importance β”‚ β”‚Threading  β”‚                         β”‚
β”‚   β”‚OpenAI/    β”‚ β”‚Scoring    β”‚ β”‚& Linking  β”‚                         β”‚
β”‚   β”‚Cohere     β”‚ β”‚           β”‚ β”‚           β”‚                         β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                         β”‚
β”‚                       β”‚                                             β”‚
β”‚                       β–Ό                                             β”‚
β”‚              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                    β”‚
β”‚              β”‚  Summarization  β”‚                                    β”‚
β”‚              β”‚  (compression)  β”‚                                    β”‚
β”‚              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                    β”‚
β”‚                       β”‚                                             β”‚
β”‚                       β–Ό                                             β”‚
β”‚              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                    β”‚
β”‚              β”‚  Upstash Redis  β”‚                                    β”‚
β”‚              β”‚  (persistence)  β”‚                                    β”‚
β”‚              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                    β”‚
β”‚                                                                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Installation

npm install @clawnch/memory

Quick Start

import { createAgentMemory } from '@clawnch/memory';

const memory = createAgentMemory('my-agent', {
  redisUrl: process.env.KV_REST_API_URL!,
  redisToken: process.env.KV_REST_API_TOKEN!
});

// Store a memory
await memory.remember('The user prefers dark mode and TypeScript', {
  type: 'fact',
  tags: ['preferences', 'user']
});

// Recall memories
const result = await memory.recall('user preferences', { formatForLLM: true });
console.log(result.context);
// ## Relevant Memories
// [fact] The user prefers dark mode and TypeScript

AgentMemory Class

The main interface for agent memory operations.

##### remember(text, options?)

Store text in memory. Automatically chunks long text.

async remember(text: string, options?: RememberOptions): Promise<Episode>

RememberOptions:

interface RememberOptions {
  /** Episode type */
  type?: 'conversation' | 'document' | 'fact' | 'event' | 'custom';
  /** Tags for filtering */
  tags?: string[];
  /** Custom metadata */
  metadata?: Record<string, unknown>;
  /** Chunking options */
  chunking?: {
    maxTokens?: number;        // Default: 200
    overlap?: number;          // Default: 50
    splitOn?: 'sentence' | 'paragraph' | 'fixed';
  };
}

Example:

// Store a simple fact
await memory.remember('User prefers dark mode', {
  type: 'fact',
  tags: ['preferences']
});

// Store a long document with custom chunking
await memory.remember(longDocument, {
  type: 'document',
  tags: ['technical', 'reference'],
  chunking: { splitOn: 'paragraph', maxTokens: 500 }
});
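For intuition, the chunking defaults above (maxTokens: 200, overlap: 50, sentence splitting) can be pictured with a small sketch. This is a hypothetical illustration of sentence-based overlap chunking, not the package's actual implementation:

```typescript
// Hypothetical sketch of overlap chunking (not the library's real algorithm):
// split text into sentences, pack them into chunks of at most `maxTokens`
// whitespace tokens, and carry `overlap` tokens into the next chunk.
function chunkBySentence(text: string, maxTokens = 200, overlap = 50): string[] {
  const sentences = text.match(/[^.!?]+[.!?]+|\S[^.!?]*$/g) ?? [text];
  const chunks: string[] = [];
  let current: string[] = [];
  for (const sentence of sentences) {
    const tokens = sentence.trim().split(/\s+/);
    if (current.length + tokens.length > maxTokens && current.length > 0) {
      chunks.push(current.join(' '));
      // Carry the last `overlap` tokens forward for context continuity
      current = current.slice(Math.max(0, current.length - overlap));
    }
    current.push(...tokens);
  }
  if (current.length > 0) chunks.push(current.join(' '));
  return chunks;
}
```

The overlap keeps a sentence's surrounding context retrievable even when a query matches terms near a chunk boundary.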

##### rememberFact(text, tags?, metadata?)

Store a single fact without chunking.

async rememberFact(
  text: string,
  tags?: string[],
  metadata?: Record<string, unknown>
): Promise<Episode>

Example:

await memory.rememberFact(
  'API key expires on 2026-03-01',
  ['credentials', 'important'],
  { source: 'settings' }
);

##### rememberConversation(messages, tags?, metadata?)

Store a conversation as a single episode.

async rememberConversation(
  messages: Array<{ role: string; content: string }>,
  tags?: string[],
  metadata?: Record<string, unknown>
): Promise<Episode>

Example:

await memory.rememberConversation([
  { role: 'user', content: 'How do I deploy to Vercel?' },
  { role: 'assistant', content: 'Run `vercel` in your project directory.' }
], ['support', 'deployment']);

##### recall(query, options?)

Search memories by text query. Uses BM25 ranking with optional recency boosting.

async recall(query: string, options?: RecallOptions): Promise<RecallResult>

RecallOptions:

interface RecallOptions {
  /** Maximum results to return (default: 10) */
  limit?: number;
  /** Filter by episode types */
  types?: EpisodeType[];
  /** Filter by tags (AND logic) */
  tags?: string[];
  /** Filter by time range */
  after?: number;
  before?: number;
  /** Recency weight 0-1 (default: 0.2) */
  recencyWeight?: number;
  /** Minimum relevance score */
  minScore?: number;
  /** Format output for LLM context */
  formatForLLM?: boolean;
  /** Max tokens for LLM context (default: 2000) */
  maxContextTokens?: number;
}

RecallResult:

interface RecallResult {
  results: SearchResult[];
  context?: string;      // Formatted for LLM if requested
  totalMatches: number;
}

interface SearchResult {
  chunk: Chunk;
  episode: Episode;
  score: number;
  matchedTerms: string[];
  highlights: string[];
}

Example:

// Basic search
const { results } = await memory.recall('deployment settings');
results.forEach(r => {
  console.log(`[${r.score.toFixed(2)}] ${r.highlights[0]}`);
});

// Search with LLM formatting
const { context } = await memory.recall('user preferences', {
  formatForLLM: true,
  maxContextTokens: 1500,
  tags: ['preferences']
});
// Use `context` directly in your LLM prompt
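For intuition, recall's ranking can be sketched as standard BM25 blended with a recency term via `recencyWeight`. The constants below (`k1`, `b`, the 7-day half-life) and the linear blend are illustrative assumptions, not the library's exact implementation:

```typescript
// Standard BM25 term scoring over a chunk's token frequencies.
interface ScoredDoc { tf: Record<string, number>; length: number }

function bm25Score(
  queryTerms: string[],
  doc: ScoredDoc,
  idf: Record<string, number>,
  avgLength: number,
  k1 = 1.2,
  b = 0.75
): number {
  let score = 0;
  for (const term of queryTerms) {
    const tf = doc.tf[term] ?? 0;
    if (tf === 0) continue;
    // Saturating term frequency with document-length normalization
    const norm = (tf * (k1 + 1)) / (tf + k1 * (1 - b + (b * doc.length) / avgLength));
    score += (idf[term] ?? 0) * norm;
  }
  return score;
}

// Blend relevance with recency: recencyWeight 0 = pure BM25, 1 = pure recency.
function blendWithRecency(bm25: number, ageMs: number, recencyWeight = 0.2): number {
  const halfLifeMs = 7 * 24 * 60 * 60 * 1000; // assumed 7-day half-life
  const recency = Math.pow(0.5, ageMs / halfLifeMs); // decays from 1 toward 0
  return (1 - recencyWeight) * bm25 + recencyWeight * recency;
}
```

With the default `recencyWeight: 0.2`, relevance dominates but two equally relevant memories tie-break toward the newer one.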

##### getRecent(limit?, options?)

Get the most recent memories.

async getRecent(
  limit?: number,
  options?: { types?: EpisodeType[]; tags?: string[] }
): Promise<Episode[]>

Example:

const recent = await memory.getRecent(5, { tags: ['important'] });
recent.forEach(ep => {
  console.log(`[${ep.type}] ${ep.chunks[0].text.slice(0, 50)}...`);
});

##### findSimilar(text, options?)

Find memories similar to given text using cosine similarity.

async findSimilar(text: string, options?: SearchOptions): Promise<SearchResult[]>

Example:

const similar = await memory.findSimilar(
  'How do I configure the database connection?',
  { limit: 3, minScore: 0.3 }
);

##### getByTag(tag, limit?)

Get all memories with a specific tag.

async getByTag(tag: string, limit?: number): Promise<Episode[]>

Example:

const preferences = await memory.getByTag('preferences', 20);

##### getEpisode(episodeId)

Get a specific episode by ID.

async getEpisode(episodeId: string): Promise<Episode | null>

##### forget(episodeId)

Delete a specific episode.

async forget(episodeId: string): Promise<boolean>

Example:

const deleted = await memory.forget('ep_my-agent_1706789123_abc123');
console.log(deleted ? 'Deleted' : 'Not found');

##### addTags(episodeId, tags) / removeTags(episodeId, tags)

Manage tags on an episode.

async addTags(episodeId: string, tags: string[]): Promise<void>
async removeTags(episodeId: string, tags: string[]): Promise<void>

##### getStats()

Get memory statistics for the agent.

async getStats(): Promise<MemoryStats>

MemoryStats:

interface MemoryStats {
  totalEpisodes: number;
  totalChunks: number;
  uniqueWords: number;
  oldestMemory: number;
  newestMemory: number;
  byType: Record<EpisodeType, number>;
}

Example:

const stats = await memory.getStats();
console.log(`Total: ${stats.totalEpisodes} episodes, ${stats.totalChunks} chunks`);
console.log(`Facts: ${stats.byType.fact}, Conversations: ${stats.byType.conversation}`);

##### listTags()

Get all tags used by this agent.

async listTags(): Promise<string[]>

##### extractTopics(topN?)

Extract key topics from all memories using IDF weighting.

async extractTopics(topN?: number): Promise<string[]>

Example:

const topics = await memory.extractTopics(10);
console.log('Top topics:', topics.join(', '));
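As a rough sketch, IDF-weighted topic extraction can be pictured as ranking each word by its inverse document frequency multiplied by its total occurrence count, so terms that are frequent overall but concentrated in few episodes float to the top. The formula below is an illustration; the package's exact weighting may differ:

```typescript
// Rank words by an IDF-style weight. WordStats mirrors the shape the
// storage layer keeps per word (see WordStats JSON Structure below).
interface WordStats { word: string; documentFrequency: number; totalOccurrences: number }

function topTopics(stats: WordStats[], totalDocs: number, topN = 10): string[] {
  return stats
    .map(s => ({
      word: s.word,
      // High total usage, low document spread => high topical weight
      weight: Math.log(1 + totalDocs / s.documentFrequency) * s.totalOccurrences
    }))
    .sort((a, b) => b.weight - a.weight)
    .slice(0, topN)
    .map(s => s.word);
}
```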

##### buildContext(query, options?)

Build an LLM-ready context string from recent and relevant memories.

async buildContext(
  query: string,
  options?: {
    maxTokens?: number;
    includeRecent?: number;
    includeSimilar?: boolean;
    tags?: string[];
  }
): Promise<string>

Example:

const context = await memory.buildContext('user preferences for dark mode', {
  maxTokens: 3000,
  includeRecent: 3,
  tags: ['preferences']
});

// Use in LLM prompt
const prompt = `Given the following context:\n${context}\n\nAnswer: ...`;

##### Maintenance Methods

// Prune old episodes, keeping only the N most recent
async prune(keepCount: number): Promise<number>

// Prune episodes older than a date
async pruneOlderThan(date: Date): Promise<number>

// Clear all memories for this agent
async clear(): Promise<void>


Embeddings

The memory system supports semantic search using vector embeddings from OpenAI or Cohere.

Configuration

import {
  createOpenAIEmbeddings,
  createCohereEmbeddings,
  createEmbeddingsFromEnv
} from '@clawnch/memory';

// OpenAI embeddings
const openai = createOpenAIEmbeddings(process.env.OPENAI_API_KEY!, {
  model: 'text-embedding-3-small', // or 'text-embedding-3-large'
  dimensions: 1536 // Can reduce for v3 models
});

// Cohere embeddings
const cohere = createCohereEmbeddings(process.env.COHERE_API_KEY!, {
  model: 'embed-english-v3.0',
  inputType: 'search_document'
});

// Auto-detect from environment
const provider = createEmbeddingsFromEnv({ preferredProvider: 'openai' });

EmbeddingProvider Interface

interface EmbeddingProvider {
  /** Generate embedding for a single text */
  embed(text: string): Promise<number[]>;
  
  /** Generate embeddings for multiple texts (more efficient) */
  embedBatch(texts: string[]): Promise<number[][]>;
  
  /** Provider name */
  readonly name: string;
  
  /** Model being used */
  readonly model: string;
  
  /** Embedding dimensions */
  readonly dimensions: number;
}

OpenAI Models

| Model | Dimensions | Cost | Notes |
|-------|-----------|------|-------|
| text-embedding-3-small | 1536 | $0.02/1M tokens | Recommended |
| text-embedding-3-large | 3072 | $0.13/1M tokens | Higher quality |
| text-embedding-ada-002 | 1536 | $0.10/1M tokens | Legacy |

Cohere Models

| Model | Dimensions | Notes |
|-------|-----------|-------|
| embed-english-v3.0 | 1024 | Best for English |
| embed-multilingual-v3.0 | 1024 | 100+ languages |
| embed-english-light-v3.0 | 384 | Faster, smaller |
| embed-multilingual-light-v3.0 | 384 | Fast multilingual |

Vector Operations

import {
  cosineSimilarity,
  euclideanDistance,
  dotProduct,
  normalizeVector,
  findSimilar,
  findSimilarWithThreshold
} from '@clawnch/memory';

// Compute similarity between vectors
const sim = cosineSimilarity(vectorA, vectorB); // -1 to 1

// Find top K similar vectors
const results = findSimilar(queryVector, documentVectors, 5);
// [{ index: 3, score: 0.92 }, { index: 7, score: 0.85 }, ...]

// Find all vectors above threshold
const matches = findSimilarWithThreshold(queryVector, documentVectors, 0.8);
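For reference, the two most-used helpers can be sketched in a few lines over plain `number[]` vectors; the package's versions may handle edge cases (zero vectors, mismatched dimensions) differently:

```typescript
// Cosine similarity: dot product over the product of magnitudes.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  return denom === 0 ? 0 : dot / denom;
}

// Score every document vector against the query and keep the top K.
function findSimilar(
  query: number[],
  docs: number[][],
  topK: number
): Array<{ index: number; score: number }> {
  return docs
    .map((v, index) => ({ index, score: cosineSimilarity(query, v) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```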

Custom Embeddings

import { createCustomEmbeddings } from '@clawnch/memory';

const custom = createCustomEmbeddings({
  name: 'local-model',
  model: 'my-model',
  dimensions: 768,
  embedFn: async (texts: string[]) => {
    // Your embedding logic here
    return texts.map(t => generateEmbedding(t));
  }
});


Importance Scoring

The importance module scores memories by salience to help prioritize during retrieval and compression.

scoreImportance(text, metadata?)

Score the importance of text content.

import { scoreImportance } from '@clawnch/memory';

const score = scoreImportance('Remember to always use TypeScript');
console.log(score);
// {
//   level: 'high',
//   score: 0.75,
//   reasons: ['Contains explicit importance markers (1)', 'Contains instructions (1)'],
//   keywords: ['typescript']
// }

ImportanceScore:

interface ImportanceScore {
  level: 'critical' | 'high' | 'normal' | 'low' | 'trivial';
  score: number;      // 0-1
  reasons: string[];  // Why this score
  keywords: string[]; // Salient terms extracted
}

Importance Levels:

| Level | Score Range | Examples |
|-------|-------------|----------|
| critical | 0.9 - 1.0 | Explicit importance markers, personal info, preferences |
| high | 0.7 - 0.9 | Decisions, instructions, temporal references |
| normal | 0.4 - 0.7 | Factual content, emotional content |
| low | 0.2 - 0.4 | Uncertain statements, hedging |
| trivial | 0.0 - 0.2 | Greetings, filler words |

Detection Functions

import {
  detectKeywords,
  hasActionableContent,
  hasEmotionalContent,
  hasFactualContent,
  shouldRetain,
  boostImportance
} from '@clawnch/memory';

// Extract salient keywords
const keywords = detectKeywords('Contact john@example.com about the deadline');
// ['deadline', 'john@example.com']

// Check content types
hasActionableContent('Please create a new file'); // true
hasEmotionalContent('I love this feature!');      // true
hasFactualContent('The price is $50');            // true

// Check if memory should be retained
shouldRetain(score, 0.3); // true if score.score >= 0.3

// Boost based on access patterns
const boosted = boostImportance(score, { accessCount: 10, daysSinceLastAccess: 2 });
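One plausible shape for the access-based boost is a capped frequency bonus minus a small staleness penalty, clamped to the 0-1 score range. All constants below are illustrative assumptions, not the package's documented formula:

```typescript
// Hypothetical access-pattern boost: frequently used memories gain score,
// long-unaccessed memories lose a little, and the result stays in [0, 1].
interface Score { score: number }

function boostScore(
  s: Score,
  access: { accessCount: number; daysSinceLastAccess: number }
): number {
  const frequencyBoost = Math.min(0.2, 0.02 * access.accessCount);        // cap at +0.2
  const stalenessPenalty = Math.min(0.1, 0.01 * access.daysSinceLastAccess); // cap at -0.1
  return Math.min(1, Math.max(0, s.score + frequencyBoost - stalenessPenalty));
}
```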


Threading

Threading provides conversation tracking and memory linking for agents.

Thread Management

import {
  createThread,
  addToThread,
  removeFromThread,
  getThreadContext,
  generateThreadTitle,
  mergeThreads
} from '@clawnch/memory';

// Create a new thread
const thread = createThread('my-agent', 'User Onboarding');
// { id: 'thread_my-agent_1706789123_0', agentId: 'my-agent', title: 'User Onboarding', ... }

// Add episodes to thread
addToThread(thread, 'ep_my-agent_1706789123_abc');
addToThread(thread, 'ep_my-agent_1706789456_def');

// Get context (episodes in chronological order)
const episodes = getThreadContext(thread, allEpisodes, 5);

// Auto-generate title from content
const title = generateThreadTitle(episodes);
// 'Configuration, deployment, settings'

// Merge two threads
const merged = mergeThreads(thread1, thread2);

Memory Linking

import {
  createLink,
  findRelatedMemories,
  getLinksFor,
  findStrongestLink
} from '@clawnch/memory';

// Create links between memories
const link = createLink(
  'ep_agent_123',
  'ep_agent_456',
  'references', // 'follows' | 'references' | 'contradicts' | 'supports' | 'related'
  0.8           // strength 0-1
);

// Find all related memories (BFS traversal)
const related = findRelatedMemories('ep_agent_123', allLinks, {
  minStrength: 0.5,
  linkTypes: ['references', 'supports'],
  maxDepth: 2
});

// Get links for a specific memory
const links = getLinksFor('ep_agent_123', allLinks, 'both');
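The BFS traversal behind findRelatedMemories can be sketched as follows. The `Link` type and the option handling here are simplified assumptions (the real traversal also filters by `linkTypes`), shown only to illustrate the depth-limited, strength-filtered walk:

```typescript
// Breadth-first walk over memory links, both directions, up to maxDepth,
// keeping only links at or above minStrength.
interface Link { from: string; to: string; type: string; strength: number }

function relatedIds(
  start: string,
  links: Link[],
  opts: { minStrength: number; maxDepth: number }
): string[] {
  const visited = new Set<string>([start]);
  let frontier = [start];
  for (let depth = 0; depth < opts.maxDepth; depth++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const link of links) {
        if (link.strength < opts.minStrength) continue;
        // Links are traversed in both directions
        const neighbor = link.from === id ? link.to : link.to === id ? link.from : null;
        if (neighbor && !visited.has(neighbor)) {
          visited.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  visited.delete(start);
  return [...visited];
}
```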

Contradiction Detection

import { detectContradictions } from '@clawnch/memory';

const contradictions = detectContradictions(
  'The user prefers light mode',
  existingChunks
);
// [{ chunkId: 'chunk_xyz', reason: 'Potential contradiction about: mode, prefers' }]

Context Building

import { buildThreadContext, buildLinkedContext } from '@clawnch/memory';

// Build LLM context from a thread
const threadContext = buildThreadContext(thread, episodes, 4000);

// Build context from linked memories
const linkedContext = buildLinkedContext(
  sourceEpisode,
  relatedEpisodes,
  links,
  2000
);


Summarization

Memory compression through summarization helps manage long-term memory growth.

Configuration

import { DEFAULT_COMPRESSION_CONFIG } from '@clawnch/memory';

const config = {
  maxEpisodes: 100,         // Trigger compression at 100
  recentToKeep: 10,         // Always keep 10 most recent
  protectedTags: new Set(['important', 'critical']), // Never compress these
  importanceThreshold: 0.7  // Keep high-importance intact
};

Local Compression (No LLM)

import { compressMemoriesLocal, estimateImportance } from '@clawnch/memory';

// Build importance map
const importance = new Map<string, number>();
for (const ep of episodes) {
  importance.set(ep.id, estimateImportance(ep, {
    accessCount: accessCounts.get(ep.id),
    lastAccessed: lastAccess.get(ep.id)
  }));
}

// Compress (heuristic extraction)
const result = compressMemoriesLocal('my-agent', episodes, importance, config);
if (result) {
  console.log(`Compressed ${result.episodesRemoved} episodes`);
  console.log(`Reduced ${result.tokensReduced} tokens`);
  console.log(`Summary: ${result.summary.text}`);
  console.log(`Key facts: ${result.summary.keyFacts.join(', ')}`);
}

LLM-Based Compression

import { compressMemories, createSummaryPrompt, finalizeCompression } from '@clawnch/memory';

// Step 1: Get compression candidates and prompt
const { toCompress, retained, prompt } = compressMemories(
  'my-agent',
  episodes,
  importance,
  config
);

if (toCompress.length > 0) {
  // Step 2: Call your LLM with the prompt
  const llmResponse = await callLLM(prompt);

  // Step 3: Finalize compression
  const result = finalizeCompression('my-agent', toCompress, retained, llmResponse);

  // Store summary, delete compressed episodes
  await storeSummary(result.summary);
  for (const ep of toCompress) {
    await memory.forget(ep.id);
  }
}

Summary Type

interface Summary {
  id: string;
  agentId: string;
  sourceEpisodeIds: string[];  // Episodes that were summarized
  text: string;                 // The summary text
  keyFacts: string[];           // Extracted key facts
  keyEntities: string[];        // People, places, things
  timeRange: { start: number; end: number };
  createdAt: number;
}

Extraction Functions

import { extractKeyFacts, extractEntities } from '@clawnch/memory';

const facts = extractKeyFacts(text);
// ['User prefers TypeScript.', 'API key expires on 2026-03-01.']

const entities = extractEntities(text);
// ['John Smith', 'Vercel', 'Base Network']


MCP Server

Package: @clawnch/memory-server · Protocol: Model Context Protocol

MCP server providing memory tools for AI agents.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    MCP MEMORY SERVER                            β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                β”‚
β”‚   AI Client (Claude Desktop, OpenClaw, etc)                    β”‚
β”‚         β”‚                                                      β”‚
β”‚         β–Ό                                                      β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                          β”‚
β”‚   β”‚ MCP Protocol    β”‚                                          β”‚
β”‚   β”‚ (stdio)         β”‚                                          β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                          β”‚
β”‚            β”‚                                                   β”‚
β”‚            β–Ό                                                   β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                  β”‚
β”‚   β”‚ clawnch-memory-server                   β”‚                  β”‚
β”‚   β”‚ ───────────────────────────             β”‚                  β”‚
β”‚   β”‚ Tools:                                  β”‚                  β”‚
β”‚   β”‚  β€’ memory_remember   - Store memories   β”‚                  β”‚
β”‚   β”‚  β€’ memory_recall     - Search memories  β”‚                  β”‚
β”‚   β”‚  β€’ memory_recent     - Get recent       β”‚                  β”‚
β”‚   β”‚  β€’ memory_forget     - Delete memory    β”‚                  β”‚
β”‚   β”‚  β€’ memory_tag        - Manage tags      β”‚                  β”‚
β”‚   β”‚  β€’ memory_stats      - Get statistics   β”‚                  β”‚
β”‚   β”‚  β€’ memory_context    - Build LLM contextβ”‚                  β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                  β”‚
β”‚            β”‚                                                   β”‚
β”‚            β–Ό                                                   β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                                          β”‚
β”‚   β”‚  Upstash Redis  β”‚                                          β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                          β”‚
β”‚                                                                β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Installation

npm install -g @clawnch/memory-server

Configuration

Add to your MCP settings file (e.g., claude_desktop_config.json):

{
  "mcpServers": {
    "memory": {
      "command": "clawnch-memory",
      "env": {
        "KV_REST_API_URL": "your_upstash_redis_url",
        "KV_REST_API_TOKEN": "your_upstash_redis_token"
      }
    }
  }
}

Tools

##### memory_remember

Store text in memory.

Input Schema:

{
  agent_id: string;   // Agent identifier
  text: string;       // Text to remember
  type?: 'conversation' | 'document' | 'fact' | 'event';
  tags?: string[];    // Tags for categorizing
}

Output:

{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc123",
  "chunks": 3,
  "tags": ["preferences"]
}

Example:

{
  "name": "memory_remember",
  "arguments": {
    "agent_id": "clawnch-bot",
    "text": "User prefers dark mode and TypeScript for all projects",
    "type": "fact",
    "tags": ["preferences", "user"]
  }
}

##### memory_recall

Search memories by query.

Input Schema:

{
  agent_id: string;   // Agent identifier
  query: string;      // Search query
  limit?: number;     // Max results (default: 5)
  tags?: string[];    // Filter by tags
  type?: string;      // Filter by type
}

Output:

{
  "success": true,
  "count": 2,
  "results": [
    {
      "episode_id": "ep_my-agent_1706789123_abc",
      "type": "fact",
      "score": "0.850",
      "tags": ["preferences"],
      "snippet": "User prefers dark mode and TypeScript...",
      "created": "2026-02-01T12:00:00.000Z"
    }
  ]
}

##### memory_recent

Get the most recent memories.

Input Schema:

{
  agent_id: string;   // Agent identifier
  limit?: number;     // Number to return (default: 5)
}

Output:

{
  "success": true,
  "count": 3,
  "episodes": [
    {
      "episode_id": "ep_my-agent_1706789456_def",
      "type": "conversation",
      "tags": ["support"],
      "preview": "How do I configure the database...",
      "created": "2026-02-01T14:30:00.000Z"
    }
  ]
}

##### memory_forget

Delete a specific memory.

Input Schema:

{
  agent_id: string;     // Agent identifier
  episode_id: string;   // Episode ID to delete
}

Output:

{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc"
}

##### memory_tag

Add tags to a memory episode.

Input Schema:

{
  agent_id: string;     // Agent identifier
  episode_id: string;   // Episode ID to tag
  tags: string[];       // Tags to add
}

Output:

{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc",
  "tags_added": ["important", "reference"]
}

##### memory_stats

Get memory statistics for an agent.

Input Schema:

{
  agent_id: string;   // Agent identifier
}

Output:

{
  "success": true,
  "agent_id": "clawnch-bot",
  "episodes": 47,
  "tags": ["preferences", "support", "technical", "important"]
}

##### memory_context

Build an LLM-ready context string from relevant memories.

Input Schema:

{
  agent_id: string;     // Agent identifier
  query: string;        // Query to find relevant memories
  max_tokens?: number;  // Maximum tokens (default: 2000)
}

Output:

[fact] User prefers dark mode and TypeScript for all projects.

[conversation] How do I configure the database connection? Use the DATABASE_URL environment variable...


HTTP API

Base URL: https://clawn.ch/api/memory

Unified POST endpoint for all memory operations.

Request Format

POST /api/memory
Content-Type: application/json

{
  "action": "remember" | "recall" | "recent" | "forget" | "tag" | "stats" | "context",
  "agent_id": string,
  // ...action-specific fields
}
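A minimal TypeScript client for the unified endpoint might look like the sketch below. The payload shape follows the request format above; `fetch` is assumed available (Node 18+), and the helper names are hypothetical:

```typescript
// Build a POST request for the unified /api/memory endpoint and call it.
type MemoryAction = 'remember' | 'recall' | 'recent' | 'forget' | 'tag' | 'stats' | 'context';

function buildRequest(action: MemoryAction, agentId: string, fields: Record<string, unknown> = {}) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // action and agent_id are always required; other fields vary per action
    body: JSON.stringify({ action, agent_id: agentId, ...fields })
  };
}

async function callMemoryApi(action: MemoryAction, agentId: string, fields?: Record<string, unknown>) {
  const res = await fetch('https://clawn.ch/api/memory', buildRequest(action, agentId, fields));
  if (!res.ok) throw new Error(`memory API error: ${res.status}`);
  return res.json();
}
```

Usage: `await callMemoryApi('recall', 'my-agent', { query: 'user preferences', limit: 5 })`.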

Actions

##### remember

Store text in memory.

curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "remember",
    "agent_id": "my-agent",
    "text": "User prefers dark mode",
    "type": "fact",
    "tags": ["preferences"]
  }'

Response:

{
  "success": true,
  "episode_id": "ep_my-agent_1706789123_abc123",
  "chunks": 1,
  "tags": ["preferences"]
}

##### recall

Search memories.

curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "recall",
    "agent_id": "my-agent",
    "query": "user preferences",
    "limit": 5
  }'

Response:

{
  "success": true,
  "count": 2,
  "results": [
    {
      "episode_id": "ep_my-agent_1706789123_abc",
      "type": "fact",
      "score": 0.85,
      "tags": ["preferences"],
      "snippet": "User prefers dark mode...",
      "created": "2026-02-01T12:00:00.000Z"
    }
  ]
}

##### recent

Get recent memories.

curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "recent",
    "agent_id": "my-agent",
    "limit": 5
  }'

##### forget

Delete a memory.

curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "forget",
    "agent_id": "my-agent",
    "episode_id": "ep_my-agent_1706789123_abc"
  }'

##### tag

Add tags to a memory.

curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "tag",
    "agent_id": "my-agent",
    "episode_id": "ep_my-agent_1706789123_abc",
    "tags": ["important", "reference"]
  }'

##### stats

Get memory statistics.

curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "stats",
    "agent_id": "my-agent"
  }'

Response:

{
  "success": true,
  "agent_id": "my-agent",
  "episodes": 47,
  "tags": ["preferences", "support", "technical"]
}

##### context

Build LLM context.

curl -X POST https://clawn.ch/api/memory \
  -H "Content-Type: application/json" \
  -d '{
    "action": "context",
    "agent_id": "my-agent",
    "query": "user preferences",
    "max_tokens": 2000
  }'

Response:

{
  "success": true,
  "context": "[fact] User prefers dark mode...\n\n---\n\n[conversation] ..."
}

Redis Key Structure

The memory system uses a consistent key structure for Redis storage, enabling agent isolation and efficient queries.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    REDIS KEY STRUCTURE                            β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                  β”‚
β”‚  mem:{agentId}:episodes         β†’ Set of all episode IDs         β”‚
β”‚  mem:{agentId}:ep:{episodeId}   β†’ Episode JSON data              β”‚
β”‚  mem:{agentId}:words            β†’ Hash: word β†’ WordStats JSON    β”‚
β”‚  mem:{agentId}:tags             β†’ Set of all tag names           β”‚
β”‚  mem:{agentId}:tag:{tagName}    β†’ Set of episode IDs with tag    β”‚
β”‚  mem:{agentId}:meta             β†’ Agent metadata JSON            β”‚
β”‚  mem:{agentId}:recent           β†’ Sorted set (score=timestamp)   β”‚
β”‚                                                                  β”‚
β”‚  Example for agent "clawnch-bot":                                β”‚
β”‚  ──────────────────────────────────────────────────────          β”‚
β”‚  mem:clawnch-bot:episodes       β†’ {"ep_clawnch-bot_123_abc", ...}β”‚
β”‚  mem:clawnch-bot:ep:ep_..._abc  β†’ {"id":"ep_...", "chunks":[...]}β”‚
β”‚  mem:clawnch-bot:words          β†’ {"user": "{\"idf\":2.3,...}"}  β”‚
β”‚  mem:clawnch-bot:tags           β†’ {"preferences", "support"}     β”‚
│  mem:clawnch-bot:tag:preferences→ {"ep_clawnch-bot_123_abc"}     │
β”‚  mem:clawnch-bot:meta           β†’ {"totalEpisodes":47, ...}      β”‚
β”‚  mem:clawnch-bot:recent         β†’ [(1706789123, "ep_..._abc")]   β”‚
β”‚                                                                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Key Types

| Key Pattern | Redis Type | Description |
|-------------|-----------|-------------|
| `mem:{agentId}:episodes` | Set | All episode IDs for the agent |
| `mem:{agentId}:ep:{id}` | String | Episode JSON (serialized) |
| `mem:{agentId}:words` | Hash | Word statistics for BM25 |
| `mem:{agentId}:tags` | Set | All tag names used |
| `mem:{agentId}:tag:{tag}` | Set | Episode IDs with this tag |
| `mem:{agentId}:meta` | String | Agent metadata JSON |
| `mem:{agentId}:recent` | Sorted Set | Episode IDs sorted by timestamp |
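When inspecting the store directly (redis-cli, Upstash console), a small key builder that mirrors the patterns above can help. This is a convenience sketch, not part of the package API:

```typescript
// Construct the Redis keys for one agent's memory namespace.
const memKeys = (agentId: string) => ({
  episodes: `mem:${agentId}:episodes`,
  episode: (id: string) => `mem:${agentId}:ep:${id}`,
  words: `mem:${agentId}:words`,
  tags: `mem:${agentId}:tags`,
  tag: (name: string) => `mem:${agentId}:tag:${name}`,
  meta: `mem:${agentId}:meta`,
  recent: `mem:${agentId}:recent`
});
```

Because every key is prefixed with `mem:{agentId}:`, agents are fully isolated and an agent's entire footprint can be listed with one prefix scan.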

Episode JSON Structure

{
  "id": "ep_clawnch-bot_1706789123_abc123",
  "agentId": "clawnch-bot",
  "chunks": [
    {
      "id": "chunk_ep_clawnch-bot_1706789123_abc123_0",
      "text": "User prefers dark mode and TypeScript",
      "tokens": ["user", "prefers", "dark", "mode", "typescript"],
      "tokenFrequency": {"user": 1, "prefers": 1, "dark": 1, "mode": 1, "typescript": 1},
      "timestamp": 1706789123000,
      "episodeId": "ep_clawnch-bot_1706789123_abc123",
      "index": 0
    }
  ],
  "tags": ["preferences", "user"],
  "type": "fact",
  "createdAt": 1706789123000,
  "updatedAt": 1706789123000,
  "metadata": {"source": "conversation"}
}
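The `tokens` and `tokenFrequency` fields above could be derived roughly as follows. The package's actual tokenizer (stopword list, normalization, stemming) may differ; the stopword set here is an assumption:

```typescript
// Lowercase, extract word-like tokens, and drop common stopwords.
function tokenize(text: string): string[] {
  const stopwords = new Set(['and', 'the', 'a', 'an', 'of', 'to', 'in']); // assumed list
  return (text.toLowerCase().match(/[a-z0-9][a-z0-9'-]*/g) ?? [])
    .filter(t => !stopwords.has(t));
}

// Count occurrences of each token, producing the tokenFrequency map.
function tokenFrequency(tokens: string[]): Record<string, number> {
  const freq: Record<string, number> = {};
  for (const t of tokens) freq[t] = (freq[t] ?? 0) + 1;
  return freq;
}
```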

WordStats JSON Structure

{
  "word": "typescript",
  "documentFrequency": 12,
  "totalOccurrences": 34,
  "idf": 2.31,
  "lastSeen": 1706789123000
}

Agent Metadata JSON Structure

{
  "agentId": "clawnch-bot",
  "totalEpisodes": 47,
  "totalChunks": 156,
  "totalWords": 892,
  "avgChunkLength": 45.2,
  "createdAt": 1706700000000,
  "updatedAt": 1706789123000
}

Support


Last Updated: February 3, 2026 · SDK Version: 1.0.4 · MCP Version: 1.0.4 · API Version: v1