
memE

Chinese documentation (中文文档)

Long-term memory engine for multi-session AI agents.

memE gives an agent a durable memory layer: it extracts long-term facts from conversations, keeps summaries fresh, retrieves relevant history, assembles prompt-ready context, and exposes adapter contracts for host applications such as OpenClaw.

It is designed for assistants that need to remember across sessions without dumping every old message back into the prompt.

Why memE

LLMs do not naturally keep stable memory across sessions. In real agent products this creates predictable failures:

  • user preferences, relationships, project decisions, and task context are forgotten;
  • stale facts remain in circulation after the user changes them;
  • long prompts and coarse summaries accumulate noise and cost;
  • host applications need a clean memory contract instead of custom glue code.

memE treats memory as a runtime system: write, update, retrieve, select, inject, audit, and benchmark.

What It Provides

  • MemoryOSKernel: the main runtime boundary for refresh and retrieval.
  • Write path: semantic / state / temporal extraction, write gates, normalization, deduplication, mutable fact updates, and buffered semantic flush.
  • Read path: retrieval planning, long-term recall, candidate selection, sufficiency checks, rewrite, and prompt block assembly.
  • Context assembly: recent window + conversation summary + long-term memory.
  • Policy compiler: scene goals, important entities, slots, and constraints compiled into runtime policy.
  • Audit traces: structured gate / extract / selection / sufficiency / rewrite events for replay and debugging.
  • Postgres / pgvector adapter for durable storage.
  • Benchmark runner for memory quality, prompt recall, token cost, and latency.
  • OpenClaw adapter API and external memory plugin.
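As an illustration of the context-assembly item above, the three layers (recent window, conversation summary, long-term memory) can be combined into one prompt block. The layout and section headings here are assumptions for the sketch; memE's real prompt format may differ.

```python
def assemble_context(
    recent_messages: list[str],
    summary: str,
    long_term_facts: list[str],
    window: int = 4,
) -> str:
    """Combine the three memory layers into one prompt-ready block."""
    parts: list[str] = []
    if long_term_facts:
        parts.append(
            "## Long-term memory\n"
            + "\n".join(f"- {fact}" for fact in long_term_facts)
        )
    if summary:
        parts.append("## Conversation summary\n" + summary)
    if recent_messages:
        # Only the last `window` messages enter the prompt verbatim;
        # everything older must survive via the summary or long-term facts.
        parts.append("## Recent messages\n" + "\n".join(recent_messages[-window:]))
    return "\n\n".join(parts)
```

The key design point is that the recent window is hard-capped: older context reaches the prompt only in compressed form, which keeps token cost bounded as the session history grows.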

Install

memE uses uv for local development.

Minimal setup:

uv sync

Development setup:

uv sync --extra dev
uv run pytest -q

With Postgres / pgvector:

uv sync --extra dev --extra postgres
docker compose up -d

With model adapters:

uv sync --extra models-openai
uv sync --extra models-dashscope

Quick Start

Run the test suite:

uv run pytest -q

Start a local Postgres-backed control plane:

docker compose up -d

DATABASE_URL=postgresql+psycopg://meme:meme@127.0.0.1:55432/meme \
uv run meme-control-plane --host 127.0.0.1 --port 8000

Validate a benchmark case:

uv run meme-benchmark \
  --case benchmarks/cases/longmemeval/longmemeval_852ce960_retrieval_pure.json \
  --validate-only

Benchmark

memE benchmarks measure:

  • answer accuracy: whether the final answer is correct;
  • long-memory recall: whether required facts are retrieved from long-term memory;
  • prompt recall: whether required facts actually enter the final prompt;
  • token cost and latency across refresh and retrieval.
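The two recall metrics can be sketched as one scoring function applied to different texts: the retrieved memory set for long-memory recall, the final assembled prompt for prompt recall. Substring matching is a simplification for the sketch; the actual runner presumably matches facts more robustly.

```python
def recall(required_facts: list[str], text: str) -> float:
    """Fraction of required facts that appear in `text`.

    Substring match keeps the sketch self-contained; a real benchmark
    would likely use normalized or semantic matching instead.
    """
    if not required_facts:
        return 1.0
    hits = sum(1 for fact in required_facts if fact in text)
    return hits / len(required_facts)


# long-memory recall: recall(facts, retrieved_memory_text)
# prompt recall:      recall(facts, final_prompt_text)
```

Comparing the two scores is what makes the distinction useful: a fact can be retrieved from long-term memory yet still be dropped during candidate selection, so it never enters the final prompt.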

Public benchmark numbers and cross-project comparisons are not published yet. The runner and internal cases are available for development and regression checks; calibrated results will be added after the benchmark setup is documented and reproducible.

See benchmarks/README.md for runner usage.

OpenClaw Integration

memE includes an OpenClaw-specific adapter layer:

  • adapter API: /api/openclaw/*
  • external memory plugin: integrations/openclaw_plugin

Covered paths:

  • recall / read: OpenClaw can search and read memE memory through memory tools;
  • capture / write: OpenClaw can refresh long-term memory after a conversation ends;
  • status / health: the plugin can probe backend state and capabilities.
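A host plugin covering these three paths might conform to a contract along the lines of the sketch below. The method names and the in-memory test double are illustrative assumptions, not the actual plugin interface, which is defined under integrations/openclaw_plugin.

```python
from typing import Protocol


class MemoryBackend(Protocol):
    """Shape of the three covered paths; names are illustrative."""

    def recall(self, query: str) -> list[str]: ...
    def capture(self, transcript: list[str]) -> int: ...
    def status(self) -> dict: ...


class InMemoryBackend:
    """Test double standing in for the HTTP adapter behind /api/openclaw/*."""

    def __init__(self) -> None:
        self._memories: list[str] = []

    def recall(self, query: str) -> list[str]:
        # Real recall is semantic retrieval; substring match suffices here.
        return [m for m in self._memories if query.lower() in m.lower()]

    def capture(self, transcript: list[str]) -> int:
        # Real capture runs extraction and write gates; here every
        # transcript line becomes a memory, and the count is returned.
        self._memories.extend(transcript)
        return len(transcript)

    def status(self) -> dict:
        return {"healthy": True, "memories": len(self._memories)}
```

Coding the host against a narrow contract like this is what lets the plugin swap between a live backend and a test double without touching OpenClaw-side logic.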

Integration docs:

  • docs/openclaw_integration_runbook.md

Project Scope

  • memE is a memory engine and adapter layer, not a full chat product.
  • memory_studio is a local debugging and control-plane UI.
  • OpenClaw integration focuses on memory backend behavior, not every OpenClaw advanced feature.
  • Benchmark data currently focuses on internal cases; public reproducibility material is still being organized.

License

MIT. See LICENSE.
