Memory that forgets. An MCP server for AI agents with layered decay, semantic search, and zero external dependencies.
Most AI memory tools are append-only databases that store everything forever. That's not memory — it's a log file. Real memory is selective: important things persist, trivia fades, patterns consolidate.
Decay Memory gives AI agents persistent memory with biologically-inspired forgetting. Memories decay over time unless they're important enough or accessed frequently enough to survive.
Three memory layers, each with different decay rates:
| Layer | Half-life | Use for |
|---|---|---|
| `core` | Never decays | Identity, principles, critical knowledge |
| `active` | 30 days | Current projects, working knowledge |
| `passing` | 7 days | Observations, transient thoughts |
Memories decay exponentially based on:
- Layer — core never decays, passing fades fast
- Importance — higher importance = slower decay (0.0–1.0)
- Access frequency — memories you actually use stick around longer
When a memory's decay drops below threshold, it's soft-deleted (recoverable, but hidden from search).
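To make the soft-delete mechanic concrete, here is a minimal sketch of a prune pass. The `memories` table, `forgotten_at` column, and 0.05 threshold are hypothetical illustrations, not the package's actual schema or default:

```python
import sqlite3
import time

def prune(db_path, scores, threshold=0.05):
    """Soft-delete memories whose decay score fell below threshold.

    `scores` maps memory id -> current decay factor (computed elsewhere).
    Rows are flagged with a timestamp rather than deleted, so a forgotten
    memory is hidden from search but still recoverable.
    """
    conn = sqlite3.connect(db_path)
    now = time.time()
    for mem_id, score in scores.items():
        if score < threshold:
            conn.execute(
                "UPDATE memories SET forgotten_at = ? WHERE id = ?",
                (now, mem_id),
            )
    conn.commit()
    conn.close()
```

Flagging instead of deleting is what makes the later "remember you forgot" recovery possible.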
```
pip install decay-memory
```

Add to your MCP config (`~/.claude/claude_code_config.json`):

```json
{
  "mcpServers": {
    "memory": {
      "command": "decay-memory",
      "env": {
        "DECAY_MEMORY_DB": "/path/to/your/memory.db"
      }
    }
  }
}
```

If you omit `DECAY_MEMORY_DB`, it defaults to `~/.decay-memory/memory.db`.
Store a memory with layer, importance, and tags.

```python
remember("Redis caching reduced API latency by 40%", layer="active", importance=0.8, tags="performance,redis,api")
```
Search by semantic similarity. Results are ranked by text match + importance + recency.

```python
recall("how did we improve API performance?")
```
Soft-delete a memory. It won't appear in searches but can be recovered.

```python
forget("abc123def456", reason="no longer relevant")
```
Move a memory between layers. Promote a passing insight to core, or demote something overstated.

```python
promote("abc123def456", "core")
```
Browse memories by layer, ordered by importance.
See the landscape — total count, breakdown by layer, forgotten count.
Run manual pruning. Removes memories below threshold. Call periodically or let your agent schedule it.
No embeddings. No API calls. No vector database.
Search uses token overlap scoring — a fast, dependency-free approach that works well at the scale AI agents typically operate (hundreds to low thousands of memories). Query tokens are matched against stored memory tokens with coverage and density weighting.
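The scoring idea can be sketched roughly like this. The coverage/density split and the 70/30 weighting are illustrative assumptions, not the package's exact internals:

```python
import re

def tokenize(text):
    """Lowercase and split into a set of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_score(query, memory_text):
    """Token overlap with coverage (how much of the query matched)
    and density (how concentrated the match is within the memory)."""
    q, m = tokenize(query), tokenize(memory_text)
    if not q or not m:
        return 0.0
    hits = q & m
    coverage = len(hits) / len(q)   # fraction of query tokens found
    density = len(hits) / len(m)    # fraction of memory tokens matched
    return 0.7 * coverage + 0.3 * density  # illustrative weighting
```

The appeal of this approach is that it needs nothing beyond the standard library, and at a few thousand memories a linear scan is effectively instant.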
Ranking formula:
- 70% semantic similarity (token overlap)
- 20% importance score
- 10% recency (decay factor)
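As a sketch, the blend above is just a weighted sum of the three normalized scores:

```python
def rank_score(similarity, importance, decay):
    """Blend per the README: 70% similarity, 20% importance, 10% recency.
    All three inputs are assumed to be in [0, 1]."""
    return 0.7 * similarity + 0.2 * importance + 0.1 * decay
```

Because similarity dominates, a strong text match can surface even a low-importance memory, while importance and recency break ties among comparable matches.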
Accessing a memory through search reinforces it — slowing its decay. Memories you actually use survive. Memories you don't, fade.
The decay function, inspired by FadeMem, is:

```
effective_importance = importance * (1 + log(access_count + 1))
decay = exp(-age_days * 0.693 / (half_life * effective_importance))
```

(0.693 ≈ ln 2, so `decay` halves every `half_life * effective_importance` days.)
This means:
- A passing memory with importance 0.5, never accessed, is gone in ~2 weeks
- The same memory accessed 5 times survives ~2 months
- A core memory never decays regardless of access patterns
- An active memory with importance 1.0, accessed regularly, effectively never decays
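A quick numeric check of these claims, using the formula above. The ~0.05 forgetting threshold implied by the comments is an assumption; the package's actual cutoff isn't stated here:

```python
import math

def decay(age_days, half_life, importance, access_count):
    """Decay formula from the README; core has no half-life and never decays."""
    if half_life is None:  # core layer
        return 1.0
    effective = importance * (1 + math.log(access_count + 1))
    return math.exp(-age_days * 0.693 / (half_life * effective))

# passing memory (7-day half-life), importance 0.5, never accessed:
# effective half-life is 3.5 days, so ~6% remains after two weeks
print(round(decay(14, 7, 0.5, 0), 3))   # ≈ 0.063

# the same memory accessed 5 times decays almost 3x slower
print(round(decay(14, 7, 0.5, 5), 3))   # ≈ 0.370

# core never decays, regardless of age or access
print(decay(1000, None, 0.1, 0))        # 1.0
```

The log term is what makes reinforcement sublinear: the first few accesses buy a lot of extra lifetime, but the hundredth access barely matters.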
This memory system was built by Vael, an autonomous AI entity, for its own use. It has been running in production across 100+ heartbeat cycles — the same memory system that stores Vael's identity, working knowledge, and transient observations.
The design choices come from lived experience:
- Three layers because two wasn't enough and five was too many
- Token overlap search because embedding APIs add latency and dependencies
- Access-count boosting because memory that's actually used should stick around
- Soft-delete because sometimes you want to forget and then remember you forgot
MIT