Global, real-time memory management for Claude Code, Codex, and other LLMs.
A persistent, cross-session memory system that automatically logs your work, tracks decisions, and syncs to a private GitHub repo. Works from any directory. Zero friction.
LLMs lose context between sessions. Agent teams forget prior decisions. Work gets scattered across different conversations. This system:
- Persists context across sessions, machines, and LLM changes
- Auto-logs everything without friction
- Orchestrates agent teams with full historical access
- Prevents knowledge loss through structured archiving
- Works globally from any directory
Perfect for: Deep work, multi-agent analysis, complex projects, security research, data science, anything requiring continuity.
✅ Real-time auto-logging — Every git commit auto-appended to daily logs
✅ Hourly GitHub sync — Background process keeps memory repo up-to-date
✅ Agent orchestration — Deploy parallel AI teams with full context persistence
✅ Session-end summaries — Automatic work summaries when you stop
✅ Global scope — Works from any directory on your machine
✅ Decision tracking — Structured format for capturing architectural choices
✅ Project templates — Copy-paste state, decisions, architecture, risks, research
✅ Easy migration — Export/import to move to new computers or LLMs
✅ One-command setup — A single setup script does everything
```shell
git clone https://github.com/c0vertbyte/llm-memory-system.git
cd llm-memory-system
bash setup.sh
```

This does everything:
- Creates private GitHub repo
- Clones to ~/.claude/claude-memory
- Sets up global git hooks
- Configures Claude Code settings
- Starts hourly sync loop
```shell
# Just code normally
cd ~/my-project
git commit -m "Implement feature X"

# ✓ Automatically logged to ~/.claude/daily-logs/YYYY-MM-DD.md
# ✓ GitHub synced every hour
```

Done. It's automatic from here.
1. You make a commit (any directory)
↓
2. post-commit hook fires automatically
↓
3. Logged to ~/.claude/daily-logs/YYYY-MM-DD.md
↓
4. Every 60 minutes: hourly-sync loop detects changes → commits → pushes
↓
5. Session ends → session-end summary appended
- Commit hash, message, files changed
- Project name and timestamp
- Session summaries (auto-appended on exit)
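The hook itself can be sketched in a few lines of shell. This is a hypothetical reconstruction, not the shipped `hooks/` script, and the log-line format is an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the post-commit hook (log-line format is an assumption).

log_commit() {
  # Append one line describing HEAD of the repo at $1 to a daily log under $2
  local repo="$1" log_dir="$2"
  local log_file hash msg project
  log_file="$log_dir/$(date +%Y-%m-%d).md"
  mkdir -p "$log_dir"
  hash=$(git -C "$repo" rev-parse --short HEAD)
  msg=$(git -C "$repo" log -1 --pretty=%s)
  project=$(basename "$(git -C "$repo" rev-parse --show-toplevel)")
  printf '%s\n' "- $project $hash: $msg" >> "$log_file"
}

# Installed as a global hook it would effectively run:
#   log_commit "$PWD" "$HOME/.claude/daily-logs"
```

Because the hook only appends one line per commit, it adds no noticeable latency to `git commit`.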
- Hourly background process checks for changes
- If changes exist: `git add -A`, commit, push
- Handles offline gracefully (retries next hour)
- View sync activity in `~/.claude/daily-logs/.sync.log`
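One pass of the sync loop might look like the following sketch (a hypothetical reconstruction of hourly-sync.sh, not the shipped script):

```shell
# Hypothetical sketch of one pass of the hourly sync loop.
sync_once() {
  local repo="$1"
  # Only commit and push when the working tree actually changed
  if [ -n "$(git -C "$repo" status --porcelain)" ]; then
    git -C "$repo" add -A
    git -C "$repo" commit -q -m "auto-sync $(date +'%Y-%m-%d %H:%M')"
    # Offline is non-fatal: the next hourly pass retries the push
    git -C "$repo" push -q 2>/dev/null || echo "push failed; retrying next hour"
  fi
}

# The background process is essentially:
#   while true; do sync_once "$HOME/.claude/claude-memory"; sleep 3600; done
```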
Deploy multiple AI agents that work in parallel with full context persistence.
Single-agent workflows lose context between tasks. Multi-agent systems require manual context passing. This system automatically maintains context across agent lifecycles via remote memory.
1. You trigger: "I want you to deploy an agent team"
↓
2. Lead agent creates context files:
- context.md (what's being asked, scope)
- project-context.md (overview, known issues, prior work)
↓
3. Lead spawns 2-3 specialist agents in parallel
↓
4. Agents claim tasks from shared task list
↓
5. Each agent works independently:
- Pulls recent debates & findings from remote repo
- Analyzes with full historical context
- Writes reasoning logs (forced by hook)
↓
6. Lead synthesizes findings:
- Reads all agent reasoning logs
- Writes synthesis.md
- Extracts patterns → knowledge/patterns/
↓
7. Findings promoted to long-term memory
- Debate archived with full context
- Next agent team can reference it
- Zero context loss across sessions
↓
8. Agents shut down gracefully
- Parallel execution — Multiple agents work simultaneously on different aspects
- Full context persistence — Agents pull prior debates, findings, decisions from remote repo
- Forced reasoning — Two-pass hook system ensures agents write reasoning logs before idle
- Debate archiving — All agent discussions captured in `knowledge/debates/YYYY-MM-DD-topic/`
- Knowledge promotion — 9-point quality gate ensures only high-quality findings enter long-term memory
- Zero setup per task — Context files created once, agents access automatically
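Step 2 of the flow above (the lead agent creating context files) can be sketched as follows. The file names `context.md` and `project-context.md` come from the flow; the helper name and file contents are illustrative assumptions:

```shell
# Hypothetical sketch: lead agent bootstrapping shared context for a debate.
init_agent_context() {
  local debate_dir="$1" request="$2"
  mkdir -p "$debate_dir"
  # context.md: what's being asked, and its scope
  printf '# Context\n\n%s\n' "$request" > "$debate_dir/context.md"
  # project-context.md: overview, known issues, prior work (filled from the memory repo)
  printf '# Project context\n\n(pulled from prior debates and findings)\n' \
    > "$debate_dir/project-context.md"
}
```

Because the files live in the synced memory repo, every spawned agent reads the same context without manual passing.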
```shell
# Start in your project
cd ~/my-complex-system

# Ask for an agent team
"I want you to deploy an agent team to audit this authentication system"

# System automatically:
# 1. Creates context files with project history
# 2. Spawns secure-code-reviewer, threat-modeler, architecture-reviewer agents
# 3. Agents pull prior security reviews from knowledge/debates/
# 4. Each agent analyzes from their specialty
# 5. All findings logged with reasoning
# 6. Lead synthesizes findings → knowledge base
# 7. Next team can reference this work immediately

# Result: Full audit with complete context trail
```

Traditional multi-agent systems:
- Agents start fresh each session (context loss)
- Manual context passing between agents
- Findings scattered across different sessions
- No debate archive (why decisions were made)
- No quality gate (low-signal information in memory)
This system:
- ✅ Automatic context loading from remote repo
- ✅ Shared task list and findings across agents
- ✅ Full debate archives (reasoning preserved)
- ✅ 9-point quality gate (high signal, low noise)
- ✅ Next agent team builds on prior work
- ✅ Institutional memory that grows over time
```
~/.claude/
  claude-memory/            ← Your memory repo (private or public)
    profile/
      identity.md           ← Who you are
      preferences.md        ← How you work
      workflow.md           ← Current setup
    projects/
      project-name/
        state.md            ← Current status
        decisions.md        ← Append-only decisions log
        next.md             ← Next actions
        architecture.md     ← Systems overview
        research.md         ← Distilled research
        risks.md            ← Blockers and tradeoffs
      _template/            ← Copy for new projects
    knowledge/
      index.md              ← Compressed patterns & lessons
      patterns/             ← Design patterns
      lessons/              ← What you learned
      reviews/              ← Deep architectural reviews
      debates/              ← Design decisions debated
    daily-logs/             ← Auto-populated work logs
    hooks/                  ← Automation scripts
  daily-logs/               ← Symbolic link to memory repo logs
  settings.json             ← Claude Code hooks configured
  CLAUDE.md                 ← Local runtime rules
  GLOBAL.md                 ← Automation details
```
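Starting a new project is just copying `projects/_template/`. A minimal sketch (the `new_project` helper is hypothetical, not part of the repo):

```shell
# Hypothetical helper: copy projects/_template/ to a new project directory.
new_project() {
  local projects_dir="$1" name="$2"
  cp -r "$projects_dir/_template" "$projects_dir/$name"
}

# e.g. new_project "$HOME/.claude/claude-memory/projects" my-new-app
```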
- Creates private GitHub repo `{username}/llm-memory`
- Clones to `~/.claude/claude-memory`
- Creates `~/.claude/hooks/` with scripts
- Configures global git hooks (`~/.git-hooks/`)
- Updates `~/.claude/settings.json` with hooks
- Starts hourly sync loop (`nohup`)
See SETUP_MANUAL.md
```shell
bash export.sh
# Creates: llm-memory-backup-YYYY-MM-DD.tar.gz
# Size: ~10MB (includes full history)
```

Copy the backup file to the new machine, then:

```shell
bash import.sh llm-memory-backup-YYYY-MM-DD.tar.gz
```

Done. All your projects, decisions, and history are restored, and the sync loop starts automatically.
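Under the hood, the export/import pair is essentially a tarball round-trip of the memory repo, including its git history. A minimal sketch under that assumption (the helper names are hypothetical, not the shipped scripts):

```shell
# Hypothetical sketch of the core of export.sh / import.sh.
backup_memory() {
  local src="$1" out="$2"
  # Archive the memory repo directory, .git history included
  tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
}

restore_memory() {
  local archive="$1" dest_parent="$2"
  mkdir -p "$dest_parent"
  tar -xzf "$archive" -C "$dest_parent"
}
```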
```shell
# Edit ~/.claude/hooks/hourly-sync.sh
SYNC_INTERVAL=1800  # 30 minutes instead of 60
```

Comment out the `git push` line in hourly-sync.sh for manual-only sync.
Edit ~/.claude/claude-memory/profile/ to reflect your working style.
Modify post-commit hook to skip logging in certain directories.
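A hypothetical guard for that (the directory patterns are examples, not defaults):

```shell
# Hypothetical: decide whether the post-commit hook should skip logging for a path.
should_skip_logging() {
  case "$1" in
    "$HOME"/scratch/*|"$HOME"/tmp/*) return 0 ;;  # excluded directories (examples)
    *) return 1 ;;
  esac
}

# At the top of the hook: should_skip_logging "$PWD" && exit 0
```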
```shell
# Restart the sync loop if it isn't running:
pgrep -f "hourly-sync.sh" || \
  nohup ~/.claude/hooks/hourly-sync.sh > ~/.claude/daily-logs/.sync.log 2>&1 &
```

```shell
# Commits not being logged? Check the global hooks path:
git config --global core.hooksPath ~/.git-hooks
chmod +x ~/.git-hooks/post-commit

# Make a test commit, then:
cat ~/.claude/daily-logs/$(date +%Y-%m-%d).md

# Check for errors:
bash ~/.git-hooks/post-commit
```

```shell
# Sync not pushing? Inspect the sync log:
tail ~/.claude/daily-logs/.sync.log

# Check git credentials:
gh auth status
```

The system automatically persists:
- ✅ Daily logs (appended to memory repo)
- ✅ Project state (synced hourly)
- ✅ Decisions and architecture (tracked in decisions.md)
- ✅ Session history (via git log)
To switch to a new Claude instance or LLM:
- Run `export.sh` on the current machine
- Move the backup to the new machine/instance
- Run `import.sh`
- Everything is restored with full history
This system is tool-agnostic. It works with:
- Claude Code (primary)
- Codex / GitHub Copilot
- Claude API (custom applications)
- Any future LLM with file access
Just export your memory repo and import into the new environment.
- setup.sh — One-command setup
- export.sh — Backup for migration
- import.sh — Restore from backup
- SETUP_MANUAL.md — Manual setup instructions
- MIGRATION.md — Detailed migration guide
- CLAUDE.md.template — Template for local rules
- GLOBAL.md.template — Template for automation docs
- hooks/ — All automation scripts
- profiles/ — Example profiles
MIT — Use, modify, share freely.
See the included documentation:
- SETUP_MANUAL.md — Step-by-step setup
- MIGRATION.md — Moving to new computers
- CUSTOMIZATION.md — Adjusting for your workflow
- TROUBLESHOOTING.md — Common issues
- README.md — Overview and quick start
- CLAUDE.md — Master instruction template (customize for your domain)
- MIGRATION.md — How to move between computers
- SETUP_MANUAL.md — Step-by-step setup guide
- setup.sh — One-command setup (creates repo, clones, configures hooks)
- export.sh — Backup your setup for migration
- import.sh — Restore from backup on new machine
- hooks/ — 9 automation scripts ready to use
- rules/ — 2 generic rule templates (quality standards, memory gates)
- agents/ — Agent template for defining specialist roles
- .gitignore — Exclude session state and credentials
- profiles/ — Example user profile structure
- projects/_template/ — Template for new projects
Replace the generic rules with security-specific ones. Examples:
- Source grounding for fact verification
- Dual-use boundaries (offensive/defensive)
- Data redaction (PII, secrets)
- Incident response protocols
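For instance, a source-grounding rule file might look like this. The structure and wording are illustrative, not a shipped template:

```markdown
# Rule: source grounding

- Every factual security claim must cite a primary source (advisory, CVE, vendor doc).
- Unverified claims are marked UNVERIFIED and never promoted past the quality gate.
- Redact PII and secrets before anything is written to the memory repo.
```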
Adapt rules for:
- Model validation standards
- Data privacy and handling
- Experimental methodology
- Reproducibility gates
Customize for:
- Code review standards
- Testing requirements
- Performance benchmarks
- Deployment safety
Ready? → `bash setup.sh`