FAQ
Frequently asked questions about SuperLocalMemory V3.
SuperLocalMemory is a persistent memory system for AI assistants. It stores your decisions, bug fixes, project context, and preferences locally, then automatically provides them to your AI in future sessions. Your AI stops forgetting you.
Yes. SuperLocalMemory is open-source (MIT license) and completely free. No usage limits, no credit system, no subscription. Forever.
All data is stored locally in a SQLite database at ~/.superlocalmemory/memory.db. In Mode A and Mode B, your data never leaves your machine. In Mode C, query data is sent to your configured cloud LLM provider.
17+ IDEs including Claude Code, Cursor, VS Code (with MCP extension), Windsurf, Gemini CLI, JetBrains IDEs (IntelliJ, PyCharm, WebStorm), Continue.dev, Zed, and any IDE that supports the Model Context Protocol.
Mode A and Mode B work fully offline. Mode C requires internet for the cloud LLM.
- Python 3.11+ (required for V3 engine)
- Node.js 14+ (if installing via npm)
- Any supported IDE
- For Mode B: Ollama with a pulled model
- For Mode C: API key for your cloud LLM provider
# npm (recommended)
npm install -g superlocalmemory
slm setup
slm warmup # Optional — pre-download embedding model
# or pip
pip install superlocalmemory
slm setup

# To update to the latest version:
npm install -g superlocalmemory@latest
# or: pip install --upgrade superlocalmemory

No, updating does not erase your data. Run slm migrate after updating. All memories, profiles, and settings are preserved, and a backup is created automatically. See Migration from V2 for details.
When you start a conversation in your IDE, SuperLocalMemory automatically retrieves relevant memories and injects them into your AI's context. You do not need to call "recall" explicitly — it happens in the background via the MCP server.
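Under the hood, MCP messages are plain JSON-RPC 2.0. As a rough illustration of what the IDE's MCP client sends to the memory server, here is a hypothetical tools/call request. The tool name ("recall") and argument names are illustrative assumptions, not the documented SuperLocalMemory schema:

```python
import json

# Hypothetical MCP (JSON-RPC 2.0) request an IDE's MCP client could send.
# Tool name and argument names here are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "recall",  # assumed tool name, not confirmed by the docs
        "arguments": {"query": "deploy configuration", "limit": 5},
    },
}
message = json.dumps(request)
print(message)
```

The point is simply that recall happens as a background tool call; you never type it into the chat yourself.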
slm remember "The deploy script needs AWS_REGION set to us-east-1"
slm recall "deploy configuration"
slm trace "deploy configuration"

The trace command shows per-channel scores (Semantic, BM25, Entity Graph, Temporal) for each result.
slm forget "search query" # Delete matching memories (with confirmation)

- Mode A if you need privacy, compliance, or offline operation
- Mode B if you want composed answers and have a capable machine (16GB+ RAM)
- Mode C if you want maximum accuracy and cloud access is acceptable
Yes: slm mode a, slm mode b, or slm mode c. Your memories are shared across all modes.
On the LoCoMo benchmark:
- Mode A: 74.8% retrieval accuracy (zero cloud, highest local-first score reported)
- Mode C: 87.7% (cloud LLM, competitive with funded systems)
- Mathematical layers contribute +12.7pp average improvement
No. Your database is a local file on your machine. It is not synced, uploaded, or shared with anyone — including us.
Mode A and Mode B are compliant by architecture — data never leaves your device during any memory operation. Mode C requires a Data Processing Agreement with your cloud provider.
The database is a standard SQLite file at ~/.superlocalmemory/memory.db. You can copy it, back it up, or query it directly.
slm forget "query" deletes matching memories. To delete everything, remove the database: rm ~/.superlocalmemory/memory.db.
- Check that SuperLocalMemory is running: slm status
- Check that you have stored memories: slm recall "test"
- Verify your IDE connection: restart the IDE after configuring MCP
- Check the active profile: slm profile list
Try more specific queries. Use slm trace "query" to see which channels contribute — this helps diagnose whether the issue is semantic, keyword, or entity matching.
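To make the per-channel scores concrete, here is a toy sketch of how four channel scores could be fused into a single ranking score. The weights and the linear combination are invented for illustration; they are not the actual SuperLocalMemory formula:

```python
def fuse_scores(channels: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-channel retrieval scores (illustrative only)."""
    return sum(weights[name] * score for name, score in channels.items())

# Hypothetical weights; the real engine's weighting is not documented here.
weights = {"semantic": 0.4, "bm25": 0.3, "entity": 0.2, "temporal": 0.1}
result = {"semantic": 0.82, "bm25": 0.55, "entity": 0.10, "temporal": 0.90}
print(round(fuse_scores(result, weights), 3))  # -> 0.603
```

Reading trace output with this model in mind helps: a result with a high semantic score but near-zero BM25 and entity scores suggests rephrasing the query with more concrete keywords or entity names.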
Use manual configuration. See IDE Setup for per-IDE config paths.
Open an issue at github.com/qualixar/superlocalmemory/issues.
Part of Qualixar | Created by Varun Pratap Bhardwaj