A machine that knows the answers to all questions, combining Interlaced Thinking (Kimi-k2) with a local knowledge base.
- Interlaced Thinking: Uses the Moonshot Kimi-k2-thinking model to reason about problems.
- Local Knowledge Base: Integrates with a local LLM (e.g., via Ollama/LocalAI) to query private data.
- Rich Terminal UI: Displays the thinking process and final answer beautifully in the terminal.
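The local knowledge-base feature relies on an OpenAI-compatible endpoint such as the one Ollama exposes. Below is a minimal sketch of what such a query looks like at the HTTP level, assuming the default base URL mentioned later in this README; the function names here are illustrative and are not part of this package's API:

```python
import json
from urllib import request

# Ollama's OpenAI-compatible endpoint (assumed default, see Configuration)
OLLAMA_BASE = "http://localhost:11434/v1"

def build_kb_query(model: str, question: str) -> dict:
    """Build an OpenAI-style chat payload for a local knowledge-base query."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer only from local knowledge."},
            {"role": "user", "content": question},
        ],
    }

def ask_local_kb(model: str, question: str) -> str:
    """Send the payload to the local LLM and return its answer text."""
    payload = build_kb_query(model, question)
    req = request.Request(
        f"{OLLAMA_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Keeping payload construction (`build_kb_query`) separate from transport (`ask_local_kb`) makes the query shape easy to inspect without a running server.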
Install with:

```shell
pip install .
```

Prerequisites:

- Moonshot API Key: Set the `MOONSHOT_API_KEY` environment variable.
- Local LLM: Ensure a local LLM is running (e.g., Ollama on port 11434).
Run the machine from the command line:
```shell
python -m knowing_machine "What is the secret contained in my local notes?" --kb-model llama3
```

Or use the Python API directly:

```python
import asyncio
from knowing_machine import KimiClient, LocalLMKnowledgeBaseTool, InterlacedThinker

async def main():
    client = KimiClient()  # Reads MOONSHOT_API_KEY env var
    kb = LocalLMKnowledgeBaseTool(model="llama3")
    machine = InterlacedThinker(client, [kb])
    await machine.think("Your complex query here")

if __name__ == "__main__":
    asyncio.run(main())
```

Configuration:

- `MOONSHOT_API_KEY`: Required.
- Local tool default base URL: `http://localhost:11434/v1` (compatible with Ollama).
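A quick way to sanity-check these two settings before running the machine is sketched below. Note that `validate_config` and the `KB_BASE_URL` override variable are hypothetical helpers for illustration, not part of this package:

```python
import os

def validate_config(api_key, base_url: str) -> list:
    """Return a list of configuration problems (empty when everything looks fine)."""
    problems = []
    if not api_key:
        problems.append("MOONSHOT_API_KEY is not set")
    if not base_url.startswith(("http://", "https://")):
        problems.append(f"base URL looks invalid: {base_url}")
    return problems

if __name__ == "__main__":
    # Read values the way the machine presumably would; KB_BASE_URL is an
    # assumed override variable, defaulting to the Ollama endpoint above.
    issues = validate_config(
        os.environ.get("MOONSHOT_API_KEY"),
        os.environ.get("KB_BASE_URL", "http://localhost:11434/v1"),
    )
    for issue in issues:
        print(f"warning: {issue}")
```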