A production-style reflection agent backend built with LangGraph StateGraph, FastAPI streaming, Tavily search, and remote PostgreSQL memory.
It is designed for multi-turn research conversations where the agent searches, critiques, and improves answers before responding.
- Reflection-first graph workflow with an iterative loop: `researcher -> tools -> critique -> researcher/END`.
- Tavily-powered `web_search` tool integrated through `ToolNode` for real-time web context.
- Streaming chat endpoint (`/chat/stream`) using SSE for tool events and final response delivery.
- Remote PostgreSQL checkpointer for thread-level LangGraph state persistence.
- Async PostgreSQL connection pooling for efficient DB usage across concurrent requests.
- Conversation history persisted to the `conversation_history` table.
- Session state persisted to the `session_store` table via sync hooks.
- Built-in browser UI at `/web` for quick end-to-end testing without a separate frontend.
- Health endpoint (`/health`) for runtime checks and container health probes.
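The critique step's conditional routing back to the researcher (or to END) can be sketched as plain Python. The names here are illustrative assumptions, not the project's actual identifiers: `route_after_critique`, the `critique_passed`/`revision_count` state keys, and the `MAX_REVISIONS` cap are all hypothetical.

```python
# Hypothetical sketch of the critique -> researcher/END routing decision.
# Assumes the graph state carries a "critique_passed" verdict and a
# "revision_count", and that revisions are capped to avoid an infinite loop.

MAX_REVISIONS = 3  # assumed cap on critique/revise cycles


def route_after_critique(state: dict) -> str:
    """Return the next node name, or "END" (standing in for LangGraph's END sentinel)."""
    if state.get("critique_passed"):
        return "END"  # critique satisfied: deliver the answer
    if state.get("revision_count", 0) >= MAX_REVISIONS:
        return "END"  # cap reached: stop looping and respond with best effort
    return "researcher"  # loop back for another search/revise pass
```

In LangGraph, a function like this would be wired in via `add_conditional_edges` on the critique node.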
- API layer (`reflection/server.py`): owns app startup/lifespan, request normalization, SSE response streaming, the web UI endpoint, and per-turn DB write orchestration.
- Graph layer (`reflection/agents/reflection_analyst.py`, `reflection/reflection_agent.py`): defines `ReflectionState`, node logic, conditional routing, and graph compilation with checkpoint/store support.
- Tools layer (`reflection/tools/web_search_tools.py`): exposes Tavily search as a LangChain tool used by the researcher node during evidence gathering.
- Persistence layer (`reflection/state/*`, `reflection/db/*`, `reflection/services/postgres_db_service.py`): handles checkpointer setup, session syncing, connection pooling, and persistence of conversation/session records.
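The API layer's SSE delivery ultimately reduces to emitting frames in the EventSource wire format: an `event:` line, one or more `data:` lines, and a blank-line terminator. A minimal formatter, sketched here as an assumption rather than the actual `reflection/server.py` code:

```python
import json


def sse_event(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame: event name, JSON payload, blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"


# Example: a tool-call notification frame during the researcher step
# (the "tool" event name and payload shape are illustrative assumptions).
frame = sse_event("tool", {"name": "web_search"})
```

In FastAPI, strings like these would be yielded from an async generator passed to a `StreamingResponse` with `media_type="text/event-stream"`.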
- Real-time research assistant where the latest web information is required before answering.
- Backend service for reflective QA pipelines that need self-critique before response finalization.
- Multi-session conversational apps that need persistent thread memory in remote PostgreSQL.
- API-first demo system for FastAPI + LangGraph + tool-calling + SSE streaming patterns.
- Create the environment file:
  `cp .env.example .env`
- Set the required variables in `.env`: `OPENAI_API_KEY`, `TAVILY_API_KEY`, `POSTGRESQL_URL`.
- Start the dev stack (build image + run app):
  `docker-compose -f docker-compose.dev.yml up --build`
- Use the app: open `http://localhost:8000/web` for the chat UI and check `http://localhost:8000/health` for health status.
- Stop containers:
  `docker-compose -f docker-compose.dev.yml down`
- Start again after `down` (without rebuild):
  `docker-compose -f docker-compose.dev.yml up`
- Rebuild and start (when dependencies or the Dockerfile changed):
  `docker-compose -f docker-compose.dev.yml up --build`
- Optional full cleanup (remove containers + volumes):
  `docker-compose -f docker-compose.dev.yml down -v`
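Once the stack is up, `/chat/stream` output can be consumed without a browser using a small stdlib-only SSE line parser like the sketch below. The parser itself follows the standard SSE framing; the event names and payloads the server actually emits are not shown here and would need to be checked against the running service.

```python
def parse_sse(lines):
    """Yield (event, data) pairs from an iterable of SSE text lines."""
    event, data = "message", []  # "message" is the SSE default event name
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates the frame
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
```

Pair it with any streaming HTTP client (e.g. `urllib.request` or `httpx`) pointed at `http://localhost:8000/chat/stream` to print tool events and the final response as they arrive.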