Architectshwet/reflection-agent-using-langgraph

Reflection Agent (LangGraph StateGraph)

A production-style reflection agent backend built with LangGraph StateGraph, FastAPI streaming, Tavily search, and remote PostgreSQL memory. It is designed for multi-turn research conversations where the agent searches, critiques, and improves answers before responding.

Features

  • Reflection-first graph workflow with an iterative loop: researcher -> tools -> critique -> researcher/END.
  • Tavily-powered web_search tool integrated through ToolNode for real-time web context.
  • Streaming chat endpoint (/chat/stream) using SSE for tool events and final response delivery.
  • Remote PostgreSQL checkpointer for thread-level LangGraph state persistence.
  • Async PostgreSQL connection pooling for efficient DB usage across concurrent requests.
  • Conversation history persistence into conversation_history table.
  • Session-state persistence into session_store table via sync hooks.
  • Built-in browser UI at /web for quick end-to-end testing without a separate frontend.
  • Health endpoint (/health) for runtime checks and container health probes.
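The iterative researcher -> tools -> critique loop listed above can be sketched in plain Python. This is illustrative control flow only: the actual project builds it as a LangGraph StateGraph, and the function names, state keys, and the two-iteration cap here are assumptions, not the repository's real node logic.

```python
# Framework-free sketch of the reflection loop: researcher drafts an answer,
# tools gather evidence when requested, critique routes the state back to
# researcher or ends the loop. All names and the iteration cap are
# illustrative assumptions, not the project's actual implementation.

def researcher(state):
    state["draft"] = f"answer v{state['iterations'] + 1} for: {state['question']}"
    state["needs_search"] = state["iterations"] == 0  # search on the first pass only
    return state

def tools(state):
    # Stand-in for the Tavily-backed web_search tool.
    state["evidence"].append(f"search results for: {state['question']}")
    return state

def critique(state):
    state["iterations"] += 1
    # Conditional routing: loop back to researcher until the draft passes.
    state["done"] = state["iterations"] >= 2
    return state

def run_reflection(question):
    state = {"question": question, "draft": "", "evidence": [],
             "iterations": 0, "done": False, "needs_search": False}
    while not state["done"]:
        state = researcher(state)
        if state["needs_search"]:
            state = tools(state)
        state = critique(state)
    return state

final = run_reflection("What is LangGraph?")
```

In the real graph, the critique step's boolean becomes a conditional edge that routes either back to the researcher node or to END.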

Architecture

  1. API layer (reflection/server.py): Owns app startup/lifespan, request normalization, SSE response streaming, web UI endpoint, and DB write orchestration per turn.
  2. Graph layer (reflection/agents/reflection_analyst.py, reflection/reflection_agent.py): Defines ReflectionState, node logic, conditional routing, and graph compilation with checkpoint/store support.
  3. Tools layer (reflection/tools/web_search_tools.py): Exposes Tavily search as a LangChain tool used by the researcher node during evidence gathering.
  4. Persistence layer (reflection/state/*, reflection/db/*, reflection/services/postgres_db_service.py): Handles checkpointer setup, session syncing, connection pooling, and persistence of conversation/session records.
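The SSE streaming handled by the API layer can be illustrated with a minimal framing sketch. The `event:`/`data:` lines terminated by a blank line are the standard Server-Sent Events wire format; the event names (`tool`, `final`) and payload shapes below are assumptions, not necessarily what /chat/stream emits.

```python
import json

def sse_event(event, payload):
    """Format one SSE frame: event name, JSON data line, blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

def stream_turn(tool_calls, answer):
    """Yield tool events as they happen, then the final response frame.
    Event names ('tool', 'final') are illustrative assumptions."""
    for query in tool_calls:
        yield sse_event("tool", {"tool": "web_search", "query": query})
    yield sse_event("final", {"content": answer})

frames = list(stream_turn(["langgraph docs"], "LangGraph is a graph framework."))
```

In FastAPI, a generator like `stream_turn` would typically be wrapped in a `StreamingResponse` with `media_type="text/event-stream"`.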

Use Cases

  • Real-time research assistant where latest web information is required before answering.
  • Backend service for reflective QA pipelines that need self-critique before response finalization.
  • Multi-session conversational apps that need persistent thread memory in remote PostgreSQL.
  • API-first demo system for FastAPI + LangGraph + tool-calling + SSE streaming patterns.
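The thread-scoped conversation persistence these use cases rely on can be sketched with an in-memory stand-in. The real service writes to remote PostgreSQL through an async connection pool; sqlite3 and the column set below are illustrative assumptions, not the project's actual schema.

```python
import sqlite3

# sqlite3 stands in for remote PostgreSQL here; columns are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE conversation_history (
        id INTEGER PRIMARY KEY,
        thread_id TEXT NOT NULL,   -- thread-level key, as in LangGraph checkpoints
        role TEXT NOT NULL,        -- 'user' or 'assistant'
        content TEXT NOT NULL
    )
""")

def save_turn(thread_id, user_msg, assistant_msg):
    """Persist both sides of one conversational turn."""
    conn.executemany(
        "INSERT INTO conversation_history (thread_id, role, content) VALUES (?, ?, ?)",
        [(thread_id, "user", user_msg), (thread_id, "assistant", assistant_msg)],
    )
    conn.commit()

def load_history(thread_id):
    """Reload a thread's messages in insertion order for multi-turn context."""
    rows = conn.execute(
        "SELECT role, content FROM conversation_history WHERE thread_id = ? ORDER BY id",
        (thread_id,),
    )
    return list(rows)

save_turn("t1", "What is LangGraph?", "A graph framework for agents.")
history = load_history("t1")
```

Keying every row by thread_id is what lets separate sessions resume independently against the same database.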

Run (Docker)

  1. Create environment file:

     cp .env.example .env

  2. Set required variables in .env: OPENAI_API_KEY, TAVILY_API_KEY, POSTGRESQL_URL

  3. Start dev stack (build image + run app):

     docker-compose -f docker-compose.dev.yml up --build

  4. Use the app: open http://localhost:8000/web for the chat UI, and check http://localhost:8000/health for health status.
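A filled-in .env might look like the following. The variable names come from the steps above; the values are placeholders, not working credentials.

```shell
# .env — placeholder values only
OPENAI_API_KEY=sk-your-openai-key
TAVILY_API_KEY=tvly-your-tavily-key
POSTGRESQL_URL=postgresql://user:password@host:5432/dbname
```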

Docker Lifecycle Commands

  1. Stop containers:

     docker-compose -f docker-compose.dev.yml down

  2. Start again after down (without rebuild):

     docker-compose -f docker-compose.dev.yml up

  3. Rebuild and start (when dependencies or the Dockerfile changed):

     docker-compose -f docker-compose.dev.yml up --build

  4. Optional full cleanup (remove containers + volumes):

     docker-compose -f docker-compose.dev.yml down -v
