Frontend-Backend Integration Guide

Overview

This document describes the integration between the ExploitRAG frontend (Next.js) and backend (FastAPI).

Configuration

Environment Variables

Frontend (.env.local)

NEXT_PUBLIC_API_URL=http://localhost:8000

Backend (.env)

ALLOWED_ORIGINS=http://localhost:3000,http://localhost:8000

Architecture

Authentication Flow

  1. User registers/logs in via /api/auth/register or /api/auth/login
  2. Backend returns JWT access token and refresh token
  3. Frontend stores tokens in localStorage
  4. All subsequent requests include Authorization: Bearer <token> header
  5. On 401 errors, frontend automatically attempts token refresh
  6. If refresh fails, user is redirected to login
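Steps 4–6 can be sketched as a small wrapper. The names below (fetchWithRefresh, TokenStore) are illustrative, not the actual api-client code; the network and refresh calls are injected so the control flow is visible on its own:

```typescript
type TokenStore = { accessToken: string; refreshToken: string };

// Minimal fetch-like signature so the retry logic can be shown (and tested)
// without a real network.
type FetchLike = (
  url: string,
  init: { headers: Record<string, string> },
) => Promise<{ status: number }>;

async function fetchWithRefresh(
  url: string,
  doFetch: FetchLike,
  store: TokenStore,
  refresh: (refreshToken: string) => Promise<string | null>,
): Promise<{ status: number }> {
  const attempt = () =>
    doFetch(url, { headers: { Authorization: `Bearer ${store.accessToken}` } });

  let response = await attempt(); // step 4: bearer header on every request
  if (response.status === 401) {
    const newToken = await refresh(store.refreshToken); // step 5
    if (newToken === null) {
      // step 6: refresh failed; the caller redirects to the login page.
      return response;
    }
    store.accessToken = newToken;
    response = await attempt(); // retry once with the fresh token
  }
  return response;
}
```

In the app, `doFetch` would be the browser `fetch` and `refresh` a call to POST /api/auth/refresh.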

API Client (lib/api-client.ts)

Centralized API client with:

  • Automatic token injection
  • Automatic token refresh on 401
  • Error handling
  • Helper functions (apiGet, apiPost, apiDelete)
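A minimal sketch of what such helpers might look like. The real lib/api-client.ts likely differs; `buildRequest` and the request shape here are assumptions used to show the token injection:

```typescript
interface ApiRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body?: string };
}

const API_BASE_URL = "http://localhost:8000"; // NEXT_PUBLIC_API_URL in practice

// Build a request with the JSON content type and bearer token injected.
function buildRequest(
  method: string,
  path: string,
  token: string,
  payload?: unknown,
): ApiRequest {
  return {
    url: `${API_BASE_URL}${path}`,
    init: {
      method,
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: payload === undefined ? undefined : JSON.stringify(payload),
    },
  };
}

const apiGet = (path: string, token: string) => buildRequest("GET", path, token);
const apiPost = (path: string, token: string, payload: unknown) =>
  buildRequest("POST", path, token, payload);
const apiDelete = (path: string, token: string) => buildRequest("DELETE", path, token);
```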

Context Providers

AuthContext (lib/auth-context.tsx)

  • Manages user authentication state
  • Provides login, register, logout functions
  • Handles token storage and refresh
  • Exposes current user information

ChatContext (lib/chat-context.tsx)

  • Manages conversations and messages
  • Handles streaming and synchronous chat queries
  • Provides conversation CRUD operations
  • Manages chat settings

API Endpoints

Authentication

  • POST /api/auth/register - Register new user
  • POST /api/auth/login - Login user
  • POST /api/auth/refresh - Refresh access token
  • GET /api/auth/me - Get current user info
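After POST /api/auth/login returns a TokenResponse, the frontend persists both tokens (step 3 of the auth flow). A minimal sketch; the field names follow the usual FastAPI/OAuth2 convention and are assumptions here, so check lib/types.ts for the actual shape:

```typescript
interface TokenResponse {
  access_token: string;  // assumed field names, not confirmed by this doc
  refresh_token: string;
  token_type: string;
}

// Persist tokens via an injected storage so the logic also runs outside the
// browser; in the app this would be window.localStorage.
function storeTokens(
  tokens: TokenResponse,
  storage: { setItem(key: string, value: string): void },
) {
  storage.setItem("access_token", tokens.access_token);
  storage.setItem("refresh_token", tokens.refresh_token);
}
```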

Chat

  • POST /api/chat/query - Streaming chat (SSE)
  • POST /api/chat/query-sync - Synchronous chat

Conversations

  • GET /api/conversations - List all conversations
  • POST /api/conversations - Create new conversation
  • GET /api/conversations/{id} - Get conversation details
  • DELETE /api/conversations/{id} - Delete conversation
  • GET /api/conversations/{id}/export/{format} - Export conversation
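The parameterized routes above can be built with small path helpers. A sketch; the accepted export format values (e.g. "json", "markdown") are not specified here, so check the backend's export route:

```typescript
// Path builders for the conversation endpoints listed above.
const conversationPath = (id: string) =>
  `/api/conversations/${encodeURIComponent(id)}`;

const exportPath = (id: string, format: string) =>
  `${conversationPath(id)}/export/${encodeURIComponent(format)}`;
```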

Streaming Implementation

The frontend uses Server-Sent Events (SSE) for streaming chat responses:

const response = await fetch(`${API_BASE_URL}/api/chat/query`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${accessToken}`,
  },
  body: JSON.stringify({
    conversation_id: conversationId,
    message: content,
  }),
});

if (!response.ok || !response.body) {
  throw new Error(`Chat request failed: ${response.status}`);
}

const reader = response.body.getReader();
const decoder = new TextDecoder();

// Process the SSE stream
while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // { stream: true } keeps multi-byte characters intact across chunk boundaries
  const chunk = decoder.decode(value, { stream: true });
  // Parse SSE format: "data: {json}\n\n"
  // Update UI with delta content
}
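The "data: {json}\n\n" parsing elided above can be sketched as a pure helper. The event payload shape (a `delta` string field) is an assumption here; adjust it to the backend's actual event schema:

```typescript
// Extract delta strings from a decoded SSE chunk.
function parseSSEChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") continue; // common end sentinel; may not apply here
    try {
      const event = JSON.parse(payload);
      if (typeof event.delta === "string") deltas.push(event.delta);
    } catch {
      // Ignore partial JSON; a production parser buffers incomplete lines
      // until the next chunk arrives.
    }
  }
  return deltas;
}
```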

CORS Configuration

The backend is configured to allow requests from the frontend origin:

app.add_middleware(
    CORSMiddleware,
    allow_origins=settings.allowed_origins_list,  # ["http://localhost:3000"]
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

Testing Integration

Manual Testing

  1. Start backend: cd backend && uvicorn app.main:app --reload
  2. Start frontend: cd frontend && npm run dev
  3. Open browser to http://localhost:3000
  4. Register a new account
  5. Send a chat message
  6. Verify streaming response

Integration Tests

Use the integration test utility:

import { runIntegrationTests } from "@/lib/integration-test";

const { overall, results } = await runIntegrationTests();
console.log("Integration tests:", results);

Common Issues

CORS Errors

Problem: Browser shows CORS policy errors
Solution: Ensure ALLOWED_ORIGINS in the backend .env includes the frontend URL

401 Unauthorized

Problem: Requests fail with 401 even after login
Solution: Check that tokens are being stored in localStorage and included in the Authorization header

Streaming Not Working

Problem: Chat responses don't stream
Solution:

  • Verify SSE parsing logic
  • Check browser network tab for event stream
  • Ensure nginx/proxy isn't buffering responses

Token Refresh Loop

Problem: Infinite token refresh attempts
Solution: Check token expiration times and the refresh logic
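One common safeguard is to single-flight the refresh so concurrent 401s trigger only one refresh request (and never refresh the refresh call itself). A sketch with illustrative names:

```typescript
let refreshInFlight: Promise<string | null> | null = null;

// Reuse the pending refresh promise if one is already running.
async function refreshOnce(
  doRefresh: () => Promise<string | null>,
): Promise<string | null> {
  if (refreshInFlight === null) {
    refreshInFlight = doRefresh().finally(() => {
      refreshInFlight = null; // allow a future refresh once this one settles
    });
  }
  return refreshInFlight;
}
```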

Development Tips

  1. Use React DevTools to inspect AuthContext and ChatContext state
  2. Check Network Tab to see actual API requests/responses
  3. Monitor Backend Logs for server-side errors
  4. Use Integration Tests to verify connectivity before debugging UI

Production Considerations

  1. Environment Variables: Use proper production URLs
  2. Token Storage: Consider using httpOnly cookies instead of localStorage
  3. HTTPS: Ensure all communication uses HTTPS
  4. Rate Limiting: Backend has rate limits (20 req/min for chat)
  5. Error Handling: Implement proper error boundaries and user feedback
  6. Token Refresh: Consider implementing silent refresh before token expiry

API Response Types

All TypeScript types are defined in lib/types.ts:

  • TokenResponse - Authentication tokens
  • UserResponse - User information
  • ConversationResponse - Conversation metadata
  • MessageResponse - Message data
  • ContextSource - RAG context sources
  • ChatQueryResponse - Chat response with sources

Next Steps

  1. ✅ Environment configuration
  2. ✅ API client implementation
  3. ✅ Authentication context
  4. ✅ Chat context with streaming
  5. ✅ Integration tests
  6. 🔄 UI components (in progress)
  7. ⏳ Error boundaries
  8. ⏳ Loading states
  9. ⏳ Production deployment