[BUG]: Synchronous LLM requests block FastAPI server during form filling #277

@Mahendrareddy2006

Description

name: 🐛 Bug Report
about: Create a report to help us improve FireForm.
title: "[BUG]: Synchronous LLM requests block FastAPI server during form filling"
labels: bug
assignees: ''

⚡ Describe the Bug

The form-filling API endpoint blocks the entire FastAPI server while processing LLM requests.
The LLM calls in llm.py use the synchronous requests.post(), so any delay or failure in Ollama stalls the request handler in forms.py and hangs the server indefinitely.

👣 Steps to Reproduce

  1. Start the FastAPI server
    uvicorn api.main:app --reload

  2. Send a POST request to /forms/fill

  3. If Ollama is slow or unresponsive, the request hangs.

  4. Try making another request to any endpoint.

Result:
The server becomes unresponsive until the first request finishes.
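The effect can be reproduced in isolation, without FastAPI or Ollama. This is a minimal sketch: a synchronous sleep stands in for the blocking requests.post() call, and an awaited sleep stands in for the proposed async replacement. Three "requests" that block run back to back; three that await run concurrently.

```python
import asyncio
import time

async def blocking_handler():
    # Simulates the current code path: a synchronous call
    # (like requests.post) that holds the event loop.
    time.sleep(0.2)

async def async_handler():
    # Simulates the proposed fix: an awaited call that
    # yields control to the loop while waiting on I/O.
    await asyncio.sleep(0.2)

async def timed(handler):
    # Run three "requests" at once and time the batch.
    start = time.perf_counter()
    await asyncio.gather(handler(), handler(), handler())
    return time.perf_counter() - start

blocking_time = asyncio.run(timed(blocking_handler))
concurrent_time = asyncio.run(timed(async_handler))
print(f"blocking: {blocking_time:.2f}s, concurrent: {concurrent_time:.2f}s")
```

The blocking version takes roughly three times as long, because each sleep must finish before the loop can start the next task.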

📉 Expected Behavior

The API should process LLM requests asynchronously so that multiple requests can be handled concurrently without blocking the server.

🖥️ Environment

OS: Windows
Python: 3.13.1
FastAPI: 0.104+
Ollama: latest
Model: mistral

🕵️ Possible Fix

The issue appears to be caused by synchronous HTTP requests in llm.py:

response = requests.post(OLLAMA_URL, json=payload)

Potential solution:

Replace requests with httpx.AsyncClient
Convert LLM processing functions to async
Use await for LLM requests
Update FastAPI route handlers accordingly

Example:

import httpx

async with httpx.AsyncClient() as client:
    response = await client.post(OLLAMA_URL, json=payload)

This would prevent the FastAPI event loop from being blocked during LLM processing.
