
fix: resume processing automatically after container restart #18

Open

martindell wants to merge 1 commit into SpaceinvaderOne:main from martindell:fix/startup-recovery-stalled-queue

Conversation

@martindell

Problem

After a container restart (whether from a crash, manual restart, or Unraid maintenance), a-eye silently stalls and never resumes processing. Two separate bugs combine to cause this:

Bug 1 - Stuck "processing" items block the queue

If the container stops while images are being processed, those images are left in status = 'processing' in the database. On the next startup they are never recovered, so they permanently block the worker queue.

Bug 2 - Pending items are never re-enqueued

The worker queue is in-memory only. On startup it is always empty, regardless of how many status = 'pending' items exist in the database. There is no mechanism to reload them, so after any restart processing simply never resumes unless the user manually triggers it through the UI (and knows they need to).

In practice both bugs together mean: restart the container → processing stops forever with no error message.
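To make the stuck state concrete, here is a self-contained sketch; the images table and status column are illustrative guesses, since the PR does not show the actual schema:

    import sqlite3

    # Simulate the database after an unclean shutdown (schema is hypothetical).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, status TEXT)")
    db.executemany(
        "INSERT INTO images (status) VALUES (?)",
        [("done",), ("processing",), ("pending",), ("pending",)],
    )

    # Bug 1: this count never goes down on its own after a restart.
    print(db.execute(
        "SELECT COUNT(*) FROM images WHERE status = 'processing'"
    ).fetchone())  # (1,)

    # Bug 2: these rows exist in the DB but are never re-enqueued on startup.
    print(db.execute(
        "SELECT COUNT(*) FROM images WHERE status = 'pending'"
    ).fetchone())  # (2,)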

Fix

Two small additions to the startup lifespan() function in backend/main.py:

  1. Immediately after DB init - reset any stuck 'processing' items back to 'pending', with a log message counting how many were recovered.
  2. Immediately after the worker starts - load all 'pending' items from the database into the worker queue, with a log message counting how many were enqueued.

Both operations are cheap (a single SQL query each), run only at startup, and are no-ops on a clean first run. A rough sketch of both steps follows.
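A minimal sketch of how the two steps could sit inside a FastAPI lifespan(); the table and column names, the DB path, and the queue/worker helpers are assumptions for illustration, not the project's actual API:

    import asyncio
    import logging
    import sqlite3
    from contextlib import asynccontextmanager

    from fastapi import FastAPI

    log = logging.getLogger(__name__)
    queue: asyncio.Queue = asyncio.Queue()

    async def worker() -> None:
        # Placeholder worker loop: pull image IDs off the queue and process them.
        while True:
            image_id = await queue.get()
            ...  # process image_id, then mark it done in the DB

    @asynccontextmanager
    async def lifespan(app: FastAPI):
        # Assumes the schema was already created by the app's normal DB init.
        db = sqlite3.connect("a-eye.db")

        # Step 1, immediately after DB init: recover items left stuck in
        # 'processing' by an unclean shutdown.
        cur = db.execute(
            "UPDATE images SET status = 'pending' WHERE status = 'processing'"
        )
        db.commit()
        if cur.rowcount:
            log.info("Startup recovery: reset %d stuck 'processing' image(s) to 'pending'",
                     cur.rowcount)

        worker_task = asyncio.create_task(worker())

        # Step 2, immediately after the worker starts: re-enqueue everything
        # the database still marks as pending.
        rows = db.execute("SELECT id FROM images WHERE status = 'pending'").fetchall()
        for (image_id,) in rows:
            queue.put_nowait(image_id)
        if rows:
            log.info("Startup recovery: enqueued %d pending image(s) for processing",
                     len(rows))

        yield
        worker_task.cancel()
        db.close()

    app = FastAPI(lifespan=lifespan)

Both queries touch only rows in a known status, so on a clean first run they match nothing and the startup path is unchanged.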

Testing

  1. Start a-eye with a backlog of pending images and let it begin processing
  2. Restart the container mid-processing
  3. Before this fix: processing stops permanently with no error
  4. After this fix: processing resumes automatically within seconds, and the logs show:

     Startup recovery: reset N stuck 'processing' image(s) to 'pending'
     Startup recovery: enqueued N pending image(s) for processing

Commit message:

Two bugs combined to cause processing to stall permanently after any
container restart:

1. Images left in 'processing' status from an unclean shutdown were
   never recovered, permanently blocking the worker queue.

2. The worker queue is in-memory only — existing 'pending' items in the
   database were never re-enqueued on startup, so processing never
   resumed without manual user intervention.

Fix: add two startup recovery steps in lifespan():
- Reset any stuck 'processing' images back to 'pending' immediately
  after DB init.
- Re-enqueue all 'pending' images immediately after the worker starts.

Both are single SQL queries, cheap at startup, and no-ops on a clean
first run.

ishemes commented Apr 18, 2026

Nice! Thanks for working this out @martindell. I made the changes inside the container, rebooted and it is now processing again. Only 89717 XMP descriptions to go! :D

Edit: A second reboot removed my patch, so I put the patched main.py in appdata on my Unraid server and added another path to the container that replaces the main.py inside the container with the version from appdata. Now it works like a charm through reboots. Thanks again!
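For anyone else setting up that mapping, it looks roughly like this in the Unraid container template (the container-side path is a guess; check where the image actually keeps main.py):

    Host Path:      /mnt/user/appdata/a-eye/main.py
    Container Path: /app/backend/main.py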

@iwantatr8

Thanks @martindell, worked great for a stuck process.
