fix(drizzle): process bulk deletes sequentially to prevent silent data loss#16076

Open
razmenaia wants to merge 1 commit into payloadcms:main from razmenaia:fix/delete-concurrent-race-condition

Conversation

@razmenaia

Summary

Fixes #16075

payload.delete({ where }) silently fails to delete one or more records. Payload reports success and returns the correct IDs, but the records remain in the database.

Root Cause

The bulk delete operation in delete.ts uses .map(async ...) to create per-document promises, then processes them with either Promise.all or a for...of loop that awaits each promise. However, .map(async ...) eagerly starts every async callback immediately — awaiting the promises afterwards in a for...of loop only controls the order in which results are collected, not the order of execution.

This means all deleteOne calls run concurrently on a shared transaction connection. Each deleteOne performs three queries (selectDistinct → findFirst → delete ... where) that interleave across documents and silently drop deletes.

Setting bulkOperationsSingleTransaction: true does not fix it, because the same .map() antipattern still applies.
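A standalone sketch of the antipattern (not Payload's actual code — `processDoc` is a hypothetical stand-in for the per-document delete, with timers simulating queries that take different amounts of time):

```typescript
// Demo: .map(async ...) starts every callback immediately; awaiting the
// resulting promises in a for...of loop only orders result collection.
async function run(): Promise<string[]> {
  const order: string[] = []
  const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))

  // Hypothetical stand-in for the per-document delete. Later docs finish
  // first, mimicking queries interleaving on a shared connection.
  const processDoc = async (id: number): Promise<number> => {
    order.push(`start ${id}`)
    await sleep((4 - id) * 30)
    order.push(`end ${id}`)
    return id
  }

  // Antipattern: all three callbacks begin here, concurrently...
  const promises = [1, 2, 3].map(async (doc) => processDoc(doc))
  // ...so this loop only controls when results are collected.
  for (const p of promises) {
    await p
  }
  return order
}

run().then((o) => console.log(o.join(' | ')))
// → start 1 | start 2 | start 3 | end 3 | end 2 | end 1
```

All three documents are in flight before any of them completes, which is exactly the interleaving described above.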

Fix

Replace the .map(async ...) + for...of await combination with a sequential for...of loop that awaits each document before starting the next:

const processDoc = async (doc) => { /* existing body */ }

const awaitedDocs = []
for (const doc of docs) {
  awaitedDocs.push(await processDoc(doc))
}

Reproduction

  • Reproduces within 1-3 iterations on Payload 3.80.0 with @payloadcms/db-postgres
  • Verified not present in 3.41.0 (regression)
  • Drizzle direct db.delete().where() works perfectly — confirms the bug is in Payload's abstraction layer, not Drizzle or Postgres

See #16075 for full details, POC scripts, and proof.

Made with Cursor

… loss

The bulk delete operation used .map(async ...) to create promises for
each document, then either Promise.all or for...of await. However,
.map(async ...) eagerly starts all callbacks immediately -- the
for...of await loop only controls the order results are collected,
not execution order. This caused concurrent deleteOne calls on a
shared transaction connection, where interleaved queries silently
dropped deletes.

Replace with a proper sequential for...of loop that awaits each
document deletion before starting the next.

Reproduces consistently on Payload 3.80.0 with @payloadcms/db-postgres.
Verified not present in 3.41.0 (regression).

Fixes payloadcms#16075
Related: payloadcms#15100

Made-with: Cursor
@razmenaia razmenaia changed the title fix(delete): process bulk deletes sequentially to prevent silent data loss fix(drizzle): process bulk deletes sequentially to prevent silent data loss Mar 27, 2026

Development

Successfully merging this pull request may close these issues.

payload.delete({ where }) silently fails to delete records — concurrent deleteOne race condition (regression since 3.41.0)
