
Presigned Upload Flow for Dashboard API #136

Merged
yash-pouranik merged 5 commits into geturbackend:main from Nitin-kumar-yadav1307:feature/presigned-upload-dashboard
Apr 26, 2026

Conversation

Contributor

@Nitin-kumar-yadav1307 Nitin-kumar-yadav1307 commented Apr 25, 2026

Presigned Upload Flow for Dashboard API

Issue: #131

Description

Implements a three-step presigned URL upload architecture for the dashboard API, enabling large file uploads directly to S3/R2 without proxying bytes through Node.js. This mirrors the implementation in PR #129 for the public API but uses dashboard-appropriate auth (JWT/session via loadProjectForAdmin middleware).

Changes

Files Modified

apps/dashboard-api/src/routes/projects.js

  • POST /:projectId/storage/upload-request - Generate presigned URL with quota validation
  • POST /:projectId/storage/upload-confirm - Verify upload completion and charge quota atomically
  • Auth chain: authMiddleware → verifyEmail → loadProjectForAdmin

apps/dashboard-api/src/controllers/project.controller.js

New handlers:

  • requestUpload() - Validates size, peeks at quota (no charge yet), returns signed URL and token
  • confirmUpload() - Verifies file exists, compares size, atomically charges quota, cleans up on failure

New helpers:

  • parsePositiveSize() - Validates positive finite numbers for file size
  • normalizeProjectPath() - Prevents path traversal; validates projectId prefix
  • bestEffortDeleteUploadedObject() - Non-fatal cleanup helper for failed confirmations

docs-legacy/storage.md

  • Added presigned upload flow documentation (4-step process)
  • Added bucket CORS configuration requirements (PUT, OPTIONS, GET, HEAD methods)
  • Documented required headers and flow sequence
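Assuming the standard S3/R2 CORS rule shape, the documented requirements (PUT, OPTIONS, GET, HEAD methods; content-type and content-length headers) could be expressed roughly like this; the origin value is a placeholder:

```javascript
// Illustrative bucket CORS rule set matching the methods and headers the docs
// list; AllowedOrigins is a placeholder for your dashboard origin.
const corsRules = [
  {
    AllowedOrigins: ["https://dashboard.example.com"], // placeholder
    AllowedMethods: ["PUT", "OPTIONS", "GET", "HEAD"],
    AllowedHeaders: ["content-type", "content-length"],
    MaxAgeSeconds: 3600, // cache preflight responses for an hour
  },
];
```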

Files Created

apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js

  • 8 tests for requestUpload and confirmUpload handlers
  • Coverage: internal storage, external storage, quota enforcement, size validation, retryable states

apps/dashboard-api/src/__tests__/routes.projects.storage.test.js

  • 2 route wiring tests
  • Confirms both endpoints are protected by auth middleware chain

Key Features

Quota Management

  • requestUpload: Peeks at quota without charging
  • confirmUpload: Charges quota atomically only after file verification
  • External storage skips all quota operations

Security

  • Auth protection via dashboard middleware chain
  • Path traversal prevention (validates projectId prefix, blocks .. segments)
  • File size limits: 10 MB per file, 100 MB safety max
  • Size tolerance: 64-byte tolerance for cloud storage eventual consistency

Reliability

  • Retryable 409 if file not yet visible after PUT
  • Atomic quota charge with automatic cleanup on limit exceeded
  • Best-effort object deletion on confirmation failure
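A minimal sketch of the confirm-side decisions described above. The 64-byte tolerance and the retryable 409 come from the PR description; the function names and shapes are illustrative:

```javascript
// Hypothetical helpers mirroring the confirm-upload behaviors -- a sketch, not
// the actual controller code.
const SIZE_TOLERANCE_BYTES = 64;

// Accept the upload if the stored size is within tolerance of the declared size.
function sizeWithinTolerance(declaredSize, actualSize, tolerance = SIZE_TOLERANCE_BYTES) {
  return Math.abs(actualSize - declaredSize) <= tolerance;
}

// Map a verification outcome to an HTTP status: 409 means "retry later",
// 400 means size mismatch, 200 means confirmed.
function confirmStatus(verification) {
  if (!verification.visible) return 409; // object not yet visible after PUT
  if (!sizeWithinTolerance(verification.declaredSize, verification.actualSize)) return 400;
  return 200;
}
```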

Shared Infrastructure

  • Uses @urbackend/common helpers: getPresignedUploadUrl, verifyUploadedFile
  • Reuses existing getStorage and getBucket functions
  • Consistent with public API implementation patterns

Test Results

(Screenshot attached: WhatsApp Image 2026-04-25 at 6:53:50 PM, test run output)

Summary by CodeRabbit

  • New Features

    • Two-phase presigned upload flow for direct browser uploads (request signed URL, PUT file, confirm upload); legacy multipart upload endpoint removed.
    • File size validation and storage quota enforcement with atomic accounting for internal storage.
    • Upload verification with automatic cleanup and size-tolerance handling.
  • Documentation

    • Added guidance and precise bucket CORS configuration for presigned browser uploads.
  • Tests

    • Added comprehensive tests for presigned upload routes and controller behaviors.
  • Chores

    • Exposed input sanitization utilities for reuse.

…#131)

- Add requestUpload handler to generate presigned URLs with quota peeking
- Add confirmUpload handler to verify uploads and charge quota atomically
- Implement size validation, path traversal protection, and cleanup on failure
- Add presigned upload tests for controller logic and route wiring
- Update storage docs with presigned flow and S3/R2 CORS requirements
Contributor

coderabbitai Bot commented Apr 25, 2026

Warning

Rate limit exceeded

@mintlify[bot] has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 57 minutes and 7 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 57 minutes and 7 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f161c33f-904c-43bf-ab9a-bd35d2c58a60

📥 Commits

Reviewing files that changed from the base of the PR and between 43c0fd2 and e9481f6.

📒 Files selected for processing (1)
  • mintlify/docs/guides/storage.mdx
📝 Walkthrough

Walkthrough

Replaces single-step multipart uploads with a two-phase presigned browser upload flow: adds requestUpload and confirmUpload controller methods and routes, enforces quota and file metadata validation, verifies uploaded objects via storage backends, updates atomic quota accounting, and adds tests/docs and input-sanitization utilities.

Changes

Cohort / File(s) Summary
Route Handler Tests
apps/dashboard-api/src/__tests__/routes.projects.storage.test.js
New tests asserting legacy /storage/upload is removed and /upload-request + /upload-confirm are registered, protected by auth/email/project middleware, and call correct controller methods.
Controller Tests
apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js
New unit tests for requestUpload/confirmUpload: input validation, deterministic UUID, quota pre-checks/atomic increments, signed URL/token expectations, verification/cleanup on failures, and provider-specific responses.
Controller Implementation
apps/dashboard-api/src/controllers/project.controller.js
Removes uploadFile; adds requestUpload (validate metadata, quota pre-check, build filePath, return presigned URL/token) and confirmUpload (normalize/verify filePath ownership, verify uploaded object/size, cleanup on failures, atomically charge internal quota, resolve public URL).
Route Definitions
apps/dashboard-api/src/routes/projects.js
Replaces multer single-step upload route with /:projectId/storage/upload-request and /:projectId/storage/upload-confirm, removes multer setup and duplicate imports, and wires new controller methods with existing auth/project middleware.
Web Client
apps/web-dashboard/src/pages/Storage.jsx
Switches client flow to request → direct PUT to signedUrl → confirm; updates error handling and response validation.
Docs (user-facing)
docs-legacy/storage.md, mintlify/docs/guides/storage.mdx
Adds/updates docs for two-step presigned browser upload and required S3/R2 CORS configuration and troubleshooting.
Common Utilities
packages/common/src/utils/input.validation.js, packages/common/src/index.js
Adds and exports sanitizeObjectId and sanitizeNonEmptyString utilities for input normalization/validation.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client/Browser
    participant Router as Router<br/>(/upload-request, /upload-confirm)
    participant Controller as Controller<br/>requestUpload/confirmUpload
    participant Common as `@urbackend/common`<br/>storage helpers
    participant Storage as S3/R2 Storage
    participant DB as MongoDB<br/>Projects

    Client->>Router: POST /upload-request (projectId, filename, contentType, size)
    Router->>Router: auth + verifyEmail + loadProjectForAdmin
    Router->>Controller: requestUpload(req)
    Controller->>DB: read storageUsed/quota (pre-check)
    Controller->>Common: getPresignedUploadUrl(filePath, contentType, size)
    Common->>Storage: generate signed upload URL/token
    Storage-->>Common: signedUrl/token
    Common-->>Controller: signed credentials
    Controller-->>Client: 200 { signedUrl, filePath }

    Client->>Storage: PUT file bytes to signedUrl
    Storage-->>Client: 200 OK

    Client->>Router: POST /upload-confirm (projectId, filePath, declaredSize)
    Router->>Router: auth + verifyEmail + loadProjectForAdmin
    Router->>Controller: confirmUpload(req)
    Controller->>Common: verifyUploadedFile(filePath)
    Common->>Storage: HEAD/inspect -> size/visibility
    Storage-->>Common: verified size / not-visible
    Common-->>Controller: verification result
    Controller->>DB: atomic Project.updateOne($inc: storageUsed) or delete object on quota exceed
    Controller->>Common: getPublicUrl(filePath)
    Controller-->>Client: 200 { provider, publicUrl?, filePath }

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Suggested labels

NSOC'26, level-3

Suggested reviewers

  • yash-pouranik
  • KushagraJaiswar02

Poem

🐰 I hopped to the bucket, nose in the breeze,
I fetched a signed URL with elegant ease.
Request then confirm, the dance was so neat,
Quota counted kindly — the burrow grows sweet! 🥕✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): the title accurately describes the main change, implementing a presigned upload flow for the dashboard API. It is clear, specific, and directly reflects the primary objective of the PR.
  • Linked Issues check (✅ Passed): check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
apps/dashboard-api/src/controllers/project.controller.js (2)

35-37: Single‑source the 10 MB limit.

MAX_FILE_SIZE = 10 * 1024 * 1024 here duplicates the multer limit at apps/dashboard-api/src/routes/projects.js:46 ({ fileSize: 10 * 1024 * 1024 }). If the cap ever changes, the two paths (legacy multipart and presigned) will drift. Consider exporting one constant (e.g., from @urbackend/common) and importing it in both places.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 35 -
37, The 10MB cap is duplicated via MAX_FILE_SIZE in the controller; export a
single shared constant (e.g., UPLOAD_MAX_FILE_SIZE) from the common/shared
package and replace the local MAX_FILE_SIZE usage with that import so both
multipart (where multer uses { fileSize: 10 * 1024 * 1024 }) and presigned
upload code reference the same symbol; update any references to SAFETY_MAX_BYTES
or CONFIRM_UPLOAD_SIZE_TOLERANCE_BYTES only if they depend on the previous
MAX_FILE_SIZE (compute them from the shared constant if needed) and update the
multer config to import and use the shared constant as well.

1311-1322: Brittle string match couples this handler to upstream wording.

err?.message === "File not found after upload" depends on the exact literal thrown by verifyUploadedFile in packages/common/src/utils/storage.manager.js. If that message is ever rephrased (or localized), this branch silently stops returning the retryable 409 UPLOAD_NOT_READY and any client retry loop breaks — the request falls through to the catch-all 500.

Prefer a structured signal (an error code, a custom error class, or a sentinel exported from @urbackend/common) so the contract is explicit at module boundaries.

♻️ Sketch
-      if (err?.message === "File not found after upload") {
+      if (err?.code === "UPLOAD_NOT_READY" || err?.message === "File not found after upload") {
         await bestEffortDeleteUploadedObject(project, normalizedPath);

…and have verifyUploadedFile throw an error with code: "UPLOAD_NOT_READY" for the not-yet-visible case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1311 -
1322, The current handler in project.controller.js should not rely on the
literal error.message from verifyUploadedFile; change verifyUploadedFile (in
packages/common/src/utils/storage.manager.js or its exported sentinel) to throw
a structured signal (e.g., set err.code = "UPLOAD_NOT_READY" or throw a specific
UploadNotReadyError exported from `@urbackend/common`) and update this controller
to test that signal (e.g., if (err.code === "UPLOAD_NOT_READY") or if (err
instanceof UploadNotReadyError)) before calling bestEffortDeleteUploadedObject
and returning the 409 response so the retryable condition is robust across
wording changes.
apps/dashboard-api/src/routes/projects.js (1)

85-88: Comment label for upload-confirm is misleading.

//UPLOAD URL reads like "this is where the upload happens", but the route is the confirmation step. Consider // CONFIRM UPLOAD (and // REQUEST PRESIGNED URL above) for clarity.

📝 Proposed tweak
-//SIGNED URL
+// REQUEST PRESIGNED UPLOAD URL
 router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
-//UPLOAD URL
+// CONFIRM UPLOAD COMPLETION
 router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/routes/projects.js` around lines 85 - 88, The inline
comment for the confirm-upload route is misleading; update the two comments
above the routes to clearly reflect their actions: change the comment for
router.post('/:projectId/storage/upload-request', ...) to something like "//
REQUEST PRESIGNED URL" and change the comment for
router.post('/:projectId/storage/upload-confirm', ...) to "// CONFIRM UPLOAD" so
callers see that requestUpload handles generating a presigned URL and
confirmUpload handles the confirmation step.
docs-legacy/storage.md (1)

70-99: Documentation is too generic to be actionable.

The new section talks abstractly about "the backend upload-request endpoint" but never gives:

  • HTTP method (POST) or full path (e.g., POST /api/projects/:projectId/storage/upload-request for dashboard, POST /api/storage/upload-request for public API).
  • A concrete request payload (filename, contentType, size) and response payload (signedUrl, token, filePath).
  • The browser-side PUT headers required (Content-Type, Content-Length) and the confirm payload (filePath, size).
  • The retryable 409 UPLOAD_NOT_READY semantics that clients must handle.

A reader of just this doc cannot integrate against either API. Mirroring the structure of section 1 ("Upload a File") with code samples would close that gap.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs-legacy/storage.md` around lines 70 - 99, Add concrete, actionable API
details to the "Presigned Upload" section: specify HTTP method and full endpoint
paths for dashboard and public APIs (e.g., POST
/api/projects/:projectId/storage/upload-request and POST
/api/storage/upload-request), define exact request payload fields for
upload-request (filename, contentType, size) and upload-confirm (filePath, size)
and the response payload (signedUrl, token, filePath), enumerate required
browser PUT headers (Content-Type, Content-Length, any token header), document
retryable 409 UPLOAD_NOT_READY semantics and suggested client retry/backoff, and
mirror the "Upload a File" section by adding concise example request/response
snippets and a browser PUT flow sequence so implementers can integrate
end-to-end.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js`:
- Around line 238-256: The test uses an unrealistic Supabase mock shape and the
controller checks an unreachable error field; update the test's
mockStorageFrom.getPublicUrl call in storage.presigned.controller.test.js to
return { data: { publicUrl: null } } (remove the error property) and then edit
the controller.confirmUpload implementation to stop referencing
publicUrlData?.error — instead only inspect publicUrlData?.publicUrl (use a
truthy check for publicUrl and fall back to external provider when it's null).
Ensure you remove the dead-code branch that relied on the non-existent error
field.

In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 1222-1280: The handlers (requestUpload and the other new upload
handler around lines 1282-1388) return ad-hoc payloads and raw error responses;
update them to follow the controller convention by returning { success: true,
data: { ... }, message: "" } on success (embed signedUrl, token, filePath, or
message/path/provider/url/warning inside data) and throw or pass an AppError on
failures (instead of res.status(...).json({ error })). Replace direct
res.status(500).json(...) in the catch with next(new AppError(..., 500)) or
construct AppError instances for 4xx conditions (e.g., quota, validation, not
found) so the global error handler generates the standard response shape; locate
symbols requestUpload and the other upload handler to make the changes.
- Around line 1361-1379: The code is checking publicUrlData?.error which is
unreachable because getPublicUrl() returns error at the top level; update the
destructuring from supabase.storage.from(bucket).getPublicUrl(normalizedPath) to
also capture the top-level error (e.g., const { data: publicUrlData, error:
apiError } = ...) and then set response.url = publicUrlData?.publicUrl ?? null
and response.warning = apiError || "Upload confirmed, but a public URL is
unavailable."; reference the supabase call, normalizedPath, publicUrlData, and
response to locate the changes.

In `@docs-legacy/storage.md`:
- Around line 70-99: The legacy storage doc was updated to describe presigned
browser uploads but the Mintlify guide was left stale; update the Mintlify guide
to match by adding the same Presigned Upload section and the Required Bucket
CORS section, and ensure both documents mention the backend endpoints
`/api/storage/upload-request` and `/api/storage/upload-confirm`, the browser PUT
upload to `signedUrl`, and the required CORS methods (`PUT`, `OPTIONS`, `GET`,
`HEAD`) and headers (`content-type`, `content-length`) so the two docs remain in
parity.

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c6212471-28cf-4777-8a66-b4f0ddfb3762

📥 Commits

Reviewing files that changed from the base of the PR and between 4c26d34 and 29eb889.

📒 Files selected for processing (5)
  • apps/dashboard-api/src/__tests__/routes.projects.storage.test.js
  • apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js
  • apps/dashboard-api/src/controllers/project.controller.js
  • apps/dashboard-api/src/routes/projects.js
  • docs-legacy/storage.md

@yash-pouranik yash-pouranik linked an issue Apr 25, 2026 that may be closed by this pull request
Comment thread apps/dashboard-api/src/routes/projects.js Outdated
Comment on lines +85 to +89
//SIGNED URL
router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
//UPLOAD URL
router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);

Collaborator


These routes are good, and this is what we needed.
But bro, you have not used them in the UI??
@Nitin-kumar-yadav1307

Contributor Author


Yeah, I was confused about whether I should work on it or not. After reading the issue I thought it had nothing to do with the UI, but I still wasn't fully sure, so I decided to leave it. Sorry for not asking you before. Can you tell me the file or directory where I should work?

Collaborator

@yash-pouranik yash-pouranik Apr 26, 2026


For the UI, you just have to update Storage.jsx and its corresponding components.
The main thing to do is change the file upload flow in the UI, basically using the APIs you created.
For example, in Storage.jsx, line 47, this is the current upload flow:

await api.post(`/api/projects/${projectId}/storage/upload`, formData, {
  headers: { 'Content-Type': 'multipart/form-data' }
});
toast.success("File uploaded!", { id: toastId });
fetchFiles();

Please replace this direct upload flow with:

  • /storage/upload-request: store the response, that is (signedUrl, token, file path)
  • then upload the file to that signedUrl; in the response you get the public URL of the file
  • then POST to /upload-confirm with the file size and file path; in the response you receive:
    message: "Upload confirmed", path: normalizedPath, provider: external ? "external" : "internal"
  • then toast res.data.message, whatever it is.

@Nitin-kumar-yadav1307

Comment on lines +1225 to +1226
const { projectId } = req.params;
const { filename, contentType, size } = req.body;
Collaborator


Please sanitize the user input; users can send anything, and we should protect our server.
Create a sanitization method in @package/common/src/utils/input.validation.js, okay?
And use it here. I think you can use a mongoose.Types.ObjectId-related method to check the projectId: create a method in input.validation and use it.
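A hedged sketch of the requested validators. The names match the utilities the walkthrough says were added (`sanitizeObjectId`, `sanitizeNonEmptyString`), but the bodies are illustrative; a 24-hex-character check approximates `mongoose.Types.ObjectId.isValid` without pulling in mongoose:

```javascript
// Illustrative input sanitizers -- not the merged implementation.

// Accept only a 24-character hex string (the canonical ObjectId form);
// the real code may delegate to mongoose.Types.ObjectId.isValid instead.
function sanitizeObjectId(value) {
  return typeof value === "string" && /^[0-9a-fA-F]{24}$/.test(value) ? value : null;
}

// Trim and bound a string; reject non-strings, blanks, and oversized input.
function sanitizeNonEmptyString(value, maxLength = 1024) {
  if (typeof value !== "string") return null;
  const trimmed = value.trim();
  return trimmed.length > 0 && trimmed.length <= maxLength ? trimmed : null;
}
```

Returning null (rather than throwing) lets the controller map any failed field to a single 400 response.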
@Nitin-kumar-yadav1307


const external = isProjectStorageExternal(project);

// just peek at quota - don't charge yet, upload hasn't happened
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

????

Comment on lines +1287 to +1288
const { projectId } = req.params;
const { filePath, size } = req.body;
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Sanitize, sanitize, sanitize.
@Nitin-kumar-yadav1307

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

♻️ Duplicate comments (1)
apps/dashboard-api/src/controllers/project.controller.js (1)

1322-1341: ⚠️ Potential issue | 🟠 Major

provider is misreported and apiError fallback is unreachable.

Two issues in this block:

  1. Wrong provider value when public URL is missing. The expression publicUrl ? (external ? "external" : "internal") : "external" reports provider: "external" for internal projects whenever Supabase doesn't return a public URL. The provider should reflect the actual storage backend, independent of URL availability:

    -    const provider = publicUrl ? (external ? "external" : "internal") : "external";
    +    const provider = external ? "external" : "internal";
  2. apiError fallback is dead code. supabase.storage.from(bucket).getPublicUrl(path) is a synchronous URL constructor — it does not perform an API call and does not return a meaningful error. Destructuring error: apiError and using it as a warning message will always fall through to the static string in practice, and if apiError were ever non-null it would be an Error object rendered as [object Object] to the client. Drop the unreachable branch:

    -    const { data: publicUrlData, error: apiError } = supabase.storage
    +    const { data: publicUrlData } = supabase.storage
           .from(bucket)
           .getPublicUrl(normalizedPath);
    ...
         if (!publicUrl) {
    -      responseData.warning =
    -        apiError || "Upload confirmed, but a public URL is unavailable.";
    +      responseData.warning = "Upload confirmed, but a public URL is unavailable.";
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1322 -
1341, The provider is incorrectly set based on publicUrl and the apiError
fallback is pointless; update the block that calls
supabase.storage.from(bucket).getPublicUrl(normalizedPath) to (1) stop
destructuring an `error` (remove `error: apiError`) since getPublicUrl is a
synchronous URL constructor, (2) compute provider solely from the `external`
flag (use `external ? "external" : "internal"`), and (3) when no publicUrl set
only a static warning on `responseData.warning` (e.g. "Upload confirmed, but a
public URL is unavailable.") — do not attempt to include `apiError` or render an
Error object; keep references to `getStorage`, `getBucket`, `normalizedPath`,
`supabase.storage.from(...).getPublicUrl`, `provider`, and
`responseData.warning` to locate and change the code.
🧹 Nitpick comments (4)
apps/dashboard-api/src/controllers/project.controller.js (2)

206-234: normalizeProjectPath looks solid; one nit on coverage.

The combination of decodeURIComponent + path.posix.normalize + leading-slash stripping + ../. segment rejection + projectId-prefix enforcement is a strong defense against path traversal. One small gap: a backslash-only filename (e.g., ${projectId}\evil) doesn't trigger any segment check on POSIX (path.posix.normalize treats \ as a regular character), so segments[0] becomes ${projectId}\evil and fails the equality check correctly — but adding an explicit reject for \ in segments would future-proof against backend path interpretation differences.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 206 -
234, normalizeProjectPath currently normalizes and validates path segments but
doesn't reject segments containing backslashes, which can be treated as
separators on some platforms; update the function (normalizeProjectPath) to
explicitly reject any segment that contains a backslash (e.g., add a check like
segments.some(segment => segment.includes('\\')) and return null) so inputs like
`${projectId}\evil` are rejected regardless of POSIX normalization.

1238-1241: Swallowing the original error masks debuggability.

The catch block converts any unexpected failure into a generic AppError(500, "Could not generate upload URL") with no context. Errors from getPresignedUploadUrl (e.g., misconfigured Supabase keys, S3 credentials, network timeouts) will be invisible in production logs unless captured before the wrap.

♻️ Suggested logging
   } catch (err) {
     if (err instanceof AppError) return next(err);
+    console.error("requestUpload failed:", err);
     return next(new AppError(500, "Could not generate upload URL"));
   }

The same applies to confirmUpload's catch at lines 1348–1351.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1238 -
1241, The catch block currently swallows the original error when calling
getPresignedUploadUrl and returns a generic AppError(500, "Could not generate
upload URL"); update the catch to log the original error (e.g.,
processLogger.error or console.error) and include the original error details
when forwarding to next—either by passing a more descriptive AppError that
includes err.message/err.stack or by attaching the original error as a cause;
apply the same change to the confirmUpload catch (the block that currently
returns AppError(500, "Could not confirm upload")) so both getPresignedUploadUrl
and confirmUpload preserve and log the original error before calling next.
apps/dashboard-api/src/routes/projects.js (1)

77-80: Misleading route comments.

Both routes are part of the presigned-upload flow; the comments //SIGNED URL and //UPLOAD URL don't accurately describe the second endpoint, which confirms the upload rather than generating a URL.

📝 Proposed fix
-//SIGNED URL
-router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
-//UPLOAD URL
-router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);
+// REQUEST PRESIGNED UPLOAD URL
+router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
+// CONFIRM UPLOAD COMPLETION
+router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/routes/projects.js` around lines 77 - 80, The inline
comments above the presigned upload routes are misleading — change the comment
labels for clarity so they reflect the flow: for
router.post('/:projectId/storage/upload-request', ...) (handler requestUpload)
replace or update the comment to indicate "generate presigned upload URL" or
"presigned-upload: request URL", and for
router.post('/:projectId/storage/upload-confirm', ...) (handler confirmUpload)
update the comment to "confirm completed upload" or "presigned-upload: confirm
upload" so the intent of requestUpload and confirmUpload is clear.
apps/web-dashboard/src/pages/Storage.jsx (1)

72-80: Consider handling the retryable 409 from upload-confirm.

confirmUpload returns 409 "Uploaded file is not visible yet. Please retry confirmation." to handle eventual consistency on the storage backend. Currently, the dashboard surfaces this as a generic upload failure to the user even though a brief retry would typically succeed. The docs at mintlify/docs/guides/storage.mdx explicitly demonstrate a retry path for this case.

♻️ Suggested retry-on-409
-            await api.post(`/api/projects/${projectId}/storage/upload-confirm`, {
-                filePath,
-                size: file.size,
-            });
+            const confirmWithRetry = async (attempts = 3, delayMs = 750) => {
+                for (let i = 0; i < attempts; i++) {
+                    try {
+                        return await api.post(
+                            `/api/projects/${projectId}/storage/upload-confirm`,
+                            { filePath, size: file.size },
+                        );
+                    } catch (err) {
+                        if (err?.response?.status === 409 && i < attempts - 1) {
+                            await new Promise((r) => setTimeout(r, delayMs));
+                            continue;
+                        }
+                        throw err;
+                    }
+                }
+            };
+            await confirmWithRetry();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web-dashboard/src/pages/Storage.jsx` around lines 72 - 80, The
upload-confirm call in Storage.jsx should special-case a 409 response and retry
a few times instead of immediately treating it as a failure: in the catch block
for the api.post to `/api/projects/${projectId}/storage/upload-confirm` detect
err.response?.status === 409, then perform a short backoff loop (e.g., 3 retries
with incremental delays) retrying the same api.post, and only show toast.error
if all retries fail; on a successful retry call fetchFiles() and toast.success
as currently done. Reference the existing api.post call, the catch block where
toast.error is called, and the fetchFiles() invocation to place the retry logic
and success handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js`:
- Around line 41-52: The test mocks for sanitizeObjectId and
sanitizeNonEmptyString diverge from production behavior; replace the inline
jest.fn mocks by importing the real implementations (sanitizeObjectId,
sanitizeNonEmptyString) from the shared input validation module or update the
mocks to exactly mirror production logic (for sanitizeObjectId use
mongoose.Types.ObjectId.isValid semantics, and for sanitizeNonEmptyString
preserve trim + empty check + maxLength default 1024 behavior) so tests exercise
the same sanitization rules as production.

In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 1209-1221: The current check treats SAFETY_MAX_BYTES as a total
quota for unlimited projects, causing projects with project.storageLimit === -1
to be capped at (SAFETY_MAX_BYTES) cumulative bytes; change the logic in the
block around project.storageLimit / SAFETY_MAX_BYTES so that unlimited plans do
not compare (project.storageUsed || 0) + numericSize against
SAFETY_MAX_BYTES—instead, when project.storageLimit === -1 enforce a per-request
safety check (if numericSize > SAFETY_MAX_BYTES or numericSize > MAX_FILE_SIZE
then reject) but skip the cumulative comparison with project.storageUsed; keep
the existing cumulative quota check for non-unlimited plans. Ensure you update
the code paths referencing project.storageLimit, numericSize,
project.storageUsed, SAFETY_MAX_BYTES and MAX_FILE_SIZE accordingly.
- Around line 1276-1290: The current handler deletes the uploaded object on a
"File not found after upload" error which can remove a legitimately uploaded
file when list-based checks are eventually consistent; instead, on
verifyUploadedFile throwing that error, do NOT unconditionally call
bestEffortDeleteUploadedObject(project, normalizedPath). Replace that delete
with a safe existence check (e.g., a HEAD/get metadata call against the storage
backend) and only call bestEffortDeleteUploadedObject if that existence check
confirms the object truly does not exist (404); if the HEAD shows the object
exists, skip deletion and just return the 409 AppError as before. Ensure you
update the code path around verifyUploadedFile and
bestEffortDeleteUploadedObject so deletion only happens after a definitive
non-existence check.
- Around line 1223-1224: The current safeName only replaces whitespace and can
leave path separators/control chars, letting inputs like "../foo" or "a/b.txt"
escape the intended key namespace; update the safeName creation (used when
building filePath alongside projectId and randomUUID()) to strip or normalize
slashes/backslashes and control characters before composing filePath — e.g.,
replace any sequence of forward/back slashes, control chars, and whitespace with
a single underscore (operate on sanitizedFilename), ensuring safeName contains
no path separators so filePath cannot point outside the project namespace; keep
confirmUpload/normalizeProjectPath logic unchanged.

In `@mintlify/docs/guides/storage.mdx`:
- Line 184: Change the heading "Required Bucket CORS (S3/R2)" to sentence case
by lowercasing non-acronym words—update the heading text (the line containing
"Required Bucket CORS (S3/R2)") to "Required bucket CORS (S3/R2)" so only the
first word is capitalized and the acronym CORS remains uppercase.

In `@packages/common/src/utils/input.validation.js`:
- Around line 298-303: The sanitizeObjectId function currently uses
mongoose.Types.ObjectId.isValid which allows 12-byte strings and 24-hex strings,
causing test/production drift; update sanitizeObjectId to enforce the stricter
24-character hex requirement used in tests by replacing the isValid check with
either mongoose.isObjectIdOrHexString(value) (if mongoose >= 8.x) or a direct
regex match (/^[a-fA-F0-9]{24}$/) so that sanitizeObjectId returns the
normalized value only for 24-character hex strings and null otherwise; modify
the check in sanitizeObjectId accordingly and keep the early-type/empty checks
as-is.

---

Duplicate comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 1322-1341: The provider is incorrectly set based on publicUrl and
the apiError fallback is pointless; update the block that calls
supabase.storage.from(bucket).getPublicUrl(normalizedPath) to (1) stop
destructuring an `error` (remove `error: apiError`) since getPublicUrl is a
synchronous URL constructor, (2) compute provider solely from the `external`
flag (use `external ? "external" : "internal"`), and (3) when no publicUrl is
available, set only a static warning on `responseData.warning` (e.g. "Upload confirmed, but a
public URL is unavailable.") — do not attempt to include `apiError` or render an
Error object; keep references to `getStorage`, `getBucket`, `normalizedPath`,
`supabase.storage.from(...).getPublicUrl`, `provider`, and
`responseData.warning` to locate and change the code.

---

Nitpick comments:
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 206-234: normalizeProjectPath currently normalizes and validates
path segments but doesn't reject segments containing backslashes, which can be
treated as separators on some platforms; update the function
(normalizeProjectPath) to explicitly reject any segment that contains a
backslash (e.g., add a check like segments.some(segment =>
segment.includes('\\')) and return null) so inputs like `${projectId}\evil` are
rejected regardless of POSIX normalization.
- Around line 1238-1241: The catch block currently swallows the original error
when calling getPresignedUploadUrl and returns a generic AppError(500, "Could
not generate upload URL"); update the catch to log the original error (e.g.,
processLogger.error or console.error) and include the original error details
when forwarding to next—either by passing a more descriptive AppError that
includes err.message/err.stack or by attaching the original error as a cause;
apply the same change to the confirmUpload catch (the block that currently
returns AppError(500, "Could not confirm upload")) so both getPresignedUploadUrl
and confirmUpload preserve and log the original error before calling next.

In `@apps/dashboard-api/src/routes/projects.js`:
- Around line 77-80: The inline comments above the presigned upload routes are
misleading — change the comment labels for clarity so they reflect the flow: for
router.post('/:projectId/storage/upload-request', ...) (handler requestUpload)
replace or update the comment to indicate "generate presigned upload URL" or
"presigned-upload: request URL", and for
router.post('/:projectId/storage/upload-confirm', ...) (handler confirmUpload)
update the comment to "confirm completed upload" or "presigned-upload: confirm
upload" so the intent of requestUpload and confirmUpload is clear.

In `@apps/web-dashboard/src/pages/Storage.jsx`:
- Around line 72-80: The upload-confirm call in Storage.jsx should special-case
a 409 response and retry a few times instead of immediately treating it as a
failure: in the catch block for the api.post to
`/api/projects/${projectId}/storage/upload-confirm` detect err.response?.status
=== 409, then perform a short backoff loop (e.g., 3 retries with incremental
delays) retrying the same api.post, and only show toast.error if all retries
fail; on a successful retry call fetchFiles() and toast.success as currently
done. Reference the existing api.post call, the catch block where toast.error is
called, and the fetchFiles() invocation to place the retry logic and success
handling.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0e6b79c6-5526-45bd-8e94-d39df81c220b

📥 Commits

Reviewing files that changed from the base of the PR and between 29eb889 and 55c0a5e.

📒 Files selected for processing (9)
  • apps/dashboard-api/src/__tests__/routes.projects.storage.test.js
  • apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js
  • apps/dashboard-api/src/controllers/project.controller.js
  • apps/dashboard-api/src/routes/projects.js
  • apps/web-dashboard/src/pages/Storage.jsx
  • docs-legacy/storage.md
  • mintlify/docs/guides/storage.mdx
  • packages/common/src/index.js
  • packages/common/src/utils/input.validation.js
✅ Files skipped from review due to trivial changes (2)
  • packages/common/src/index.js
  • docs-legacy/storage.md

Comment on lines +41 to +52
sanitizeObjectId: jest.fn((value) => {
if (typeof value !== 'string') return null;
const normalized = value.trim();
return /^[a-fA-F0-9]{24}$/.test(normalized) ? normalized : null;
}),
sanitizeNonEmptyString: jest.fn((value, options = {}) => {
if (typeof value !== 'string') return null;
const normalized = value.trim();
if (!normalized) return null;
if (normalized.length > (options.maxLength || 1024)) return null;
return normalized;
}),
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Test mocks for sanitizeObjectId/sanitizeNonEmptyString diverge from production.

The mocks use stricter validation (/^[a-fA-F0-9]{24}$/) than the real sanitizeObjectId in packages/common/src/utils/input.validation.js, which delegates to mongoose.Types.ObjectId.isValid and accepts any 12-byte string. This means inputs that pass production but would fail the test (or vice versa) won't be caught by these tests.

Consider importing the real implementations directly (or mirroring them exactly) so the tests are coupled to the actual sanitization behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js` around
lines 41 - 52, The test mocks for sanitizeObjectId and sanitizeNonEmptyString
diverge from production behavior; replace the inline jest.fn mocks by importing
the real implementations (sanitizeObjectId, sanitizeNonEmptyString) from the
shared input validation module or update the mocks to exactly mirror production
logic (for sanitizeObjectId use mongoose.Types.ObjectId.isValid semantics, and
for sanitizeNonEmptyString preserve trim + empty check + maxLength default 1024
behavior) so tests exercise the same sanitization rules as production.

Comment on lines +1209 to +1221
// Pre-check quota only; actual storage usage is charged after confirmUpload verifies object existence and size.
if (!external) {
const storageLimit =
typeof project.storageLimit === "number"
? project.storageLimit
: 20 * 1024 * 1024;
const quotaLimit =
storageLimit === -1 ? SAFETY_MAX_BYTES : storageLimit;

if ((project.storageUsed || 0) + numericSize > quotaLimit) {
return next(new AppError(403, "Internal storage limit exceeded."));
}
}
Contributor

⚠️ Potential issue | 🟠 Major

SAFETY_MAX_BYTES caps unlimited (-1) plans at 100 MB total.

For projects with storageLimit === -1 (intended to be unlimited), this code substitutes SAFETY_MAX_BYTES (100 MB) as the quota cap and then checks (storageUsed + numericSize) > quotaLimit. Because storageUsed accumulates over time, paid/unlimited tenants will hit a hard 100 MB ceiling once their cumulative usage approaches that number, even though MAX_FILE_SIZE already caps each individual upload at 10 MB.

If SAFETY_MAX_BYTES is meant as a per-request safety cap, compare against numericSize only; if it's meant as an absolute total cap, the constant name and the value (100 MB) seem too low for "unlimited" plans.

♻️ Likely intent
-    if (!external) {
-      const storageLimit =
-        typeof project.storageLimit === "number"
-          ? project.storageLimit
-          : 20 * 1024 * 1024;
-      const quotaLimit =
-        storageLimit === -1 ? SAFETY_MAX_BYTES : storageLimit;
-
-      if ((project.storageUsed || 0) + numericSize > quotaLimit) {
-        return next(new AppError(403, "Internal storage limit exceeded."));
-      }
-    }
+    if (!external) {
+      const storageLimit =
+        typeof project.storageLimit === "number"
+          ? project.storageLimit
+          : 20 * 1024 * 1024;
+
+      if (
+        storageLimit !== -1 &&
+        (project.storageUsed || 0) + numericSize > storageLimit
+      ) {
+        return next(new AppError(403, "Internal storage limit exceeded."));
+      }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1209 -
1221, The current check treats SAFETY_MAX_BYTES as a total quota for unlimited
projects, causing projects with project.storageLimit === -1 to be capped at
(SAFETY_MAX_BYTES) cumulative bytes; change the logic in the block around
project.storageLimit / SAFETY_MAX_BYTES so that unlimited plans do not compare
(project.storageUsed || 0) + numericSize against SAFETY_MAX_BYTES—instead, when
project.storageLimit === -1 enforce a per-request safety check (if numericSize >
SAFETY_MAX_BYTES or numericSize > MAX_FILE_SIZE then reject) but skip the
cumulative comparison with project.storageUsed; keep the existing cumulative
quota check for non-unlimited plans. Ensure you update the code paths
referencing project.storageLimit, numericSize, project.storageUsed,
SAFETY_MAX_BYTES and MAX_FILE_SIZE accordingly.

Comment on lines +1223 to +1224
const safeName = sanitizedFilename.replace(/\s+/g, "_");
const filePath = `${projectId}/${randomUUID()}_${safeName}`;
Contributor

⚠️ Potential issue | 🟠 Major

safeName does not strip path separators.

sanitizedFilename.replace(/\s+/g, "_") only replaces whitespace. A filename like ../foo or a/b/c.txt becomes part of the storage key as-is, e.g. ${projectId}/${uuid}_../foo. While normalizeProjectPath in confirmUpload will reject any normalized path whose first segment is no longer projectId, the actual stored object (whose key was generated here and uploaded directly to S3/R2/Supabase) may end up at an unexpected location relative to the path the client receives, breaking listing/quota accounting.

Strip slashes/backslashes (and ideally any control chars) before composing the key:

🛡️ Proposed fix
-    const safeName = sanitizedFilename.replace(/\s+/g, "_");
+    const safeName = sanitizedFilename
+      .replace(/[\\/]+/g, "_")    // strip path separators
+      .replace(/\s+/g, "_")
+      .replace(/[\u0000-\u001f]/g, ""); // strip control chars
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1223 -
1224, The current safeName only replaces whitespace and can leave path
separators/control chars, letting inputs like "../foo" or "a/b.txt" escape the
intended key namespace; update the safeName creation (used when building
filePath alongside projectId and randomUUID()) to strip or normalize
slashes/backslashes and control characters before composing filePath — e.g.,
replace any sequence of forward/back slashes, control chars, and whitespace with
a single underscore (operate on sanitizedFilename), ensuring safeName contains
no path separators so filePath cannot point outside the project namespace; keep
confirmUpload/normalizeProjectPath logic unchanged.

Comment on lines +1276 to +1290
try {
actualSize = await verifyUploadedFile(project, normalizedPath);
} catch (err) {
if (err?.message === "File not found after upload") {
await bestEffortDeleteUploadedObject(project, normalizedPath);
return next(
new AppError(
409,
"Uploaded file is not visible yet. Please retry confirmation.",
),
);
}
throw err;
}

Contributor

⚠️ Potential issue | 🔴 Critical

Cleanup before retryable 409 can defeat the retry mechanism.

When verifyUploadedFile throws "File not found after upload", the handler calls bestEffortDeleteUploadedObject(project, normalizedPath) and then returns a retryable 409. But:

  • If the absence is genuine (no file uploaded), the delete is a harmless no-op.
  • If the list-based check failed only due to eventual consistency (file actually exists), the underlying remove may succeed and delete the very file the client is about to retry-confirm. The retry will then see "not found" forever.

This silently corrupts the retry path on backends where list and remove consistency differ (S3/R2 with read-after-write of list).

🛡️ Recommended fix
     try {
       actualSize = await verifyUploadedFile(project, normalizedPath);
     } catch (err) {
       if (err?.message === "File not found after upload") {
-        await bestEffortDeleteUploadedObject(project, normalizedPath);
         return next(
           new AppError(
             409,
             "Uploaded file is not visible yet. Please retry confirmation.",
           ),
         );
       }
       throw err;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1276 -
1290, The current handler deletes the uploaded object on a "File not found after
upload" error which can remove a legitimately uploaded file when list-based
checks are eventually consistent; instead, on verifyUploadedFile throwing that
error, do NOT unconditionally call bestEffortDeleteUploadedObject(project,
normalizedPath). Replace that delete with a safe existence check (e.g., a
HEAD/get metadata call against the storage backend) and only call
bestEffortDeleteUploadedObject if that existence check confirms the object truly
does not exist (404); if the HEAD shows the object exists, skip deletion and
just return the 409 AppError as before. Ensure you update the code path around
verifyUploadedFile and bestEffortDeleteUploadedObject so deletion only happens
after a definitive non-existence check.
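The "delete only after a definitive miss" pattern the prompt asks for can be sketched like this. The `storage.head`/`storage.remove` names stand in for a HEAD-style metadata call and a delete on whatever backend is configured; both names are assumptions for illustration, not the project's real API.

```javascript
// Only clean up when a direct existence check confirms a true 404;
// a listing-based "not found" alone is not trusted (eventual consistency).
async function cleanupIfDefinitelyMissing(storage, key) {
  try {
    await storage.head(key); // object exists -> do NOT delete it
    return { exists: true, deleted: false };
  } catch (err) {
    if (err.statusCode === 404) {
      // Definitive miss: best-effort, non-fatal cleanup.
      await storage.remove(key).catch(() => {});
      return { exists: false, deleted: true };
    }
    throw err; // transient error: leave the object alone
  }
}

// Demo against an in-memory fake backend.
const fake = (objects) => ({
  head: async (k) => {
    if (!objects.has(k)) {
      const e = new Error("NotFound");
      e.statusCode = 404;
      throw e;
    }
  },
  remove: async (k) => objects.delete(k),
});

(async () => {
  const s = fake(new Set(["p123/a.txt"]));
  console.log(await cleanupIfDefinitelyMissing(s, "p123/a.txt")); // { exists: true, deleted: false }
  console.log(await cleanupIfDefinitelyMissing(s, "p123/b.txt")); // { exists: false, deleted: true }
})();
```

Wiring this in place of the unconditional bestEffortDeleteUploadedObject call keeps the retryable 409 path intact: a file that exists but is not yet visible to listing survives for the client's retry.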

Comment thread mintlify/docs/guides/storage.mdx Outdated
Comment on lines +298 to +303
module.exports.sanitizeObjectId = (value) => {
if (typeof value !== "string") return null;
const normalized = value.trim();
if (!normalized) return null;
return mongoose.Types.ObjectId.isValid(normalized) ? normalized : null;
};
Contributor

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Does mongoose.Types.ObjectId.isValid return true for arbitrary 12-character strings in mongoose 8.x?

💡 Result:

Yes, mongoose.Types.ObjectId.isValid returns true for arbitrary 12-character strings in mongoose 8.x. This is the intended behavior because a MongoDB ObjectId is defined as 12 arbitrary bytes, and any JavaScript string of length 12 can be cast to a 12-byte buffer, making it a valid ObjectId. For stricter validation (e.g., only 24-character hex strings), use mongoose.isObjectIdOrHexString instead.

Citations:


🏁 Script executed:

# Find mongoose version
fd package.json | xargs grep -l mongoose | head -5

Repository: geturbackend/urBackend

Length of output: 156


🏁 Script executed:

# Check the storage.presigned.controller.test.js file for the mock regex
fd storage.presigned.controller.test.js

Repository: geturbackend/urBackend

Length of output: 136


🏁 Script executed:

# Search for how sanitizeObjectId is used in the codebase
rg "sanitizeObjectId" -A 2 -B 2

Repository: geturbackend/urBackend

Length of output: 3014


🏁 Script executed:

# Check mongoose documentation in node_modules or check what functions are available
fd -i "package.json" | xargs grep '"mongoose"' -h

Repository: geturbackend/urBackend

Length of output: 147


🏁 Script executed:

# View the test mock implementation to see the exact regex
sed -n '/sanitizeObjectId: jest.fn/,/^[[:space:]]*}/p' apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js | head -20

Repository: geturbackend/urBackend

Length of output: 277


🏁 Script executed:

# View the requestUpload and confirmUpload methods in project.controller.js to see how sanitizeObjectId is used
sed -n '/module.exports.requestUpload/,/^module.exports/p' apps/dashboard-api/src/controllers/project.controller.js | head -30

Repository: geturbackend/urBackend

Length of output: 1168


🏁 Script executed:

# Also view confirmUpload
sed -n '/module.exports.confirmUpload/,/^module.exports/p' apps/dashboard-api/src/controllers/project.controller.js | head -30

Repository: geturbackend/urBackend

Length of output: 1178


🏁 Script executed:

# Find and view normalizeProjectPath to understand security implications
rg "normalizeProjectPath" -A 10 packages/common/src

Repository: geturbackend/urBackend

Length of output: 48


🏁 Script executed:

# Check mongoose documentation or source to confirm isObjectIdOrHexString availability in 8.x
rg "isObjectIdOrHexString" packages/common/src apps/dashboard-api/src

Repository: geturbackend/urBackend

Length of output: 48


🏁 Script executed:

# Check if mongoose.isObjectIdOrHexString is exported or used anywhere
fd package.json -exec grep -l mongoose {} \; | xargs grep mongoose | grep -i "version\|isObjectIdOrHexString"

Repository: geturbackend/urBackend

Length of output: 310


🏁 Script executed:

# Find normalizeProjectPath implementation
rg "normalizeProjectPath" -A 15

Repository: geturbackend/urBackend

Length of output: 7373


🏁 Script executed:

# Check mongoose version and what ObjectId validation functions exist
cat packages/common/package.json | grep -A 2 -B 2 mongoose

Repository: geturbackend/urBackend

Length of output: 196


🏁 Script executed:

# Search for any imports/usage of mongoose.isObjectIdOrHexString
rg "isObjectIdOrHexString"

Repository: geturbackend/urBackend

Length of output: 48


sanitizeObjectId is more permissive than the test mock, creating test/prod drift.

mongoose.Types.ObjectId.isValid returns true for any 12-byte string and any 24-character hex string, while the test mock in apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js enforces /^[a-fA-F0-9]{24}$/ (24-char hex only). This divergence means inputs like "abcdefghijkl" (12 chars) will pass in production but fail in tests.

Although the owner: req.user._id check and MongoDB's own validation provide some mitigation, tightening the validation would close the gap and align with test expectations. Consider using mongoose.isObjectIdOrHexString() (available in mongoose 8.x) instead of mongoose.Types.ObjectId.isValid(), or apply the stricter regex from the test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/common/src/utils/input.validation.js` around lines 298 - 303, The
sanitizeObjectId function currently uses mongoose.Types.ObjectId.isValid which
allows 12-byte strings and 24-hex strings, causing test/production drift; update
sanitizeObjectId to enforce the stricter 24-character hex requirement used in
tests by replacing the isValid check with either
mongoose.isObjectIdOrHexString(value) (if mongoose >= 8.x) or a direct regex
match (/^[a-fA-F0-9]{24}$/) so that sanitizeObjectId returns the normalized
value only for 24-character hex strings and null otherwise; modify the check in
sanitizeObjectId accordingly and keep the early-type/empty checks as-is.
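The stricter validator suggested above can be sketched with the plain-regex variant, which runs without a mongoose dependency; the real fix could equally call mongoose.isObjectIdOrHexString(value) on newer mongoose releases. This is an illustrative sketch, not the shipped implementation.

```javascript
// Stricter sanitizeObjectId: accept only 24-character hex strings,
// closing the 12-byte-string gap that ObjectId.isValid allows.
function sanitizeObjectId(value) {
  if (typeof value !== "string") return null;
  const normalized = value.trim();
  if (!normalized) return null;
  return /^[a-fA-F0-9]{24}$/.test(normalized) ? normalized : null;
}

console.log(sanitizeObjectId("507f1f77bcf86cd799439011")); // accepted (24-char hex)
console.log(sanitizeObjectId("abcdefghijkl"));             // null (12-char string now rejected)
```

This also brings production behavior in line with the test mocks quoted earlier, eliminating the test/prod drift the review flags.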

Contributor

@coderabbitai coderabbitai Bot left a comment

🧹 Nitpick comments (1)
mintlify/docs/guides/storage.mdx (1)

195-198: Consider clarifying whether headers are allowed-headers or exposed-headers.

The bullet list under "Required headers should include at least" doesn't specify which CORS field these belong to (AllowedHeaders vs ExposeHeaders). For S3/R2 presigned PUTs, content-type must be in AllowedHeaders (and content-length is generally a CORS-safelisted header but some configs still need it allow-listed). A short clarifier would prevent users from misconfiguring ExposeHeaders instead.

📝 Suggested wording
-Required headers should include at least:
+Required `AllowedHeaders` should include at least:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mintlify/docs/guides/storage.mdx` around lines 195 - 198, Update the
"Required headers should include at least:" bullet list to explicitly state
these are CORS AllowedHeaders (not ExposeHeaders); clarify that `content-type`
must be present in AllowedHeaders for presigned PUTs and that `content-length`
is typically safelisted but may still need to be allow-listed in some providers,
e.g., change the heading or add a short sentence like "These should be listed in
CORS AllowedHeaders (not ExposeHeaders) — ensure `content-type` is included for
presigned PUTs; `content-length` is usually safelisted but may need to be
allow-listed in some configurations."
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@mintlify/docs/guides/storage.mdx`:
- Around line 195-198: Update the "Required headers should include at least:"
bullet list to explicitly state these are CORS AllowedHeaders (not
ExposeHeaders); clarify that `content-type` must be present in AllowedHeaders
for presigned PUTs and that `content-length` is typically safelisted but may
still need to be allow-listed in some providers, e.g., change the heading or add
a short sentence like "These should be listed in CORS AllowedHeaders (not
ExposeHeaders) — ensure `content-type` is included for presigned PUTs;
`content-length` is usually safelisted but may need to be allow-listed in some
configurations."
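The AllowedHeaders-vs-ExposeHeaders distinction can be made concrete with an illustrative CORS rule. Field names follow the S3/R2 CORS configuration schema; the origin value is a placeholder, not a real deployment URL.

```javascript
// Illustrative bucket CORS rule for the presigned PUT flow.
const corsRule = {
  AllowedOrigins: ["https://dashboard.example.invalid"], // placeholder origin
  AllowedMethods: ["PUT", "OPTIONS", "GET", "HEAD"],
  // Request headers the preflight must permit the browser to send;
  // content-type MUST be here for presigned PUTs.
  AllowedHeaders: ["content-type", "content-length"],
  // Response headers exposed to client-side JS -- NOT where content-type belongs.
  ExposeHeaders: ["etag"],
  MaxAgeSeconds: 3600,
};

console.log(JSON.stringify({ CORSRules: [corsRule] }, null, 2));
```

Putting content-type under ExposeHeaders instead of AllowedHeaders is exactly the misconfiguration the suggested wording guards against: the preflight would then reject the signed PUT.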

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4b9e46a0-31c4-4d63-995c-6e8cd3d4da1a

📥 Commits

Reviewing files that changed from the base of the PR and between 55c0a5e and 43c0fd2.

📒 Files selected for processing (1)
  • mintlify/docs/guides/storage.mdx

@mintlify
Contributor

mintlify Bot commented Apr 26, 2026

Clarified that the required headers list refers to CORS AllowedHeaders (not ExposeHeaders), noting content-type is required for presigned PUTs and content-length may need explicit allow-listing on some providers.

@yash-pouranik
Collaborator

@coderabbitai Ok to merge?? or anything else to update??

@coderabbitai
Contributor

coderabbitai Bot commented Apr 26, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

Let me do a quick check to ensure no other places in the codebase still reference the old upload endpoint before giving a final recommendation.


🏁 Script executed:

#!/bin/bash
echo "=== References to old /storage/upload endpoint ==="
# ripgrep has no built-in "jsx" type, so define one before filtering on it
rg --type-add 'jsx:*.jsx' -n "storage/upload" --type=js --type=jsx -g "!*.test.*" -g "!*.spec.*" -C2

echo ""
echo "=== Multer references remaining ==="
rg -n "multer" --type=js -C2

echo ""
echo "=== uploadFile references remaining ==="
rg -n "uploadFile" --type=js -C2

echo ""
echo "=== References to upload-request or upload-confirm in non-test files ==="
rg -n "upload-request|upload-confirm" -C2 -g "!*.test.*" -g "!*.spec.*"

Length of output: 11452


@yash-pouranik: Based on my review of the codebase, this PR looks good to merge. Here's a summary of what I checked:

✅ All clear:

  • 218 tests pass across 21 suites (confirmed from the terminal image)
  • No orphaned references to the old /:projectId/storage/upload dashboard endpoint — only the route test that verifies it correctly returns 404
  • Web dashboard (Storage.jsx) is fully updated to the new presigned flow
  • Both documentation files updated — docs-legacy/storage.md and mintlify/docs/guides/storage.mdx — which addresses the outstanding concern from PR #129
  • SDK (sdks/urbackend-sdk) already targets the presigned endpoints
  • Auth chain properly wired: authMiddleware → verifyEmail → loadProjectForAdmin
  • Atomic quota management with cleanup on failure looks correct

⚠️ One minor future tech debt (not a blocker):
The public API (apps/public-api/src/routes/storage.js) still retains the legacy multer-based POST /upload + uploadFile endpoint alongside the new presigned flow. This is intentional backward compatibility and is out of scope for this PR, but could be deprecated in a future PR.

No blockers — the PR is consistent and complete. 🚀
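The "atomic quota management with cleanup on failure" point above can be sketched roughly like this. This is a hypothetical illustration of the pattern, not the repo's actual `confirmUpload()`: the `storage`/`quota` collaborators and their method names (`headObject`, `deleteObject`, `charge`) are assumptions for the sketch.

```javascript
// Confirm-and-charge pattern: verify the uploaded object, charge quota in a
// single operation, and best-effort delete the object if anything fails.
async function confirmUpload(storage, quota, projectId, filePath, expectedSize) {
  // Verify the file actually landed in the bucket.
  const head = await storage.headObject(filePath);
  if (!head) throw new Error("uploaded object not found");

  // Compare the real object size against what the client declared.
  if (head.size !== expectedSize) {
    await storage.deleteObject(filePath).catch(() => {}); // best-effort cleanup
    throw new Error("size mismatch");
  }

  try {
    // Single atomic operation: rejects if the project would exceed its quota.
    await quota.charge(projectId, head.size);
  } catch (err) {
    await storage.deleteObject(filePath).catch(() => {}); // non-fatal cleanup
    throw err;
  }
  return { filePath, size: head.size };
}
```

Keeping the quota charge as one atomic call (rather than read-then-write) is what prevents two concurrent confirmations from both slipping under the limit.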


✏️ Learnings added
Learnt from: yash-pouranik
URL: https://github.com/geturbackend/urBackend/pull/136

Timestamp: 2026-04-26T09:38:23.843Z
Learning: In the geturbackend/urBackend repo, PR `#136` updated both mintlify/docs/guides/storage.mdx and docs-legacy/storage.md with presigned upload flow docs and CORS requirements for the dashboard API, resolving the gap introduced in PR `#129`. The public-api still retains the legacy multer-based POST /upload endpoint for backward compatibility. Future PRs deprecating the legacy public API upload endpoint should remove apps/public-api/src/routes/storage.js multer setup and uploadFile controller.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: yash-pouranik
Repo: geturbackend/urBackend PR: 0
File: :0-0
Timestamp: 2026-04-23T19:32:31.401Z
Learning: In the geturbackend/urBackend repo, PR `#129` introduced two new public storage REST endpoints (POST /api/storage/upload-request and POST /api/storage/upload-confirm) as part of a presigned URL upload flow, but did not update docs in mintlify/docs/guides/storage.mdx or docs-legacy/storage.md. Future PRs touching storage should update both doc files.

@yash-pouranik
Collaborator

Bro @coderabbitai, I know the presigned URL flow is not updated in this PR, but it was done in PR #129

@coderabbitai
Contributor

coderabbitai Bot commented Apr 26, 2026


@yash-pouranik: Totally agree — I was already aware of that! My tech debt note wasn't saying the presigned flow is missing from the public API (PR #129 handled that). I was specifically flagging that the old legacy multer-based POST /upload endpoint still coexists alongside the new presigned flow in apps/public-api/src/routes/storage.js for backward compatibility. That's a future cleanup item, not a blocker for this PR.

Everything checks out — this PR is good to merge 🚀

ʕ •ᴥ•ʔ ~ thanks for the clarification!



@yash-pouranik
Collaborator

Okay, thank you @coderabbitai.
Thank you @Nitin-kumar-yadav1307 for the PR and for cooperating so nicely.
Last check: have you tested locally, by spinning up both server and client?

@coderabbitai
Contributor

coderabbitai Bot commented Apr 26, 2026


@yash-pouranik: That's a great final check! From the PR description image, I can confirm the automated Jest unit tests all pass (218/218 across 21 suites), but that only covers the unit/integration test layer — it doesn't substitute for a local end-to-end spin-up with the actual S3/R2 storage backend.

Key things worth verifying in a local end-to-end test would be:

  1. POST /:projectId/storage/upload-request → returns a valid presigned signedUrl and filePath
  2. Browser PUT to signedUrl → file actually lands in S3/R2 (CORS headers on the bucket must be set correctly)
  3. POST /:projectId/storage/upload-confirm → quota is charged and a public URL is returned
  4. Dashboard UI (Storage.jsx) — the full upload flow works end-to-end without errors in the browser console
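Under those assumptions, the checks above can be sketched as one client-side flow. The endpoint paths come from this PR; the request/response field names (`signedUrl`, `filePath`, `uploadToken`, `fileName`, `fileSize`) are illustrative guesses rather than the exact API contract, and `fetchImpl` is injected in place of the browser's `fetch` so the flow can be exercised without a live server:

```javascript
// Three-step presigned upload: request URL -> PUT bytes to storage -> confirm.
async function uploadViaPresignedFlow(fetchImpl, apiBase, projectId, file) {
  // Step 1: request a presigned URL (server validates size and peeks at quota).
  const reqRes = await fetchImpl(`${apiBase}/${projectId}/storage/upload-request`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ fileName: file.name, fileSize: file.size }),
  });
  if (!reqRes.ok) throw new Error(`upload-request failed: ${reqRes.status}`);
  const { signedUrl, filePath, uploadToken } = await reqRes.json();

  // Step 2: PUT the raw bytes straight to S3/R2; Node never proxies them.
  const putRes = await fetchImpl(signedUrl, {
    method: "PUT",
    headers: { "content-type": file.type },
    body: file.body,
  });
  if (!putRes.ok) throw new Error(`storage PUT failed: ${putRes.status}`);

  // Step 3: confirm, so the server verifies the object and charges quota.
  const confRes = await fetchImpl(`${apiBase}/${projectId}/storage/upload-confirm`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ filePath, uploadToken }),
  });
  if (!confRes.ok) throw new Error(`upload-confirm failed: ${confRes.status}`);
  return confRes.json(); // expected to include the public URL
}
```

Injecting `fetchImpl` also makes the Step 2 CORS dependency explicit: in a real browser run, that PUT is exactly where a missing `content-type` in the bucket's AllowedHeaders would surface.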

@Nitin-kumar-yadav1307 can you confirm you tested this end-to-end locally? 🙏



@Nitin-kumar-yadav1307
Contributor Author

@yash-pouranik
[image: terminal screenshot of test results]

@yash-pouranik
Collaborator

@Nitin-kumar-yadav1307

These are the public API tests?

@Nitin-kumar-yadav1307
Contributor Author

@yash-pouranik
yes

@yash-pouranik
Collaborator

You have made changes in the dashboard APIs, so run those tests.
Also run both the dashboard and web apps and upload a file; if it works well, let me know.
@Nitin-kumar-yadav1307

@Nitin-kumar-yadav1307
Contributor Author

@yash-pouranik
I was doing that, but then I remembered I need credentials for that, don't I?

@yash-pouranik
Collaborator

@Nitin-kumar-yadav1307
What credentials do you need?

@Nitin-kumar-yadav1307
Contributor Author

@yash-pouranik
When I was checking whether the presigned URL actually works by uploading through the web, it showed error 500.

@Nitin-kumar-yadav1307
Contributor Author

@yash-pouranik Supabase

@Nitin-kumar-yadav1307
Contributor Author

@Nitin-kumar-yadav1307 what cred u need?

But I am not asking for any credentials here, because they are confidential.

@yash-pouranik
Collaborator

@Nitin-kumar-yadav1307
You can go to Supabase, open Storage, create a bucket named dev-files, create an anon key, and use these vars:
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-supabase-anon-key

@yash-pouranik
Collaborator

Okay, I have tested your PR locally @Nitin-kumar-yadav1307.
It's working as intended.
Thank you for this PR.

@yash-pouranik yash-pouranik merged commit 0825b18 into geturbackend:main Apr 26, 2026
7 checks passed


Development

Successfully merging this pull request may close these issues.

feat(dashboard): add presigned URL upload flow to dashboard API
