Presigned Upload Flow for Dashboard API #136
Conversation
…#131)

- Add requestUpload handler to generate presigned URLs with quota peeking
- Add confirmUpload handler to verify uploads and charge quota atomically
- Implement size validation, path traversal protection, and cleanup on failure
- Add presigned upload tests for controller logic and route wiring
- Update storage docs with presigned flow and S3/R2 CORS requirements
📝 Walkthrough

Replaces single-step multipart uploads with a two-phase presigned browser upload flow: adds …

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client as Client/Browser
participant Router as Router<br/>(/upload-request, /upload-confirm)
participant Controller as Controller<br/>requestUpload/confirmUpload
participant Common as `@urbackend/common`<br/>storage helpers
participant Storage as S3/R2 Storage
participant DB as MongoDB<br/>Projects
Client->>Router: POST /upload-request (projectId, filename, contentType, size)
Router->>Router: auth + verifyEmail + loadProjectForAdmin
Router->>Controller: requestUpload(req)
Controller->>DB: read storageUsed/quota (pre-check)
Controller->>Common: getPresignedUploadUrl(filePath, contentType, size)
Common->>Storage: generate signed upload URL/token
Storage-->>Common: signedUrl/token
Common-->>Controller: signed credentials
Controller-->>Client: 200 { signedUrl, filePath }
Client->>Storage: PUT file bytes to signedUrl
Storage-->>Client: 200 OK
Client->>Router: POST /upload-confirm (projectId, filePath, declaredSize)
Router->>Router: auth + verifyEmail + loadProjectForAdmin
Router->>Controller: confirmUpload(req)
Controller->>Common: verifyUploadedFile(filePath)
Common->>Storage: HEAD/inspect -> size/visibility
Storage-->>Common: verified size / not-visible
Common-->>Controller: verification result
Controller->>DB: atomic Project.updateOne($inc: storageUsed) or delete object on quota exceed
Controller->>Common: getPublicUrl(filePath)
Controller-->>Client: 200 { provider, publicUrl?, filePath }
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 4
🧹 Nitpick comments (4)
apps/dashboard-api/src/controllers/project.controller.js (2)
35-37: Single‑source the 10 MB limit.
`MAX_FILE_SIZE = 10 * 1024 * 1024` here duplicates the multer limit at `apps/dashboard-api/src/routes/projects.js:46` (`{ fileSize: 10 * 1024 * 1024 }`). If the cap ever changes, the two paths (legacy multipart and presigned) will drift. Consider exporting one constant (e.g., from `@urbackend/common`) and importing it in both places.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 35 - 37, The 10MB cap is duplicated via MAX_FILE_SIZE in the controller; export a single shared constant (e.g., UPLOAD_MAX_FILE_SIZE) from the common/shared package and replace the local MAX_FILE_SIZE usage with that import so both multipart (where multer uses { fileSize: 10 * 1024 * 1024 }) and presigned upload code reference the same symbol; update any references to SAFETY_MAX_BYTES or CONFIRM_UPLOAD_SIZE_TOLERANCE_BYTES only if they depend on the previous MAX_FILE_SIZE (compute them from the shared constant if needed) and update the multer config to import and use the shared constant as well.
1311-1322: Brittle string match couples this handler to upstream wording.
err?.message === "File not found after upload"depends on the exact literal thrown byverifyUploadedFileinpackages/common/src/utils/storage.manager.js. If that message is ever rephrased (or localized), this branch silently stops returning the retryable409 UPLOAD_NOT_READYand any client retry loop breaks — the request falls through to the catch-all500.Prefer a structured signal (an error code, a custom error class, or a sentinel exported from
@urbackend/common) so the contract is explicit at module boundaries.♻️ Sketch
```diff
- if (err?.message === "File not found after upload") {
+ if (err?.code === "UPLOAD_NOT_READY" || err?.message === "File not found after upload") {
    await bestEffortDeleteUploadedObject(project, normalizedPath);
```

…and have
`verifyUploadedFile` throw an error with `code: "UPLOAD_NOT_READY"` for the not-yet-visible case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1311 - 1322, The current handler in project.controller.js should not rely on the literal error.message from verifyUploadedFile; change verifyUploadedFile (in packages/common/src/utils/storage.manager.js or its exported sentinel) to throw a structured signal (e.g., set err.code = "UPLOAD_NOT_READY" or throw a specific UploadNotReadyError exported from `@urbackend/common`) and update this controller to test that signal (e.g., if (err.code === "UPLOAD_NOT_READY") or if (err instanceof UploadNotReadyError)) before calling bestEffortDeleteUploadedObject and returning the 409 response so the retryable condition is robust across wording changes.

apps/dashboard-api/src/routes/projects.js (1)
85-88: Comment label for `upload-confirm` is misleading.
`//UPLOAD URL` reads like "this is where the upload happens", but the route is the confirmation step. Consider `// CONFIRM UPLOAD` (and `// REQUEST PRESIGNED URL` above) for clarity.

📝 Proposed tweak
```diff
-//SIGNED URL
+// REQUEST PRESIGNED UPLOAD URL
 router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
-//UPLOAD URL
+// CONFIRM UPLOAD COMPLETION
 router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/routes/projects.js` around lines 85 - 88, The inline comment for the confirm-upload route is misleading; update the two comments above the routes to clearly reflect their actions: change the comment for router.post('/:projectId/storage/upload-request', ...) to something like "// REQUEST PRESIGNED URL" and change the comment for router.post('/:projectId/storage/upload-confirm', ...) to "// CONFIRM UPLOAD" so callers see that requestUpload handles generating a presigned URL and confirmUpload handles the confirmation step.

docs-legacy/storage.md (1)
70-99: Documentation is too generic to be actionable.

The new section talks abstractly about "the backend `upload-request` endpoint" but never gives:

- HTTP method (`POST`) or full path (e.g., `POST /api/projects/:projectId/storage/upload-request` for dashboard, `POST /api/storage/upload-request` for public API).
- A concrete request payload (`filename`, `contentType`, `size`) and response payload (`signedUrl`, `token`, `filePath`).
- The browser-side `PUT` headers required (`Content-Type`, `Content-Length`) and the confirm payload (`filePath`, `size`).
- The retryable `409 UPLOAD_NOT_READY` semantics that clients must handle.

A reader of just this doc cannot integrate against either API. Mirroring the structure of section 1 ("Upload a File") with code samples would close that gap.
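For instance, the payload shapes such a doc section could spell out might look like this. This is only a sketch based on the field names listed in this comment; the exact response contract should be verified against the handlers:

```js
// Hypothetical example bodies, inferred from the review above.
const uploadRequestBody = {
  filename: "logo.png",
  contentType: "image/png",
  size: 52431, // bytes
};

const uploadRequestResponse = {
  signedUrl: "https://storage.example.com/...", // browser PUTs the bytes here
  token: "opaque-upload-token",                 // provider-specific; may be absent
  filePath: "<projectId>/<uuid>_logo.png",
};

const uploadConfirmBody = {
  filePath: uploadRequestResponse.filePath,
  size: 52431, // must match what was uploaded
};
```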
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs-legacy/storage.md` around lines 70 - 99, Add concrete, actionable API details to the "Presigned Upload" section: specify HTTP method and full endpoint paths for dashboard and public APIs (e.g., POST /api/projects/:projectId/storage/upload-request and POST /api/storage/upload-request), define exact request payload fields for upload-request (filename, contentType, size) and upload-confirm (filePath, size) and the response payload (signedUrl, token, filePath), enumerate required browser PUT headers (Content-Type, Content-Length, any token header), document retryable 409 UPLOAD_NOT_READY semantics and suggested client retry/backoff, and mirror the "Upload a File" section by adding concise example request/response snippets and a browser PUT flow sequence so implementers can integrate end-to-end.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js`:
- Around line 238-256: The test uses an unrealistic Supabase mock shape and the
controller checks an unreachable error field; update the test's
mockStorageFrom.getPublicUrl call in storage.presigned.controller.test.js to
return { data: { publicUrl: null } } (remove the error property) and then edit
the controller.confirmUpload implementation to stop referencing
publicUrlData?.error — instead only inspect publicUrlData?.publicUrl (use a
truthy check for publicUrl and fall back to external provider when it's null).
Ensure you remove the dead-code branch that relied on the non-existent error
field.
In `@apps/dashboard-api/src/controllers/project.controller.js`:
- Around line 1222-1280: The handlers (requestUpload and the other new upload
handler around lines 1282-1388) return ad-hoc payloads and raw error responses;
update them to follow the controller convention by returning { success: true,
data: { ... }, message: "" } on success (embed signedUrl, token, filePath, or
message/path/provider/url/warning inside data) and throw or pass an AppError on
failures (instead of res.status(...).json({ error })). Replace direct
res.status(500).json(...) in the catch with next(new AppError(..., 500)) or
construct AppError instances for 4xx conditions (e.g., quota, validation, not
found) so the global error handler generates the standard response shape; locate
symbols requestUpload and the other upload handler to make the changes.
- Around line 1361-1379: The code is checking publicUrlData?.error which is
unreachable because getPublicUrl() returns error at the top level; update the
destructuring from supabase.storage.from(bucket).getPublicUrl(normalizedPath) to
also capture the top-level error (e.g., const { data: publicUrlData, error:
apiError } = ...) and then set response.url = publicUrlData?.publicUrl ?? null
and response.warning = apiError || "Upload confirmed, but a public URL is
unavailable."; reference the supabase call, normalizedPath, publicUrlData, and
response to locate the changes.
In `@docs-legacy/storage.md`:
- Around line 70-99: The legacy storage doc was updated to describe presigned
browser uploads but the Mintlify guide was left stale; update the Mintlify guide
to match by adding the same Presigned Upload section and the Required Bucket
CORS section, and ensure both documents mention the backend endpoints
`/api/storage/upload-request` and `/api/storage/upload-confirm`, the browser PUT
upload to `signedUrl`, and the required CORS methods (`PUT`, `OPTIONS`, `GET`,
`HEAD`) and headers (`content-type`, `content-length`) so the two docs remain in
parity.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c6212471-28cf-4777-8a66-b4f0ddfb3762
📒 Files selected for processing (5)
- apps/dashboard-api/src/__tests__/routes.projects.storage.test.js
- apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js
- apps/dashboard-api/src/controllers/project.controller.js
- apps/dashboard-api/src/routes/projects.js
- docs-legacy/storage.md
```js
//SIGNED URL
router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
//UPLOAD URL
router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);
```
These routes are good, and this is what we needed.
But bro you have not used this in UI??
@Nitin-kumar-yadav1307
Yeah, I was confused about whether I should work on it or not. After reading the issue I thought there was nothing to do with the UI, but I was still not fully sure, so I decided to let it be. Sorry for not asking you before. Can you tell me the file or directory where I should work?
For the UI, you just have to update Storage.jsx and its corresponding components.
The main thing to do is to change the upload file flow in the UI. Basically, use the APIs you created.

For example, in Storage.jsx around line 47, this is the current upload flow:

```js
await api.post(`/api/projects/${projectId}/storage/upload`, formData, {
  headers: { 'Content-Type': 'multipart/form-data' },
});
toast.success("File uploaded!", { id: toastId });
fetchFiles();
```

Please replace this direct upload flow with the following (a hedged sketch follows below):

- POST to `/storage/upload-request` and store the response, that is `signedUrl`, `token`, and the file path
- then upload the file to that `signedUrl`; in the response you get the public URL of the file
- then POST to `/upload-confirm` with the file size and file path; in the response you receive `message: "Upload confirmed"`, `path: normalizedPath`, `provider: external ? "external" : "internal"`. Then toast `res.data.message`, whatever it is.
```js
const { projectId } = req.params;
const { filename, contentType, size } = req.body;
```
Please sanitize the user input; users can send anything, and we should protect our server.
Create a sanitization method in `packages/common/src/utils/input.validation.js`, okay? And use it here. I think you can use a `mongoose.Types.ObjectId`-related method to check the projectId: create a method in input.validation and use it.
@Nitin-kumar-yadav1307
```js
const external = isProjectStorageExternal(project);

// just peek at quota - don't charge yet, upload hasn't happened
```
```js
const { projectId } = req.params;
const { filePath, size } = req.body;
```
sanitize, sanitize, sanitize!
@Nitin-kumar-yadav1307
Actionable comments posted: 6
♻️ Duplicate comments (1)
apps/dashboard-api/src/controllers/project.controller.js (1)
1322-1341: ⚠️ Potential issue | 🟠 Major

`provider` is misreported and the `apiError` fallback is unreachable. Two issues in this block:

1. Wrong `provider` value when public URL is missing. The expression `publicUrl ? (external ? "external" : "internal") : "external"` reports `provider: "external"` for internal projects whenever Supabase doesn't return a public URL. The provider should reflect the actual storage backend, independent of URL availability:

```diff
- const provider = publicUrl ? (external ? "external" : "internal") : "external";
+ const provider = external ? "external" : "internal";
```

2. The `apiError` fallback is dead code. `supabase.storage.from(bucket).getPublicUrl(path)` is a synchronous URL constructor — it does not perform an API call and does not return a meaningful `error`. Destructuring `error: apiError` and using it as a warning message will always fall through to the static string in practice, and if `apiError` were ever non-null it would be an `Error` object rendered as `[object Object]` to the client. Drop the unreachable branch:

```diff
- const { data: publicUrlData, error: apiError } = supabase.storage
+ const { data: publicUrlData } = supabase.storage
    .from(bucket)
    .getPublicUrl(normalizedPath);
  ...
  if (!publicUrl) {
-   responseData.warning =
-     apiError || "Upload confirmed, but a public URL is unavailable.";
+   responseData.warning = "Upload confirmed, but a public URL is unavailable.";
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1322 - 1341, The provider is incorrectly set based on publicUrl and the apiError fallback is pointless; update the block that calls supabase.storage.from(bucket).getPublicUrl(normalizedPath) to (1) stop destructuring an `error` (remove `error: apiError`) since getPublicUrl is a synchronous URL constructor, (2) compute provider solely from the `external` flag (use `external ? "external" : "internal"`), and (3) when no publicUrl set only a static warning on `responseData.warning` (e.g. "Upload confirmed, but a public URL is unavailable.") — do not attempt to include `apiError` or render an Error object; keep references to `getStorage`, `getBucket`, `normalizedPath`, `supabase.storage.from(...).getPublicUrl`, `provider`, and `responseData.warning` to locate and change the code.
🧹 Nitpick comments (4)
apps/dashboard-api/src/controllers/project.controller.js (2)
206-234: `normalizeProjectPath` looks solid; one nit on coverage.

The combination of `decodeURIComponent` + `path.posix.normalize` + leading-slash stripping + `..`/`.` segment rejection + projectId-prefix enforcement is a strong defense against path traversal. One small gap: a backslash-only filename (e.g., `${projectId}\evil`) doesn't trigger any segment check on POSIX (`path.posix.normalize` treats `\` as a regular character), so `segments[0]` becomes `${projectId}\evil` and fails the equality check correctly — but adding an explicit reject for `\` in segments would future-proof against backend path interpretation differences.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 206 - 234, normalizeProjectPath currently normalizes and validates path segments but doesn't reject segments containing backslashes, which can be treated as separators on some platforms; update the function (normalizeProjectPath) to explicitly reject any segment that contains a backslash (e.g., add a check like segments.some(segment => segment.includes('\\')) and return null) so inputs like `${projectId}\evil` are rejected regardless of POSIX normalization.
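In code, the guard the prompt describes is a small check. A sketch only: `segments` is assumed to be the array `normalizeProjectPath` already builds, per the review above:

```js
// Sketch: a guard to add inside normalizeProjectPath once `segments` exists.
function hasBackslashSegment(segments) {
  // Some backends treat "\" as a path separator even though POSIX does not.
  return segments.some((segment) => segment.includes("\\"));
}

// Assumed usage inside normalizeProjectPath:
// if (hasBackslashSegment(segments)) return null;
```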
1238-1241: Swallowing the original error masks debuggability.

The catch block converts any unexpected failure into a generic `AppError(500, "Could not generate upload URL")` with no context. Errors from `getPresignedUploadUrl` (e.g., misconfigured Supabase keys, S3 credentials, network timeouts) will be invisible in production logs unless captured before the wrap.

♻️ Suggested logging

```diff
  } catch (err) {
    if (err instanceof AppError) return next(err);
+   console.error("requestUpload failed:", err);
    return next(new AppError(500, "Could not generate upload URL"));
  }
```

The same applies to `confirmUpload`'s catch at lines 1348–1351.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1238 - 1241, The catch block currently swallows the original error when calling getPresignedUploadUrl and returns a generic AppError(500, "Could not generate upload URL"); update the catch to log the original error (e.g., processLogger.error or console.error) and include the original error details when forwarding to next—either by passing a more descriptive AppError that includes err.message/err.stack or by attaching the original error as a cause; apply the same change to the confirmUpload catch (the block that currently returns AppError(500, "Could not confirm upload")) so both getPresignedUploadUrl and confirmUpload preserve and log the original error before calling next.

apps/dashboard-api/src/routes/projects.js (1)
77-80: Misleading route comments.

Both routes are part of the presigned-upload flow — the comments `//SIGNED URL` and `//UPLOAD URL` don't accurately describe the second endpoint (which confirms the upload rather than generating a URL).

📝 Proposed fix

```diff
-//SIGNED URL
-router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
-//UPLOAD URL
-router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);
+// REQUEST PRESIGNED UPLOAD URL
+router.post('/:projectId/storage/upload-request', authMiddleware, verifyEmail, loadProjectForAdmin, requestUpload);
+// CONFIRM UPLOAD COMPLETION
+router.post('/:projectId/storage/upload-confirm', authMiddleware, verifyEmail, loadProjectForAdmin, confirmUpload);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard-api/src/routes/projects.js` around lines 77 - 80, The inline comments above the presigned upload routes are misleading — change the comment labels for clarity so they reflect the flow: for router.post('/:projectId/storage/upload-request', ...) (handler requestUpload) replace or update the comment to indicate "generate presigned upload URL" or "presigned-upload: request URL", and for router.post('/:projectId/storage/upload-confirm', ...) (handler confirmUpload) update the comment to "confirm completed upload" or "presigned-upload: confirm upload" so the intent of requestUpload and confirmUpload is clear.

apps/web-dashboard/src/pages/Storage.jsx (1)
72-80: Consider handling the retryable 409 from `upload-confirm`.

`confirmUpload` returns `409 "Uploaded file is not visible yet. Please retry confirmation."` to handle eventual consistency on the storage backend. Currently, the dashboard surfaces this as a generic upload failure to the user even though a brief retry would typically succeed. The docs at `mintlify/docs/guides/storage.mdx` explicitly demonstrate a retry path for this case.

♻️ Suggested retry-on-409

```diff
- await api.post(`/api/projects/${projectId}/storage/upload-confirm`, {
-   filePath,
-   size: file.size,
- });
+ const confirmWithRetry = async (attempts = 3, delayMs = 750) => {
+   for (let i = 0; i < attempts; i++) {
+     try {
+       return await api.post(
+         `/api/projects/${projectId}/storage/upload-confirm`,
+         { filePath, size: file.size },
+       );
+     } catch (err) {
+       if (err?.response?.status === 409 && i < attempts - 1) {
+         await new Promise((r) => setTimeout(r, delayMs));
+         continue;
+       }
+       throw err;
+     }
+   }
+ };
+ await confirmWithRetry();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web-dashboard/src/pages/Storage.jsx` around lines 72 - 80, The upload-confirm call in Storage.jsx should special-case a 409 response and retry a few times instead of immediately treating it as a failure: in the catch block for the api.post to `/api/projects/${projectId}/storage/upload-confirm` detect err.response?.status === 409, then perform a short backoff loop (e.g., 3 retries with incremental delays) retrying the same api.post, and only show toast.error if all retries fail; on a successful retry call fetchFiles() and toast.success as currently done. Reference the existing api.post call, the catch block where toast.error is called, and the fetchFiles() invocation to place the retry logic and success handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mintlify/docs/guides/storage.mdx`:
- Line 184: Change the heading "Required Bucket CORS (S3/R2)" to sentence case
by lowercasing non-acronym words—update the heading text (the line containing
"Required Bucket CORS (S3/R2)") to "Required bucket CORS (S3/R2)" so only the
first word is capitalized and the acronym CORS remains uppercase.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0e6b79c6-5526-45bd-8e94-d39df81c220b
📒 Files selected for processing (9)
- apps/dashboard-api/src/__tests__/routes.projects.storage.test.js
- apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js
- apps/dashboard-api/src/controllers/project.controller.js
- apps/dashboard-api/src/routes/projects.js
- apps/web-dashboard/src/pages/Storage.jsx
- docs-legacy/storage.md
- mintlify/docs/guides/storage.mdx
- packages/common/src/index.js
- packages/common/src/utils/input.validation.js
✅ Files skipped from review due to trivial changes (2)
- packages/common/src/index.js
- docs-legacy/storage.md
```js
sanitizeObjectId: jest.fn((value) => {
  if (typeof value !== 'string') return null;
  const normalized = value.trim();
  return /^[a-fA-F0-9]{24}$/.test(normalized) ? normalized : null;
}),
sanitizeNonEmptyString: jest.fn((value, options = {}) => {
  if (typeof value !== 'string') return null;
  const normalized = value.trim();
  if (!normalized) return null;
  if (normalized.length > (options.maxLength || 1024)) return null;
  return normalized;
}),
```
Test mocks for sanitizeObjectId/sanitizeNonEmptyString diverge from production.
The mocks use stricter validation (/^[a-fA-F0-9]{24}$/) than the real sanitizeObjectId in packages/common/src/utils/input.validation.js, which delegates to mongoose.Types.ObjectId.isValid and accepts any 12-byte string. This means inputs that pass production but would fail the test (or vice versa) won't be caught by these tests.
Consider importing the real implementations directly (or mirroring them exactly) so the tests are coupled to the actual sanitization behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js` around
lines 41 - 52, The test mocks for sanitizeObjectId and sanitizeNonEmptyString
diverge from production behavior; replace the inline jest.fn mocks by importing
the real implementations (sanitizeObjectId, sanitizeNonEmptyString) from the
shared input validation module or update the mocks to exactly mirror production
logic (for sanitizeObjectId use mongoose.Types.ObjectId.isValid semantics, and
for sanitizeNonEmptyString preserve trim + empty check + maxLength default 1024
behavior) so tests exercise the same sanitization rules as production.
```js
// Pre-check quota only; actual storage usage is charged after confirmUpload verifies object existence and size.
if (!external) {
  const storageLimit =
    typeof project.storageLimit === "number"
      ? project.storageLimit
      : 20 * 1024 * 1024;
  const quotaLimit =
    storageLimit === -1 ? SAFETY_MAX_BYTES : storageLimit;

  if ((project.storageUsed || 0) + numericSize > quotaLimit) {
    return next(new AppError(403, "Internal storage limit exceeded."));
  }
}
```
SAFETY_MAX_BYTES caps unlimited (-1) plans at 100 MB total.
For projects with storageLimit === -1 (intended to be unlimited), this code substitutes SAFETY_MAX_BYTES (100 MB) as the quota cap and then checks (storageUsed + numericSize) > quotaLimit. Because storageUsed accumulates over time, paid/unlimited tenants will hit a hard 100 MB ceiling once their cumulative usage approaches that number, even though MAX_FILE_SIZE already caps each individual upload at 10 MB.
If SAFETY_MAX_BYTES is meant as a per-request safety cap, compare against numericSize only; if it's meant as an absolute total cap, the constant name and the value (100 MB) seem too low for "unlimited" plans.
♻️ Likely intent
```diff
- if (!external) {
-   const storageLimit =
-     typeof project.storageLimit === "number"
-       ? project.storageLimit
-       : 20 * 1024 * 1024;
-   const quotaLimit =
-     storageLimit === -1 ? SAFETY_MAX_BYTES : storageLimit;
-
-   if ((project.storageUsed || 0) + numericSize > quotaLimit) {
-     return next(new AppError(403, "Internal storage limit exceeded."));
-   }
- }
+ if (!external) {
+   const storageLimit =
+     typeof project.storageLimit === "number"
+       ? project.storageLimit
+       : 20 * 1024 * 1024;
+
+   if (
+     storageLimit !== -1 &&
+     (project.storageUsed || 0) + numericSize > storageLimit
+   ) {
+     return next(new AppError(403, "Internal storage limit exceeded."));
+   }
+ }
```
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1209 -
1221, The current check treats SAFETY_MAX_BYTES as a total quota for unlimited
projects, causing projects with project.storageLimit === -1 to be capped at
(SAFETY_MAX_BYTES) cumulative bytes; change the logic in the block around
project.storageLimit / SAFETY_MAX_BYTES so that unlimited plans do not compare
(project.storageUsed || 0) + numericSize against SAFETY_MAX_BYTES—instead, when
project.storageLimit === -1 enforce a per-request safety check (if numericSize >
SAFETY_MAX_BYTES or numericSize > MAX_FILE_SIZE then reject) but skip the
cumulative comparison with project.storageUsed; keep the existing cumulative
quota check for non-unlimited plans. Ensure you update the code paths
referencing project.storageLimit, numericSize, project.storageUsed,
SAFETY_MAX_BYTES and MAX_FILE_SIZE accordingly.
```js
const safeName = sanitizedFilename.replace(/\s+/g, "_");
const filePath = `${projectId}/${randomUUID()}_${safeName}`;
```
safeName does not strip path separators.
sanitizedFilename.replace(/\s+/g, "_") only replaces whitespace. A filename like ../foo or a/b/c.txt becomes part of the storage key as-is, e.g. ${projectId}/${uuid}_../foo. While normalizeProjectPath in confirmUpload will reject any normalized path whose first segment is no longer projectId, the actual stored object (whose key was generated here and uploaded directly to S3/R2/Supabase) may end up at an unexpected location relative to the path the client receives, breaking listing/quota accounting.
Strip slashes/backslashes (and ideally any control chars) before composing the key:
🛡️ Proposed fix
```diff
- const safeName = sanitizedFilename.replace(/\s+/g, "_");
+ const safeName = sanitizedFilename
+   .replace(/[\\/]+/g, "_") // strip path separators
+   .replace(/\s+/g, "_")
+   .replace(/[\u0000-\u001f]/g, ""); // strip control chars
```
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1223 -
1224, The current safeName only replaces whitespace and can leave path
separators/control chars, letting inputs like "../foo" or "a/b.txt" escape the
intended key namespace; update the safeName creation (used when building
filePath alongside projectId and randomUUID()) to strip or normalize
slashes/backslashes and control characters before composing filePath — e.g.,
replace any sequence of forward/back slashes, control chars, and whitespace with
a single underscore (operate on sanitizedFilename), ensuring safeName contains
no path separators so filePath cannot point outside the project namespace; keep
confirmUpload/normalizeProjectPath logic unchanged.
```js
try {
  actualSize = await verifyUploadedFile(project, normalizedPath);
} catch (err) {
  if (err?.message === "File not found after upload") {
    await bestEffortDeleteUploadedObject(project, normalizedPath);
    return next(
      new AppError(
        409,
        "Uploaded file is not visible yet. Please retry confirmation.",
      ),
    );
  }
  throw err;
}
```
Cleanup before retryable 409 can defeat the retry mechanism.
When verifyUploadedFile throws "File not found after upload", the handler calls bestEffortDeleteUploadedObject(project, normalizedPath) and then returns a retryable 409. But:
- If the absence is genuine (no file uploaded), the delete is a harmless no-op.
- If the list-based check failed only due to eventual consistency (file actually exists), the underlying `remove` may succeed and delete the very file the client is about to retry-confirm. The retry will then see "not found" forever.
This silently corrupts the retry path on backends where list and remove consistency differ (S3/R2 with read-after-write of list).
🛡️ Recommended fix
```diff
 try {
   actualSize = await verifyUploadedFile(project, normalizedPath);
 } catch (err) {
   if (err?.message === "File not found after upload") {
-    await bestEffortDeleteUploadedObject(project, normalizedPath);
     return next(
       new AppError(
         409,
         "Uploaded file is not visible yet. Please retry confirmation.",
       ),
     );
   }
   throw err;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/dashboard-api/src/controllers/project.controller.js` around lines 1276 -
1290, The current handler deletes the uploaded object on a "File not found after
upload" error which can remove a legitimately uploaded file when list-based
checks are eventually consistent; instead, on verifyUploadedFile throwing that
error, do NOT unconditionally call bestEffortDeleteUploadedObject(project,
normalizedPath). Replace that delete with a safe existence check (e.g., a
HEAD/get metadata call against the storage backend) and only call
bestEffortDeleteUploadedObject if that existence check confirms the object truly
does not exist (404); if the HEAD shows the object exists, skip deletion and
just return the 409 AppError as before. Ensure you update the code path around
verifyUploadedFile and bestEffortDeleteUploadedObject so deletion only happens
after a definitive non-existence check.
```js
module.exports.sanitizeObjectId = (value) => {
  if (typeof value !== "string") return null;
  const normalized = value.trim();
  if (!normalized) return null;
  return mongoose.Types.ObjectId.isValid(normalized) ? normalized : null;
};
```
🧩 Analysis chain
🌐 Web query:
Does mongoose.Types.ObjectId.isValid return true for arbitrary 12-character strings in mongoose 8.x?
💡 Result:
Yes, mongoose.Types.ObjectId.isValid returns true for arbitrary 12-character strings in mongoose 8.x. This is the intended behavior because a MongoDB ObjectId is defined as 12 arbitrary bytes, and any JavaScript string of length 12 can be cast to a 12-byte buffer, making it a valid ObjectId. For stricter validation (e.g., only 24-character hex strings), use mongoose.isObjectIdOrHexString instead.
Citations:
- 1: https://stackoverflow.com/questions/13850819/can-i-determine-if-a-string-is-a-mongodb-objectid
- 2: Types.ObjectId.isValid(123) returns true! Automattic/mongoose#15510
- 3: Types.ObjectId.isValid misbehavior Automattic/mongoose#11226
- 4: https://masteringjs.io/tutorials/mongoose/objectid
- 5: Bug: check string if is `ObjectId` or not Automattic/mongoose#14200
- 6: Why does Types.ObjectId.isValid(123) return true? Automattic/mongoose#4083
🏁 Script executed:

```shell
# Find mongoose version
fd package.json | xargs grep -l mongoose | head -5
```

Repository: geturbackend/urBackend
Length of output: 156

🏁 Script executed:

```shell
# Check the storage.presigned.controller.test.js file for the mock regex
fd storage.presigned.controller.test.js
```

Repository: geturbackend/urBackend
Length of output: 136

🏁 Script executed:

```shell
# Search for how sanitizeObjectId is used in the codebase
rg "sanitizeObjectId" -A 2 -B 2
```

Repository: geturbackend/urBackend
Length of output: 3014

🏁 Script executed:

```shell
# Check mongoose documentation in node_modules or check what functions are available
fd -i "package.json" | xargs grep '"mongoose"' -h
```

Repository: geturbackend/urBackend
Length of output: 147

🏁 Script executed:

```shell
# View the test mock implementation to see the exact regex
sed -n '/sanitizeObjectId: jest.fn/,/^[[:space:]]*}/p' apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js | head -20
```

Repository: geturbackend/urBackend
Length of output: 277

🏁 Script executed:

```shell
# View the requestUpload and confirmUpload methods in project.controller.js to see how sanitizeObjectId is used
sed -n '/module.exports.requestUpload/,/^module.exports/p' apps/dashboard-api/src/controllers/project.controller.js | head -30
```

Repository: geturbackend/urBackend
Length of output: 1168

🏁 Script executed:

```shell
# Also view confirmUpload
sed -n '/module.exports.confirmUpload/,/^module.exports/p' apps/dashboard-api/src/controllers/project.controller.js | head -30
```

Repository: geturbackend/urBackend
Length of output: 1178

🏁 Script executed:

```shell
# Find and view normalizeProjectPath to understand security implications
rg "normalizeProjectPath" -A 10 packages/common/src
```

Repository: geturbackend/urBackend
Length of output: 48

🏁 Script executed:

```shell
# Check mongoose documentation or source to confirm isObjectIdOrHexString availability in 8.x
rg "isObjectIdOrHexString" packages/common/src apps/dashboard-api/src
```

Repository: geturbackend/urBackend
Length of output: 48

🏁 Script executed:

```shell
# Check if mongoose.isObjectIdOrHexString is exported or used anywhere
fd package.json -exec grep -l mongoose {} \; | xargs grep mongoose | grep -i "version\|isObjectIdOrHexString"
```

Repository: geturbackend/urBackend
Length of output: 310

🏁 Script executed:

```shell
# Find normalizeProjectPath implementation
rg "normalizeProjectPath" -A 15
```

Repository: geturbackend/urBackend
Length of output: 7373

🏁 Script executed:

```shell
# Check mongoose version and what ObjectId validation functions exist
cat packages/common/package.json | grep -A 2 -B 2 mongoose
```

Repository: geturbackend/urBackend
Length of output: 196

🏁 Script executed:

```shell
# Search for any imports/usage of mongoose.isObjectIdOrHexString
rg "isObjectIdOrHexString"
```

Repository: geturbackend/urBackend
Length of output: 48
sanitizeObjectId is more permissive than the test mock, creating test/prod drift.
mongoose.Types.ObjectId.isValid returns true for any 12-byte string and any 24-character hex string, while the test mock in apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js enforces /^[a-fA-F0-9]{24}$/ (24-char hex only). This divergence means inputs like "abcdefghijkl" (12 chars) will pass in production but fail in tests.
Although the owner: req.user._id check and MongoDB's own validation provide some mitigation, tightening the validation would close the gap and align with test expectations. Consider using mongoose.isObjectIdOrHexString() (available in mongoose 8.x) instead of mongoose.Types.ObjectId.isValid(), or apply the stricter regex from the test.
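A minimal sketch of that stricter variant, assuming (per the web query above) that the installed mongoose exposes `isObjectIdOrHexString`; a plain `/^[a-fA-F0-9]{24}$/` test would behave the same here:

```js
const mongoose = require("mongoose");

// Sketch: accept only 24-character hex strings, matching the test mock.
module.exports.sanitizeObjectId = (value) => {
  if (typeof value !== "string") return null;
  const normalized = value.trim();
  if (!normalized) return null;
  return mongoose.isObjectIdOrHexString(normalized) ? normalized : null;
};
```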
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/common/src/utils/input.validation.js` around lines 298 - 303, The
sanitizeObjectId function currently uses mongoose.Types.ObjectId.isValid which
allows 12-byte strings and 24-hex strings, causing test/production drift; update
sanitizeObjectId to enforce the stricter 24-character hex requirement used in
tests by replacing the isValid check with either
mongoose.isObjectIdOrHexString(value) (if mongoose >= 8.x) or a direct regex
match (/^[a-fA-F0-9]{24}$/) so that sanitizeObjectId returns the normalized
value only for 24-character hex strings and null otherwise; modify the check in
sanitizeObjectId accordingly and keep the early-type/empty checks as-is.
Generated-By: mintlify-agent
🧹 Nitpick comments (1)
mintlify/docs/guides/storage.mdx (1)
195-198: Consider clarifying whether headers are allowed-headers or exposed-headers.

The bullet list under "Required headers should include at least" doesn't specify which CORS field these belong to (`AllowedHeaders` vs `ExposeHeaders`). For S3/R2 presigned PUTs, `content-type` must be in `AllowedHeaders` (and `content-length` is generally a CORS-safelisted header but some configs still need it allow-listed). A short clarifier would prevent users from misconfiguring `ExposeHeaders` instead.

📝 Suggested wording

```diff
-Required headers should include at least:
+Required `AllowedHeaders` should include at least:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mintlify/docs/guides/storage.mdx` around lines 195 - 198, Update the "Required headers should include at least:" bullet list to explicitly state these are CORS AllowedHeaders (not ExposeHeaders); clarify that `content-type` must be present in AllowedHeaders for presigned PUTs and that `content-length` is typically safelisted but may still need to be allow-listed in some providers, e.g., change the heading or add a short sentence like "These should be listed in CORS AllowedHeaders (not ExposeHeaders) — ensure `content-type` is included for presigned PUTs; `content-length` is usually safelisted but may need to be allow-listed in some configurations."
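For reference, a bucket CORS rule of the shape this comment describes could be applied like this. A sketch with AWS SDK v3; the bucket name and allowed origin are placeholders, and R2 accepts the same S3-compatible call:

```js
const { S3Client, PutBucketCorsCommand } = require("@aws-sdk/client-s3");

// Sketch: the headers below belong in AllowedHeaders, not ExposeHeaders.
async function applyUploadCorsRule() {
  const s3 = new S3Client({}); // region/credentials from the environment
  await s3.send(new PutBucketCorsCommand({
    Bucket: "your-bucket", // placeholder
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedMethods: ["PUT", "GET", "HEAD"], // preflight OPTIONS is implicit
          AllowedOrigins: ["https://dashboard.example.com"], // placeholder
          AllowedHeaders: ["content-type", "content-length"],
          MaxAgeSeconds: 3600,
        },
      ],
    },
  }));
}
```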
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4b9e46a0-31c4-4d63-995c-6e8cd3d4da1a
📒 Files selected for processing (1)
mintlify/docs/guides/storage.mdx
Generated-By: mintlify-agent
Clarified that the required headers list refers to CORS `AllowedHeaders`.

@coderabbitai Ok to merge?? or anything else to update??
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

Let me do a quick check to ensure no other places in the codebase still reference the old upload endpoint before giving a final recommendation.

🏁 Script executed: Length of output: 11452
✅ All clear:
No blockers — the PR is consistent and complete. 🚀

✏️ Learnings added

🧠 Learnings used

Bro @coderabbitai I know the presigned url flow is not updated in this PR but it was done in PR #129
Tip: For best results, initiate chat on the files or code changes.

Everything checks out — this PR is good to merge 🚀 ʕ •ᴥ•ʔ ~ thanks for the clarification!

🧠 Learnings used
Okay, thank you @coderabbitai
Tip: For best results, initiate chat on the files or code changes.

Key things worth verifying in a local end-to-end test would be:

🧠 Learnings used

These are public API tests?
@yash-pouranik

U have done changes in dashboard apis

@yash-pouranik

@Nitin-kumar-yadav1307

@yash-pouranik

@yash-pouranik supabase
but I am not asking here for any credentials because they are confidential

@Nitin-kumar-yadav1307

Okay I have tested ur PR locally @Nitin-kumar-yadav1307

Presigned Upload Flow for Dashboard API
Issue: #131
Description
Implements a three-step presigned URL upload architecture for the dashboard API, enabling large file uploads directly to S3/R2 without proxying bytes through Node.js. This mirrors the implementation in PR #129 for the public API but uses dashboard-appropriate auth (JWT/session via `loadProjectForAdmin` middleware).

Changes
Files Modified
`apps/dashboard-api/src/routes/projects.js`
- `POST /:projectId/storage/upload-request` - Generate presigned URL with quota validation
- `POST /:projectId/storage/upload-confirm` - Verify upload completion and charge quota atomically
- `authMiddleware → verifyEmail → loadProjectForAdmin`

`apps/dashboard-api/src/controllers/project.controller.js`

New handlers:
- `requestUpload()` - Validates size, peeks at quota (no charge yet), returns signed URL and token
- `confirmUpload()` - Verifies file exists, compares size, atomically charges quota, cleans up on failure

New helpers:
- `parsePositiveSize()` - Validates positive finite numbers for file size
- `normalizeProjectPath()` - Prevents path traversal; validates projectId prefix
- `bestEffortDeleteUploadedObject()` - Non-fatal cleanup helper for failed confirmations

`docs-legacy/storage.md`

Files Created
- `apps/dashboard-api/src/__tests__/storage.presigned.controller.test.js` - `requestUpload` and `confirmUpload` handlers
- `apps/dashboard-api/src/__tests__/routes.projects.storage.test.js`

Key Features
Quota Management
- `requestUpload`: Peeks at quota without charging
- `confirmUpload`: Charges quota atomically only after file verification (see the sketch below)
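To make the confirm-side charge concrete, here is a minimal sketch of an atomic quota charge. Illustrative only: the `Project` fields (`storageUsed`, `storageLimit`) and helpers (`bestEffortDeleteUploadedObject`, `AppError`) are named per this PR, but the exact query is assumed, including that the MongoDB version in use accepts `$expr` in update filters:

```js
// Sketch: charge verified bytes in one guarded update so two concurrent
// confirmations cannot both pass a separate read-then-write quota check.
const charged = await Project.updateOne(
  {
    _id: project._id,
    // Apply the $inc only while the new total still fits the limit.
    $expr: { $lte: [{ $add: ["$storageUsed", actualSize] }, "$storageLimit"] },
  },
  { $inc: { storageUsed: actualSize } },
);

if (charged.modifiedCount === 0) {
  // Quota would overflow: remove the uploaded object instead of charging.
  await bestEffortDeleteUploadedObject(project, normalizedPath);
  return next(new AppError(403, "Internal storage limit exceeded."));
}
```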
Security
- Path traversal protection (rejects `..` segments)

Reliability
Shared Infrastructure
- `@urbackend/common` helpers: `getPresignedUploadUrl`, `verifyUploadedFile`, `getStorage` and `getBucket` functions
Summary by CodeRabbit
New Features
Documentation
Tests
Chores