
fix: remove 64MB camouflage body cap that terminates HTTP/3 downloads#118

Closed
Itsusinn wants to merge 2 commits into main from claude/focused-ptolemy-e2a2c8

Conversation


@Itsusinn Itsusinn commented May 5, 2026

Summary

Fixes #117 — when tuic-server is used as an HTTP/3 camouflage reverse proxy, browser downloads (and video streams) over QUIC are interrupted at exactly 64 MB.

The root cause is not the QUIC receive_window (the maintainer's initial guess) but a hard-coded MAX_RESPONSE_BODY_SIZE = 64 MiB constant in tuic-server/src/camouflage.rs. Once 64 MiB of forwarded response bytes have accumulated, forward_request returns Err, which finishes the H3 stream prematurely. From the browser's side this looks like a terminated QUIC connection.
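For illustration, a simplified, synchronous sketch of the removed check (the real code in camouflage.rs is async; forward_capped and the chunk plumbing here are hypothetical, only the constant name and the accumulate-then-Err pattern come from the PR):

```rust
// Sketch of the removed logic: a running total plus a hard cap.
// Erroring past the cap is what finished the H3 stream early.
const MAX_RESPONSE_BODY_SIZE: usize = 64 * 1024 * 1024; // 64 MiB

/// Forward chunks, failing once the running total exceeds the cap.
/// On error, returns the number of bytes forwarded before the cutoff.
fn forward_capped(chunks: impl Iterator<Item = Vec<u8>>) -> Result<usize, usize> {
    let mut body_size = 0usize;
    for chunk in chunks {
        body_size += chunk.len();
        if body_size > MAX_RESPONSE_BODY_SIZE {
            // Stream terminates here -- the browser sees a dead connection.
            return Err(body_size - chunk.len());
        }
        // ... write chunk to the client-facing H3 stream ...
    }
    Ok(body_size)
}

fn main() {
    // 65 chunks of 1 MiB trip the 64 MiB cap on the 65th chunk.
    let chunks = (0..65).map(|_| vec![0u8; 1024 * 1024]);
    assert!(forward_capped(chunks).is_err());
    println!("cap triggered as expected");
}
```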

On the request side, a symmetric 16 MiB cap (MAX_REQUEST_BODY_SIZE) meant request bodies were fully buffered in memory and anything larger was rejected.

Changes

  • Removed both MAX_RESPONSE_BODY_SIZE and MAX_REQUEST_BODY_SIZE constants.
  • Response side: dropped the body_size accumulator/check; chunks now stream end-to-end without an artificial cap. QUIC stream flow control (stream_receive_window, send_window) still bounds memory.
  • Request side: replaced the buffered read_request_body (which read the whole body into a Vec<u8>) with a streaming reqwest::Body::wrap_stream adapter built on futures::stream::unfold over the H3 receive half. Large uploads now also pass through.
  • Restructured forward_request to take ownership of RequestStream and split() it; the 502-on-backend-failure path moved inside forward_request since the stream is consumed there.
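The shape of the request-side fix can be sketched without the async crates: instead of collecting every chunk into one Vec<u8> (the old read_request_body), chunks are yielded lazily as the consumer pulls them. The real adapter does the async analogue with futures::stream::unfold over the H3 receive half, handed to reqwest::Body::wrap_stream; this std-only sketch uses std::iter::from_fn as a synchronous stand-in, and stream_chunks is a hypothetical name:

```rust
// std-only sketch of the streaming shape: yield chunks one at a time
// instead of buffering the whole body. Memory use is bounded by one
// chunk, not by the body size, so no artificial cap is needed.
fn stream_chunks(mut source: Vec<Vec<u8>>) -> impl Iterator<Item = Vec<u8>> {
    source.reverse(); // pop() from the back preserves original order
    std::iter::from_fn(move || source.pop())
}

fn main() {
    // All bytes pass through regardless of total size.
    let total: usize = stream_chunks(vec![vec![1; 3], vec![2; 5]])
        .map(|c| c.len())
        .sum();
    assert_eq!(total, 8);
    println!("streamed {total} bytes without buffering");
}
```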

No new config fields. Users wanting to tune memory/throughput already have [quic].receive_window and [quic].send_window.
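For reference, those existing knobs would sit in the config roughly like this (the section and field names come from the PR text; the value syntax and the numbers are assumptions, not documented defaults):

```toml
# Illustrative only -- consult the tuic-server docs for actual syntax.
[quic]
receive_window = 16777216  # per-connection receive window, bytes (assumed)
send_window = 33554432     # send window, bytes (assumed)
```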

Out of scope: the maintainer's separate suggestion to swap quinn for quiche/boringssl for fingerprint reasons.

Test plan

  • cargo build -p tuic-server
  • cargo test -p tuic-server --lib — 120 passed, 0 failed
  • cargo clippy -p tuic-server --all-targets — clean
  • End-to-end: configure [camouflage] with a backend serving a >100 MB file, download via an HTTP/3 browser (Edge/Chrome with alt-svc), confirm completion past the previous ~64 MB cutoff
  • Regression: small file (<1 MB) and a typical web page still load through the camouflage path
  • Optional upload check: curl --http3 -X POST --data-binary @largefile ... against an echo backend with body > 16 MB

Assisted-by: Claude:claude-opus-4-7

Itsusinn added 2 commits May 5, 2026 16:18
End-to-end test that spawns:
  - a one-shot HTTP/1.1 backend serving an 80 MiB body
  - tuic-server with [camouflage] enabled
  - an h3 client (h3 + h3-quinn + crates.io quinn 0.11 dev-deps)

and asserts the full 80 MiB arrives. Verified to fail when the old
MAX_RESPONSE_BODY_SIZE = 64 MiB cap is reintroduced (truncates at ~64 MiB
with a "response body too large" error).

The dev-deps use crates.io quinn 0.11 for the client side, separate from
the forked quinn 0.12 that tuic-server itself uses — different Rust types
but they interoperate over the QUIC wire.

Assisted-by: Claude:claude-opus-4-7
@Itsusinn Itsusinn closed this May 7, 2026

Linked issue (#117): When using HTTP/3 disguised transport, the connection is terminated after every 64 MB of data transfer.