fix(proxy): buffer request body to prevent POST body being dropped upstream #750
When `ModifyRequest` set `req.ContentLength = -1`, Go's HTTP/1.1 transport switched to chunked transfer encoding. In that mode the transport reads `req.Body` in its own goroutine; if the async logger had already drained `req.Body` before the transport got to it, the upstream server received an empty chunked body — the exact symptom reported in issue projectdiscovery#749.

Fix: read and buffer the full request body in `ModifyRequest`, then reset `req.Body` to a `bytes.Reader` and update `req.ContentLength` to the actual byte count. This guarantees that:

- the body is available for both the logger and the transport
- the forwarded request uses `Content-Length` (not chunked encoding), which is what the upstream server originally received from the client
- match-replace DSL operations continue to work correctly

Fixes projectdiscovery#749
Neo - PR Security Review (Critical: 1)
Critical (1)
```go
// had already consumed req.Body before the transport got to it, the
// upstream received a zero-length chunked body.
if req.Body != nil && req.Body != http.NoBody {
	bodyBytes, err := io.ReadAll(req.Body)
```
🔴 Unbounded memory allocation enables memory exhaustion DoS (CWE-400) — The `ModifyRequest` function calls `io.ReadAll(req.Body)` without any size limit, loading the entire request body into memory. An attacker can send arbitrarily large POST bodies (gigabytes) to exhaust server memory and cause denial of service.
Attack Example

```sh
curl -x http://proxy:8888 -X POST -H 'Content-Length: 5000000000' --data-binary @5GB_file http://example.com
```

Or send 100 concurrent 100MB requests to exhaust ~10GB RAM:

```sh
for i in {1..100}; do (curl -x http://proxy:8888 -X POST -d "$(head -c 100M /dev/zero | base64)" http://example.com &); done
```
Suggested Fix

Wrap `req.Body` with `io.LimitReader` before calling `io.ReadAll` to enforce a maximum body size. Example: `bodyBytes, err := io.ReadAll(io.LimitReader(req.Body, 10*1024*1024)) // 10MB limit`. Consider making the limit configurable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@proxy.go` at line 217, the code calls io.ReadAll(req.Body) without any size
restriction; wrap req.Body with io.LimitReader(req.Body, maxBodySize) before
reading to prevent unbounded memory allocation. Add a configurable maxBodySize
field to Options (e.g., default 100MB) and reject requests exceeding this limit
with HTTP 413 Request Entity Too Large.
Problem
Fixes #749
When `ModifyRequest` sets `req.ContentLength = -1`, Go's HTTP/1.1 transport switches to chunked transfer encoding. In that mode the transport reads `req.Body` in its own goroutine. Meanwhile, the async logger (`pkg/logger/logger.go`) also reads the same `req.Body` pointer via `httputil.DumpRequest`. Whichever goroutine wins the race drains the body; the other gets an empty reader.

The result: upstream servers receive `Transfer-Encoding: chunked` with a zero-length body, while Proxify's own log shows the body correctly (the log wins the race, the transport loses).

Root Cause
Fix
Buffer the entire request body in `ModifyRequest` before it is handed to either the logger or the transport. This ensures:

- the forwarded request uses `Content-Length` (not chunked), matching what the upstream originally expected

Verification
Using the reproduction steps from #749: