The "One-Way Street" Problem: Why We Clone Request Bodies in Go

If you’re coming from languages like Python or Java, Go’s handling of HTTP request bodies might feel like a trap. We read a request body twice (say, once for logging and once for processing), and the second read comes back mysteriously empty.

In Go, http.Request.Body is an io.ReadCloser. Once we read the data, the stream position stays at the end. We can’t read it again.

The Anatomy of the Problem

The io.Reader interface is designed for efficiency. It streams data. Once the stream has been read to the end (EOF), the pointer stays there. Go doesn’t automatically “rewind” the stream because, in many cases (like a massive file upload), keeping that data in memory would be too expensive.

Here’s the trap in action:

func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        body, _ := io.ReadAll(r.Body)
        log.Printf("request body: %s", body)

        // r.Body is now exhausted - next handler reads nothing
        next.ServeHTTP(w, r)
    })
}

The downstream handler receives an empty body, even though the request arrived with data.

Why we need cloneRequest

If we are building middleware (like a retry mechanism or an authentication logger), we need to:

  1. Read the body to see what’s inside.
  2. Put it back so the next handler in the chain can read it too.

Since we can’t rewind the stream, we have to:

  1. Read the entire body into a temporary byte slice ([]byte).
  2. Create a new reader from those bytes.
  3. Assign that new reader back to the request.

func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "failed to read body", http.StatusBadRequest)
            return
        }
        r.Body.Close()

        log.Printf("request body: %s", body)

        // Restore the body for downstream handlers
        r.Body = io.NopCloser(bytes.NewReader(body))

        next.ServeHTTP(w, r)
    })
}

For a retry middleware, the same bytes need to be re-readable on each attempt:

func withRetry(req *http.Request, client *http.Client, attempts int) (*http.Response, error) {
    body, err := io.ReadAll(req.Body)
    if err != nil {
        return nil, err
    }
    req.Body.Close()

    var (
        resp  *http.Response
        doErr error
    )
    for i := 0; i < attempts; i++ {
        // Fresh reader over the same bytes on every attempt.
        req.Body = io.NopCloser(bytes.NewReader(body))
        resp, doErr = client.Do(req)
        if doErr == nil && resp.StatusCode < 500 {
            return resp, nil
        }
        if i < attempts-1 {
            // Drain and close the failed response before retrying
            // so the underlying connection can be reused.
            if doErr == nil {
                io.Copy(io.Discard, resp.Body)
                resp.Body.Close()
            }
            log.Printf("attempt %d failed, retrying...", i+1)
        }
    }
    return resp, doErr
}

The Role of io.NopCloser

The request body doesn’t just need to be a Reader. It must be a ReadCloser (meaning it has a .Close() method), because http.Request.Body is declared as:

Body io.ReadCloser

When we create a new reader from a byte slice using bytes.NewReader, the compiler won’t let us assign it directly to r.Body, because *bytes.Reader implements only io.Reader, not io.ReadCloser:

r.Body = bytes.NewReader(body)               // compile error: missing Close method
r.Body = io.NopCloser(bytes.NewReader(body)) // works

io.NopCloser wraps your reader and adds a “No-Operation” close method. It’s a wrapper that says: “I’m a Closer now, but I don’t actually do anything when you close me.” There’s no underlying network connection to shut down, so there’s nothing to clean up.

Analogies in Other Languages

While many high-level languages hide this complexity, the concept exists everywhere you deal with streams.

Python (File Objects/Iterators)

Once we’ve read a file object or consumed a generator, we can’t iterate through it again. To read it twice, we buffer it first:

import io

def middleware(body_stream, next_handler):
    data = body_stream.read()     # consume the stream
    log(data)
    next_handler(io.BytesIO(data))  # wrap bytes in a new stream-like object

io.BytesIO plays the same role as bytes.NewReader in Go: it turns raw bytes back into something stream-shaped.

Java (InputStreams)

A standard InputStream is also a one-way street. The idiomatic fix buffers the bytes and wraps them in a new stream:

byte[] body = request.getInputStream().readAllBytes();
log(new String(body));

// Replace the input stream for downstream use
HttpServletRequest wrapped = new HttpServletRequestWrapper(request) {
    @Override
    public ServletInputStream getInputStream() {
        ByteArrayInputStream bais = new ByteArrayInputStream(body);
        // DelegatingServletInputStream is Spring's test helper; outside
        // Spring, a small custom ServletInputStream wrapper does the same.
        return new DelegatingServletInputStream(bais);
    }
};
};
chain.doFilter(wrapped, response);

Node.js (Readable Streams)

In Node, a consumed stream can’t be re-read. We collect the chunks, then wrap the buffer in a new Readable:

const chunks = [];
for await (const chunk of req) chunks.push(chunk);
const body = Buffer.concat(chunks);

console.log(body.toString());

// Restore a fresh stream for the next handler
const { Readable } = require('stream');
req.body = Readable.from(body);

The Takeaway

In Go, explicit is better than implicit. The language forces us to acknowledge that reading a body has a cost (memory). By buffering the bytes and restoring the body with io.NopCloser, we are intentionally managing that memory so the application remains predictable and performant.

One caveat worth keeping in mind: buffering the entire body is the right call for small JSON payloads, but the wrong call for large file uploads. For those, we’re better off reading the body exactly once and designing the middleware chain so nothing upstream needs to re-read it.
