A Tower middleware that buffers requests and flushes them in batches. Use it when the downstream system is more efficient with bulk writes – databases, message brokers, object stores, etc. The middleware collects individual requests as `BatchControl::Item(R)` and, once the buffer reaches a maximum size or a maximum duration elapses, signals the inner service with `BatchControl::Flush` so it can process the accumulated batch.
Add the dependency to your `Cargo.toml`:
```toml
[dependencies]
tower-batch = { version = "0.2.0" }
```

Create a batch service and start sending requests:
```rust
use std::time::Duration;
use tower_batch::Batch;

// `my_service` implements `Service<BatchControl<MyRequest>>`
let batch = Batch::new(my_service, 100, Duration::from_millis(250));
```

If you prefer the Tower layer pattern:
```rust
use tower_batch::BatchLayer;

let layer = BatchLayer::new(100, Duration::from_millis(250));
```

Your inner service must implement `Service<BatchControl<R>>`, where `R` is the request type. The middleware sends two kinds of calls:
- `BatchControl::Item(request)` – buffer this request. Typically, you just push it onto a `Vec` and return `Ok(())`.
- `BatchControl::Flush` – process everything you have buffered, then return the result.
`Batch::new` spawns a background worker that owns the inner service. It forwards each incoming request as an `Item` and triggers a `Flush` when the batch is full or the timer fires. `Batch` handles are cheap to clone – each clone shares the same worker, so you can hand them to multiple tasks.
See the `examples/` directory:

- `sqlite_batch` – batch-insert rows into an in-memory SQLite database using the `rarray` virtual table.
Run an example with:
```shell
cargo run --example sqlite_batch
```

This project is licensed under the MIT license.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in tower-batch by you shall be licensed as MIT, without any additional terms or conditions.