Documentation Index
Fetch the complete documentation index at: https://docs.goldsky.com/llms.txt
Use this file to discover all available pages before exploring further.
Overview
The throttle transform caps the throughput of a stream by buffering records into batches and emitting each batch on a fixed minimum interval. Use it to:
- Stay under rate limits of downstream sinks or external APIs
- Smooth out bursty sources into a steady, predictable rate
- Test sink behavior at a controlled records-per-second rate
- Reduce pressure on small resource sizes during development
Configuration
Parameters
- type: Must be throttle
- from: The source or transform to read data from
- max_batch_size: Maximum number of records to emit per batch. The throttle accumulates up to this many records before flushing.
- min_batch_interval: Minimum time to wait between batches (e.g., 10s, 500ms, 1m). The next batch will not be emitted until this interval has elapsed since the previous batch.
How throttling works
Records are buffered as they arrive from the upstream source or transform. A batch is flushed downstream when both conditions are met:
- max_batch_size records have accumulated, and
- min_batch_interval has elapsed since the last batch was emitted
For example, max_batch_size: 100 with min_batch_interval: 10s caps throughput at roughly 10 records per second.
Throttle limits the maximum rate, not the minimum. If the upstream is slow,
batches will be smaller and arrive less frequently.
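The flush rule above can be sketched in Python. This is a toy model of the described behavior, not Goldsky's implementation; the injectable clock exists only so the logic can be exercised without real waiting.

```python
import time
from collections import deque

class Throttle:
    # Toy model of the flush rule described above: a batch is emitted only
    # when max_batch_size records have accumulated AND min_batch_interval
    # seconds have passed since the previous flush. If the upstream is slow,
    # flushes simply happen later - the throttle caps the rate, it does not
    # guarantee one.
    def __init__(self, max_batch_size, min_batch_interval, clock=time.monotonic):
        self.max_batch_size = max_batch_size
        self.min_batch_interval = min_batch_interval
        self.clock = clock  # injectable for testing
        self.buffer = deque()
        self.last_flush = clock()

    def push(self, record):
        """Buffer one record; return a full batch if both conditions hold."""
        self.buffer.append(record)
        if (len(self.buffer) >= self.max_batch_size
                and self.clock() - self.last_flush >= self.min_batch_interval):
            batch = [self.buffer.popleft() for _ in range(self.max_batch_size)]
            self.last_flush = self.clock()
            return batch
        return None
```

With max_batch_size=3 and min_batch_interval=10.0, the third record is buffered rather than flushed if fewer than 10 seconds have passed; the next push after the interval elapses releases the batch.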
Example
Throttle a high-volume ERC-20 transfer stream down to ~10 rps before sending it to a sink.
When to use throttle
- Rate-limited sinks — Stay under per-second write quotas on downstream APIs or databases.
- External handler protection — Pace records into an HTTP handler so the receiving service is not overwhelmed.
- Cost control during development — Slow down processing while iterating on a pipeline against a live source.
- Testing — Reproduce sink behavior under a known, fixed input rate.
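As a concrete sketch, the ERC-20 example described under Example might be configured like this. The transform and source names are illustrative, and the field layout is an assumption based on the parameters listed under Configuration; check the pipeline reference for the exact schema.

```yaml
transforms:
  throttled_transfers:
    type: throttle            # must be "throttle"
    from: erc20_transfers     # upstream source or transform (illustrative name)
    max_batch_size: 100       # flush at most 100 records per batch
    min_batch_interval: 10s   # wait at least 10s between batches
    # caps throughput at roughly 100 records / 10s = 10 records per second
```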
Best Practices
Place throttle close to the bottleneck
Throttle the stream just before the rate-limited sink or handler so
upstream transforms still process at full speed.
Tune batch size to your sink
Larger
max_batch_size reduces per-batch overhead but increases latency
per record. Pick a size that matches your sink’s preferred batch size.
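The trade-off follows from the arithmetic earlier in this page: throughput is capped at max_batch_size / min_batch_interval, so different (size, interval) pairs can hit the same rate while batching latency differs. A quick comparison, with illustrative values:

```python
# For a fixed throughput cap of 10 records/second, several
# (max_batch_size, min_batch_interval) pairs are rate-equivalent but
# differ in per-record latency: a record may sit in the buffer for up
# to roughly one full interval before its batch is flushed.
target_rps = 10.0

for max_batch_size in (10, 100, 1000):
    min_batch_interval = max_batch_size / target_rps  # seconds between batches
    print(f"max_batch_size={max_batch_size:5d}  "
          f"min_batch_interval={min_batch_interval:6.1f}s  "
          f"worst-case wait ~ {min_batch_interval:.1f}s per record")
```

Fewer, larger batches mean fewer flushes (less per-batch overhead) at the cost of records waiting longer in the buffer.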