Overview
Send processed pipeline data to Amazon SQS queues for downstream processing, event-driven architectures, and decoupled integrations.

Configuration
Using Goldsky secrets (recommended)
Using direct credentials
Parameters
- Sink type: must be sqs_sink.
- Input: the transform or source to read data from.
- Queue URL: the full URL of your SQS queue (e.g., https://sqs.us-east-1.amazonaws.com/123456789012/my-queue).
- Secret name: name of the secret containing SQS credentials. See Secret format below.
- Access key ID: AWS access key ID for authentication. Not required if using secret_name.
- Secret access key: AWS secret access key for authentication. Not required if using secret_name.
- Region: AWS region where the queue is located. Not required if using secret_name.
- Flush interval: how frequently to flush messages to SQS (e.g., 1s, 5s, 100ms). Lower values reduce latency but may increase costs.

Secret format
When using secret_name, create a Goldsky secret of type sqs with the following structure:
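As a sketch, the secret body mirrors the direct-credential parameters above; the exact field spellings are assumptions and may differ in your account:

```json
{
  "accessKeyId": "AKIAEXAMPLE",
  "secretAccessKey": "EXAMPLESECRETKEY",
  "region": "us-east-1"
}
```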
IAM permissions
Your AWS credentials need the following IAM permissions on the target queue:
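As a sketch, a minimal send-only policy would grant sqs:SendMessage (batched sends use the same action), plus sqs:GetQueueUrl and sqs:GetQueueAttributes for queue resolution; treat the last two as assumptions about this sink's behavior and confirm against your deployment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue"
    }
  ]
}
```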
Example
Stream blockchain events to an SQS queue for downstream processing:
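A minimal sketch of such a pipeline, assuming a YAML pipeline definition; the sink name and the from/queue_url key spellings are assumptions based on the parameter descriptions above, while the sqs_sink type and secret_name come from this page:

```yaml
# Sketch only: key names other than `type` and `secret_name` are
# assumptions, not confirmed parameter names.
sinks:
  decoded_logs_to_sqs:
    type: sqs_sink
    from: my_transform                # transform or source to read from
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
    secret_name: MY_SQS_SECRET        # Goldsky secret of type `sqs`
```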
Best practices
Choose appropriate batch flush intervals
- Use shorter intervals (e.g., 100ms, 1s) for low-latency requirements
- Use longer intervals (e.g., 5s, 10s) to reduce SQS API calls and costs
- Balance latency against cost based on your use case
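As a sketch, the flush interval is set on the sink definition; the key name used here is illustrative, not a confirmed parameter name:

```yaml
# Hypothetical fragment: `batch_flush_interval` is an assumed key name.
sinks:
  my_sqs_sink:
    type: sqs_sink
    batch_flush_interval: 1s   # low latency; use 5s/10s to cut API costs
```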
Configure dead-letter queues
Set up a dead-letter queue (DLQ) in AWS to capture messages that fail processing. This helps with debugging and prevents data loss.
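A DLQ is configured in AWS on the source queue's RedrivePolicy attribute, not in the pipeline configuration. A sketch using the SetQueueAttributes format, with a placeholder DLQ ARN:

```json
{
  "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-queue-dlq\",\"maxReceiveCount\":\"5\"}"
}
```

Messages that are received more than maxReceiveCount times without being deleted are moved to the DLQ for inspection.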
Monitor queue depth
Use CloudWatch to monitor your queue’s ApproximateNumberOfMessages metric. A growing backlog may indicate downstream processing issues.
Use FIFO queues when order matters
If message ordering is critical, use a FIFO queue. Note that FIFO queues have lower throughput limits (300 messages/second without batching).
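FIFO queues must be created as such (queue names end in .fifo). As a reference, the CreateQueue attributes for a FIFO queue with content-based deduplication look like:

```json
{
  "FifoQueue": "true",
  "ContentBasedDeduplication": "true"
}
```

With content-based deduplication enabled, SQS deduplicates messages by a SHA-256 hash of the body within the five-minute deduplication window.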