Overview
Send processed pipeline data to Amazon SQS queues for downstream processing, event-driven architectures, and decoupled integrations. Each row emitted by the upstream transform is serialized as a JSON object and delivered as the body of a single SQS message.
Configuration
Using Goldsky secrets (recommended)
Using direct credentials
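As a sketch of the secret-based option, assuming a standard Goldsky pipeline layout (key names other than type: sqs_sink and secret_name are assumptions based on the parameter descriptions below):

```yaml
sinks:
  my_sqs_sink:
    type: sqs_sink
    # 'from' and 'queue_url' are illustrative key names, not confirmed.
    from: my_transform
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
    # Secret-based auth (recommended). For direct credentials, supply the
    # access key ID, secret access key, and region instead of secret_name.
    secret_name: MY_SQS_SECRET
```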
Parameters
- Type: must be sqs_sink.
- From: the transform or source to read data from.
- Queue URL: the full URL of your SQS queue (e.g., https://sqs.us-east-1.amazonaws.com/123456789012/my-queue).
- Secret name: the name of the secret containing SQS credentials. See Secret format below.
- Access key ID: AWS access key ID for authentication. Not required if using secret_name.
- Secret access key: AWS secret access key for authentication. Not required if using secret_name.
- Region: AWS region where the queue is located. Not required if using secret_name.
- Endpoint: optional custom SQS endpoint URL. Useful for pointing at SQS-compatible services or VPC endpoints.
- Session token: optional AWS session token for temporary credentials (e.g., STS, AWS SSO, assumed roles).
Secret format
When using secret_name, create a Goldsky secret of type sqs with the following structure:
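A plausible shape for that secret, with assumed field names (check the Goldsky secrets documentation for the authoritative structure):

```json
{
  "accessKeyId": "AKIAEXAMPLE",
  "secretAccessKey": "example-secret-key",
  "region": "us-east-1"
}
```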
IAM permissions
Your AWS credentials need the following IAM permissions on the target queue:
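As a sketch, a minimal policy might look like the following; the action list is an assumption (AWS's SendMessageBatch API, which the sink uses, is governed by the sqs:SendMessage permission):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sqs:SendMessage", "sqs:GetQueueUrl"],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue"
    }
  ]
}
```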
Message format
Each row emitted by the upstream transform becomes the body of a single SQS message, serialized as a JSON object. For example, a row with columns id, from, and value is delivered as:
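With illustrative values (the column names come from the example above; the values are placeholders):

```json
{
  "id": "0x1a2b3c",
  "from": "0xabc0000000000000000000000000000000000001",
  "value": "1000000000000000000"
}
```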
Delivery behavior
- Batching: Messages are sent using the SQS SendMessageBatch API with the maximum supported chunk size of 10 messages per request. Larger upstream batches are split into multiple 10-message chunks automatically.
- Retries: Partial batch failures are retried up to 5 times with exponential backoff (starting at 100ms, capped at 5s). Failures flagged as sender faults by SQS fail immediately without retry.
- Queue type: Only standard SQS queues are supported. FIFO queues are not supported: the sink does not set MessageGroupId or MessageDeduplicationId, which FIFO queues require.
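The chunking and backoff schedule described above can be sketched in Python; this is an illustration of the stated policy, not Goldsky's implementation (helper names are hypothetical):

```python
def chunk(messages, size=10):
    # SendMessageBatch accepts at most 10 entries per request, so larger
    # upstream batches are split into 10-message chunks.
    return [messages[i:i + size] for i in range(0, len(messages), size)]

def backoff_delays(max_retries=5, base=0.1, cap=5.0):
    # Exponential backoff: 100 ms, doubling per attempt, capped at 5 s.
    return [min(base * 2 ** attempt, cap) for attempt in range(max_retries)]
```

For 25 messages, chunk produces three requests of 10, 10, and 5 entries; the retry schedule is 0.1 s, 0.2 s, 0.4 s, 0.8 s, and 1.6 s before the batch is given up.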
Example
Stream blockchain events to an SQS queue for downstream processing:
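A hedged sketch of such a pipeline, assuming the standard Goldsky sources/sinks layout (dataset and key names are illustrative, not confirmed):

```yaml
name: sqs-example-pipeline
sources:
  my_source:
    type: dataset
    dataset_name: ethereum.logs
    version: 1.0.0
sinks:
  my_sqs_sink:
    type: sqs_sink
    # 'from' and 'queue_url' are illustrative key names.
    from: my_source
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
    secret_name: MY_SQS_SECRET
```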
Best practices
Configure dead-letter queues
Set up a dead-letter queue (DLQ) in AWS to capture messages that fail processing. This helps with debugging and prevents data loss.
Monitor queue depth
Use CloudWatch to monitor your queue's ApproximateNumberOfMessages metric. A growing backlog may indicate downstream processing issues.
Stay within the 256 KB message size limit
SQS rejects messages larger than 256 KB. Use an upstream SQL transform to drop or truncate large columns if your row payloads approach this limit.
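For instance, a transform along these lines (table and column names are hypothetical) keeps a large column bounded before it reaches the sink:

```sql
-- Truncate a potentially large column so the serialized JSON row
-- stays well under the 256 KB SQS message size limit.
SELECT
  id,
  `from`,
  SUBSTRING(value, 1, 10000) AS value
FROM my_source
```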