Overview

Send processed pipeline data to Amazon SQS queues for downstream processing, event-driven architectures, and decoupled integrations. Each row emitted by the upstream transform is serialized as a JSON object and delivered as the body of a single SQS message.

Configuration

sinks:
  my_sqs_sink:
    type: sqs_sink
    from: my_transform
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
    secret_name: MY_SQS_SECRET

Using direct credentials

Hardcoding credentials in pipeline definitions is not recommended for production use. Use secret_name with Goldsky secrets instead.
sinks:
  my_sqs_sink:
    type: sqs_sink
    from: my_transform
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
    access_key_id: <your-access-key>
    secret_access_key: <your-secret-key>
    region: us-east-1

Parameters

type (string, required)
Must be sqs_sink.

from (string, required)
The transform or source to read data from.

queue_url (string, required)
The full URL of your SQS queue (e.g., https://sqs.us-east-1.amazonaws.com/123456789012/my-queue).

secret_name (string, optional)
Name of the secret containing SQS credentials. See Secret format below.

access_key_id (string, optional)
AWS access key ID for authentication. Not required if using secret_name.

secret_access_key (string, optional)
AWS secret access key for authentication. Not required if using secret_name.

region (string, optional)
AWS region where the queue is located. Not required if using secret_name.

endpoint_url (string, optional)
Custom SQS endpoint URL. Useful for pointing at SQS-compatible services or VPC endpoints.

session_token (string, optional)
AWS session token for temporary credentials (e.g., STS, AWS SSO, assumed roles).
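For instance, the optional parameters can be combined when pointing the sink at an SQS-compatible local endpoint such as LocalStack. This is a sketch only; the queue URL, endpoint, and credential values below are placeholders, not values from this documentation:

```yaml
sinks:
  my_sqs_sink:
    type: sqs_sink
    from: my_transform
    # Placeholder values for a LocalStack-style local endpoint:
    queue_url: http://localhost:4566/000000000000/my-queue
    endpoint_url: http://localhost:4566
    region: us-east-1
    access_key_id: test
    secret_access_key: test
```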

Secret format

When using secret_name, create a Goldsky secret of type sqs with the following structure:
{
  "accessKeyId": "your-access-key-id",
  "secretAccessKey": "your-secret-access-key",
  "region": "us-east-1",
  "type": "sqs"
}
Create the secret using the Goldsky CLI:
goldsky secret create --name MY_SQS_SECRET --value '{
  "accessKeyId": "your-access-key-id",
  "secretAccessKey": "your-secret-access-key",
  "region": "us-east-1",
  "type": "sqs"
}'

IAM permissions

Your AWS credentials need the following IAM permissions on the target queue:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:SendMessage",
        "sqs:SendMessageBatch"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue"
    }
  ]
}

Message format

Each row emitted by the upstream transform becomes the body of a single SQS message, serialized as a JSON object. For example, a row with columns id, from, and value is delivered as:
{"id": "0xabc...", "from": "0x123...", "value": "1000000000000000000"}
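Since each body is a single JSON object, a consumer can parse it directly. A minimal Python sketch (in a real consumer the body string would come from an SQS ReceiveMessage response rather than a literal, and the column names here simply mirror the example above):

```python
import json

# Example message body as delivered by the sink (one JSON object per message).
body = '{"id": "0xabc", "from": "0x123", "value": "1000000000000000000"}'

row = json.loads(body)

# Numeric columns such as token amounts arrive as strings; convert explicitly.
value_wei = int(row["value"])
value_eth = value_wei / 10**18

print(row["id"], value_eth)  # 0xabc 1.0
```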

Delivery behavior

  • Batching: Messages are sent using the SQS SendMessageBatch API with the maximum supported chunk size of 10 messages per request. Larger upstream batches are split into multiple 10-message chunks automatically.
  • Retries: Partial batch failures are retried up to 5 times with exponential backoff (starting at 100ms, capped at 5s). Failures flagged as sender faults by SQS fail immediately without retry.
  • Queue type: Only standard SQS queues are supported. FIFO queues are not supported — the sink does not set MessageGroupId or MessageDeduplicationId, which FIFO queues require.
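The batching and backoff rules above can be sketched in a few lines of Python. This is purely an illustration of the documented numbers (10-message chunks, 100ms base doubling to a 5s cap), not Goldsky's implementation:

```python
# Illustration of the documented delivery behavior, not Goldsky's code.

def chunk(rows, size=10):
    """Split rows into SendMessageBatch-sized chunks (max 10 per request)."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def backoff_ms(attempt, base=100, cap=5000):
    """Exponential backoff delay: 100ms doubling per retry, capped at 5s."""
    return min(base * 2 ** attempt, cap)

batches = chunk(list(range(25)))
print([len(b) for b in batches])          # [10, 10, 5]
print([backoff_ms(a) for a in range(6)])  # [100, 200, 400, 800, 1600, 3200]
```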

Example

Stream blockchain events to an SQS queue for downstream processing:
name: erc20-to-sqs
resource_size: s

sources:
  transfers:
    type: dataset
    dataset_name: ethereum.erc20_transfers
    version: 1.2.0
    start_at: latest

transforms:
  high_value:
    type: sql
    primary_key: id
    sql: |
      SELECT * FROM transfers
      WHERE CAST(value AS DECIMAL) > 1000000000000000000

sinks:
  sqs_output:
    type: sqs_sink
    from: high_value
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/high-value-transfers
    secret_name: MY_SQS_SECRET

Best practices

Set up a dead-letter queue (DLQ) in AWS to capture messages that fail processing. This helps with debugging and prevents data loss.
Use CloudWatch to monitor your queue’s ApproximateNumberOfMessages metric. A growing backlog may indicate downstream processing issues.
SQS rejects messages larger than 256 KB. Use an upstream SQL transform to drop or truncate large columns if your row payloads approach this limit.
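The 256 KB limit applies to the serialized message body (262,144 bytes). A pre-flight size check on the consumer or transform-design side might look like this sketch; the function name and sample rows are illustrative, not part of any API:

```python
import json

SQS_MAX_BYTES = 256 * 1024  # 262,144 bytes: SQS's maximum message size

def fits_in_sqs(row: dict) -> bool:
    """Return True if the JSON-serialized row fits in one SQS message."""
    body = json.dumps(row, separators=(",", ":"))
    return len(body.encode("utf-8")) <= SQS_MAX_BYTES

small = {"id": "0xabc", "value": "1000000000000000000"}
large = {"id": "0xabc", "blob": "x" * 300_000}  # well past the limit

print(fits_in_sqs(small))  # True
print(fits_in_sqs(large))  # False
```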