When you need to react to every new event coming from the blockchain or a subgraph, SQS can be a simple and resilient way to get started. SQS works with any Mirror source, including subgraph updates and on-chain events.

Mirror pipelines will send events to an SQS queue of your choosing. You can then process the events with the AWS SDK, or create a Lambda function to process them serverlessly.
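To poll the queue directly, a minimal consumer might look like the sketch below. It assumes the AWS SDK for JavaScript v3 (@aws-sdk/client-sqs); the queue URL and region are placeholders for your own setup.

import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

// Placeholder queue URL: point this at the queue your pipeline writes to.
const QUEUE_URL =
  "https://sqs.us-east-1.amazonaws.com/123456789012/my-mirror-queue";

const sqs = new SQSClient({ region: "us-east-1" });

async function pollOnce(): Promise<void> {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: 10, // receive up to 10 events per request
      WaitTimeSeconds: 20, // long polling to avoid busy looping
    })
  );

  for (const message of Messages ?? []) {
    // Each message body is a JSON-encoded pipeline event.
    console.log("received event:", message.Body);

    // Delete the message only after it has been processed successfully.
    await sqs.send(
      new DeleteMessageCommand({
        QueueUrl: QUEUE_URL,
        ReceiptHandle: message.ReceiptHandle,
      })
    );
  }
}

pollOnce().catch(console.error);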

SQS is append-only, so every event is sent with the metadata needed to handle mutations downstream.

Full configuration details for the SQS sink are available on the reference page.

Secrets

Create an AWS SQS secret with the following CLI command:

goldsky secret create --name AN_AWS_SQS_SECRET --value '{
  "accessKey": "<your-aws-access-key-id>",
  "secretAccessKey": "<your-aws-secret-access-key>",
  "region": "<your-aws-region>",
  "type": "sqs"
}'

The secret's credentials require the sqs:SendMessage permission. Refer to the AWS SQS permissions documentation for more information.
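For reference, a minimal IAM policy granting that permission on a single queue could look like the following; the region, account ID, and queue name are placeholders.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-mirror-queue"
    }
  ]
}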

Processing Data

Typically, you would use the SQS sink to queue data up and process it downstream for one reason or another.

The data will have two high-level fields (an example message follows the list):

  • op: The operation type (c for create, u for update, d for delete)
  • body: The actual row data
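For illustration, a create event might look roughly like this; the fields inside body depend on the dataset you mirror, so the column names below are made up:

{
  "op": "c",
  "body": {
    "id": "<deterministic-row-id>",
    "block_number": 19000000,
    "address": "0x...",
    "data": "0x..."
  }
}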

In a normal pipeline driven by blockchain events (one sending transactions, logs, blocks, or traces), an op of d typically means the pipeline is processing a fork or reorganization.

By default, the id of each of our datasets is consistent and designed for deterministic processing of blockchain forks. When you see a d, you can issue a delete in your downstream logic to negate the earlier write or processing of that row. The full body is provided so aggregations can be negated easily.

If you're enriching the data and then writing it into a database, you can simply upsert for c and u, and delete for d, as in the sketch below.
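For example, a Lambda function triggered by the queue could route on op like this. The upsertRow and deleteRow helpers are hypothetical stand-ins for your own storage layer; the event typing comes from the aws-lambda types package.

import type { SQSEvent } from "aws-lambda";

// Hypothetical storage helpers -- replace with your own database client.
async function upsertRow(table: string, row: Record<string, unknown>): Promise<void> {
  // e.g. INSERT ... ON CONFLICT (id) DO UPDATE
}
async function deleteRow(table: string, id: string): Promise<void> {
  // e.g. DELETE FROM <table> WHERE id = $1
}

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    // Each SQS record body is the JSON event written by the pipeline.
    const { op, body } = JSON.parse(record.body) as {
      op: "c" | "u" | "d";
      body: Record<string, unknown>;
    };

    if (op === "c" || op === "u") {
      // Creates and updates are both safe to handle as upserts.
      await upsertRow("my_table", body);
    } else {
      // A delete usually signals a fork/reorg: negate the earlier write.
      await deleteRow("my_table", String(body.id));
    }
  }
};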