Overview
In this quickstart, you’ll create a simple Turbo pipeline that:
- Reads ERC-20 transfer data from a Goldsky dataset
- Writes the results to a blackhole sink
- Inspects the data live
- [Optional] Writes the data to a PostgreSQL database
You can create Turbo pipelines in two ways:
- Goldsky Flow: A guided visual canvas editor in the dashboard
- CLI: Using YAML configuration files
Goldsky Flow
Flow allows you to deploy Turbo pipelines by dragging and dropping components onto a visual canvas. Open Flow by going to the Pipelines page and clicking New pipeline.

Turbo pipelines in Flow work similarly to Mirror pipelines but with some differences. See Turbo vs Mirror in Flow for details.
1. Select a data source
Drag a Data Source card onto the canvas. Select the chain and dataset you want to use. For blockchain data, Turbo supports:
- EVM chains: Ethereum, Base, Polygon, and more
- Solana: Transactions, instructions, and token transfers
- Stellar: Ledgers, transactions, and operations
- Bitcoin: Blocks and transactions
For this quickstart, pick the dataset this guide uses (ERC-20 Transfers for Base).
2. Add transforms (optional)
Click the + button on your source card to add transforms:
- SQL Transform: Filter and project data using SQL queries
- Script Transform: Execute custom TypeScript code for complex transformations
- Dynamic Table: Create real-time lookup tables for filtering and enrichment
3. Select a sink
Click the + button to add a sink. Turbo supports:
- PostgreSQL
- ClickHouse
- Kafka
- Webhook
- S3
- Blackhole (for testing)
4. Deploy
Name your pipeline and click Deploy. Select a resource size and your pipeline will start processing data.

Switching between Flow and YAML
You can toggle between the visual canvas and YAML view using the switcher in the top-left corner. This lets you:
- See the YAML configuration generated from your visual design
- Copy the YAML for version control or CI/CD deployment
- Make advanced edits directly in YAML
Turbo vs Mirror in Flow
When creating Turbo pipelines in Flow, note these differences from Mirror:

| Feature | Turbo | Mirror |
|---|---|---|
| Script transforms | Yes (TypeScript) | No |
| Dynamic Table transforms | Yes | No |
| Kafka source | YAML only | Supported |
| ClickHouse source | YAML only | Supported |
| Hybrid source | YAML only | Supported |
| Live inspect | Yes | No |
Creating Turbo pipelines with the CLI
Prerequisites
- Turbo Pipelines CLI extension installed
- A Goldsky account, logged in to your project
Step 1: Create Your Pipeline
Create a file named `erc20-pipeline.yaml`:
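A minimal configuration might look like the sketch below, which follows Goldsky's Mirror-style schema; field names such as `type: dataset` are assumptions, and the exact Turbo schema may differ:

```yaml
# erc20-pipeline.yaml (minimal sketch; exact Turbo schema may differ)
name: erc20-transfers
sources:
  base_erc20_transfers:
    type: dataset                       # assumed field
    dataset_name: base.erc20_transfers  # ERC-20 transfers on Base
    version: 1.2.0
    start_at: latest                    # only process new transfers going forward
sinks:
  test_sink:
    type: blackhole                     # discards data; useful for testing
    from: base_erc20_transfers
```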
What's happening in this pipeline?
Sources:
- We’re using the `base.erc20_transfers` dataset (version 1.2.0)
- `start_at: latest` means we’ll only process new transfers going forward

Sinks:
- The `blackhole` sink discards the data but allows you to test the pipeline
- Perfect for development and testing before adding real outputs
Step 2: Deploy Your Pipeline
Apply your pipeline configuration:
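With the Turbo Pipelines CLI extension installed, applying the file presumably follows the Goldsky CLI's `goldsky pipeline apply` pattern (the exact subcommand is an assumption):

```bash
# Deploy (or update) the pipeline from its YAML definition
goldsky pipeline apply erc20-pipeline.yaml
```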
Step 3: Inspect the Data Live
Now that the pipeline is running, you can pass the pipeline name `erc20-transfers` instead of `erc20-pipeline.yaml` to any of the commands below.
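A hypothetical invocation, assuming an `inspect` subcommand backs the Live Inspect feature (the command name is an assumption and the actual CLI may differ):

```bash
# Stream live records flowing through the running pipeline
goldsky pipeline inspect erc20-transfers
```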
Step 4: Add a Filter for USDC Only
Now let’s add a SQL transform to filter only USDC transfers. Update your `erc20-pipeline.yaml`:
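A sketch of the updated file, following the same assumed schema as above; the transform's `sql` key and the `contract_address` column name are assumptions:

```yaml
# erc20-pipeline.yaml (updated sketch; exact Turbo schema may differ)
name: erc20-transfers
sources:
  base_erc20_transfers:
    type: dataset
    dataset_name: base.erc20_transfers
    version: 1.2.0
    start_at: latest
transforms:
  usdc_transfers:
    sql: |
      SELECT
        sender,
        recipient,
        amount,
        to_timestamp(block_time) AS block_time
      FROM base_erc20_transfers
      -- USDC on Base; lower() makes the match case-insensitive
      WHERE lower(contract_address) = lower('0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913')
sinks:
  test_sink:
    type: blackhole
    from: usdc_transfers   # the sink now reads from the transform
```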
What changed?
Transforms:
- Added a SQL transform named `usdc_transfers` that filters for the USDC contract on Base
- The contract address `0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913` is USDC on Base
- We use `lower()` to ensure consistent case-insensitive matching
- Selected only essential columns: `sender`, `recipient`, `amount`, and `block_time`
- Converted the Unix timestamp to a human-readable format using `to_timestamp()`

Sinks:
- Updated the sink to read from `usdc_transfers` instead of the raw source
- Now only USDC transfers will flow through the pipeline
Alternative: Using TypeScript Transform
You can achieve the same filtering using a TypeScript transform instead of SQL. Return `null` to filter out records:
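A sketch of such a transform, assuming the runtime calls an exported function once per record and that the record shape matches the columns used above (both the entry point and the field names are assumptions):

```typescript
// Assumed record shape for base.erc20_transfers; actual field names may differ.
interface Erc20Transfer {
  contract_address: string;
  sender: string;
  recipient: string;
  amount: string;
  block_time: number; // Unix timestamp in seconds
}

// USDC contract on Base, lowercased once for case-insensitive comparison.
const USDC = "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913".toLowerCase();

// Assumed entry point: return the transformed record, or null to drop it.
export function transform(record: Erc20Transfer) {
  if (record.contract_address.toLowerCase() !== USDC) {
    return null; // filter out non-USDC transfers
  }
  return {
    sender: record.sender,
    recipient: record.recipient,
    amount: record.amount,
    block_time: new Date(record.block_time * 1000).toISOString(),
  };
}
```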
Key features:
- Return `null` to filter: Records that don’t match your criteria can be filtered out by returning `null`
- Custom output schema: Use the `schema` field to define a different output schema than the input. This lets you reshape data, rename fields, or include only specific columns
- Flexible transformations: Perform calculations, string manipulation, and conditional logic
When to use TypeScript transforms:
- More flexible data transformations and complex logic
- Familiar syntax for developers
- Type safety and autocompletion support
- Can perform calculations, string manipulation, and conditional logic
When to use SQL transforms:
- Generally faster for simple filtering and aggregations
- More concise for straightforward queries
- Better for set-based operations
Redeploy and Inspect
Apply the updated configuration and inspect the output again; you should now see only USDC transfers flowing through the `usdc_transfers` transform.
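Redeploying presumably reuses the same apply command as before (an assumption, as noted above):

```bash
goldsky pipeline apply erc20-pipeline.yaml
```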
Optional: Write to PostgreSQL
To persist your data to a PostgreSQL database, update your pipeline configuration.

1. Create a Secret
Store your PostgreSQL credentials:
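The Goldsky CLI manages credentials with `goldsky secret create`; the flag and the JSON shape shown here are assumptions:

```bash
# Store the connection details under a name the pipeline can reference
goldsky secret create POSTGRES_SECRET \
  --value '{"host":"...","port":5432,"user":"...","password":"...","database":"..."}'
```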
2. Update Your Pipeline
Modify `erc20-pipeline.yaml` to add a PostgreSQL sink:
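A sketch of the added sink, using the same assumed schema; the `postgres` sink's field names (`table`, `schema`, `secret_name`) are assumptions based on Goldsky's Mirror sinks:

```yaml
# erc20-pipeline.yaml (PostgreSQL sink sketch; exact Turbo schema may differ)
sinks:
  postgres_sink:
    type: postgres
    from: usdc_transfers          # read from the SQL transform
    table: usdc_transfers         # destination table
    schema: public
    secret_name: POSTGRES_SECRET  # the secret created in step 1
```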