Overview
Goldsky is the modern back-end for crypto-enabled products: the infrastructure layer between your application and the blockchain. We handle the complex, undifferentiated work of building on crypto rails: streaming real-time data, maintaining reliable chain connectivity, and executing onchain logic. Teams use Goldsky to ship faster and stay focused on their core product.

Partnership
Goldsky has partnered with Stellar to make our product available to the ecosystem and provide dedicated support for Stellar developers. In the overview of each product below, the “Partner Sponsored” tag indicates that usage of that product is fully covered by the chain, if approved by the team. Where this perk is available, please reach out to the developer relations team for an access code to the private signup form.

Getting started
To use Goldsky, you’ll need to create an account, install the CLI, and log in. If you want to use Turbo or Compose, you’ll also need to install their respective CLI extensions.

Install Goldsky CLI and log in
- Install the Goldsky CLI:
  For macOS/Linux: `curl https://goldsky.com | sh`
  For Windows: Node.js and npm are required first. Download them from nodejs.org if not already installed.
- Go to your Project Settings page and create an API key.
- Back in your Goldsky CLI, log into your Project by running the command `goldsky login` and paste your API key.
- Now that you are logged in, run `goldsky` to get started.
Install Turbo CLI extension
If you already have the Goldsky CLI installed, install the Turbo extension by running the install command. This will automatically install the Turbo extension. Verify the installation afterwards. For a complete reference of all Turbo CLI commands, see the CLI Reference guide.
Make sure to update the CLI to the latest version before running Turbo commands:
`curl https://goldsky.com | sh`

Install Compose CLI extension
If you already have the Goldsky CLI installed, install the Compose extension by running the install command. The same command applies when updating to the latest version. For more details, see the Compose quickstart guide.
Subgraphs
NOT COMPATIBLE

Subgraphs are designed for EVM-compatible chains and are not available for Stellar. Stellar uses a different virtual machine architecture. For Stellar data indexing, consider using Mirror or Turbo pipelines, which support non-EVM chains.

Mirror
MAINNET SUPPORTED | TESTNET SUPPORTED

Mirror pipelines allow users to replicate data into their own infrastructure (any of the supported sinks) in real time, including both subgraphs as well as chain-level datasets (i.e. blocks, logs, transactions, traces). Pipelines can be deployed on Goldsky in 3 ways:
- Using Goldsky Flow on the dashboard; see the walkthrough video here.
- Using the interactive CLI, by entering the command `goldsky pipeline create <pipeline-name>`. This will kick off a guided flow, with the first step to choose the dataset type (project subgraph, community subgraph, or chain-level dataset). You'll then be guided through adding some simple filters to this data and choosing where to persist the results.
- Using a definition file, by entering the command `goldsky pipeline create <pipeline-name> --definition-path <path-to-file>`. This makes it easier to set up complex pipelines involving multiple sources, multiple sinks, and more complex, SQL-based transformations. For the full reference documentation, click here.
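As an illustration, a minimal definition file might look like the following. This is a sketch only: the exact field names in the Mirror definition schema, the dataset version, and the sink configuration are assumptions — consult the reference documentation for the authoritative format.

```yaml
name: stellar-transfers-pipeline
sources:
  my_source:
    type: dataset
    dataset_name: stellar.transfers   # chain-level dataset (name assumed)
    version: 1.0.0                    # illustrative version number
sinks:
  my_sink:
    type: postgres                    # any supported sink
    from: my_source
    table: stellar_transfers
    secret_name: MY_PG_SECRET         # assumed: credentials stored as a Goldsky secret
```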
Working with Stellar datasets
Goldsky provides real-time (under 5 seconds) streaming of Stellar datasets, including all historical data, for both mainnet and testnet. Every Stellar dataset is derived from a main ledgers dataset. Ledgers are the core building blocks of the Stellar blockchain and the highest level of abstraction in our datasets.
Need a refresher on Stellar data structures? Check out Stellar’s official documentation.
- transfers
- transactions
- diagnostic_events
- events
Advanced: Working with Ledgers dataset on the CLI
Advanced users who feel comfortable working with the Goldsky CLI have the option to use the canonical ledgers dataset itself. In essence, ledgers functions as a “mega-schema,” allowing you to query all ledger data holistically or customize exactly which subsets of data you want.
Ledger Schema

You can find the full schema of the ledgers dataset here.
To use this dataset, choose stellar.ledgers as the dataset_name, specify the latest version 3.1.0, and set start_at to earliest or latest, depending on whether you want historical data or data from the current tip of the chain:
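For instance, the source section of such a pipeline definition might look like this (a sketch: the dataset_name, version, and start_at values come from the text above, but the surrounding field layout follows an assumed Mirror definition-file format):

```yaml
sources:
  stellar_ledgers:
    type: dataset
    dataset_name: stellar.ledgers  # canonical ledgers "mega-schema"
    version: 3.1.0                 # latest version per the docs
    start_at: earliest             # or "latest" for the current tip of the chain
```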
Deploying Stellar pipelines using Goldsky Flow
As explained in Create a Pipeline, Goldsky Flow is the visual editor from which we can build and deploy Mirror pipelines. In this example, we create a pipeline using Flow to stream the transactions dataset into a ClickHouse instance.
Start from the pipelines page in your project and click on New Pipeline. This will redirect you to an open canvas where you can build your pipeline by dragging and dropping components.
From here, drop a Source card and select Stellar as the network, then choose Transactions from the list of specialized datasets. Add a Sink card, configure your target database, and deploy the pipeline.
Simple filters on Stellar datasets
You can use transform blocks or the “Advanced” filter functionality to apply filters on these datasets. By clicking the “view schema” button you can see a preview of all of the various columns which you can filter on. Some common filters include:
- Transfers: pairs of asset_issuer + asset_code for a specific token, a list of addresses for the sender/recipient to track specific wallets, or contract_id for specific contracts of interest
- Operations: type, source_account
- Transactions: memo
- Events and Diagnostic Events: contract_id, topics
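As a sketch, a SQL transform filtering the transfers dataset for a single token and wallet might combine several of these columns (column names are taken from the filters above; the transform wrapper follows an assumed Mirror definition format, and the G... addresses are hypothetical placeholders):

```yaml
transforms:
  usdc_transfers:
    sql: |
      SELECT *
      FROM stellar_transfers
      WHERE asset_code = 'USDC'
        AND asset_issuer = 'G...ISSUER'   -- hypothetical issuer address
        AND (sender = 'G...WALLET' OR recipient = 'G...WALLET')
    primary_key: id  # assumed primary key column
```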
Advanced: Using Stellar Ledgers on the CLI
Working with the ledgers dataset usually involves using a combination of CROSS JOIN UNNEST queries to “explode” its nested structures and access their inner fields:
stellar-ledger-transactions.yaml
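A sketch of what such a definition file might contain. The CROSS JOIN UNNEST pattern is standard streaming-SQL syntax, but the exact Mirror schema fields, the nested column name (transactions), and the sink are assumptions:

```yaml
name: stellar-ledger-transactions
sources:
  ledgers:
    type: dataset
    dataset_name: stellar.ledgers
    version: 3.1.0
    start_at: earliest
transforms:
  ledger_transactions:
    sql: |
      -- "explode" the nested transactions array, one row per transaction
      SELECT l.sequence, t.*
      FROM ledgers AS l
      CROSS JOIN UNNEST(l.transactions) AS t
sinks:
  pg_sink:
    type: postgres            # illustrative sink
    from: ledger_transactions
    table: stellar_ledger_transactions
    secret_name: MY_PG_SECRET # assumed secret reference
```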
goldsky pipeline apply stellar-ledger-transactions.yaml --status ACTIVE
Working with Testnet data
When working with Stellar testnet data, it’s important to note that the testnet is frequently reset (typically about once every three months). Each reset effectively starts a new version of the testnet, but the data continues to flow into the same datasets. We recommend deploying your pipelines with start_at: latest to ensure your pipelines always index from the most recent testnet version.
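For a testnet pipeline, that means a source block along these lines (the testnet dataset naming is an assumption; only start_at: latest is taken from the recommendation above):

```yaml
sources:
  testnet_transfers:
    type: dataset
    dataset_name: stellar-testnet.transfers  # assumed testnet dataset naming
    start_at: latest  # always index from the most recent testnet reset
```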
Turbo
MAINNET SUPPORTED | TESTNET SUPPORTED

Turbo pipelines provide high-performance streaming data pipelines with sub-second latency. Deploy a pipeline to start streaming Stellar data to your preferred destination.

Quick deploy
Create a new Turbo pipeline using the CLI:
Create pipeline
Configuration file
For more complex pipelines, use a YAML configuration file:
stellar-pipeline.yaml
Deploy from config
Available chain slugs
Mainnet: stellar | Testnet: stellar-testnet

For the full configuration reference and available transforms, see the Turbo documentation.
Edge
NOT COMPATIBLE

RPC Edge is designed for EVM-compatible chains and is not available for Stellar. Stellar uses a different virtual machine architecture. For Stellar data access, consider using Mirror or Turbo pipelines, which support non-EVM chains.

Compose
NOT YET AVAILABLE

Compose lets you build offchain-to-onchain systems that durably move data and execute logic between your application and the blockchain. Learn more about what you can build with Compose in the Compose documentation. Compose is not currently enabled for Stellar, but we'd love to change that. From the Stellar team? Book a call to explore enabling Compose for your ecosystem.

Building on Stellar? Contact us about dedicated infrastructure options.