
Overview

Goldsky is the modern back-end for crypto-enabled products: the infrastructure layer between your application and the blockchain. We handle the complex, undifferentiated work of building on crypto rails: streaming real-time data, maintaining reliable chain connectivity, and executing onchain logic. Teams use Goldsky to ship faster and stay focused on their core product.

Partnership

Goldsky has partnered with Stellar to make our product available to the ecosystem and provide dedicated support for Stellar developers. In the overview of each product below, the “Partner Sponsored” tag indicates that usage of that product is fully covered by the chain, subject to approval by the team. Where this perk is available, reach out to the developer relations team for an access code to the private signup form.

Getting started

To use Goldsky, you’ll need to create an account, install the CLI, and log in. If you want to use Turbo or Compose, you’ll also need to install their respective CLI extensions.
  1. Install the Goldsky CLI: For macOS/Linux:
    curl https://goldsky.com | sh
    
    For Windows:
    npm install -g @goldskycom/cli
    
    Windows users need to have Node.js and npm installed first. Download from nodejs.org if not already installed.
  2. Go to your Project Settings page and create an API key.
  3. Back in your Goldsky CLI, log into your project by running goldsky login and pasting your API key when prompted.
  4. Now that you are logged in, run goldsky to get started:
    goldsky
    
If you already have the Goldsky CLI installed, install the Turbo extension by running:
goldsky turbo
This will automatically install the Turbo extension. Verify the installation:
goldsky turbo list
Make sure to update the CLI to the latest version before running Turbo commands: curl https://goldsky.com | sh
For a complete reference of all Turbo CLI commands, see the CLI Reference guide.
Compose is currently in private beta and access is invite-only. The following commands will not work unless you have been explicitly whitelisted by the Goldsky team. Enterprise customers can contact their Account Manager for expedited early access.
If you already have the Goldsky CLI installed, install the Compose extension by running:
goldsky compose install
To update to the latest version:
goldsky compose update
For more details, see the Compose quickstart guide.

Subgraphs

NOT COMPATIBLE
Subgraphs are designed for EVM-compatible chains and are not available for Stellar. Stellar uses a different virtual machine architecture. For Stellar data indexing, consider using Mirror or Turbo pipelines, which support non-EVM chains.

Mirror

MAINNET SUPPORTED | TESTNET SUPPORTED
Mirror pipelines allow users to replicate data into their own infrastructure (any of the supported sinks) in real time, including both subgraphs and chain-level datasets (i.e., blocks, logs, transactions, traces). Pipelines can be deployed on Goldsky in three ways:
  • Using Goldsky Flow on the dashboard; see the walkthrough video here
  • Using the interactive CLI, by entering the command goldsky pipeline create <pipeline-name>. This kicks off a guided flow whose first step is choosing the dataset type (project subgraph, community subgraph, or chain-level dataset). You'll then be guided through adding simple filters to the data and choosing where to persist the results.
  • Using a definition file, by entering the command goldsky pipeline create <pipeline-name> --definition-path <path-to-file>. This makes it easier to set up complex pipelines involving multiple sources, multiple sinks, and more complex, SQL-based transformations; a minimal example is sketched below. For the full reference documentation, click here.
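As a rough sketch of what a definition file can look like, patterned on the advanced example later in this page and assuming a Stellar ledgers source with a ClickHouse sink (the secret and table names are placeholders):
example-pipeline.yaml
name: example-pipeline
apiVersion: 3
resource_size: s
sources:
  ledgers:
    type: dataset
    dataset_name: stellar.ledgers
    version: 3.1.0
    start_at: latest
transforms:
  passthrough:
    type: sql
    # pass the source through unchanged; ledger sequence is unique per ledger
    sql: SELECT * FROM ledgers
    primary_key: sequence
sinks:
  sink_1:
    type: clickhouse
    secret_name: <YOUR_SECRET_NAME>
    from: passthrough
    table: stellar_ledgers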

Working with Stellar datasets

Goldsky provides real-time (under 5 seconds) streaming of Stellar datasets, including all historical data, for both mainnet and testnet. Every Stellar dataset is derived from a main Ledgers dataset. Ledgers are the core building blocks of the Stellar blockchain and the highest level of abstraction in our datasets.
Need a refresher on Stellar data structures? Check out Stellar’s official documentation.
As an example, let’s look at Ledger 57255191. A ledger contains multiple transactions, and each transaction contains one or more operations; each transaction and operation, in turn, emits zero or more events. We’ve modeled these inner structures accordingly in the datasets:
  • transfers
  • transactions
  • diagnostic_events
  • events
These are the datasets you can see enabled on the Goldsky dashboard, and they are the preferred way of working with this data.

Advanced: Working with Ledgers dataset on the CLI

Working with ledgers requires a certain level of SQL knowledge, which is why it’s easier to work directly with the inner datasets via the dashboard whenever possible.
Advanced users that feel comfortable working with the Goldsky CLI have the option to use the canonical ledgers dataset itself. In essence, ledgers functions as a “mega-schema,” allowing you to query all ledger data holistically or customize exactly which subsets of data you want.
{
    "type": "record",
    "name": "Ledger",
    "namespace": "com.stellar.flatten",
    "fields": [
        {
            "name": "sequence",
            "type": "long"
        },
        {
            "name": "ledger_hash",
            "type": "string"
        },
        {
            "name": "previous_ledger_hash",
            "type": "string"
        },
        {
            "name": "closed_at",
            "type": {
                "type": "long",
                "logicalType": "timestamp-millis"
            }
        },
        {
            "name": "protocol_version",
            "type": "int"
        },
        {
            "name": "total_coins",
            "type": "long"
        },
        {
            "name": "fee_pool",
            "type": "long"
        },
        {
            "name": "base_fee",
            "type": "int"
        },
        {
            "name": "base_reserve",
            "type": "int"
        },
        {
            "name": "max_tx_set_size",
            "type": "int"
        },
        {
            "name": "successful_transaction_count",
            "type": "int"
        },
        {
            "name": "failed_transaction_count",
            "type": "int"
        },
        {
            "name": "soroban_fee_write_1kb",
            "type": ["null", "long"],
            "default": null
        },
        {
            "name": "node_id",
            "type": ["null", "string"],
            "default": null
        },
        {
            "name": "signature",
            "type": ["null", "string"],
            "default": null
        },
        {
            "name": "transactions",
            "type": {
                "type": "array",
                "items": {
                    "type": "record",
                    "name": "Transaction",
                    "fields": [
                        {"name": "transaction_hash", "type": "string"},
                        {"name": "account", "type": "string"},
                        {"name": "account_muxed", "type": ["null", "string"], "default": null},
                        {"name": "account_sequence", "type": "long"},
                        {"name": "max_fee", "type": "long"},
                        {"name": "fee_charged", "type": "long"},
                        {"name": "operation_count", "type": "int"},
                        {"name": "successful", "type": "boolean"}
                    ]
                }
            }
        }
    ]
}
You can view an excerpt of an example record from the ledgers dataset here. To use this dataset, choose stellar.ledgers as the dataset_name, specify the latest version 3.1.0, and set start_at to earliest or latest depending on whether you want historical data or data from the current tip of the chain:
sources:
  ledgers:
    type: dataset
    dataset_name: stellar.ledgers
    version: 3.1.0
    start_at: latest|earliest
From here you can add transformations to access its inner data structures, as explained in the later sections; a simple top-level example follows.
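For instance, a minimal sketch of a transform that selects only top-level ledger fields (field names taken from the schema excerpt above; the sink is omitted for brevity):
transforms:
  sql_1:
    type: sql
    sql: |-
      SELECT
        sequence,
        closed_at,
        successful_transaction_count + failed_transaction_count AS total_transaction_count
      FROM ledgers
    primary_key: sequence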

Deploying Stellar pipelines using Goldsky Flow

As explained in Create a Pipeline, Goldsky Flow is the visual editor from which you can build and deploy Mirror pipelines. In this example we create a pipeline using Flow to stream the transactions dataset into a ClickHouse instance. Start from the pipelines page in your project and click New Pipeline. This redirects you to an open canvas where you can build your pipeline by dragging and dropping components. From there, drop a Source card and select Stellar as the network, then choose Transactions from the list of specialized datasets. Add a Sink card, configure your target database, and deploy the pipeline.

Simple filters on Stellar datasets

You can use transform blocks or the “Advanced” filter functionality to apply filters to these datasets. Clicking the “view schema” button shows a preview of all of the columns you can filter on. For example, for the transfers dataset, some common filters include:
  • Pairs of asset_issuer + asset_code for a specific token
  • A list of addresses for the sender / recipient to track specific wallets
  • contract_id for specific contracts of interest
These filters can be written simply within the Advanced filter text:
asset_code = 'USDC' AND asset_issuer = 'GA5ZSEJYB37JRC5AVCIA5MOP4RHTM335X2KGX3IHOJAPP5RE34K4KZVN'
Or as full SQL in a dedicated transform block:
SELECT *
FROM source_1
WHERE
  asset_code = 'USDC'
  AND asset_issuer = 'GA5ZSEJYB37JRC5AVCIA5MOP4RHTM335X2KGX3IHOJAPP5RE34K4KZVN'
Other common fields to filter on for each dataset (an example follows the list):
  • Operations: type, source_account
  • Transactions: memo
  • Events and Diagnostic Events: contract_id, topics
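As a sketch, a transform narrowing the events dataset to a single contract might look like this (the contract_id value is a placeholder):
SELECT *
FROM source_1
WHERE contract_id = '<CONTRACT_ID>'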

Advanced: Using Stellar Ledgers on the CLI

Working with the ledgers dataset usually involves a combination of CROSS JOIN UNNEST queries to “explode” the nested arrays and access their inner structures:
stellar-ledger-transactions.yaml
name: stellar-ledger-transactions
resource_size: s
apiVersion: 3
sources:
  ledgers:
    type: dataset
    dataset_name: stellar.ledgers
    version: 3.1.0
    start_at: latest
transforms:
  sql_1:
    type: sql
    sql: |-
      SELECT
        ledgers.sequence AS ledger_sequence,
        transaction.transaction_hash as transaction_hash,
        transaction.account as account,
        transaction.transaction_result_code as transaction_result_code
      FROM ledgers
      CROSS JOIN UNNEST(transactions) AS transaction
    primary_key: transaction_hash
sinks:
  sink_1:
    type: clickhouse
    secret_name: <YOUR_SECRET_NAME>
    from: sql_1
    table: stellar_events
Deploy using the CLI: goldsky pipeline apply stellar-ledger-transactions.yaml --status ACTIVE

Working with Testnet data

When working with Stellar testnet data, it’s important to note that the testnet is reset frequently (typically about once every three months). Each reset effectively starts a new version of the testnet, but data continues to flow into the same datasets. We recommend deploying your pipelines with start_at: latest to ensure they always index from the most recent testnet version.
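A sketch of the corresponding source block (dataset name and version copied from the mainnet examples above as placeholders; the exact testnet dataset names are listed on your dashboard):
sources:
  ledgers:
    type: dataset
    dataset_name: stellar.ledgers
    version: 3.1.0
    start_at: latest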

Turbo

MAINNET SUPPORTED | TESTNET SUPPORTED
Turbo pipelines provide high-performance data streaming with sub-second latency. Deploy a pipeline to start streaming Stellar data to your preferred destination.

Quick deploy

Create a new Turbo pipeline using the CLI:
Create pipeline
goldsky turbo deploy my-stellar-pipeline --chain stellar

Configuration file

For more complex pipelines, use a YAML configuration file:
stellar-pipeline.yaml
name: my-stellar-pipeline
sources:
  - type: evm
    chain: stellar
    start_block: latest

transforms:
  - type: sql
    query: |
      SELECT * FROM blocks

sinks:
  - type: postgres
    secret_name: MY_POSTGRES_SECRET
Deploy with:
Deploy from config
goldsky turbo deploy -f stellar-pipeline.yaml

Available chain slugs

Mainnet: stellar | Testnet: stellar-testnet
For the full configuration reference and available transforms, see the Turbo documentation.
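For example, the quick-deploy command shown above, pointed at testnet instead (the pipeline name is arbitrary):
goldsky turbo deploy my-stellar-testnet-pipeline --chain stellar-testnet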

Edge

NOT COMPATIBLE
RPC Edge is designed for EVM-compatible chains and is not available for Stellar. Stellar uses a different virtual machine architecture. For Stellar data access, consider using Mirror or Turbo pipelines, which support non-EVM chains.

Compose

NOT YET AVAILABLE
Compose lets you build offchain-to-onchain systems that durably move data and execute logic between your application and the blockchain. Learn more about what you can build with Compose in the Compose documentation. Compose is not currently enabled for Stellar, but we'd love to change that. From the Stellar team? Book a call to explore enabling Compose for your ecosystem.
Building on Stellar? Contact us about dedicated infrastructure options.

Getting support

Can’t find what you’re looking for? Reach out to us at support@goldsky.com for help.