Subgraphs
Upgraded developer experience
In addition to the standard subgraph development experience, Goldsky offers numerous developer experience improvements. Specifically:
- Webhooks: Enable efficient, instant, push-based communication and eliminate the need for API polling or manual data retrieval. This enables realtime notifications, data synchronization, and numerous other use cases.
- Instant subgraphs: Index contract data without a single line of code, allowing developers to explore contracts with ease, and for non-technical users to work with blockchains more easily.
- Tags: A ground-up rethink of subgraph endpoint management that allows you to seamlessly update your front-end interfaces with zero downtime or stale data.
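Push-based webhooks are typically paired with payload verification so receivers can trust what arrives. Below is a minimal sketch of HMAC-based verification, assuming a shared secret and a hex signature header; the header name and signing scheme here are hypothetical, so consult the actual webhook documentation for the real details:

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare it to the
    signature the sender attached (constant-time compare avoids timing leaks)."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Example: the sender signs the raw request body with the shared secret...
secret = b"shared-secret"
body = b'{"event": "entity.updated", "block": 19000000}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

# ...and the receiver accepts only payloads whose signature matches.
assert verify_webhook(body, sig, secret)
assert not verify_webhook(body, sig, b"wrong-secret")
```

Verifying against the raw bytes (before any JSON parsing) matters, since re-serialized JSON may not byte-match what was signed.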
Improved reliability and performance
Goldsky proxies all data ingestion through an advanced load balancer with 20+ RPC endpoints and automatically prioritizes among them based on latency, time of day, historical responsiveness, and more. This means that Goldsky indexes data more quickly, and with greater uptime reliability, than the alternatives.
Custom chain support
On a dedicated indexing instance, Goldsky offers the ability to add custom
RPC endpoints for any EVM-compatible chain with no downtime. This allows you
to work with custom or private blockchains seamlessly.
Integrated with broader data stack
Goldsky helps integrate Subgraph data into your broader infrastructure via
Mirror and Turbo, providing a level of flexibility and control that is not possible via API-based solutions. This unlocks more granular data integration, enabling advanced use cases such as cross-chain subgraphs.
Mirror
Own your data
By replicating data into your own database, you can co-locate it alongside your other app data (product, customer, and any off-chain data). This eliminates the need for brittle scraping and polling scripts, and simplifies your front-end queries.
Parallelizable
Mirror workers are parallelizable, enabling unrivaled performance and throughput. This means that working with large-scale datasets (e.g., full chain replication) is the work of minutes and hours instead of days and weeks, allowing for faster iteration.
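The parallel-worker idea can be sketched generically: split a block range into independent chunks and hand them to a worker pool, so throughput scales with worker count. The `ingest_chunk` body here is a placeholder, not Mirror's actual worker API:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_ranges(start: int, end: int, size: int):
    """Split [start, end) into independent [lo, hi) chunks that workers
    can process in any order."""
    return [(lo, min(lo + size, end)) for lo in range(start, end, size)]

def ingest_chunk(bounds):
    # Placeholder for real work (fetch, decode, and write one block range).
    lo, hi = bounds
    return hi - lo  # e.g. number of blocks handled

def parallel_backfill(start: int, end: int, size: int, workers: int = 8) -> int:
    # Chunks are independent, so they can run concurrently without coordination.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(ingest_chunk, chunk_ranges(start, end, size)))

assert parallel_backfill(0, 1_000, 100) == 1_000  # every block covered exactly once
```

Because each chunk touches a disjoint range, no inter-worker locking is needed; the only serial step is summing (or merging) the results.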
Broad sink support
Mirror supports a broad set of sinks, from OLAP databases like ClickHouse to OLTP databases like Postgres or MySQL. For advanced users, queue systems like Kafka and S2 are also available.
Turbo
10x performance
A ground-up Rust rewrite uses ~10x fewer resources than Mirror to do the same job, keeping up with faster chains like Solana seamlessly. This dramatic efficiency improvement means lower costs and faster processing without compromising reliability.
Enhanced developer experience
Turbo accelerates development with multiple workflow improvements: write transformation logic in TypeScript/JavaScript, see data flowing through your pipeline in real-time with live inspect for faster debugging, and run pipelines as one-off batch jobs with a defined start and end for ad-hoc pulls of point-in-time data. Combined with faster startup times, iteration cycles are 10x faster.
Dynamic tables
Update filters on a running pipeline instantly: no restarts, no re-syncs. Track new wallets or addresses on the fly, enabling real-time flexibility and responsiveness to changing data requirements without disrupting your pipeline.
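Updating filters on a live pipeline amounts to consulting a mutable, concurrency-safe filter on every record instead of baking the filter into the pipeline definition. A minimal sketch, with hypothetical names (this is not Turbo's actual API):

```python
import threading

class DynamicFilter:
    """A filter whose tracked address set can be swapped while the
    pipeline keeps running: no restart, no re-sync."""
    def __init__(self, addresses):
        self._lock = threading.Lock()
        self._addresses = set(addresses)

    def update(self, addresses):
        # Atomically replace the tracked set mid-stream.
        with self._lock:
            self._addresses = set(addresses)

    def matches(self, record) -> bool:
        with self._lock:
            return record["address"] in self._addresses

flt = DynamicFilter({"0xaaa"})
stream = [{"address": "0xaaa"}, {"address": "0xbbb"}]
assert [r for r in stream if flt.matches(r)] == [{"address": "0xaaa"}]

flt.update({"0xaaa", "0xbbb"})  # start tracking a new wallet on the fly
assert [r for r in stream if flt.matches(r)] == stream
```

The key property is that `update` changes only the filter state, so records already flowing through the pipeline are unaffected and nothing needs to restart.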
Solana-native support
Access full historical Solana data from genesis (not just mid-2024 as in Mirror v1), with built-in IDL decoding for seamless integration with Solana’s unique architecture and data structures.
RPC Edge
Fastest responses from 8+ edge regions
Multi-region elastic cloud infrastructure serves requests from the closest location. A tip-of-the-chain CDN stores and serves recent blockchain data faster, while hedging mechanisms send parallel requests to multiple nodes for faster response times.
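The hedging mechanism mentioned above can be sketched as a race: fire the same call at several nodes and take whichever answers first. This is a simplified illustration with simulated endpoints, not RPC Edge's implementation:

```python
import asyncio

async def hedged_request(endpoints, make_call):
    """Fire the same call at every endpoint and return the first result;
    cancel the losers so they stop consuming resources."""
    tasks = [asyncio.create_task(make_call(ep)) for ep in endpoints]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

# Simulated endpoints with different latencies stand in for real RPC nodes.
async def fake_call(endpoint):
    latency = {"fast": 0.01, "slow": 0.2}[endpoint]
    await asyncio.sleep(latency)
    return endpoint

result = asyncio.run(hedged_request(["slow", "fast"], fake_call))
assert result == "fast"  # the quickest node wins the race
```

Production hedging usually delays the backup request slightly (rather than firing everything at once) to cut tail latency without doubling upstream load; that refinement is omitted here for brevity.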
Maximum resiliency and failover
Automatic failover ensures uptime even during provider outages. Internal scoring mechanisms prioritize the most reliable nodes historically, and multiplexing auto-merges identical requests to reduce redundant RPC calls.
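Multiplexing identical requests means concurrent callers asking for the same (method, params) share a single in-flight upstream call. A minimal sketch under that assumption (the class and its names are illustrative, not RPC Edge's code):

```python
import asyncio

class Multiplexer:
    """Merge identical in-flight requests: concurrent callers asking for
    the same (method, params) key share one upstream call."""
    def __init__(self, upstream):
        self._upstream = upstream
        self._inflight = {}
        self.upstream_calls = 0

    async def request(self, method, params):
        key = (method, params)
        if key not in self._inflight:
            self.upstream_calls += 1
            self._inflight[key] = asyncio.ensure_future(self._upstream(method, params))
        try:
            # shield() lets one caller cancel without killing the shared call.
            return await asyncio.shield(self._inflight[key])
        finally:
            self._inflight.pop(key, None)

async def upstream(method, params):
    await asyncio.sleep(0.01)  # simulated network round-trip
    return {"method": method, "params": params}

async def demo():
    mux = Multiplexer(upstream)
    results = await asyncio.gather(  # three identical concurrent requests...
        mux.request("eth_blockNumber", ()),
        mux.request("eth_blockNumber", ()),
        mux.request("eth_blockNumber", ()),
    )
    return mux.upstream_calls, results

calls, results = asyncio.run(demo())
assert calls == 1         # ...cost a single upstream call
assert len(results) == 3  # yet every caller gets an answer
```

Coalescing is only safe for idempotent reads; requests that mutate state (e.g. transaction submission) must bypass it.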
Automated data quality checks
Cross-validate responses from multiple RPC nodes for accuracy. Integrity mechanisms track block heights across all providers and enforce consensus checks to prevent stale, incorrect, or partial data—no more missing eth_getLogs results in your indexer.
Optimized for indexing
Auto-split large eth_getLogs requests to avoid provider limits. Historical data requests are automatically routed to archive nodes, and block range enforcement ensures complete data without gaps.
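Auto-splitting boils down to breaking one large inclusive block range into sub-ranges that stay under a provider's span limit while covering the original range with no gaps. A sketch, with the 2,000-block limit chosen only as an example:

```python
def split_log_request(from_block: int, to_block: int, max_span: int = 2_000):
    """Break one large eth_getLogs range (inclusive bounds, as in the
    JSON-RPC spec) into provider-friendly sub-ranges with no gaps."""
    ranges = []
    start = from_block
    while start <= to_block:
        end = min(start + max_span - 1, to_block)
        ranges.append((start, end))
        start = end + 1
    return ranges

parts = split_log_request(1_000_000, 1_004_999, max_span=2_000)
assert parts == [(1_000_000, 1_001_999), (1_002_000, 1_003_999), (1_004_000, 1_004_999)]
```

Each sub-range ends exactly one block before the next begins, which is the "block range enforcement" property: merging the per-range results reconstructs the full log set without duplicates or holes.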
Boosted for frontend
Sub-50ms latency from edge locations, request deduplication so multiple users share a single upstream call, graceful degradation with automatic retries and failover, and real-time data via tip-of-chain caching.
Simple pricing
$5 per million requests with all methods priced equally—no surprise charges for eth_getLogs or trace methods. Volume discounts available for usage over 100M requests/month.
Compose
Verifiable
Run code in Trusted Execution Environments (TEEs) to verify operations (rather than rely on slow and inefficient decentralized consensus). This approach delivers the security guarantees you need without sacrificing performance.
Durable execution
Workflows complete even through failures. Retries, recovery, and state persistence are built in, ensuring your operations finish successfully without manual intervention or complex error handling logic.
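Durable execution rests on a simple idea: persist each completed step's result, so a re-run after a crash replays finished steps from storage instead of re-executing them. This toy sketch (a dict standing in for a real database, names hypothetical) shows the principle, vastly simplified relative to a real workflow engine:

```python
class DurableWorkflow:
    """Persist each completed step's result; on re-run after a failure,
    finished steps are replayed from storage instead of re-executed."""
    def __init__(self, store=None):
        self.store = store if store is not None else {}  # stands in for a database

    def step(self, name, fn):
        if name in self.store:      # already completed in an earlier run
            return self.store[name]
        result = fn()               # execute, then persist before moving on
        self.store[name] = result
        return result

calls = []
def fetch():
    calls.append("fetch")
    return 42

wf = DurableWorkflow()
wf.step("fetch", fetch)

# Simulate a crash and a restart that reuses the persisted state:
wf2 = DurableWorkflow(store=wf.store)
assert wf2.step("fetch", fetch) == 42
assert calls == ["fetch"]  # the step ran once, not twice
```

This replay-from-storage pattern is why steps in durable workflows should be deterministic or idempotent: a replayed step must be safe to skip.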
Fully traceable
Every function call that touches external systems is logged with inputs and outputs. Step through executions in the CLI or UI to debug issues quickly and understand exactly what happened at every stage of your workflow.
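The shape of this kind of tracing is easy to illustrate: wrap each externally-facing function so its inputs and outputs are appended to a log that can be inspected afterwards. A minimal sketch (the decorator and log are illustrative, not Compose's tracing API):

```python
import functools

trace_log = []

def traced(fn):
    """Record every call's inputs and outputs so an execution can be
    stepped through after the fact."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        trace_log.append({"fn": fn.__name__, "args": args,
                          "kwargs": kwargs, "result": result})
        return result
    return wrapper

@traced
def fetch_price(symbol):
    return {"ETH": 3000}[symbol]  # stands in for an external API call

fetch_price("ETH")
assert trace_log == [{"fn": "fetch_price", "args": ("ETH",),
                      "kwargs": {}, "result": 3000}]
```

A production tracer would also capture exceptions, timestamps, and durations, and write to durable storage rather than an in-memory list.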
Flexible data orchestration
Build custom data feeds tailored to your exact needs: custom data sources, scopes, refresh logic, and content. You get exactly what you need rather than adapt to someone else’s design decisions.
Platform
Easy to work with
With no token, your team no longer needs to worry about fluctuating service costs or operate a token trading desk to pay your service providers. In addition, Goldsky doesn’t charge any per-query fees, making our costs and their rate of change highly predictable.
Enterprise-level support
Goldsky offers 24/7 on-call support and has a team of engineering staff available to assist with debugging, issue resolution, and proactive management.