Pricing
Understand how metered billing works on Goldsky
Our prices are quoted on a monthly basis for simpler presentation, but metered and billed on an hourly basis. This has a few key implications.
- To account for the varying number of days in each month of the year, we conservatively estimate that each month has 730 hours. This means that the estimated monthly pricing shown is higher than what you would typically pay for the specified usage in most months.
- All estimates of the number of subgraphs or Mirror pipelines assume "always-on" capacity. In practice, you can run double the number of subgraph workers or pipeline workers for half the time and pay the same price. The same holds for the "entities stored" metric in subgraphs.
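The always-on equivalence above can be sketched in a few lines (an illustrative calculation only; the 730-hour figure comes from this page):

```python
HOURS_PER_MONTH = 730  # Goldsky's conservative estimate of hours per month

# 3 subgraph workers running always-on for a full month...
always_on_hours = 3 * HOURS_PER_MONTH      # 2,190 worker-hours

# ...meter exactly the same as 6 workers running for half the month.
burst_hours = 6 * (HOURS_PER_MONTH // 2)   # 2,190 worker-hours
```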
Subgraphs
We track usage based on two metrics: (1) the number of active subgraphs, and (2) the amount of data stored across all subgraphs in your project.
Metering
Active Subgraphs
The number of active subgraph workers, tracked hourly. If you pause or delete a subgraph, it is no longer billed.
Examples:
- If you have 10 active subgraphs, you use 10 subgraph worker-hours per hour. At 730 hours per month, you incur 7,300 subgraph worker-hours.
- If you begin a period with 10 active subgraphs and delete all of them halfway through the period, you are billed the equivalent of 5 subgraphs for that period.
Subgraph Entities Stored
The number of entities stored across all subgraphs in your project, tracked hourly. If you delete a subgraph, stored entities are no longer tracked. All entities in a project count toward the project’s usage on a cumulative basis.
Examples:
- If you have 3 active subgraphs that cumulatively store 30,000 entities, you use 30,000 subgraph entity storage-hours per hour. At 730 hours per month, you incur 30,000 * 730 = 21,900,000 subgraph entity storage-hours in that month.
- If you begin a period with 3 active subgraphs, each with 10,000 entities, and you delete 2 of them after 10 days, you use 30,000 subgraph entity storage-hours per hour for the first 10 days, then 10,000 per hour thereafter.
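The second example works out as follows (a sketch assuming a 30-day billing period, which is not stated on this page):

```python
HOURS_PER_DAY = 24

# Days 1-10: 3 subgraphs x 10,000 entities = 30,000 entities stored
first_window = 30_000 * 10 * HOURS_PER_DAY   # 7,200,000 entity storage-hours

# Days 11-30: only 1 subgraph (10,000 entities) remains
second_window = 10_000 * 20 * HOURS_PER_DAY  # 4,800,000 entity storage-hours

total = first_window + second_window         # 12,000,000 entity storage-hours
```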
Starter Plan
Active Subgraphs
Up to 3 active subgraphs per month.
Subgraph Storage
Up to 100,000 entities stored per month.
You incur usage for each hour that each subgraph in your project is deployed and active. If you have 2 subgraphs deployed and active for 2 hours each, you will accumulate 4 hours of usage.
When you exceed Starter Plan usage limits, subgraph indexing will be paused, but subgraphs will remain queryable.
Scale Plan
| Active Subgraphs (subgraph worker-hours) | Price |
| --- | --- |
| First 2,250 worker-hours | Free (i.e., 3 always-on subgraphs) |
| Above 2,250 worker-hours | $0.05/hour (i.e., ~$36.50/month per additional subgraph) |
| Subgraph Storage (subgraph entity storage-hours) | Price |
| --- | --- |
| First 100k entities stored (i.e., up to 75,000,000 storage-hours) | Free |
| Up to 10M entities stored (i.e., up to 7.5B storage-hours) | ~$4.00/month per 100k entities stored (i.e., $0.0053 per 100k entities stored/hour) |
| Above 10M entities stored (i.e., >7.5B storage-hours) | ~$1.05/month per 100k entities stored (i.e., $0.0014 per 100k entities stored/hour) |
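Putting the two tables together, a hypothetical Scale Plan estimator for subgraph costs might look like the sketch below. It uses the hourly rates quoted above; the function name and rounding behavior are illustrative, not an official API, and your invoice is authoritative.

```python
def subgraph_monthly_cost(active_subgraphs: int, entities: int, hours: int = 730) -> float:
    """Rough Scale Plan subgraph estimate from the rate tables above (illustrative only)."""
    # Worker-hours: first 2,250 free, then $0.05 per worker-hour.
    worker_hours = active_subgraphs * hours
    worker_cost = max(worker_hours - 2_250, 0) * 0.05

    # Storage is quoted per 100k entities stored per hour:
    # first 100k entities free, up to 10M at $0.0053, above 10M at $0.0014.
    mid = min(max(entities - 100_000, 0), 10_000_000 - 100_000)
    high = max(entities - 10_000_000, 0)
    storage_cost = (mid / 100_000) * 0.0053 * hours + (high / 100_000) * 0.0014 * hours

    return worker_cost + storage_cost
```

For example, 4 always-on subgraphs storing 1M entities would come to roughly $33.50 in worker-hour overage plus about $34.82 in storage, or around $68 for the month.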
Mirror
Metering
Active Pipeline Workers
The number of active workers, billed hourly. Pipeline resources can have multiple parallel workers, and each worker incurs usage separately.
| Resource Size | Workers |
| --- | --- |
| small (default) | 1 |
| medium | 4 |
| large | 10 |
| x-large | 20 |
| xx-large | 40 |
If you have one small pipeline and one large pipeline, each deployed for 2 hours, you will accumulate 1*2*1 + 1*2*10 = 2 + 20 = 22 hours of usage.
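That worker-hour calculation can be sketched as a small helper (the names here are illustrative, not an official API):

```python
# Worker multipliers from the resource-size table above.
WORKERS = {"small": 1, "medium": 4, "large": 10, "x-large": 20, "xx-large": 40}

def pipeline_worker_hours(deployments: list[tuple[str, float]]) -> float:
    """Total worker-hours for a list of (resource_size, hours_deployed) pairs."""
    return sum(WORKERS[size] * hours for size, hours in deployments)

# One small and one large pipeline, each deployed for 2 hours:
usage = pipeline_worker_hours([("small", 2), ("large", 2)])  # 2 + 20 = 22
```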
Note: Pipelines that use a single subgraph as a source, and webhooks or GraphQL APIs as sink(s), are not metered as pipelines. However, you still accumulate hourly subgraph usage.
Examples:
- If you have 1 small pipeline, you use 1 pipeline worker-hour every hour. At 730 hours in the average month, you would incur 730 pipeline worker-hours for that month.
- If you start with 10 small pipelines in a billing period and delete all of them halfway through the billing period, you are charged the equivalent of 5 pipeline workers for the full billing period.
- If you have 2 large pipelines, you will be using 20 pipeline worker-hours every hour, equating to 14,600 pipeline worker-hours if you run them the entire month.
Pipeline Event Writes
The number of records written by pipelines in your project. For example, for a PostgreSQL sink, every row created, updated, or deleted counts as a write. For a Kafka sink, every message counts as a write.
Examples:
- If you have a pipeline that writes 20,000 records per day for 10 days, and then 20 records per day for the next 10 days, you will incur 20,000 * 10 + 20 * 10 = 200,200 pipeline event writes.
- If two pipelines each write 1 million events in one month (2 million total), the first 1 million events are free, and the remainder is billed at the overage rates shown under the Scale Plan below.
Starter Plan
Active Pipeline Workers
Each billing cycle, you can run 1 small pipeline free of charge (~730 pipeline worker-hours).
Pipeline Event Writes
You can write up to 1 million events to a sink, cumulatively across all pipelines, per billing cycle.
When you exceed Starter Plan limits, pipelines will be paused, but data already written to your sinks remains available.
Scale Plan
You will incur usage for each hour that each pipeline in your project is deployed and active.
Note: The pipeline resource size maps to the underlying VM size and acts as a multiplier on hourly usage.
| Active Pipelines (pipeline worker-hours) | Price |
| --- | --- |
| First 750 worker-hours | Free (i.e., 1 always-on pipeline worker/month) |
| 751+ worker-hours | $0.10/hour (i.e., $73.00/month per worker) |
| Pipeline Throughput (pipeline events written) | Price |
| --- | --- |
| First 1M events written | Free |
| Up to 100M events written | $1.00 per 100,000 events |
| Above 100M events written | $0.10 per 100,000 events |
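A hypothetical estimator for the event-write tiers above (a sketch of the Scale Plan table rates; actual invoicing may round or aggregate differently):

```python
def event_write_cost(events: int) -> float:
    """Tiered cost per the table above: first 1M events free,
    then $1.00 per 100k up to 100M, then $0.10 per 100k beyond that."""
    mid = min(max(events - 1_000_000, 0), 100_000_000 - 1_000_000)
    high = max(events - 100_000_000, 0)
    return (mid / 100_000) * 1.00 + (high / 100_000) * 0.10

event_write_cost(500_000)    # 0.0  -- entirely within the free tier
event_write_cost(2_000_000)  # 10.0 -- 1M free, then 1M at $1.00 per 100k
```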