With Mirror pipelines, you can access indexed on-chain data. Define a dataset as a source and pipe it into any sink we support.
- Mirror specific logs and traces from a set of contracts into a Postgres database to build an API for your protocol
- ETL data into a data warehouse to run analytics
- Push the full blockchain into Kafka or S3 to build a data lake for ML
| Chain | Blocks | Raw transactions | Raw logs | Decoded logs | Raw traces | Decoded traces | Receipts |
|---|---|---|---|---|---|---|---|
| Scroll Alpha Testnet | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
* The Arweave dataset includes bundled/L2 data.
Additional chains, including roll-ups, can be indexed on demand. Contact us at email@example.com to learn more.
The schema for each of these datasets can be found here.
Please ensure you have adequate storage before syncing complete datasets.
Raw and decoded logs can be pre-filtered by contract address so that a pipeline includes only the data relevant to those contracts. If you don't filter, the datasets will be sizeable and will require sufficient storage and budget.
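As a rough sketch of what address pre-filtering does, the snippet below keeps only logs emitted by a tracked set of contracts. The log shape (dicts with an `address` field) and the `filter_logs` helper are illustrative assumptions, not Mirror's actual pipeline API; addresses are compared case-insensitively because checksummed and lowercase forms refer to the same contract.

```python
# Hypothetical sketch of contract-address pre-filtering; the log record
# shape and helper name are assumptions for illustration only.
TRACKED_CONTRACTS = {
    "0x1f98431c8ad98523631ae4a59f267346ea31f984",  # example address, lowercase
}

def filter_logs(logs):
    """Keep only logs emitted by the contracts we care about."""
    return [log for log in logs if log["address"].lower() in TRACKED_CONTRACTS]

logs = [
    {"address": "0x1F98431c8aD98523631AE4a59f267346ea31F984"},  # tracked
    {"address": "0xdeadbeef00000000000000000000000000000000"},  # not tracked
]
print(len(filter_logs(logs)))  # 1
```

In a real pipeline this filtering happens on the Mirror side before data reaches your sink, which is what keeps filtered datasets small.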
When creating a pipeline, you can choose to sync all historical data or to start syncing from the moment the pipeline is created.
When we index a chain, we decode logs and traces to the best of our ability using a large database of ABIs. We also use a set of heuristics to decode logs and traces that don't have an ABI.
For Ethereum, over 97% of event logs are decoded. However, if you're developing a new contract, it's likely that we won't have the ABI for it. In this case, you can provide us with the ABI and we'll add it to our database.
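To make "decoding" concrete, here is a minimal sketch (not Mirror's actual decoder) of turning a raw ERC-20 `Transfer` log into named fields. The first topic of a log is the keccak256 hash of the event signature, which is how a raw log is matched to an ABI entry; the `decode_transfer` helper and the sample log values are illustrative assumptions.

```python
# Illustrative sketch of ABI-based log decoding, using the well-known
# ERC-20 Transfer event. keccak256("Transfer(address,address,uint256)"):
TRANSFER_TOPIC0 = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

def decode_transfer(log):
    """Return decoded fields if the log matches the Transfer signature, else None."""
    if log["topics"][0] != TRANSFER_TOPIC0:
        return None
    return {
        "event": "Transfer",
        "from": "0x" + log["topics"][1][-40:],  # indexed params live in topics
        "to": "0x" + log["topics"][2][-40:],
        "value": int(log["data"], 16),          # non-indexed params live in data
    }

raw_log = {  # sample values for illustration
    "topics": [
        TRANSFER_TOPIC0,
        "0x000000000000000000000000a0b86991c6218b36c1d19d4a2e9eb0ce3606eb48",
        "0x000000000000000000000000ab5801a7d398351b8be11c439e05c5b3259aec9b",
    ],
    "data": "0x0de0b6b3a7640000",  # 10**18 in hex
}
decoded = decode_transfer(raw_log)
print(decoded["value"])  # 1000000000000000000
```

Logs whose `topic0` matches no known signature are what the heuristics mentioned above attempt to handle.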
The ability to decode your contracts using Mirror pipelines is in testing - email us if you’d like to try it out.