What you’ll need
- A high-level understanding of Uniswap’s contract design, namely the factory → pool relationship.
- A destination to write your data to; for this use case specifically we recommend a Postgres sink.
Walkthrough
1. Write logic to collect Uniswap pool contract addresses
To find all Uniswap swap events across pools, we first need to know all Uniswap pool contract addresses. Here, we’re using Base as an example; the relevant UniswapV3Factory deployment address can be found in Uniswap’s docs. We can use Mirror to watch the factory address for PoolCreated() events; the SQL logic to do this using Goldsky’s decoded event logs schema is sketched below.
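A minimal sketch of that query, assuming a decoded-logs dataset named base.decoded_logs with address, event_signature, and event_params columns — verify the exact names and the PoolCreated parameter ordering against the decoded logs schema reference:

```sql
-- Watch the UniswapV3Factory on Base for PoolCreated() events and
-- extract the address of each newly created pool.
-- 0x33128a8f… is the Base UniswapV3Factory address from Uniswap’s docs.
-- event_params[5] (the `pool` argument of PoolCreated) is an assumption —
-- confirm the parameter ordering against the decoded logs schema.
SELECT
    lower(event_params[5]) AS pool_address
FROM base.decoded_logs
WHERE lower(address) = lower('0x33128a8fC17869897dcE68Ed026d694621f6FDfD')
  AND event_signature LIKE 'PoolCreated%'
```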
2. Write logic to collect trades
Next, we need to define the logic to filter the decoded logs database for Uniswap Swap() events. We can use the query written above as a subquery, so that we keep only events emitted by Uniswap pools and filter out other contracts that emit similar Swap() events from our stream. We also need to add STATE_TTL hints to prevent all the logs from being kept in state.
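A sketch of that transform, reusing the pool query as a subquery and assuming Mirror accepts Flink-style STATE_TTL query hints (the TTL values here are illustrative, not tuned recommendations):

```sql
-- Keep only Swap() events emitted by known Uniswap pools. The STATE_TTL
-- hint bounds how long each side of the join is retained in state, so the
-- pipeline doesn’t accumulate every log it has ever seen.
SELECT /*+ STATE_TTL('logs' = '1h', 'pools' = '30d') */
    logs.*
FROM base.decoded_logs AS logs
INNER JOIN (
    -- Pool-collection logic from the previous step.
    SELECT lower(event_params[5]) AS pool_address
    FROM base.decoded_logs
    WHERE lower(address) = lower('0x33128a8fC17869897dcE68Ed026d694621f6FDfD')
      AND event_signature LIKE 'PoolCreated%'
) AS pools
    ON lower(logs.address) = pools.pool_address
WHERE logs.event_signature LIKE 'Swap%'
```

Note the asymmetric TTLs: a pool created long ago still emits swaps today, so the pools side must live in state far longer, while an unmatched log only needs to survive briefly.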
3. Write full pipeline configuration
Now that we have the core logic for our streaming transformation written, we can combine it into a Mirror pipeline configuration file. As a refresher, a pipeline configuration is organized into sources, transforms, and sinks. We’ll one-line the SQL from the previous step and write out a full configuration file, sketched below.
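As an illustration, a rough sketch of what the full file might look like — the field names (referenceName, sourceStreamName, secretName, dataset versions) are assumptions based on the general sources/transforms/sinks shape, so check them against the Mirror pipeline reference for the config version you’re using:

```yaml
# Illustrative pipeline sketch — field names and versions are assumptions.
sources:
  - referenceName: base.decoded_logs
    type: dataset
    version: 1.0.0
transforms:
  - referenceName: base_uniswap_swaps
    type: sql
    primaryKey: id
    # One-lined SQL from the previous step.
    sql: >-
      SELECT /*+ STATE_TTL('logs' = '1h', 'pools' = '30d') */ logs.* FROM base.decoded_logs AS logs INNER JOIN (SELECT lower(event_params[5]) AS pool_address FROM base.decoded_logs WHERE lower(address) = lower('0x33128a8fC17869897dcE68Ed026d694621f6FDfD') AND event_signature LIKE 'PoolCreated%') AS pools ON lower(logs.address) = pools.pool_address WHERE logs.event_signature LIKE 'Swap%'
sinks:
  - referenceName: postgres_swaps
    type: postgres
    sourceStreamName: base_uniswap_swaps
    table: uniswap_swaps
    schema: public
    secretName: MY_POSTGRES_SECRET
```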
4. Deploy pipeline
Once we have our pipeline.yaml configuration file, we can deploy from the CLI with a single command, sketched below.
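The exact subcommand and flags below are assumptions — confirm them against the Goldsky CLI reference before running:

```shell
# Assumed CLI shape; verify against the Goldsky CLI reference.
goldsky pipeline create base-uniswap-swaps --definition-path pipeline.yaml
```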
After a minute or so, the Mirror pipeline will start walking through the decoded logs. You can monitor the pipeline’s progress by comparing the max(block) in the database against a block explorer. You can speed up the backfill by upscaling the resource size (and then scaling back down to an S worker at edge); an M pipeline caught up to edge (~600K swaps as of October 2023) in approximately 45 minutes.
5. Create stream for other chains (or one multi-chain stream)
Once you’ve iterated on a single-chain use case (with the exact schema, sink indexes, etc. that you need), you can update your pipeline configuration to write multiple chains’ worth of Uniswap data in one stream. Simply add additional sources/transforms/sinks in your config, replacing the referenceName and deployment addresses for each chain, as sketched below.
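For example, extending the config above with an Ethereum mainnet source and transform might look like the fragment below (0x1F98431c8aD98523631AE4a59f267346ea31F984 is the mainnet UniswapV3Factory address from Uniswap’s docs; the field names follow the same assumptions as before):

```yaml
# Added alongside the Base entries; the sink can be shared if both
# transforms write to the same table.
sources:
  - referenceName: ethereum.decoded_logs
    type: dataset
    version: 1.0.0
transforms:
  - referenceName: ethereum_uniswap_swaps
    type: sql
    primaryKey: id
    # Same logic as the Base transform, with the dataset name and the
    # mainnet factory address swapped in.
    sql: >-
      SELECT /*+ STATE_TTL('logs' = '1h', 'pools' = '30d') */ logs.* FROM ethereum.decoded_logs AS logs INNER JOIN (SELECT lower(event_params[5]) AS pool_address FROM ethereum.decoded_logs WHERE lower(address) = lower('0x1F98431c8aD98523631AE4a59f267346ea31F984') AND event_signature LIKE 'PoolCreated%') AS pools ON lower(logs.address) = pools.pool_address WHERE logs.event_signature LIKE 'Swap%'
```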