What you’ll need
- A Goldsky account and the CLI installed. To install Goldsky's CLI and log in:
  - Install the Goldsky CLI. There is a shell installer for macOS/Linux; Windows users need Node.js and npm installed first (download them from nodejs.org if not already installed). See the sketch after this list.
  - Go to your Project Settings page and create an API key.
  - Back in your terminal, log into your Project by running the command `goldsky login` and pasting your API key.
  - Now that you are logged in, run `goldsky` to get started.
- A basic understanding of the Mirror product
- A destination sink to write your data to. In this example, we will use the PostgreSQL sink.
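A minimal sketch of that setup flow in a terminal. The installer URL and npm package name are assumptions based on Goldsky's public install instructions and may have changed; the `goldsky` commands themselves are the ones used in this guide:

```bash
# macOS/Linux: shell installer (assumed URL; check Goldsky's docs)
curl https://goldsky.com | sh

# Windows: install via npm (assumed package name; requires Node.js and npm)
npm install -g @goldskycom/cli

# Log in with the API key from your Project Settings page
goldsky login

# Confirm the CLI works and see available commands
goldsky
```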
Preface
To get decoded contract data on EVM chains in a Mirror pipeline, there are three options:
- Decode data with a subgraph, then use a subgraph entity source.
- Use the `decoded_logs` and `decoded_traces` direct indexing datasets. These are pre-decoded datasets, with coverage for common contracts, events, and functions.
- Use the `raw_logs` dataset and decode inside a pipeline transform.
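This guide takes the third approach: reading `raw_logs` and decoding inside a transform. For contrast, a pipeline built on the second option only needs to point its source at a pre-decoded dataset. A hedged sketch of such a source block, assuming the current Mirror YAML layout and the `base.decoded_logs` dataset name (both may differ in your config version):

```yaml
# Hypothetical source using the pre-decoded dataset (option 2).
# Field names and dataset name are assumptions; check the Mirror docs.
sources:
  base_decoded_logs:
    type: dataset
    dataset_name: base.decoded_logs
    version: 1.0.0
```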
Pipeline definition
In the `_gs_fetch_abi` function call below, we pull the ABI from a gist. You can also pull it from Basescan directly with an API key: `_gs_fetch_abi('<basescan-link>', 'etherscan')`.
The full pipeline definition, `event-decoding-pipeline.yaml`, is shown below.
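The file body is not reproduced in this copy of the guide, so the following is a hedged reconstruction assembled from the transforms described below. The top-level layout, the field names (`sources`, `transforms`, `sinks`, `primary_key`, `secret_name`), the `base.raw_logs` dataset name, and the contract-address filter are all assumptions about the current Mirror config format; treat it as a sketch rather than the canonical file:

```yaml
name: friendtech-event-decoding

sources:
  base_raw_logs:
    type: dataset
    dataset_name: base.raw_logs   # assumed name for the Base raw_logs dataset
    version: 1.0.0

transforms:
  friendtech_decoded:
    primary_key: id
    sql: |
      SELECT
        id,
        block_number,
        transaction_hash,
        _gs_log_decode(
          _gs_fetch_abi('<abi-gist-or-basescan-link>', 'etherscan'),
          topics,
          `data`
        ) AS decoded
      FROM base_raw_logs
      WHERE address = LOWER('<friendtech-contract-address>')  -- hypothetical filter

  friendtech_clean:
    primary_key: id
    sql: |
      SELECT
        id,
        block_number,
        transaction_hash,
        decoded.event_params,
        decoded.event_signature
      FROM friendtech_decoded
      WHERE decoded IS NOT NULL

sinks:
  postgres_friendtech:
    type: postgres
    from: friendtech_clean
    schema: decoded_events
    table: friendtech
    secret_name: <YOUR_POSTGRES_SECRET>
```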
Before deploying, there are two things you need to update:
- Your `secret_name` (`secretName` in v2 configs). If you already created a secret, you can find it via the CLI command `goldsky secret list`.
- The schema and table you want the data written to. By default, it writes to `decoded_events.friendtech`.
Decoding transforms
Let’s start analyzing the first transform.

Transform: friendtech_decoded
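For reference, here is that transform from the sketch above (again a hedged reconstruction; the source name and address filter are assumptions):

```sql
-- Hedged sketch of the friendtech_decoded transform (Flink SQL).
SELECT
  id,
  block_number,
  transaction_hash,
  _gs_log_decode(
    _gs_fetch_abi('<abi-gist-or-basescan-link>', 'etherscan'),  -- fetch the ABI once at startup
    topics,   -- comma-separated topics string from the raw log
    `data`    -- backticked because data is a reserved word in Flink SQL
  ) AS decoded
FROM base_raw_logs
WHERE address = LOWER('<friendtech-contract-address>')  -- hypothetical filter
```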
This transform selects from the raw logs dataset the columns that need no decoding, such as `id`, `block_number` and `transaction_hash`. Since its columns `topics` and `data` are encoded, we need to make use of `_gs_log_decode` to decode them. This function takes the following parameters:
- The contract ABI: rather than specifying the ABI directly in the SQL query, which would make the code considerably less legible, we use the `_gs_fetch_abi` function to fetch the ABI from the Basescan API. You could also fetch it from an external public repository, like a GitHub Gist, if you preferred.
- `topics`: as a second argument to the decode function, we pass in the name of the column in our dataset that contains the topics as a comma-separated string.
- `data`: as a third argument to the decode function, we pass in the name of the column in our dataset that contains the encoded data.

Some columns are surrounded by backticks because they are reserved words in Flink SQL. Common columns that need backticks are `data`, `output`, and `value`; a full list can be found here.
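As a toy illustration (assuming a table that happens to have these column names), the reserved words must be quoted or Flink SQL will reject the statement:

```sql
-- Reserved words used as column names must be backtick-quoted in Flink SQL.
SELECT id, `data`, `output`, `value`
FROM some_table
```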
The result of the decoding is stored in a new column `decoded`, which is a nested ROW with the properties `event_params::TEXT[]` and `event_signature::TEXT`. We create a second transform that reads from the resulting dataset of this first SELECT query to access the decoded data:
Transform: friendtech_clean
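Here is that second transform from the sketch above (a hedged reconstruction, consistent with the description below):

```sql
-- Hedged sketch of the friendtech_clean transform (Flink SQL).
-- Flattens the decoded ROW and drops rows that failed to decode.
SELECT
  id,
  block_number,
  transaction_hash,
  decoded.event_params,
  decoded.event_signature
FROM friendtech_decoded
WHERE decoded IS NOT NULL
```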
Note that we filter on `decoded IS NOT NULL` as a safety measure, to discard rows affected by potential issues in the decoding phase.
Deploying the pipeline
As a last step, to deploy this pipeline and start sinking decoded data into your database, simply execute: `goldsky pipeline apply <yaml_file>`
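For example, using the definition file from this guide:

```bash
goldsky pipeline apply event-decoding-pipeline.yaml
```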
Conclusion
In this guide we have explored an example implementation of how we can use Mirror Decoding Functions to decode raw contract events and stream them into our PostgreSQL database. This same methodology can be applied to any contract of interest on any chain with `raw_logs` and `raw_traces` Direct Indexing datasets available (see list).
Goldsky also provides alternative decoding methods:
- Decoded datasets: `decoded_logs` and `decoded_traces`
- Subgraph entity sources for your pipelines