Key Features
- Real-time Updates: Add or remove values without a pipeline restart by updating a Postgres table.
- SQL Integration: Use the `dynamic_table_check()` function in any SQL transform.
- Pipeline Updates: Update the table with on-chain data in the same pipeline.
Use Cases
- Wallet Tracking: Monitor transfers to/from specific wallet addresses.
- Deduplication: Track processed records to avoid duplicates.
- Factory Pattern: Track contracts created by a factory.
Basic Configuration
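A minimal configuration sketch, assuming dynamic tables are declared under `transforms`. The `tracked_contracts` and `A_POSTGRES_SECRET` names are placeholders, and the exact key spellings other than `backend_entity_name` and `schema` are assumptions:

```yaml
transforms:
  tracked_contracts:                         # transform name, referenced by dynamic_table_check()
    type: dynamic_table                      # must be dynamic_table
    backend: postgres                        # storage backend (Postgres)
    backend_entity_name: tracked_contracts   # table name in the backend storage
    secret_name: A_POSTGRES_SECRET           # Goldsky secret holding Postgres credentials
```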
Parameters
- `type`: Must be `dynamic_table`.
- `backend`: Storage backend: `Postgres`.
- `backend_entity_name`: The table name in the backend storage. For Postgres, this creates a table in the `streamling` schema (configurable via the `schema` field).
- `secret_name`: The name of a Goldsky secret containing Postgres credentials. Required for the Postgres backend.
- `sql`: Optional. SQL query to automatically populate the table from pipeline data.
- `schema`: Optional. PostgreSQL schema name for the table. Defaults to `streamling`.
- Value column name: Optional. Name of the primary key column storing values. Defaults to `value`.
- Timestamp column name: Optional. Name of the timestamp column. Defaults to `updated_at`.
Backend Types
PostgreSQL Backend (Recommended)
Best for production deployments requiring persistence:
- Data persists across pipeline restarts and failures
- Can be updated externally via direct SQL, with no redeploy needed
- Indexed primary-key lookups scale to millions of rows

The created table contains:
- A primary key column (default: `value`) storing the lookup values
- A timestamp column (default: `updated_at`) automatically set to the insertion time

With a `backend_entity_name` of `tracked_contracts`, the table is created in the default `streamling` schema as `streamling.tracked_contracts`.
Custom schema and column names
You can customize the schema, value column name, and timestamp column name. With a custom schema of `my_app`, the table above would instead be created as `my_app.tracked_contracts`:
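A sketch of such a configuration, where `value_column` and `timestamp_column` are hypothetical key names standing in for the actual parameters:

```yaml
transforms:
  tracked_contracts:
    type: dynamic_table
    backend: postgres
    backend_entity_name: tracked_contracts
    secret_name: A_POSTGRES_SECRET
    schema: my_app              # table becomes my_app.tracked_contracts
    value_column: address       # hypothetical key: renames the primary key column
    timestamp_column: added_at  # hypothetical key: renames the timestamp column
```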
Using Dynamic Tables in SQL
Once defined, use the `dynamic_table_check()` function in SQL transforms:
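For example, a filtering transform might contain the following query; the `erc20_transfers` source and its `contract_address` column are illustrative:

```sql
SELECT *
FROM erc20_transfers
-- 'tracked_contracts' is the transform name of the dynamic table
WHERE dynamic_table_check('tracked_contracts', contract_address)
```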
Function Signature
- `table_name` (`TEXT`): The transform name of the dynamic table in your pipeline (not the `backend_entity_name`). Must be a string literal, i.e. the same value on every row.
- `value` (`TEXT`): The value to check for existence.
- Returns: `true` if the value exists in the table, `false` otherwise.
See `dynamic_table_check` in the SQL functions reference.
Auto-Population with SQL
You can automatically populate a dynamic table from your pipeline data:
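A sketch, assuming the populating query lives in the dynamic table's `sql` parameter and that the selected column supplies the stored value; the `factory_events` source and its columns are illustrative:

```yaml
transforms:
  tracked_contracts:
    type: dynamic_table
    backend: postgres
    backend_entity_name: tracked_contracts
    secret_name: A_POSTGRES_SECRET
    # Projection + filter only: pick the value to store from a pipeline source
    sql: |
      SELECT created_contract AS value
      FROM factory_events
      WHERE factory_address = '0x1234abcd'
```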
When using SQL to populate, the query only supports projections and filters (no joins or aggregations).
Manual Updates
For Postgres backends, you can update the table directly using any Postgres client. (Substitute the schema and column names if you customized them.)
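For example, with the default schema and column names (the address is a placeholder):

```sql
-- Add a value; updated_at is set automatically on insert
INSERT INTO streamling.tracked_contracts (value)
VALUES ('0x1234abcd')
ON CONFLICT (value) DO NOTHING;

-- Remove a value
DELETE FROM streamling.tracked_contracts
WHERE value = '0x1234abcd';
```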
Example: Track Specific Token Contracts
Monitor transfers for specific ERC-20 tokens like USDC:
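A sketch of the dynamic table plus a filtering transform; the `erc20_transfers` source, its columns, and the `primary_key` key are assumptions:

```yaml
transforms:
  tracked_tokens:
    type: dynamic_table
    backend: postgres
    backend_entity_name: tracked_tokens
    secret_name: A_POSTGRES_SECRET
  tracked_token_transfers:
    # Keep only transfers whose token contract is in the dynamic table
    sql: |
      SELECT *
      FROM erc20_transfers
      WHERE dynamic_table_check('tracked_tokens', contract_address)
    primary_key: id
```

You can then seed the table externally, for instance by inserting the USDC contract address (`0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48` on Ethereum mainnet) into `streamling.tracked_tokens`.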
Example: Track Wallet Activity
Monitor all ERC-20 transfers for specific wallets:
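A sketch of the filtering query, assuming a `tracked_wallets` dynamic table defined as above and a source exposing `sender` and `recipient` columns:

```sql
SELECT *
FROM erc20_transfers
-- Match if either side of the transfer is a tracked wallet
WHERE dynamic_table_check('tracked_wallets', sender)
   OR dynamic_table_check('tracked_wallets', recipient)
```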
Example: Complete Pipeline with Dynamic Tables
This example shows a complete pipeline that uses dynamic tables to filter ERC-20 transfers to specific contracts:
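A sketch of what such a pipeline could look like end to end; the source and sink definitions are illustrative shapes, not verified syntax:

```yaml
name: filtered-erc20-transfers
sources:
  erc20_transfers:
    type: dataset                        # illustrative source definition
    dataset_name: ethereum.erc20_transfers
    version: 1.0.0
transforms:
  tracked_contracts:
    type: dynamic_table
    backend: postgres
    backend_entity_name: tracked_contracts
    secret_name: A_POSTGRES_SECRET
  filtered_transfers:
    # Keep only transfers to/from contracts listed in the dynamic table
    sql: |
      SELECT *
      FROM erc20_transfers
      WHERE dynamic_table_check('tracked_contracts', contract_address)
    primary_key: id
sinks:
  transfers_out:
    type: postgres                       # illustrative sink definition
    from: filtered_transfers
    schema: public
    table: filtered_transfers
    secret_name: A_POSTGRES_SECRET
```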
Example: Factory Pattern
Track all contracts created by a factory and filter events from those contracts:
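A sketch combining auto-population with a lookup; the source names and columns are illustrative:

```yaml
transforms:
  factory_children:
    type: dynamic_table
    backend: postgres
    backend_entity_name: factory_children
    secret_name: A_POSTGRES_SECRET
    # Auto-populate with every contract the factory creates
    sql: |
      SELECT created_contract AS value
      FROM factory_events
      WHERE factory_address = '0x1234abcd'
  child_contract_events:
    # Keep only events emitted by factory-created contracts
    sql: |
      SELECT *
      FROM decoded_logs
      WHERE dynamic_table_check('factory_children', contract_address)
    primary_key: id
```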
Source Validation
Good example:
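Given the auto-population constraint above (projections and filters only, reading from a single source), a query like this should pass validation; the names are illustrative:

```sql
-- Good: single source, projection and filter only
SELECT created_contract AS value
FROM factory_events
WHERE factory_address = '0x1234abcd'
```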
Performance Considerations
Lookup Performance
- Lookups are batched and executed in parallel against Postgres (`ANY(ARRAY[...])` queries).
- The value column is a `PRIMARY KEY`, so Postgres uses its unique index automatically.
- Large tables (millions of entries) work fine as long as the Postgres instance has adequate resources.
Update Latency
- Postgres backend: each `dynamic_table_check()` call queries the table directly (no in-process cache), so changes take effect on the next batch, typically within a second or two.
- Auto-population via SQL: updates flow through with normal pipeline latency.
Table Size
- No hard row limit is enforced, but lookup cost scales with table size — keep tables as small as your use case allows.
- Use specific filters in auto-population SQL to avoid unbounded growth.
- For long-running pipelines, consider a cleanup strategy (`DELETE` old rows by `updated_at`), as sketched below.
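For example, with the default schema and column names, a periodic cleanup might be:

```sql
-- Drop entries that have not been refreshed in 30 days
DELETE FROM streamling.tracked_contracts
WHERE updated_at < now() - interval '30 days';
```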
Best Practices
Use Postgres for production
Always use the Postgres backend for production deployments to ensure data persistence and external updatability.
Use specific filters in auto-population SQL
Scope the auto-population query with tight filters so the table receives only the rows you intend to track, keeping it consistent and properly synchronized with your pipeline data.