Why Use Snowflake Share
- Avoid managing raw files—datasets appear as ready-to-query tables inside your Snowflake account.
- Gain access to change streams for incremental processing without building your own CDC logic.
- Keep data in sync automatically; BlockDB operates the provider account and updates propagate in minutes.
Provisioning Flow
- Provide your Snowflake account locator (e.g., `xy12345.us-east-1`) to [email protected].
- BlockDB creates a share containing the dataset schemas (`0101_blocks_v1`, `0102_transactions_v1`, etc.) and grants it to your account.
- You create a database from the share (first sketch below).
- (Optional) Enable streams on critical tables (second sketch below).
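A minimal sketch of the create-database step, assuming the share is named `BLOCKDB_SHARE` and that an `analyst` role should read it; substitute the provider account locator and share name from your provisioning email.

```sql
-- Import the share as a local, read-only database. The provider account
-- locator and share name (BLOCKDB_SHARE) are assumptions; BlockDB supplies
-- the real values during provisioning.
CREATE DATABASE blockdb FROM SHARE <provider_account>.BLOCKDB_SHARE;

-- Imported databases are granted as a unit; this makes every shared schema
-- queryable by the analyst role (role name assumed).
GRANT IMPORTED PRIVILEGES ON DATABASE blockdb TO ROLE analyst;
```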
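For the optional streams step, a sketch assuming a consumer database `my_db` and a `transactions` table inside the `0102_transactions_v1` schema; the shared database is read-only, so the stream must live in a database you own.

```sql
-- Track inserts/updates on a shared table. The stream object sits in my_db
-- (a database you control), while the source table stays in the imported
-- blockdb database. The my_db and transactions names are assumptions.
CREATE OR REPLACE STREAM my_db.public.transactions_stream
  ON TABLE blockdb."0102_transactions_v1".transactions;
```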
Table Structure
- Column names and types match the SQL in `/BlockDb.Postgres.Tables.Public`.
- `_tracing_id`, `_created_at`, `_updated_at`, and verification columns are included for lineage (example query below).
- Each dataset sits in its own schema so you can apply RBAC at a granular level.
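As a quick illustration of the lineage columns, a hypothetical query; the `transactions` table name is an assumption, and the digit-prefixed schema names require double quotes.

```sql
-- Inspect the most recently touched rows via the lineage columns.
SELECT _tracing_id, _created_at, _updated_at
FROM blockdb."0102_transactions_v1".transactions
ORDER BY _updated_at DESC
LIMIT 10;
```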
Operational Tips
- Use Snowpipe or tasks to copy data from the shared tables into your production schema if you need transformations (task sketch below).
- Streams capture inserts/updates so you can power near-real-time analytics or downstream event buses.
- Monitor freshness by comparing `_updated_at` against the SLAs defined in Access & SLA (freshness query below).
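A sketch of the copy step as a scheduled task that drains the stream created earlier; the warehouse, schedule, and every object name here are assumptions.

```sql
-- Every five minutes, if the stream has data, append new rows to a
-- production table (assumed to exist with matching columns).
CREATE OR REPLACE TASK my_db.public.copy_transactions
  WAREHOUSE = etl_wh
  SCHEDULE  = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('MY_DB.PUBLIC.TRANSACTIONS_STREAM')
AS
INSERT INTO my_db.prod.transactions
SELECT _tracing_id, _created_at, _updated_at  -- add the business columns you need
FROM my_db.public.transactions_stream
WHERE metadata$action = 'INSERT';

-- Tasks are created suspended; resume to start the schedule.
ALTER TASK my_db.public.copy_transactions RESUME;
```

Consuming the stream in DML advances its offset, so each run only sees rows that arrived since the last successful copy.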
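And a sketch of a freshness check, assuming a 15-minute SLA; take the real threshold from Access & SLA and point it at each table you depend on.

```sql
-- Returns a row only when the newest _updated_at is older than the SLA.
SELECT
  MAX(_updated_at) AS last_update,
  DATEDIFF('minute', MAX(_updated_at), CURRENT_TIMESTAMP()) AS minutes_stale
FROM blockdb."0102_transactions_v1".transactions
HAVING DATEDIFF('minute', MAX(_updated_at), CURRENT_TIMESTAMP()) > 15;
```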
Pair the Snowflake share with Verification endpoints to validate `_tracing_id` samples before critical reporting runs.