Why Use the Snowflake Share

  • Avoid managing raw files—datasets appear as ready-to-query tables inside your Snowflake account.
  • Gain access to change streams for incremental processing without building your own CDC logic.
  • Keep data in sync automatically; BlockDB operates the provider account and updates propagate in minutes.

Provisioning Flow

  1. Provide your Snowflake account locator (e.g., xy12345.us-east-1) to [email protected].
  2. BlockDB creates a share containing the dataset schemas (0101_blocks_v1, 0102_transactions_v1, etc.) and grants it to your account.
  3. You create a database from the share:
CREATE DATABASE blockdb_prod FROM SHARE blockdb_org.blockdb_share;
  4. (Optional) Enable streams on critical tables:
CREATE OR REPLACE STREAM block_stream ON TABLE blockdb_prod.public."0101_blocks_v1";
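
After the share is mounted, grant read access to the roles that need it and confirm the stream is tracking changes. Streams on shared tables require the provider to enable change tracking, which BlockDB presumably handles as part of the share setup. A minimal sketch; analyst_role is a hypothetical role name, and note that a database created from a share accepts only the IMPORTED PRIVILEGES grant:

-- Shared databases are read-only; analyst_role is hypothetical.
GRANT IMPORTED PRIVILEGES ON DATABASE blockdb_prod TO ROLE analyst_role;

-- Returns TRUE once the stream has recorded changed rows.
SELECT SYSTEM$STREAM_HAS_DATA('block_stream');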

Table Structure

  • Column names and types match the SQL in /BlockDb.Postgres.Tables.Public.
  • _tracing_id, _created_at, _updated_at, and verification columns are included for lineage (see the sample query after this list).
  • Each dataset sits in its own schema so you can apply RBAC at a granular level.
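
The lineage columns can be queried directly; a quick sample, using the same table path as the stream example above:

-- Inspect lineage metadata on a shared table.
SELECT _tracing_id, _created_at, _updated_at
FROM blockdb_prod.public."0101_blocks_v1"
LIMIT 10;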

Operational Tips

  • Use streams and tasks to copy data from the shared tables into your own production schema when you need transformations (see the task sketch after this list).
  • Streams capture inserts and updates, so you can power near-real-time analytics or downstream event buses.
  • Monitor freshness by comparing _updated_at against the SLAs defined in Access & SLA (a lag query is sketched below).
  • Pair the Snowflake share with the Verification endpoints to validate _tracing_id samples before critical reporting runs.
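
As referenced above, a stream plus a scheduled task is a simple way to copy new rows into your own schema. A minimal sketch, assuming a pre-created target table analytics.blocks with a matching column layout and a warehouse named transform_wh (all three names are hypothetical):

-- Copy newly changed rows every five minutes; runs only when the stream has data.
CREATE TASK copy_new_blocks
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('block_stream')
AS
  INSERT INTO analytics.blocks
  SELECT * EXCLUDE (METADATA$ACTION, METADATA$ISUPDATE, METADATA$ROW_ID)
  FROM block_stream;

-- Tasks are created suspended; resume to start the schedule.
ALTER TASK copy_new_blocks RESUME;

Consuming the stream inside the INSERT advances its offset, so each run processes only rows changed since the previous run.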
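
For the freshness check, a simple lag query is enough; compare the result against the minutes-of-lag figure in your SLA:

-- Minutes since the newest row in the shared blocks table was updated.
SELECT DATEDIFF('minute', MAX(_updated_at), CURRENT_TIMESTAMP()) AS minutes_behind
FROM blockdb_prod.public."0101_blocks_v1";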
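
For spot verification, pull a small random sample of _tracing_id values to replay against the Verification endpoints:

-- Random 100-row sample of tracing IDs for pre-report verification.
SELECT _tracing_id
FROM blockdb_prod.public."0101_blocks_v1" SAMPLE (100 ROWS);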