
Integration Options

| Mode | When to Use | How It Works |
| --- | --- | --- |
| Direct Connection | Teams already consuming BlockDB via Snowflake, Redshift, BigQuery, or Databricks. | Point Tableau to the warehouse where BlockDB tables land; publish certified data sources. |
| Flat Extracts | Lightweight pilots or offline analysis. | Schedule CSV/Parquet exports from archive or bucket channels and refresh Tableau extracts on a cadence. |
Whichever mode you choose, the end-to-end workflow is the same:

  1. Ingest Data using one of the delivery channels (S3 archives, Snowflake share, Real Time streaming).
  2. Model in Warehouse via dbt or warehouse SQL, preserving _tracing_id so dashboards can link to lineage (see the SQL sketch after this list).
  3. Expose in Tableau as a published data source with role-based permissions.
  4. Refresh using Tableau Bridge or Server schedules aligned with BlockDB freshness SLAs.
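
A minimal sketch of the modeling step (2), assuming a view named analytics.blocks_v1_curated and a few placeholder business columns; only _tracing_id, _updated_at, and the "0101_blocks_v1" table name come from this page.

```sql
-- Hypothetical curated view; replace the placeholder business columns with the
-- fields your BlockDB dataset actually exposes.
CREATE OR REPLACE VIEW analytics.blocks_v1_curated AS
SELECT
    _tracing_id,          -- preserved so dashboards can link back to lineage
    _updated_at,          -- drives the freshness checks described below
    block_number,         -- hypothetical business columns
    block_hash,
    transaction_count
FROM blockdb_prod.public."0101_blocks_v1";
```

Publishing a curated view like this (rather than the raw share table) keeps the lineage columns travelling with every workbook.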

Live Connection Example (Snowflake)

  1. Accept the Snowflake Share and create the blockdb_prod database (the SQL for this step follows the list).
  2. In Tableau Desktop, choose Snowflake → enter account, database, warehouse, role.
  3. Drag the desired BlockDB table (e.g., "0101_blocks_v1") or your curated view into the canvas.
  4. Publish the workbook or data source to Tableau Server in extract or live mode, depending on query load.
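
Step 1 can be completed in a Snowflake worksheet; the provider account, share, and role names below are placeholders, not values defined on this page.

```sql
-- Placeholder account/share/role names; substitute the values supplied with
-- your BlockDB Snowflake Share.
CREATE DATABASE blockdb_prod FROM SHARE provider_account.blockdb_share;

-- Allow the role Tableau connects with to read the shared tables.
GRANT IMPORTED PRIVILEGES ON DATABASE blockdb_prod TO ROLE analyst_role;
```
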
Use Tableau’s data quality warnings to flag when _updated_at drifts beyond the thresholds described in Data Freshness.
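
One way to drive that warning is a scheduled check like the sketch below; the 6-hour threshold is illustrative, so substitute the limits from the Data Freshness page.

```sql
-- Returns a row only when _updated_at has drifted past the (illustrative)
-- 6-hour limit; use the result to set or clear the Tableau data quality warning.
SELECT
    MAX(_updated_at) AS last_update,
    DATEDIFF('hour', MAX(_updated_at), CURRENT_TIMESTAMP()) AS hours_stale
FROM blockdb_prod.public."0101_blocks_v1"
HAVING DATEDIFF('hour', MAX(_updated_at), CURRENT_TIMESTAMP()) > 6;
```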

Extract Workflow

  1. Schedule an ETL job (Airflow/dbt) that exports curated tables to S3/Azure/GCS as CSV/Hyper (see the unload sketch after this list).
  2. Point Tableau Prep or Tableau Server’s file connector to the exported location.
  3. Store manifest metadata (row counts, _tracing_id ranges) so dashboard owners can audit changes.
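
A sketch of steps 1 and 3 using a Snowflake unload; the stage name and export path are assumptions, and the manifest query simply captures the fields listed in step 3.

```sql
-- Step 1 (sketch): unload the table to a hypothetical external stage that
-- Tableau Prep or the file connector can read.
COPY INTO @analytics_exports/blockdb/blocks_v1/
FROM blockdb_prod.public."0101_blocks_v1"
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
HEADER = TRUE
OVERWRITE = TRUE;

-- Step 3 (sketch): manifest metadata for the same export, so dashboard owners
-- can audit row counts and _tracing_id ranges between refreshes.
SELECT
    COUNT(*)         AS row_count,
    MIN(_tracing_id) AS min_tracing_id,
    MAX(_tracing_id) AS max_tracing_id,
    MAX(_updated_at) AS exported_through
FROM blockdb_prod.public."0101_blocks_v1";
```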

Governance & Lineage

  • Document every published data source with links back to the relevant BlockDB dataset pages.
  • Expose _tracing_id as a tooltip field so analysts can trace any metric back to raw records or verification endpoints (see the lookup sketch below).
  • Combine Tableau’s Data Catalog with BlockDB’s Schema Governance alerts to detect breaking changes early.
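
As a sketch of the lineage workflow in the second bullet, an analyst who copies a _tracing_id from a dashboard tooltip can pull the underlying raw record directly (table name reused from the live-connection example).

```sql
-- Paste the _tracing_id copied from the dashboard tooltip in place of the
-- placeholder literal to retrieve the raw record behind a metric.
SELECT *
FROM blockdb_prod.public."0101_blocks_v1"
WHERE _tracing_id = '<tracing_id_from_tooltip>';
```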