## Integration Options
| Mode | When to Use | How It Works |
|---|---|---|
| Direct Connection | Teams already consuming BlockDB via Snowflake, Redshift, BigQuery, or Databricks. | Point Tableau to the warehouse where BlockDB tables land; publish certified data sources. |
| Flat Extracts | Lightweight pilots or offline analysis. | Schedule CSV/Parquet exports from archive or bucket channels and refresh Tableau extracts on a cadence. |
## Recommended Architecture
- **Ingest Data** using one of the delivery channels (S3 archives, Snowflake share, Real Time streaming).
- **Model in Warehouse** via dbt/DB SQL, preserving `_tracing_id` so dashboards can link to lineage (a sketch of this step follows the list).
- **Expose in Tableau** as a published data source with role-based permissions.
- **Refresh** using Tableau Bridge or Server schedules aligned with BlockDB freshness SLAs.
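
As a minimal sketch of the modeling step, assuming `snowflake-connector-python`; everything below except `blockdb_prod`, `"0101_blocks_v1"`, `_updated_at`, and `_tracing_id` (the schema, view name, business columns, role, and warehouse) is a placeholder:

```python
# Sketch: a curated warehouse view that preserves _tracing_id for lineage.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="...",            # prefer key-pair auth or a secrets manager
    role="TRANSFORMER",        # hypothetical modeling role
    warehouse="TRANSFORM_WH",  # hypothetical warehouse
    database="blockdb_prod",
)

cur = conn.cursor()
cur.execute("""
    CREATE OR REPLACE VIEW analytics.blocks_curated AS
    SELECT
        block_number,      -- hypothetical business columns
        block_timestamp,
        _updated_at,       -- needed for the freshness checks below
        _tracing_id        -- keep so dashboards can link back to lineage
    FROM blockdb_prod.public."0101_blocks_v1"
""")
cur.close()
conn.close()
```

Carrying `_tracing_id` through every curated layer is what lets a dashboard tooltip link back to raw records later.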
## Live Connection Example (Snowflake)
- Accept the Snowflake Share and create the `blockdb_prod` database (see the sketch after this list).
- In Tableau Desktop, choose Snowflake → enter account, database, warehouse, role.
- Drag the desired BlockDB table (e.g., `"0101_blocks_v1"`) or your curated view into the canvas.
- Publish the workbook/data source to Tableau Server with extract or live mode depending on query load.
Use Tableau’s data quality warnings to flag when `_updated_at` drifts beyond the thresholds described in Data Freshness.
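
A hedged sketch of such a check, with an assumed one-hour threshold standing in for the documented SLAs; it only prints a warning, which you could wire into whatever process sets the data quality warning on the published source:

```python
# Sketch: flag when MAX(_updated_at) drifts beyond a freshness threshold.
# The one-hour threshold is an assumption; use the values from Data Freshness.
from datetime import datetime, timedelta, timezone

import snowflake.connector

FRESHNESS_THRESHOLD = timedelta(hours=1)  # placeholder, not the documented SLA

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    database="blockdb_prod", warehouse="REPORTING_WH",  # placeholders
)
cur = conn.cursor()
cur.execute('SELECT MAX(_updated_at) FROM public."0101_blocks_v1"')
latest = cur.fetchone()[0]  # assumes a timezone-aware timestamp column
cur.close()
conn.close()

lag = datetime.now(timezone.utc) - latest
if lag > FRESHNESS_THRESHOLD:
    # Surface this however your team alerts, e.g. by setting a Tableau
    # data quality warning on the published data source.
    print(f"WARNING: _updated_at lags by {lag}; flag the data source in Tableau.")
```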
## Extract Workflow
- Schedule an ETL job (Airflow/dbt) that exports curated tables to S3/Azure/GCS as CSV/Hyper (a sketch follows this list).
- Point Tableau Prep or Tableau Server’s file connector to the exported location.
- Store manifest metadata (row counts, `_tracing_id` ranges) so dashboard owners can audit changes.
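
A minimal sketch of the export-plus-manifest pattern, assuming pandas (with pyarrow), boto3, and `snowflake-connector-python`; the bucket, prefix, view name, and connection details are placeholders, and the function body is what you would schedule from Airflow:

```python
# Sketch: export a curated table to S3 as Parquet and write an audit manifest.
import json

import boto3
import pandas as pd
import snowflake.connector

BUCKET = "my-tableau-extracts"  # placeholder bucket
PREFIX = "blockdb/blocks"       # placeholder key prefix

def export_with_manifest() -> None:
    conn = snowflake.connector.connect(
        account="your_account", user="your_user", password="...",
        database="blockdb_prod",  # connection details are placeholders
    )
    df = pd.read_sql("SELECT * FROM analytics.blocks_curated", conn)
    conn.close()

    df.to_parquet("/tmp/blocks_curated.parquet", index=False)

    # The manifest lets dashboard owners audit what each refresh contained.
    manifest = {
        "row_count": len(df),
        "tracing_id_min": str(df["_tracing_id"].min()),
        "tracing_id_max": str(df["_tracing_id"].max()),
    }

    s3 = boto3.client("s3")
    s3.upload_file("/tmp/blocks_curated.parquet", BUCKET,
                   f"{PREFIX}/blocks_curated.parquet")
    s3.put_object(Bucket=BUCKET, Key=f"{PREFIX}/manifest.json",
                  Body=json.dumps(manifest))

if __name__ == "__main__":
    export_with_manifest()
```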
Governance & Lineage
- Document every published data source with links back to the relevant BlockDB dataset pages.
- Expose `_tracing_id` as a tooltip field so analysts can trace any metric back to raw records or verification endpoints.
- Combine Tableau’s Data Catalog with BlockDB’s Schema Governance alerts to detect breaking changes early (one such check is sketched below).
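
As one illustration of pairing the two signals, a hedged sketch that diffs the live warehouse schema against a checked-in expectation; all names are placeholders, and BlockDB’s Schema Governance alerts remain the authoritative source for breaking changes:

```python
# Sketch: detect column drift in the curated view before dashboards break.
import json

import snowflake.connector

# expected_schema.json is a hypothetical checked-in file,
# e.g. {"_TRACING_ID": "TEXT", "_UPDATED_AT": "TIMESTAMP_TZ", ...}
with open("expected_schema.json") as f:
    expected = json.load(f)

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    database="blockdb_prod",
)
cur = conn.cursor()
cur.execute(
    "SELECT column_name, data_type FROM information_schema.columns "
    "WHERE table_schema = 'ANALYTICS' AND table_name = 'BLOCKS_CURATED'"
)
actual = {name: dtype for name, dtype in cur.fetchall()}
cur.close()
conn.close()

missing = set(expected) - set(actual)
changed = {c for c in expected if c in actual and actual[c] != expected[c]}
if missing or changed:
    print(f"Schema drift: missing={sorted(missing)}, changed={sorted(changed)}")
```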