Delivery Modes

Mode | When to Choose | How It Works
--- | --- | ---
Authorized View | Need always-fresh tables without owning storage. | BlockDB shares a dataset in its project; you query it directly.
Transfer Service | Want the data inside your own project for transformation or long-term retention. | Scheduled jobs copy tables/partitions into your project.

At launch, datasets 0101-0105 (ledger/execution) are GA; pricing datasets will follow shortly.
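
With the Authorized View mode, consumption is ordinary BigQuery SQL against the shared dataset. Here is a minimal sketch using the google-cloud-bigquery client; the project ID my-project and the column names block_number and block_hash are illustrative assumptions, while the table name comes from the onboarding step below:

```python
from google.cloud import bigquery

# The client runs in *your* consuming project; "my-project" is a placeholder.
client = bigquery.Client(project="my-project")

# Query the table BlockDB shares via the Authorized View mode. Depending on
# how the share is configured, the name may need BlockDB's project prefix;
# use the exact dataset names delivered during onboarding.
sql = """
    SELECT block_number, block_hash, _updated_at
    FROM `blockdb_prod.ledger_0101_blocks_v1`
    WHERE _updated_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.block_number, row["_updated_at"])
```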

Onboarding Steps

  1. Share your GCP project ID and region with [email protected].
  2. For Authorized Views, grant BlockDB’s service account bigquery.dataViewer on the consuming project. For Transfer Service, grant bigquery.admin on the target dataset or create a dedicated service account.
  3. Confirm dataset list, chain coverage, and schedule (daily/hourly).
  4. BlockDB configures the share or transfer job; you receive dataset names like blockdb_prod.ledger_0101_blocks_v1 (a verification sketch follows these steps).
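
Once BlockDB confirms the share or transfer, you can verify access before wiring up downstream jobs. A minimal sketch, assuming the delivered dataset is addressable as blockdb_prod from your project (substitute the names from step 4):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

# List the tables delivered into the shared (or transferred) dataset.
# "blockdb_prod" is an assumption; use the dataset name from step 4.
for table in client.list_tables("blockdb_prod"):
    print(table.full_table_id)

# Spot-check one table's metadata (row count, last modification time).
table = client.get_table("blockdb_prod.ledger_0101_blocks_v1")
print(table.num_rows, table.modified)
```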

Usage Patterns

  • Join BlockDB tables with your proprietary data in BigQuery SQL, carrying the _tracing_id column through for lineage (see the sketch after this list).
  • Export slices to GCS if you need to interoperate with other warehouses.
  • Use scheduled queries or dbt to materialize curated marts downstream.
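
A minimal sketch of the first two patterns: join a BlockDB table with one of your own, materialize the result, then extract it to GCS. The sales.orders and marts datasets, the bucket, and the join/column names are illustrative assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

# Join a shared BlockDB table with a proprietary table, keeping _tracing_id
# for lineage. The destination dataset ("marts") must already exist.
job = client.query(
    """
    SELECT o.order_id, b.block_number, b._tracing_id
    FROM `my-project.sales.orders` AS o
    JOIN `blockdb_prod.ledger_0101_blocks_v1` AS b
      ON o.block_hash = b.block_hash
    """,
    job_config=bigquery.QueryJobConfig(
        destination="my-project.marts.orders_with_blocks",
        write_disposition="WRITE_TRUNCATE",
    ),
)
job.result()  # wait for the join to materialize

# Export the materialized slice to GCS for use in another warehouse.
extract = client.extract_table(
    "my-project.marts.orders_with_blocks",
    "gs://my-bucket/exports/orders_with_blocks-*.avro",
    job_config=bigquery.ExtractJobConfig(destination_format="AVRO"),
)
extract.result()
```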

Validation & Monitoring

  • Compare row counts with the Dataset Index or manifest metadata delivered via auxiliary tables.
  • Alert when _updated_at drifts beyond the SLAs defined in Access & SLA.
  • For Transfer Service, monitor job status in the Cloud Console and set up notifications for failures (a monitoring sketch follows this list).
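
A minimal monitoring sketch covering the freshness and transfer-status checks above, assuming a 24-hour SLA and a transfer configured through the BigQuery Data Transfer Service; the project, location, and threshold are placeholders, so substitute the values from Access & SLA:

```python
import datetime

from google.cloud import bigquery, bigquery_datatransfer

client = bigquery.Client(project="my-project")  # placeholder project ID

# Freshness check: alert if MAX(_updated_at) drifts past the SLA.
# The 24-hour threshold is an assumption; use the Access & SLA values.
row = next(iter(client.query(
    "SELECT MAX(_updated_at) AS latest "
    "FROM `blockdb_prod.ledger_0101_blocks_v1`"
).result()))
lag = datetime.datetime.now(datetime.timezone.utc) - row.latest
if lag > datetime.timedelta(hours=24):
    print(f"ALERT: table is {lag} behind its SLA")

# Transfer Service check: surface failed runs for each transfer config.
transfer = bigquery_datatransfer.DataTransferServiceClient()
parent = "projects/my-project/locations/us"  # placeholder location
for config in transfer.list_transfer_configs(parent=parent):
    for run in transfer.list_transfer_runs(parent=config.name):
        if run.state == bigquery_datatransfer.TransferState.FAILED:
            print(f"ALERT: transfer run failed: {run.name}")
```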
If you require low-latency updates, combine BigQuery archive loads with the REST API polling or streaming buckets described under Real Time Channels.