## Delivery Modes
| Mode | When to Choose | How It Works |
|---|---|---|
| Authorized View | Need always-fresh tables without owning storage. | BlockDB shares a dataset in its project; you query it directly. |
| Transfer Service | Want the data inside your own project for transformation or long-term retention. | Scheduled jobs copy tables/partitions into your project. |
Datasets 0101-0105 (ledger and execution) are GA; pricing datasets will follow shortly.
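With an Authorized View, consumption is ordinary BigQuery SQL against the shared dataset. A minimal sketch of building such a query, assuming the table name from the naming convention below; the `block_time` column is an illustrative placeholder, not a documented schema field:

```python
# Build a BigQuery SQL string against a shared BlockDB table.
# `block_time` is an assumed column name for illustration only.

def daily_block_count_query(table: str, day: str) -> str:
    """Return SQL counting rows in `table` for one UTC day."""
    return (
        f"SELECT COUNT(*) AS n_blocks "
        f"FROM `{table}` "
        f"WHERE DATE(block_time) = '{day}'"
    )

query = daily_block_count_query("blockdb_prod.ledger_0101_blocks_v1", "2024-01-01")
# To execute, pass the string to the google-cloud-bigquery client, e.g.:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
```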
## Onboarding Steps
- Share your GCP project ID and region with [email protected].
- For Authorized Views, grant BlockDB’s service account `bigquery.dataViewer` on the consuming project. For Transfer Service, grant `bigquery.admin` on the target dataset or create a dedicated service account.
- Confirm the dataset list, chain coverage, and schedule (daily or hourly).
- BlockDB configures the share or transfer job; you receive dataset names like `blockdb_prod.ledger_0101_blocks_v1`.
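A quick sanity check on delivered names can catch misconfigured transfers early. The pattern below is a hypothetical convention inferred from the single example name above (`<env>.<domain>_<chain>_<table>_v<version>`); confirm the real convention with BlockDB before relying on it:

```python
import re

# Hypothetical naming pattern inferred from `blockdb_prod.ledger_0101_blocks_v1`;
# verify the actual convention with BlockDB support.
NAME_RE = re.compile(r"^blockdb_\w+\.[a-z]+_\d{4}_[a-z]+_v\d+$")

def looks_like_blockdb_table(name: str) -> bool:
    """Return True if `name` matches the assumed delivery naming pattern."""
    return bool(NAME_RE.match(name))

print(looks_like_blockdb_table("blockdb_prod.ledger_0101_blocks_v1"))  # True
print(looks_like_blockdb_table("my_dataset.some_table"))               # False
```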
## Usage Patterns
- Join BlockDB tables with your proprietary data in BigQuery SQL; leverage `_tracing_id` for lineage.
- Export slices to GCS if you need to interoperate with other warehouses.
- Use scheduled queries or dbt to materialize curated marts downstream.
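The join pattern above can be sketched as a query template keyed on `_tracing_id`. The internal table `analytics.orders` and its columns are illustrative placeholders, assuming your own tables also carry a `_tracing_id` column:

```python
# Sketch: enrich an internal table with BlockDB data via `_tracing_id`.
# `analytics.orders` and `order_id` are hypothetical names for illustration.

def lineage_join_query(blockdb_table: str, internal_table: str) -> str:
    """Return SQL joining an internal table to a BlockDB table on _tracing_id."""
    return (
        f"SELECT o.order_id, b._tracing_id, b._updated_at "
        f"FROM `{internal_table}` AS o "
        f"JOIN `{blockdb_table}` AS b USING (_tracing_id)"
    )

sql = lineage_join_query("blockdb_prod.ledger_0101_blocks_v1", "analytics.orders")
```

A template like this is easy to drop into a scheduled query or a dbt model as the source of a curated mart.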
## Validation & Monitoring
- Compare row counts with the Dataset Index or manifest metadata delivered via auxiliary tables.
- Alert when `_updated_at` drifts beyond the SLAs defined in Access & SLA.
- For Transfer Service, monitor job status in the Cloud Console and set up notifications for failures.
If you require low-latency updates, combine BigQuery archive loads with REST API polling or streaming buckets described under Real Time Channels.