What Archive Delivery Covers

  • Full-history exports of every dataset listed in the Catalog Overview.
  • Files generated from the same canonical SQL found in /BlockDb.Postgres.Tables.Public, preserving _tracing_id, _created_at, and _updated_at.
  • Delivery via secure object storage buckets or managed transfers controlled by your BlockDB account team.

Archive drops are immutable snapshots designed for fast backfills. They complement, but do not replace, the incremental updates provided by the Real Time pipeline.
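
Because the lineage columns carry over unchanged, a quick post-download sanity check is straightforward. The sketch below assumes the drop arrives as Parquet files and uses a hypothetical file name; confirm the actual delivery format with your account team.

```python
# Sanity-check that an archive file exposes BlockDB's lineage columns.
# Parquet delivery and the file name are assumptions; adjust to your drop.
import pyarrow.parquet as pq

LINEAGE_COLUMNS = {"_tracing_id", "_created_at", "_updated_at"}

schema = pq.read_schema("blocks_full_history.parquet")  # hypothetical file
missing = LINEAGE_COLUMNS - set(schema.names)
if missing:
    raise SystemExit(f"missing lineage columns: {sorted(missing)}")
print("lineage columns present:", sorted(LINEAGE_COLUMNS))
```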

Preparation Checklist

  1. Pick datasets and versions using the Dataset Index.
  2. Confirm storage capacity and egress allowances (archives can span multiple TBs for long histories).
  3. Generate or rotate credentials for the target bucket (S3, GCS, Azure) where BlockDB will deposit the files.
  4. Decide how integrity will be validated: BlockDB supplies per-file hashes and _tracing_id lineage hooks (see the sketch below).
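
A minimal sketch of the hash check in step 4, assuming SHA-256 digests and a hypothetical checksums.txt listing one `<hex digest>  <file name>` pair per line; adapt it to the checksum format you actually receive.

```python
# Verify downloaded archive files against BlockDB's per-file hashes.
# SHA-256 and the checksums.txt layout are assumptions for this sketch.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-terabyte archive objects never load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(checksum_file: Path, data_dir: Path) -> None:
    for line in checksum_file.read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        status = "OK" if sha256_of(data_dir / name) == expected else "MISMATCH"
        print(f"{status}  {name}")

verify(Path("checksums.txt"), Path("./archive_drop"))
```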

Delivery Flow

  1. Schedule the drop with [email protected]. Provide dataset IDs, chains, and earliest block height.
  2. Receive manifest files describing each object (dataset ID, row count, checksum).
  3. Load the files into your warehouse using the DDL scripts from /BlockDb.Postgres.Tables.Public (see the Quickstart workflow).
  4. Verify lineage by sampling rows and querying the Verification endpoints, or by recomputing proofs locally.
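
To make steps 2 and 3 concrete, the sketch below reconciles warehouse row counts against the manifest after the DDL-based load. The manifest field names, the file name, and the connection string are assumptions; sampling rows against the Verification endpoints follows the same pattern once the counts check out.

```python
# Reconcile warehouse row counts against the delivery manifest after loading.
# The JSON layout (objects -> table, row_count) and the DSN are assumptions.
import json

import psycopg2  # pip install psycopg2-binary

DSN = "postgresql://user:password@localhost:5432/warehouse"  # placeholder

with open("manifest.json") as fh:  # hypothetical manifest file
    manifest = json.load(fh)

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    for entry in manifest["objects"]:
        table = entry["table"]
        expected = entry["row_count"]
        cur.execute(f'SELECT count(*) FROM "{table}"')
        actual = cur.fetchone()[0]
        status = "OK" if actual == expected else "MISMATCH"
        print(f"{status}  {table}: manifest={expected} loaded={actual}")
```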

Post-Delivery Tips

  • Track ingestion metrics (row counts, min/max timestamps) using the Coverage and Data Freshness docs as references (see the sketch after this list).
  • Document any transformations or column overrides inside your own repository so schema changes can be reconciled with BlockDB’s Schema Governance notices.
  • Once the archives have landed, enable the Real Time Delivery path to keep the warehouse current.
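
A minimal sketch of the metric tracking mentioned above, assuming a Postgres warehouse; the table names and connection string are placeholders, and min/max _created_at stands in for whichever freshness columns your datasets use.

```python
# Capture basic ingestion metrics (row count, min/max _created_at) per table
# for comparison against the Coverage and Data Freshness docs.
# The table names and DSN are placeholders; adjust to your warehouse.
import psycopg2  # pip install psycopg2-binary

DSN = "postgresql://user:password@localhost:5432/warehouse"  # placeholder
TABLES = ["blocks", "transactions"]                          # hypothetical

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    for table in TABLES:
        cur.execute(
            f'SELECT count(*), min(_created_at), max(_created_at) FROM "{table}"'
        )
        rows, oldest, newest = cur.fetchone()
        print(f"{table}: rows={rows} min_created_at={oldest} max_created_at={newest}")
```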