What Archive Delivery Covers
- Full-history exports of every dataset listed in the Catalog Overview.
- Files generated from the same canonical SQL found in /BlockDb.Postgres.Tables.Public, preserving _tracing_id, _created_at, and _updated_at (a quick column check appears after this list).
- Delivery via secure object storage buckets or managed transfers controlled by your BlockDB account team.
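If you want a quick sanity check that the audit columns survived export, the sketch below reads the schema of each delivered file with pyarrow. It assumes the archives arrive as Parquet files in a local blockdb_archive directory; both details are placeholders, since the actual format and layout are whatever your account team agrees to deliver.

```python
# Minimal sketch: confirm the lineage/audit columns are present in every
# delivered file. Parquet format and the local directory are assumptions.
from pathlib import Path

import pyarrow.parquet as pq

REQUIRED_COLUMNS = {"_tracing_id", "_created_at", "_updated_at"}


def check_metadata_columns(archive_dir: str) -> None:
    """Fail loudly if any delivered file is missing the audit columns."""
    for path in Path(archive_dir).glob("*.parquet"):
        present = set(pq.read_schema(path).names)
        missing = REQUIRED_COLUMNS - present
        if missing:
            raise ValueError(f"{path.name} is missing columns: {sorted(missing)}")
        print(f"{path.name}: audit columns present")


if __name__ == "__main__":
    check_metadata_columns("./blockdb_archive")  # hypothetical local directory
```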
Preparation Checklist
- Pick datasets and versions using the Dataset Index.
- Confirm storage capacity and egress allowances (archives can span multiple TBs for long histories).
- Generate or rotate credentials for the target bucket (S3, GCS, Azure) where BlockDB will deposit the files (a bucket-policy sketch follows this checklist).
- Decide how integrity will be validated: BlockDB supplies per-file hashes and _tracing_id lineage hooks (a checksum-verification sketch also follows this checklist).
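One way to prepare an S3 drop bucket is to attach a policy that lets the delivery principal write objects into it. The sketch below uses boto3; the bucket name and the delivery role ARN are placeholders, so substitute the values your BlockDB account team provides (GCS and Azure have equivalent IAM mechanisms).

```python
# Minimal sketch: grant a delivery principal write access to the drop bucket.
# The bucket name and principal ARN are placeholders, not real BlockDB values.
import json

import boto3

BUCKET = "my-blockdb-archive-drop"  # hypothetical bucket name
DELIVERY_PRINCIPAL = "arn:aws:iam::111111111111:role/blockdb-delivery"  # placeholder ARN

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowArchiveDrop",
            "Effect": "Allow",
            "Principal": {"AWS": DELIVERY_PRINCIPAL},
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```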
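For the integrity check itself, recomputing the per-file hashes locally is usually enough. The sketch below assumes SHA-256 and a checksums.txt file laid out as "<file> <hash>"; confirm both against what is actually delivered.

```python
# Minimal sketch: recompute per-file hashes and compare them to the values
# BlockDB supplies. SHA-256 and the "<file> <hash>" layout are assumptions.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB archives don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_archive(archive_dir: str, checksum_file: str) -> None:
    expected = {}
    for line in Path(checksum_file).read_text().splitlines():
        name, value = line.split()[:2]  # assumed "<file> <hash>" layout
        expected[name] = value

    for name, value in expected.items():
        actual = sha256_of(Path(archive_dir) / name)
        status = "OK" if actual == value else "MISMATCH"
        print(f"{name}: {status}")


if __name__ == "__main__":
    verify_archive("./blockdb_archive", "./blockdb_archive/checksums.txt")  # hypothetical paths
```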
Delivery Flow
- Schedule the drop with [email protected]. Provide dataset IDs, chains, and earliest block height.
- Receive manifest files describing each object (dataset ID, row count, checksum).
- Load into your warehouse using the DDL scripts from /BlockDb.Postgres.Tables.Public (see the Quickstart workflow).
- Verify lineage by sampling rows and hitting the Verification endpoints or recomputing proofs locally; the sketch after this list reconciles manifest row counts against the loaded tables.
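A simple way to close the loop after loading is to compare the manifest's row counts with what actually landed in the warehouse. The sketch below assumes the manifest is JSON with an objects array carrying table and row_count fields, and that the warehouse is Postgres reached via psycopg2; all of those details are placeholders to adapt to the manifest you actually receive.

```python
# Minimal sketch: reconcile warehouse row counts against the delivery
# manifest. The JSON layout and the Postgres DSN are assumptions.
import json

import psycopg2


def reconcile_row_counts(manifest_path: str, dsn: str) -> None:
    with open(manifest_path) as handle:
        manifest = json.load(handle)

    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for obj in manifest["objects"]:          # assumed manifest structure
            table = obj["table"]                 # assumed field name
            expected = obj["row_count"]
            cur.execute(f'SELECT count(*) FROM "{table}"')
            actual = cur.fetchone()[0]
            status = "OK" if actual == expected else "MISMATCH"
            print(f"{obj['dataset_id']} -> {table}: expected {expected}, loaded {actual} ({status})")


if __name__ == "__main__":
    reconcile_row_counts(
        "./blockdb_archive/manifest.json",                   # hypothetical path
        "postgresql://user:pass@localhost:5432/warehouse",   # hypothetical DSN
    )
```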
Post-Delivery Tips
- Track ingestion metrics (row counts, min/max timestamps) using the Coverage and Data Freshness docs as references (a metrics sketch follows this list).
- Document any transformations or column overrides inside your own repository so schema changes can be reconciled with BlockDB’s Schema Governance notices.
- Once the archives have landed, enable the Real Time Delivery path to keep the warehouse current.
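For ongoing tracking, a small script that records row counts and timestamp bounds per table is often enough to compare against the Coverage and Data Freshness pages. The table names and the block_timestamp column below are hypothetical; substitute the columns your loaded datasets actually carry.

```python
# Minimal sketch: capture basic ingestion metrics per table. Table names and
# the block_timestamp column are assumptions about the loaded schema.
import psycopg2

TABLES = ["eth_blocks", "eth_transactions"]  # hypothetical table names


def ingestion_metrics(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for table in TABLES:
            cur.execute(
                f'SELECT count(*), min(block_timestamp), max(block_timestamp) FROM "{table}"'
            )
            rows, first_ts, last_ts = cur.fetchone()
            print(f"{table}: {rows} rows, {first_ts} -> {last_ts}")


if __name__ == "__main__":
    ingestion_metrics("postgresql://user:pass@localhost:5432/warehouse")  # hypothetical DSN
```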