When to Use

Choose SFTP when you want a simple, firewall-friendly way to ingest BlockDB archives without wiring up cloud object storage. Each nightly drop includes the full dataset partitions you requested plus manifest files with row counts and checksums.

Delivery Characteristics

  • Cadence: Nightly (hourly bursts available for priority datasets).
  • Format: Compressed Parquet or CSV files that mirror the schemas in /BlockDb.Postgres.Tables.Public.
  • Retention: Files remain available for 30 days on the SFTP server; sync them to cold storage if you need longer retention.
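The 30-day retention window means your sync job should flag date partitions well before they expire. Below is a minimal sketch of that check; the `safety_margin_days` parameter and the assumption that partition folders are named `YYYY-MM-DD` (as in the File Layout section below) are mine, not part of the delivery contract.

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # matches the SFTP server's retention window


def partitions_to_archive(partition_dates, today, safety_margin_days=5):
    """Return export-date partitions that expire within the safety margin.

    partition_dates: iterable of 'YYYY-MM-DD' folder names, as seen on the
    SFTP server. Anything older than (retention - margin) days should be
    synced to cold storage now.
    """
    cutoff = today - timedelta(days=RETENTION_DAYS - safety_margin_days)
    return sorted(d for d in partition_dates
                  if date.fromisoformat(d) <= cutoff)


# Example: on 2024-02-01, partitions more than 25 days old need syncing soon.
print(partitions_to_archive(
    ["2024-01-01", "2024-01-05", "2024-01-20"], date(2024, 2, 1)))
# → ['2024-01-01', '2024-01-05']
```

Running this daily from the same scheduler that triggers your SFTP pull keeps archival decisions deterministic rather than reactive.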

Setup Checklist

  1. Provide an allow-listed IP, SSH public key, and desired folder layout to [email protected].
  2. Confirm the dataset IDs and chains you need (use the Dataset Index + Coverage).
  3. Harden your internal user account; BlockDB provisions read-only SFTP access scoped to your datasets.

File Layout

/0101_blocks_v1/
  2024-01-01/
    part-000.gz.parquet
    part-001.gz.parquet
  manifest.json
/0102_transactions_v1/
  ...
  • manifest.json lists row counts, SHA-256 hashes, and _tracing_id ranges per file.
  • Files are partitioned by export date; use _updated_at if you need to detect late-arriving records.
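Since the manifest publishes SHA-256 hashes per file, it is worth verifying downloads before loading them. The sketch below assumes a simplified layout where `manifest.json` sits next to the files it describes and is shaped like `{"files": [{"name": ..., "sha256": ...}]}`; the real manifest schema and nesting may differ, so adapt the field names to what you actually receive.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_partition(partition_dir):
    """Compare each downloaded file's digest against manifest.json.

    Assumes a manifest entry per file with 'name' and 'sha256' keys
    (an illustrative schema, not the documented one). Returns the names
    of files whose checksum does not match; an empty list means the
    partition is intact.
    """
    partition_dir = Path(partition_dir)
    manifest = json.loads((partition_dir / "manifest.json").read_text())
    return [entry["name"]
            for entry in manifest["files"]
            if sha256_of(partition_dir / entry["name"]) != entry["sha256"]]
```

Streaming the digest keeps memory flat even for multi-gigabyte Parquet parts, and a non-empty return value is a natural trigger for re-downloading just the affected files.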

Post-Ingest Validation

  1. Load the data using the DDL scripts in /BlockDb.Postgres.Tables.Public.
  2. Compare row counts against the manifest and run spot checks with Verification endpoints.
  3. Record ingestion status in your observability system so you can layer Real Time Delivery on top once archives are stable.
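For step 2, a row-count reconciliation can be automated once the files are on disk. The sketch below targets the gzipped CSV variant of the export (the Parquet variant would need a reader such as pyarrow, which is not assumed here) and an illustrative manifest entry shape of `{"name": ..., "row_count": ...}`; both the field names and the presence of a header row are assumptions to adjust against the manifest you actually receive.

```python
import csv
import gzip
import json
from pathlib import Path


def csv_row_count(path):
    """Count data rows in a gzipped CSV export, excluding the header."""
    with gzip.open(path, "rt", newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the assumed header row
        return sum(1 for _ in reader)


def check_row_counts(manifest_path, file_paths):
    """Return names of files whose loaded row count disagrees with the manifest.

    Assumes manifest entries carry 'name' and 'row_count' keys (illustrative,
    not the documented schema). An empty list means every file reconciled.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    expected = {e["name"]: e["row_count"] for e in manifest["files"]}
    return [p.name for p in map(Path, file_paths)
            if expected.get(p.name) != csv_row_count(p)]
```

Emitting the mismatch list into your observability system gives step 3 a concrete signal: only promote a partition to "stable" when the list is empty.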