When to Use
Choose SFTP when you want a simple, firewall-friendly way to ingest BlockDB archives without wiring up cloud object storage. Each nightly drop includes the full dataset partitions you requested plus manifest files with row counts and checksums.
Delivery Characteristics
- Cadence: Nightly (hourly bursts available for priority datasets).
- Format: Compressed Parquet or CSV files that mirror the schemas in /BlockDb.Postgres.Tables.Public.
- Retention: Files remain available for 30 days on the SFTP server; sync them to cold storage if you need longer retention.
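Since files expire after 30 days, it can help to flag export-date folders that are close to expiring so they get synced to cold storage in time. A minimal sketch, assuming folders are named by export date as in the layout below (the `grace_days` threshold is a made-up parameter for illustration):

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # per the SFTP retention window above

def folders_near_expiry(folder_dates, today, grace_days=5):
    """Return export-date folders that will expire within `grace_days`."""
    cutoff = today - timedelta(days=RETENTION_DAYS - grace_days)
    return [d for d in folder_dates if d <= cutoff]

# Example: a drop from Jan 1 is 27 days old on Jan 28, so it is flagged.
dates = [date(2024, 1, 1), date(2024, 1, 20)]
print(folders_near_expiry(dates, today=date(2024, 1, 28)))
# → [datetime.date(2024, 1, 1)]
```

Run this from whatever job already lists the remote folders; it only needs the parsed dates, not SFTP access.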
Setup Checklist
- Provide an allow-listed IP, SSH public key, and desired folder layout to support@blockdb.io.
- Confirm the dataset IDs and chains you need (use the Dataset Index + Coverage).
- Harden your internal user account; BlockDB provisions read-only SFTP access scoped to your datasets.
File Layout
/0101_blocks_v1/
  2024-01-01/
    part-000.gz.parquet
    part-001.gz.parquet
    manifest.json
/0102_transactions_v1/
  ...
- manifest.json lists row counts, SHA-256 hashes, and _tracing_id ranges per file.
- Files are partitioned by export date; use _updated_at if you need to detect late-arriving records.
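After downloading a drop, the SHA-256 hashes in manifest.json can be checked against the files on disk. A minimal sketch; the `"files"`, `"name"`, and `"sha256"` keys are assumptions about the manifest schema, so adjust them to match your actual drop:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large Parquet parts never load fully into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_drop(folder: Path) -> list[str]:
    """Return the names of files whose hash disagrees with manifest.json."""
    manifest = json.loads((folder / "manifest.json").read_text())
    mismatches = []
    # "files" / "name" / "sha256" are assumed key names, not confirmed
    # by the manifest spec; inspect one of your manifests to be sure.
    for entry in manifest["files"]:
        if sha256_of(folder / entry["name"]) != entry["sha256"]:
            mismatches.append(entry["name"])
    return mismatches
```

An empty return value means every listed file matched; anything else should block the load for that export date.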
Post-Ingest Validation
- Load the data using the DDL scripts in /BlockDb.Postgres.Tables.Public.
- Compare row counts against the manifest, and spot-check _tracing_id / block anchors against your own node (or ask support@blockdb.io to verify discrepancies).
- Record ingestion status in your observability system so you can layer Real Time Delivery on top once archives are stable.
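The row-count comparison above reduces to diffing two per-file maps: the counts declared in the manifest and the counts you actually loaded. A minimal, warehouse-agnostic sketch (the file names and counts below are hypothetical; feed it whatever your load job records):

```python
def row_count_diffs(manifest_counts: dict, loaded_counts: dict) -> dict:
    """Compare per-file row counts from the manifest with what was loaded.

    Returns {filename: (expected, actual)} for every mismatch; a file
    missing from loaded_counts is treated as 0 rows loaded.
    """
    diffs = {}
    for name, expected in manifest_counts.items():
        actual = loaded_counts.get(name, 0)
        if actual != expected:
            diffs[name] = (expected, actual)
    return diffs

# Hypothetical example: one part loaded short by a row.
print(row_count_diffs(
    {"part-000.gz.parquet": 1000, "part-001.gz.parquet": 500},
    {"part-000.gz.parquet": 1000, "part-001.gz.parquet": 499},
))
# → {'part-001.gz.parquet': (500, 499)}
```

Emit the returned dict to your observability system: an empty dict marks the export date as clean, and a non-empty one gives you the exact files to re-download.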
Last modified on March 21, 2026