Why BlockDB
BlockDB packages institutional-grade onchain data in three layers:

- Postgres schemas mirroring the production exports in `/BlockDb.Postgres.Tables.Public`, so warehouses can be hydrated with the same objects the API surfaces (see the hydration sketch after this list).
- Historic REST APIs under `/api-reference` for programmatic access to raw EVM data, pricing layers, lineage, and verification.
- Operational guarantees documented in the Data Catalog (coverage, granularity, freshness, SLAs, and governance) so teams can model delivery risk.
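As a concrete illustration of the first layer, here is a minimal hydration sketch for a native Postgres target. It assumes the schema scripts ship as plain `.sql` files under `BlockDb.Postgres.Tables.Public` in this repo and that standard libpq environment variables carry the connection details; the host, database, and role names are placeholders.

```bash
# Placeholder connection details for a native Postgres target.
export PGHOST=warehouse.internal
export PGDATABASE=blockdb
export PGUSER=loader

# Apply every exported schema script, stopping on the first error.
for script in BlockDb.Postgres.Tables.Public/*.sql; do
  echo "applying ${script}"
  psql --set ON_ERROR_STOP=1 -f "${script}"
done
```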
Foundational Reading
Catalog Overview
Understand how datasets are grouped, versioned, and mapped to API dataset IDs.
Delivery Options
Compare archive drops, real-time feeds, and access patterns before onboarding.
API Primer
Learn the Historic API conventions: HTTPS-only POST, OAuth 2.0, and enumerations.
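To make those conventions concrete, here is a hedged sketch of the two-step call pattern: fetch an OAuth 2.0 token via the client-credentials grant, then issue an HTTPS POST. The token URL, API host, endpoint path, dataset ID, and payload fields are all illustrative assumptions rather than documented values; `jq` is used only to extract the token from the response.

```bash
# Hypothetical token endpoint: substitute the URL from your onboarding pack.
# The client ID/secret are the OAuth 2.0 credentials issued by [email protected].
TOKEN=$(curl -sS -X POST "https://auth.blockdb.example/oauth2/token" \
  -d grant_type=client_credentials \
  -d client_id="${BLOCKDB_CLIENT_ID}" \
  -d client_secret="${BLOCKDB_CLIENT_SECRET}" | jq -r '.access_token')

# Every Historic API call is an HTTPS POST with a JSON body. The host, path,
# dataset ID, and payload shape below are illustrative placeholders.
curl -sS -X POST "https://api.blockdb.example/v1/historic/query" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"datasetId": "evm_blocks", "chain": "ethereum", "fromBlock": 19000000, "toBlock": 19000100}'
```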
Verification & Lineage
Trace `_tracing_id` provenance or recompute roots to audit any record you ingest.
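Once tables are materialized, a lineage audit might start with a query like the sketch below. Only the `_tracing_id` column name comes from this page; the schema, table, other columns, and the ID value are hypothetical placeholders for whatever you materialized.

```bash
# Hypothetical audit query: everything except the _tracing_id column name
# is a placeholder.
TRACING_ID='00000000-0000-0000-0000-000000000000'
psql -c "SELECT _tracing_id, block_number, block_hash
         FROM public.evm_blocks
         WHERE _tracing_id = '${TRACING_ID}';"
```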
What You Need Before Building
- Contract + dataset entitlements - granted by your BlockDB account team. Entitlements scope chains, history depth, and SLAs.
- API credentials - OAuth 2.0 client ID/secret issued by [email protected]. Required for every REST call.
- Warehouse target - Postgres-compatible destination (Snowflake, BigQuery, Redshift, or native Postgres) that can execute the schema scripts in this repo.
- Automation runtime - CLI or orchestration tool that can run `curl`, `psql`, dbt, or your preferred ingestion stack (a minimal end-to-end sketch follows this list).
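Tying the prerequisites together, a minimal ingestion loop might look like the following. The export endpoint, dataset ID, file name, and table name are all illustrative assumptions; the bearer token is obtained as in the API Primer sketch above.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Require a token from the client-credentials flow shown earlier.
: "${TOKEN:?obtain via the OAuth 2.0 flow in the API Primer sketch}"

# Placeholder export endpoint and dataset ID: download an archive drop.
curl -sS -X POST "https://api.blockdb.example/v1/exports/download" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"datasetId": "evm_blocks"}' \
  -o evm_blocks.csv

# \copy streams the file through the client connection, so it works against
# any Postgres-compatible target the psql client can reach.
psql -c "\copy public.evm_blocks FROM 'evm_blocks.csv' WITH (FORMAT csv, HEADER true)"
```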
Next Steps
- Follow the Quickstart workflow to bootstrap schemas, download a dataset, and fire your first API call.
- Browse the Dataset Index to choose which tables to materialize.
- Review Access & SLA expectations so your monitoring matches BlockDB guarantees.