
Why BlockDB

BlockDB packages institutional-grade onchain data in three layers:
  • Postgres schemas mirroring the production exports in /BlockDb.Postgres.Tables.Public so warehouses can be hydrated with the same objects the API surfaces.
  • Historic REST APIs under /api-reference for programmatic access to raw EVM data, pricing layers, lineage, and verification.
  • Operational guarantees documented in the Data Catalog (coverage, granularity, freshness, SLAs, and governance) so teams can model delivery risk.
Each section of the docs expands on one of these pillars; this page gives you the signposts.

Foundational Reading

Catalog Overview

Understand how datasets are grouped, versioned, and mapped to API dataset IDs.

Delivery Options

Compare archive drops, real-time feeds, and access patterns before onboarding.

API Primer

Learn the Historic API conventions: HTTPS-only POST, OAuth 2.0, and enumerations.
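As a concrete illustration of those conventions, the sketch below builds a form-encoded OAuth 2.0 client-credentials token request and a JSON POST body for a Historic API query. The endpoint URLs and field names (`dataset_id`, `from_block`, `to_block`) are assumptions for illustration only; confirm the real paths and schemas in /api-reference.

```python
import json
import urllib.parse

# Hypothetical endpoints -- check /api-reference for the real paths.
TOKEN_URL = "https://api.blockdb.example/oauth/token"
QUERY_URL = "https://api.blockdb.example/historic/query"

def build_token_request(client_id: str, client_secret: str) -> bytes:
    """OAuth 2.0 client-credentials grant, form-encoded per RFC 6749."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()

def build_query_body(dataset_id: str, from_block: int, to_block: int) -> bytes:
    """JSON body for an HTTPS POST; field names here are illustrative."""
    return json.dumps({
        "dataset_id": dataset_id,
        "from_block": from_block,
        "to_block": to_block,
    }).encode()

token_body = build_token_request("YOUR_CLIENT_ID", "YOUR_CLIENT_SECRET")
query_body = build_query_body("evm_logs_v1", 19_000_000, 19_000_100)
```

Both payloads would be sent over HTTPS with POST, with the bearer token from the first call attached to the second; the dataset ID shown is a placeholder mapped via the Catalog Overview.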

Data Verification

Trace _tracing_id provenance or recompute roots to audit any record you ingest.
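To make "recompute roots" concrete, here is a minimal sketch of recomputing a binary Merkle root over ingested records and comparing it against a published value. The leaf encoding (canonical JSON) and pairing rules (duplicate the last leaf on odd levels) are assumptions; match them to the scheme documented under Data Verification before relying on the result.

```python
import hashlib
import json

def leaf_hash(record: dict) -> bytes:
    # Canonical JSON (sorted keys) so the hash is stable across key order.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Recompute a binary Merkle root; pairing rules here are illustrative."""
    if not leaves:
        return hashlib.sha256(b"").digest()
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last leaf on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# _tracing_id values below are placeholders, not real identifiers.
records = [{"_tracing_id": "t-1", "block": 1},
           {"_tracing_id": "t-2", "block": 2}]
root = merkle_root([leaf_hash(r) for r in records])
```

An audit then reduces to equality: recompute the root from the rows you ingested and compare it byte-for-byte with the root the API or catalog publishes for that batch.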

What You Need Before Building

  1. Contract + dataset entitlements - granted by your BlockDB account team. Entitlements scope chains, history depth, and SLAs.
  2. API credentials - get your client ID and secret from the Open Account page of your BlockDB account. Required for every REST call.
  3. Warehouse target - Postgres-compatible destination (Snowflake, BigQuery, Redshift, or native Postgres) that can execute the schema scripts in this repo.
  4. Automation runtime - CLI or orchestration tool that can run curl, psql, dbt, or your preferred ingestion stack.
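Item 4 above can be as simple as a script that renders the psql invocations your orchestrator will run. The sketch below builds one command per schema script; the DSN and script filename are placeholders, and the /BlockDb.Postgres.Tables.Public layout is the only path taken from this repo.

```python
import shlex

def psql_hydrate_cmd(dsn: str, script: str) -> list[str]:
    """One psql invocation; ON_ERROR_STOP=1 aborts on the first SQL error."""
    return ["psql", dsn, "--set", "ON_ERROR_STOP=1", "--file", script]

def hydration_plan(dsn: str, scripts: list[str]) -> list[str]:
    """Render each command as a reviewable shell line before executing."""
    return [shlex.join(psql_hydrate_cmd(dsn, s)) for s in scripts]

# Placeholder DSN and script name -- substitute your warehouse target
# and the actual files under /BlockDb.Postgres.Tables.Public.
plan = hydration_plan("postgresql://localhost/warehouse",
                      ["BlockDb.Postgres.Tables.Public/tables.sql"])
```

Rendering the plan first (rather than executing directly) keeps the same commands usable from cron, CI, or an orchestrator like Airflow, and makes dry runs reviewable.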

Next Steps

Last modified on March 21, 2026