Refresh Cadence

| Dataset Family | Expected Latency | Notes |
| --- | --- | --- |
| Ledger & Execution (0101-0105) | < 5 minutes | Ingested directly from BlockDB Historic stream processors. |
| Tokens & Pools (0201-0203) | < 5 minutes | Includes contract verification heuristics before publishing. |
| Reserves (0301) | < 5 minutes | Triggered whenever on-chain state changes exceed configured thresholds. |
| Pricing Layers (0401-0404) | < 5 minutes | Updated on every book change from upstream venues. |
| Pricing Analytics (0501-0502) | < 5 minutes | Recomputed when underlying depth windows roll. |

To guarantee consistency, data is published only once it falls outside a chain-specific reorg buffer (typically 20-100 blocks, depending on the chain). This keeps published data finalized and avoids transient reorg artifacts.
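The buffer logic above can be sketched as follows. This is a minimal illustration, not the actual publishing pipeline; the per-chain buffer values and the fallback default are hypothetical placeholders.

```python
# Illustrative per-chain reorg buffers (values are examples, not official settings).
REORG_BUFFER_BLOCKS = {
    "ethereum": 20,
    "polygon": 100,
}

def safe_publish_height(chain: str, tip_height: int) -> int:
    """Highest block height outside the reorg buffer, i.e. safe to publish."""
    # Unknown chains fall back to the conservative end of the 20-100 range.
    buffer = REORG_BUFFER_BLOCKS.get(chain, 100)
    return max(tip_height - buffer, 0)
```

For example, with a chain tip at height 1,000 and a 20-block buffer, only blocks up to height 980 would be published.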

How do I monitor data freshness?

Use the _updated_at column, or the count and cursor envelope timestamps returned by the API.
Dashboards can alert when _updated_at drifts beyond the SLA for the relevant dataset family.
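A freshness alert along these lines can be sketched as below. The is_stale helper and the single 5-minute SLA constant are assumptions for illustration; in practice you would look up the SLA per dataset family.

```python
from datetime import datetime, timedelta, timezone

# Assumed SLA matching the cadence table above (all families < 5 minutes).
FRESHNESS_SLA = timedelta(minutes=5)

def is_stale(updated_at: datetime, now: datetime = None,
             sla: timedelta = FRESHNESS_SLA) -> bool:
    """True when _updated_at has drifted beyond the SLA."""
    now = now or datetime.now(timezone.utc)
    return (now - updated_at) > sla
```

A dashboard would evaluate this against the most recent _updated_at value for each dataset and page on a True result.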

Backfill Strategy

  1. Hotfix windows — If a venue outage occurs, BlockDB will replay the affected range.
  2. Historical expansions — New datasets are backfilled from block 0 onward.
Need tighter guarantees? Mirror the dataset via the API and compare _tracing_id values. Differences indicate your mirror is stale.
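The mirror comparison can be sketched as a simple diff over _tracing_id values. This is an illustrative helper, not part of the API; it assumes you have loaded each side into a mapping from row key to _tracing_id.

```python
def stale_rows(mirror: dict, upstream: dict) -> set:
    """Row keys whose _tracing_id differs between mirror and upstream,
    or which are missing from the mirror entirely."""
    return {key for key, tracing_id in upstream.items()
            if mirror.get(key) != tracing_id}
```

An empty result means the mirror is in sync; any returned keys identify rows that need to be re-pulled.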

Freshness by channel

  • API: near-real-time, subject to reorg buffer and endpoint-specific processing
  • Warehouse shares: near-real-time to hourly, depending on provider sync cadence
  • Bulk exports (SFTP/S3/Blob): nightly snapshots with ad-hoc hotfix replays when required