Developer Goals

  • Prototype new dapps with realistic datasets (blocks, transactions, pools) without standing up heavyweight indexers.
  • Enrich wallets or explorers with verified metadata (token registries, pool classifications, lineage proofs).
  • Validate onchain workflows—deployment, settlement, compliance—against auditable tables.

Key Assets

| Need | Dataset / Endpoint | Why it matters |
| --- | --- | --- |
| Execution primitives | blocks, transactions, logs | Fuel backends, analytics, and debugging with deterministic block data. |
| Contract + token metadata | contracts, erc20 tokens, erc721 tokens | Populate UI components, compliance checks, and upgrade monitoring. |
| Liquidity + pricing | liquidity pools, reserves, pricing layers | Build routing, quoting, and analytics services with normalized data. |
| Proof & lineage | Lineage endpoints, Verification suite | Provide "view onchain proof" buttons that recompute roots/log blooms. |
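
As a minimal sketch of consuming one of these datasets from a backend, the snippet below polls a blocks endpoint over REST. The base URL, path, auth header, and response fields are placeholder assumptions, not the documented API; substitute the real dataset IDs and payload shapes from the API reference.

```typescript
// Hypothetical REST client for a blocks dataset. The base URL, path,
// auth header, and field names are placeholders -- swap in the real
// endpoint and payload shape from the API reference.
interface BlockRow {
  number: number;
  hash: string;
  timestamp: string;
  _updated_at: string; // freshness marker referenced under Best Practices
}

async function fetchRecentBlocks(limit = 25): Promise<BlockRow[]> {
  const res = await fetch(
    `https://api.example.com/v1/blocks?limit=${limit}&order=desc`,
    { headers: { Authorization: `Bearer ${process.env.BLOCKDB_API_KEY}` } },
  );
  if (!res.ok) throw new Error(`blocks request failed: ${res.status}`);
  return (await res.json()) as BlockRow[];
}
```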

Delivery Workflow

  1. Backfill: hydrate dev/test environments using Archive SFTP or S3 buckets for deterministic datasets.
  2. Real-time sync: subscribe to WebSocket feeds or REST pollers for live UX updates (new pools, token listings, governance proposals); see the subscription sketch after this list.
  3. Environment targeting: use the Chain enumeration to toggle between mainnet and supported L2s/L3s; align dataset IDs with API payloads.
  4. Testing: leverage _tracing_id to create reproducible fixtures; pair with Function Results for deterministic smart-contract outputs.
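
The sketch below ties steps 2–4 together: a WebSocket subscriber that targets one chain and surfaces _tracing_id values for reproducible fixtures. The feed URL, message schema, and Chain members are assumptions standing in for the real ones; consult the WebSocket delivery guide and the Chain enumeration for the documented shapes.

```typescript
import WebSocket from "ws"; // Node; browsers can use the built-in WebSocket

// Hypothetical feed URL and message schema -- real values come from the
// WebSocket delivery guide. Chain members below are assumed examples.
type Chain = "mainnet" | "arbitrum" | "base";

interface PoolEvent {
  dataset: string;     // e.g. a liquidity-pools dataset ID
  chain: Chain;
  _tracing_id: string; // record this to replay the event as a test fixture
  payload: unknown;
}

function subscribeToPools(chain: Chain, onEvent: (e: PoolEvent) => void): WebSocket {
  const ws = new WebSocket(`wss://stream.example.com/v1/pools?chain=${chain}`);
  ws.on("message", (raw) => onEvent(JSON.parse(raw.toString()) as PoolEvent));
  // Naive reconnect so live UX updates survive transient drops.
  ws.on("close", () => setTimeout(() => subscribeToPools(chain, onEvent), 1_000));
  return ws;
}

// Usage: log _tracing_id values, then pin them in reproducible fixtures.
subscribeToPools("mainnet", (e) => console.log(e.chain, e.dataset, e._tracing_id));
```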

Best Practices

  • Schema-first development: treat the SQL in /BlockDb.Postgres.Tables.Public as the contract for your backend models.
  • Freshness monitoring: rely on delivery SLAs plus _updated_at fields to detect lag before it impacts production UX.
  • Version pinning: store the dataset ID + version (e.g., 0101_blocks_v1) in your configuration to simplify upgrades when new versions ship; both practices are sketched below.
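
A small sketch of those two practices, assuming the dataset ID from this guide plus an illustrative SLA threshold and row shape:

```typescript
// Pin dataset ID + version in configuration so upgrades become explicit,
// reviewable diffs. The SLA threshold and row shape here are illustrative.
const config = {
  blocksDataset: "0101_blocks_v1", // dataset ID + version, as recommended above
  maxLagSeconds: 120,              // assumed delivery SLA; tune to your contract
};

// Compare the newest _updated_at against the SLA to catch lag before
// it reaches production UX.
function isWithinSla(rows: { _updated_at: string }[]): boolean {
  if (rows.length === 0) return false;
  const newest = Math.max(...rows.map((r) => Date.parse(r._updated_at)));
  return (Date.now() - newest) / 1000 <= config.maxLagSeconds;
}
```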

When rolling out user-facing analytics (wallet insights, explorer charts), combine BlockDB datasets with your proprietary data via the Visualization delivery guides or custom frontends that query curated APIs.