

Overview

BlockDB’s indexers do far more than persist raw blockchain payloads. Every dataset export passes through a multi-layer cryptographic verification pipeline — running in sequence for every block, every transaction, and every log:

1. Log RLP Reconstruction

Each log’s contract address, topics, and data are re-encoded from scratch using RLP encoding. This ensures every byte of every event is accounted for — no truncation, no reordering, no partial processing.
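As a sketch, the re-encoding of a single log can be expressed in a few lines of Python. The encoder below implements the standard RLP rules for byte strings and lists; the `[address, topics, data]` layout follows Ethereum's receipt encoding, and the sample values are illustrative only.

```python
def rlp_encode(item):
    """Minimal RLP encoder: byte-string rule and list rule only."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item  # single byte below 0x80 encodes as itself
        return _length_prefix(len(item), 0x80) + item
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        return _length_prefix(len(payload), 0xC0) + payload
    raise TypeError("RLP items must be bytes or lists")

def _length_prefix(length, offset):
    if length <= 55:
        return bytes([offset + length])
    length_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes

# A log re-encodes as [address, [topic, ...], data]; sample values below.
address = bytes.fromhex("11" * 20)   # 20-byte contract address
topics = [bytes.fromhex("22" * 32)]  # one indexed topic (32 bytes each)
data = b"\x00" * 8                   # ABI-encoded event data
encoded = rlp_encode([address, topics, data])
```

Because every byte feeds the length prefixes, any truncation or reordering of the log's fields changes the encoding, which is what makes the byte-accounting claim above checkable.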

2. Log Bloom Validation

Each transaction’s 2048-bit bloom filter is recomputed by hashing the emitting contract address and every indexed topic. The result is compared bit-for-bit against the on-chain value. A single differing bit fails validation — catching subtle corruption that receipt checks alone would miss.
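The recomputation can be sketched as follows. One loud assumption: Python's standard library ships NIST SHA3-256, not Keccak-256, so `_hash` below is a stand-in and a real validator must substitute Keccak-256. The 2048-bit filter and the three 11-bit indices drawn from byte pairs of the hash follow Ethereum's bloom construction.

```python
import hashlib

BLOOM_BITS = 2048  # 2048-bit filter, as on-chain

def _hash(data: bytes) -> bytes:
    # Stand-in only: stdlib SHA3-256 is NOT Keccak-256. Ethereum uses
    # Keccak-256, so swap in a real Keccak implementation in practice.
    return hashlib.sha3_256(data).digest()

def bloom_add(bloom: int, value: bytes) -> int:
    """Set the three filter bits derived from one hashed input value."""
    h = _hash(value)
    for i in (0, 2, 4):  # three 11-bit indices from byte pairs of the hash
        bit = int.from_bytes(h[i:i + 2], "big") % BLOOM_BITS
        bloom |= 1 << bit
    return bloom

def recompute_logs_bloom(logs) -> int:
    """Fold every emitting address and indexed topic into one filter."""
    bloom = 0
    for address, topics in logs:
        bloom = bloom_add(bloom, address)
        for topic in topics:
            bloom = bloom_add(bloom, topic)
    return bloom

# Illustrative input: one log with one indexed topic.
logs = [(bytes.fromhex("11" * 20), [bytes.fromhex("22" * 32)])]
recomputed = recompute_logs_bloom(logs)
# The bit-for-bit comparison against the on-chain value would then be:
# assert recomputed == int.from_bytes(on_chain_bloom, "big")
```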

3. Receipts Root Recomputation

The reconstructed receipts are assembled into a Merkle Patricia Trie. A Keccak-256 hash of the rebuilt root is compared against the chain-supplied receiptsRoot. Any mismatch triggers an immediate blocking incident.
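The recompute-and-compare step can be illustrated with a deliberately simplified sketch: a plain binary Merkle tree stands in for the Merkle Patricia Trie, and stdlib SHA3-256 again stands in for Keccak-256. Only the control flow (rebuild the root, block on mismatch) mirrors the real pipeline.

```python
import hashlib

def _h(b: bytes) -> bytes:
    # Stand-in for Keccak-256; the stdlib only provides NIST SHA3-256.
    return hashlib.sha3_256(b).digest()

def merkle_root(leaves):
    """Simplified binary Merkle root over encoded receipts.

    The real check assembles a Merkle Patricia Trie keyed by transaction
    index; a binary tree is used here only to illustrate the flow."""
    level = [_h(leaf) for leaf in leaves]
    if not level:
        return _h(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate odd tail
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

class BlockingIncident(Exception):
    """Raised on any mismatch against the chain-supplied receiptsRoot."""

def verify_receipts_root(encoded_receipts, chain_receipts_root: bytes):
    rebuilt = merkle_root(encoded_receipts)
    if rebuilt != chain_receipts_root:
        raise BlockingIncident(
            f"receiptsRoot mismatch: {rebuilt.hex()} != {chain_receipts_root.hex()}"
        )
    return rebuilt
```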

4. Block Continuity Check

Block numbers are verified to increment without gaps, and every parent_block_hash is matched against the previous block. This guarantees a complete, unbroken chain of evidence across the entire indexed range.
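A minimal continuity check over an ordered run of blocks looks like the sketch below; the field names (`block_number`, `block_hash`, `parent_block_hash`) are assumed for illustration.

```python
def check_continuity(blocks):
    """Verify gap-free block numbers and parent-hash linkage.

    Each block is a dict with block_number, block_hash, and
    parent_block_hash; returns a list of detected issues."""
    issues = []
    for prev, curr in zip(blocks, blocks[1:]):
        if curr["block_number"] != prev["block_number"] + 1:
            issues.append(f"gap before block {curr['block_number']}")
        elif curr["parent_block_hash"] != prev["block_hash"]:
            issues.append(f"broken parent link at block {curr['block_number']}")
    return issues
```

An empty result means the range forms one unbroken chain; any entry pinpoints where the chain of evidence breaks.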
These checks ensure that all exported data is provably identical to what was produced on-chain — not just at ingest time, but at every stage of archival and replication.

Stored Evidence

Each successfully verified block writes two immutable breadcrumbs to blockdb_evm.b0101_blocks_v1:
Column | Description
_computed_receipt_root | The receipts root recomputed from verified transaction receipts.
_computed_receipt_timestamp_utc | UTC timestamp when the recomputation occurred.
These fields allow consumers and auditors to independently confirm that BlockDB’s recomputation matched the chain-provided root at the time of export.

Archive Re-validation

In addition to live verification, BlockDB runs offline re-validation of data already exported to the persistence layer — replaying all verification logic using only stored data, without depending on live node responses:
  • Long-term consistency across cold storage and replicas is preserved
  • Block continuity and parent hash linkage remain intact after archival compaction
  • Historical reproducibility is maintained, proving that what left the exporter stays self-consistent at rest
Because this process operates solely on exported data, it confirms integrity post-ingestion — and can safely run on lagged replicas or analytical nodes without affecting live indexing performance.
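An offline replay might look like the following sketch, which walks exported rows only and never touches a live node; the row field names are assumed for illustration.

```python
def revalidate_archive(rows):
    """Replay integrity checks using exported data alone.

    Row fields assumed for illustration: block_number, block_hash,
    parent_block_hash, receipts_root, _computed_receipt_root."""
    failures = []
    prev = None
    for row in sorted(rows, key=lambda r: r["block_number"]):
        # Stored recomputed root must still match the chain-supplied root.
        if row["_computed_receipt_root"] != row["receipts_root"]:
            failures.append((row["block_number"], "receipts root drift"))
        # Continuity and parent linkage must survive archival compaction.
        if prev is not None:
            if row["block_number"] != prev["block_number"] + 1:
                failures.append((row["block_number"], "gap"))
            elif row["parent_block_hash"] != prev["block_hash"]:
                failures.append((row["block_number"], "broken parent link"))
        prev = row
    return failures
```

Because the loop only reads stored rows, it can run on a lagged replica or analytical node without touching the live indexing path.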

Integrity Guarantees

The combination of Log RLP reconstruction, bloom validation, receipts-root recomputation, and block continuity checks makes BlockDB’s verification pipeline uniquely resistant to silent corruption:
Safeguard | Ensures | Prevents
Logs RLP rebuilt from each log’s address, topics, and data | Log-level completeness and byte-accurate reproduction of on-chain payloads | Missing or truncated logs
Receipts root recomputed from all transactions | Transaction-level integrity and Merkle proof consistency | Missing transactions
Block continuity validation | Continuous block sequence and parent hash linkage | Missing or orphaned blocks
Archive re-validation using exported database data | Long-term consistency and reproducibility independent of live nodes | Data drift, replica inconsistency, or corruption after export
These layers form a cryptographically auditable chain of evidence, far stronger than checksum-based or row-count validation methods.

De-duplication & Reorg Safety

BlockDB enforces stable, domain-correct primary keys to prevent duplicates across ingestion — mirroring how identity is defined on-chain and guaranteeing deterministic joins and idempotent re-ingestion:
  • blockdb_evm.b0101_blocks_v1 — primary key: block_number
  • blockdb_evm.b0102_transactions_v1 — primary key: tx_hash
  • blockdb_evm.b0103_logs_v1 — composite key: (block_number, tx_index, log_index)
Archive indexing operates with a bounded backoff from the chain tip (typically 20-100 blocks, chain-dependent). This buffer ensures short-range reorgs don’t leak transient data into exports, and that final datasets reflect post-reorg canonical history.
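Both properties can be sketched together: a primary-key upsert makes re-ingestion idempotent, and a fixed buffer keeps exports behind the tip. The buffer value of 64 is illustrative only, chosen from within the 20-100 range quoted above.

```python
REORG_BUFFER = 64  # illustrative; actual backoff is chain-dependent (20-100)

def export_upper_bound(chain_tip: int) -> int:
    """Highest block eligible for export: staying behind the tip lets
    short-range reorgs settle before data leaves the indexer."""
    return chain_tip - REORG_BUFFER

def upsert_block(table: dict, row: dict) -> None:
    """Idempotent ingest keyed on block_number (the table's primary key):
    re-ingesting the same block replaces the row rather than duplicating it."""
    table[row["block_number"]] = row
```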

Auditing & API

BlockDB exposes dedicated verification endpoints so clients can independently validate exported data:

Verify Receipt Root

Recompute and compare canonical receipt trie roots against the chain-supplied value.

Verify Logs Bloom

Regenerate logs bloom filters for targeted block or transaction reconciliations.
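As a sketch of client-side usage, the helpers below build request URLs for the two endpoints. The paths and parameter names are hypothetical and should be taken from the actual API reference.

```python
from urllib.parse import urlencode

# Hypothetical base URL and routes; consult the BlockDB API reference
# for the real endpoint paths and parameter names.
BASE = "https://api.blockdb.io"

def verify_receipt_root_url(block_number: int) -> str:
    """Build a request URL for the receipt-root verification endpoint."""
    return f"{BASE}/v1/verify/receipt-root?{urlencode({'block_number': block_number})}"

def verify_logs_bloom_url(tx_hash: str) -> str:
    """Build a request URL for the logs-bloom verification endpoint."""
    return f"{BASE}/v1/verify/logs-bloom?{urlencode({'tx_hash': tx_hash})}"
```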
For organizations that require deeper insight into the end-to-end verification pipeline or custom validation tooling, contact us at support@blockdb.io.
Last modified on March 31, 2026