Documentation Index
Fetch the complete documentation index at: https://docs.blockdb.io/llms.txt
Use this file to discover all available pages before exploring further.
Overview
BlockDB’s indexers do far more than persist raw blockchain payloads. Every dataset export passes through a multi-layer cryptographic verification pipeline that runs in sequence for every block, every transaction, and every log:

Log RLP Reconstruction
Each log’s contract address, topics, and data are re-encoded from scratch using RLP encoding. This ensures every byte of every event is accounted for — no truncation, no reordering, no partial processing.
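To illustrate what byte-level reconstruction involves, here is a minimal pure-Python RLP encoder; a log re-encodes as the list [address, topics, data]. The helper names are ours for illustration, not part of BlockDB:

```python
def rlp_encode(item) -> bytes:
    """Minimal RLP encoder for bytes and (nested) lists of bytes."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item  # a single byte below 0x80 encodes as itself
        return _with_length(item, 0x80)
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        return _with_length(payload, 0xC0)
    raise TypeError("RLP accepts bytes or lists")

def _with_length(payload: bytes, offset: int) -> bytes:
    # Short payloads (<= 55 bytes) get a one-byte prefix; longer ones get a
    # length-of-length prefix followed by the big-endian length.
    if len(payload) <= 55:
        return bytes([offset + len(payload)]) + payload
    length_bytes = len(payload).to_bytes((len(payload).bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes + payload

def encode_log(address: bytes, topics: list, data: bytes) -> bytes:
    """Re-encode one log exactly as it appears in a receipt: [address, topics, data]."""
    return rlp_encode([address, topics, data])
```

Because RLP is deterministic, the re-encoded bytes either match the original payload exactly or expose any truncation or reordering.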
Log Bloom Validation
Each transaction’s 2048-bit bloom filter is recomputed by hashing the emitting contract address and every indexed topic. The result is compared bit-for-bit against the on-chain value. A single differing bit fails validation — catching subtle corruption that receipt checks alone would miss.
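The recomputation can be sketched as follows. Ethereum blooms set three 11-bit indices derived from byte pairs of each element's Keccak-256 hash; since Python's standard library only ships NIST SHA3-256, it stands in here, so outputs will not match on-chain blooms, and the byte layout within the 256-byte filter is simplified relative to the on-chain convention:

```python
import hashlib

BLOOM_BITS = 2048  # Ethereum blooms are 2048 bits (256 bytes)

def _hash(data: bytes) -> bytes:
    # Ethereum specifies Keccak-256; stdlib sha3_256 (NIST padding) is a
    # stand-in, so results will NOT match on-chain bloom values.
    return hashlib.sha3_256(data).digest()

def _bit_indices(element: bytes):
    # Three 11-bit indices taken from hash byte pairs (0,1), (2,3), (4,5).
    h = _hash(element)
    return [((h[2 * i] << 8) | h[2 * i + 1]) % BLOOM_BITS for i in range(3)]

def bloom_add(bloom: bytearray, element: bytes) -> None:
    for idx in _bit_indices(element):
        bloom[idx // 8] |= 1 << (idx % 8)

def bloom_contains(bloom: bytearray, element: bytes) -> bool:
    return all(bloom[idx // 8] & (1 << (idx % 8)) for idx in _bit_indices(element))

def recompute_bloom(address: bytes, topics: list) -> bytearray:
    # A transaction's bloom covers the emitting address and every indexed topic.
    bloom = bytearray(BLOOM_BITS // 8)
    bloom_add(bloom, address)
    for topic in topics:
        bloom_add(bloom, topic)
    return bloom
```

Because the construction is deterministic, a recomputed bloom can be compared bit-for-bit against a stored one; any differing bit signals corruption.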
Receipts Root Recomputation
The reconstructed receipts are assembled into a Merkle Patricia Trie, and the Keccak-256 root hash of the rebuilt trie is compared against the chain-supplied receiptsRoot. Any mismatch triggers an immediate blocking incident.

Stored Evidence
Each successfully verified block writes two immutable breadcrumbs to blockdb_evm.b0101_blocks_v1:
| Column | Description |
|---|---|
| _computed_receipt_root | The receipts root recomputed from verified transaction receipts. |
| _computed_receipt_timestamp_utc | UTC timestamp when the recomputation occurred. |
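The compare-then-record step can be sketched in Python. The column names follow the table above; the function name and the exception standing in for a blocking incident are our assumptions:

```python
from datetime import datetime, timezone

def record_receipt_evidence(computed_root: str, chain_root: str) -> dict:
    """Compare a recomputed receipts root against the chain-supplied
    receiptsRoot and, on success, produce the evidence columns written to
    blockdb_evm.b0101_blocks_v1."""
    if computed_root.lower() != chain_root.lower():
        # In BlockDB a mismatch raises a blocking incident; here we fail loudly.
        raise ValueError(f"receiptsRoot mismatch: {computed_root} != {chain_root}")
    return {
        "_computed_receipt_root": computed_root,
        "_computed_receipt_timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }
```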
Archive Re-validation
In addition to live verification, BlockDB runs offline re-validation of data already exported to the persistence layer — replaying all verification logic using only stored data, without depending on live node responses:

- Long-term consistency across cold storage and replicas is preserved
- Block continuity and parent hash linkage remain intact after archival compaction
- Historical reproducibility is maintained, proving that what left the exporter stays self-consistent at rest
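The continuity portion of that replay can be sketched as a pure function over stored rows; the field names are illustrative, not BlockDB's actual schema:

```python
def check_continuity(blocks: list) -> None:
    """Replay continuity checks over stored blocks (sorted by number):
    numbers must be consecutive and each parent_hash must equal the
    previous block's hash. Raises on any gap or orphan."""
    for prev, cur in zip(blocks, blocks[1:]):
        if cur["number"] != prev["number"] + 1:
            raise ValueError(f"gap before block {cur['number']}")
        if cur["parent_hash"] != prev["hash"]:
            raise ValueError(f"orphaned block {cur['number']}: parent hash mismatch")
```

Because it reads only stored fields, the same check runs identically against cold storage, replicas, or compacted archives.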
Integrity Guarantees
The combination of log RLP reconstruction, bloom validation, receipts-root recomputation, and block continuity checks makes BlockDB’s verification pipeline highly resistant to silent corruption:

| Safeguard | Ensures | Prevents |
|---|---|---|
| Logs RLP rebuilt from each log’s address, topics, and data | Log-level completeness and byte-accurate reproduction of on-chain payloads | Missing or truncated logs |
| Receipts root recomputed from all transactions | Transaction-level integrity and Merkle proof consistency | Missing transactions |
| Block continuity validation | Continuous block sequence and parent hash linkage | Missing or orphaned blocks |
| Archive re-validation using exported database data | Long-term consistency and reproducibility independent of live nodes | Data drift, replica inconsistency, or corruption after export |
De-duplication & Reorg Safety
BlockDB enforces stable, domain-correct primary keys to prevent duplicates across ingestion — mirroring how identity is defined on-chain and guaranteeing deterministic joins and idempotent re-ingestion:

- blockdb_evm.b0101_blocks_v1: primary key block_number
- blockdb_evm.b0102_transactions_v1: primary key tx_hash
- blockdb_evm.b0103_logs_v1: composite key (block_number, tx_index, log_index)
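The effect of a composite key on re-ingestion can be demonstrated with an in-memory SQLite stand-in for the logs table (simplified schema; BlockDB's actual storage engine and DDL may differ):

```python
import sqlite3

# In-memory stand-in for blockdb_evm.b0103_logs_v1: the composite primary
# key makes replaying a block idempotent rather than duplicating rows.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE b0103_logs_v1 (
        block_number INTEGER,
        tx_index     INTEGER,
        log_index    INTEGER,
        payload      TEXT,
        PRIMARY KEY (block_number, tx_index, log_index)
    )
""")

def ingest_log(row: tuple) -> None:
    # INSERT OR IGNORE drops exact-key duplicates, so re-running an export
    # over the same range is a no-op instead of a double-count.
    conn.execute("INSERT OR IGNORE INTO b0103_logs_v1 VALUES (?, ?, ?, ?)", row)

for _ in range(2):  # simulate re-ingesting the same block twice
    ingest_log((100, 0, 0, "Transfer"))
count = conn.execute("SELECT COUNT(*) FROM b0103_logs_v1").fetchone()[0]
```

Keying on on-chain identity rather than an auto-increment surrogate is what makes joins across the three tables deterministic.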
Auditing & API
BlockDB exposes dedicated verification endpoints so clients can independently validate exported data:

Verify Receipt Root
Recompute and compare canonical receipt trie roots against the chain-supplied value.
Verify Logs Bloom
Regenerate logs bloom filters for targeted block or transaction reconciliations.
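A client-side reconciliation might look like the sketch below. The endpoint path and response field are hypothetical placeholders (consult the API reference for the real ones); only the root-comparison helper is meant as-is:

```python
import json
import urllib.request

def roots_match(local_root: str, chain_root: str) -> bool:
    """Compare two receipt roots, tolerating 0x prefixes and case."""
    norm = lambda r: r.lower().removeprefix("0x")
    return norm(local_root) == norm(chain_root)

def verify_receipt_root(base_url: str, block_number: int, local_root: str) -> bool:
    # Hypothetical endpoint shape, for illustration only.
    url = f"{base_url}/verify/receipt-root/{block_number}"
    with urllib.request.urlopen(url) as resp:
        chain_root = json.load(resp)["receiptsRoot"]
    return roots_match(local_root, chain_root)
```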
For organizations that require deeper insight into the end-to-end verification pipeline or custom validation tooling, contact us at support@blockdb.io.