When an API call fails, categorize the error quickly and follow a repeatable playbook. This guide walks through the diagnostic flow our support team follows when triaging issues with the EVM data APIs.

Error taxonomy

Break incoming reports into one of the following buckets to focus your debugging effort:
| Category | Typical HTTP codes | Symptoms | Primary checks |
| --- | --- | --- | --- |
| Authentication and authorization | 401, 403 | Missing or expired credentials | Token validity, project scope, request signing |
| Rate limiting and quotas | 429 | Spiky error rates tied to throughput | Usage metrics, retry headers, adaptive backoff |
| Data integrity mismatches | 409, 422, 5xx | Hash, state, or balance conflicts | Chain re-orgs, data freshness, verification endpoints |
| Platform availability | 500, 502, 503 | Broad failure across multiple endpoints | Access & SLA dashboard status, incident comms |
Use provider logs to confirm the status code actually surfaced to clients. If the error does not fit a known bucket, capture request IDs and move straight to the escalation path.
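If request IDs live in your application logs, a quick grep pulls them together for the escalation bundle (a sketch; the requestId field name and log file are hypothetical):
grep -Eo 'requestId=[A-Za-z0-9-]+' api.log | sort -u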

Authentication issues

  1. Validate credentials
    • Confirm that the API key or bearer token is present and matches an active project in the dashboard.
    • Check X-Project-Id headers when scoped access is enforced.
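    • A status-only probe separates credential failures from payload problems (a sketch; the lineage record endpoint is used here only as a convenient authenticated call, and $BLOCKDB_PROJECT_ID is an illustrative variable name):
# Expect 401/403 for bad credentials or scope; any other status means auth itself is fine.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $BLOCKDB_TOKEN" \
  -H "X-Project-Id: $BLOCKDB_PROJECT_ID" \
  "https://api.blockdb.io/evm/lineage/record?transactionHash=0x..."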
  2. Reproduce with a clean request
    • Run a simple call that requires no optional parameters:
curl -X POST https://api.blockdb.io/evm/verify/receipt-root \
  -H "Authorization: Bearer $BLOCKDB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"transactionHash": "0x..."}'
  3. Reset or rotate secrets
    • If an integration recently rotated keys, make sure the new secret is deployed to all environments.
    • Advise customers to revoke leaked keys immediately and regenerate via the dashboard.
Tip: Authentication errors after a long-running session often trace back to cached tokens. Force a refresh and retest before escalating.

Rate limiting

  1. Inspect response headers
    • X-RateLimit-Limit, X-RateLimit-Remaining, and Retry-After provide the primary diagnostic signals.
    • Graph usage trends from logs to confirm the limit being hit aligns with the contract.
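    • To read those headers directly, dump them and filter (a sketch using the header names above):
curl -s -D - -o /dev/null \
  -H "Authorization: Bearer $BLOCKDB_TOKEN" \
  "https://api.blockdb.io/evm/lineage/record?transactionHash=0x..." \
  | grep -iE 'x-ratelimit-(limit|remaining)|retry-after'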
  2. Implement adaptive retries
    • Back off exponentially on 429 responses and respect the provided retry window.
    • Example resilient fetch with curl and sleep for quick triage:
for attempt in 1 2 3; do
  status=$(curl -s -D /tmp/headers.txt -o /tmp/resp.json -w "%{http_code}" \
    -H "Authorization: Bearer $BLOCKDB_TOKEN" \
    "https://api.blockdb.io/evm/lineage/record?transactionHash=0x...")
  if [ "$status" -eq 429 ]; then
    # Honor Retry-After when present (assumed to be a seconds value); otherwise back off exponentially.
    wait=$(grep -i '^retry-after:' /tmp/headers.txt | tr -dc '0-9')
    echo "Hit rate limit, backing off..." && sleep "${wait:-$((2 ** attempt))}"
  else
    break
  fi
done
cat /tmp/resp.json
    • The lineage record endpoint is documented in Lineage Record and shares quota with other lineage APIs, so tune retries accordingly.
  3. Request higher limits
    • Collect peak RPS, average RPS, and burst duration before filing a quota increase request.
    • Reference contract terms in the Access & SLA guide when escalating to Customer Success.
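    • A rough way to derive peak and average RPS from an access log (a sketch; assumes an ISO-8601 timestamp in the first field, so the first 19 characters identify a second):
# Count requests per second, then report the peak and the average across active seconds.
awk '{ k = substr($1, 1, 19); if (!(k in c)) secs++; c[k]++; n++ }
     END { for (t in c) if (c[t] > peak) peak = c[t]
           printf "peak rps: %d, avg rps: %.2f\n", peak, n / secs }' access.log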

Data integrity mismatches

  1. Confirm chain context
    • Ensure the request targets the correct chain, height, and environment (e.g., mainnet vs. testnet).
    • Cross-validate block numbers and timestamps against the Lineage Overview datasets.
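    • One way to cross-check height and timestamp is to fetch the canonical block from an EVM JSON-RPC node you trust (a sketch; $RPC_URL is your own node or provider, not a BlockDB endpoint):
curl -s -X POST "$RPC_URL" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBlockByNumber","params":["latest",false]}'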
  2. Verify on-chain proof artifacts
    • Use the verification suite to compare BlockDB responses against canonical on-chain data:
curl -X POST https://api.blockdb.io/evm/verify/logs-bloom \
  -H "Authorization: Bearer $BLOCKDB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"blockHash": "0x..."}'
    • If the bloom filter matches but downstream services still disagree, fetch the lineage parents via /lineage/parents to spot re-orgs or pruning events.
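    • For example (a sketch; this assumes the endpoint lives under the same /evm prefix as the other lineage calls and accepts a blockHash query parameter):
curl -s \
  -H "Authorization: Bearer $BLOCKDB_TOKEN" \
  "https://api.blockdb.io/evm/lineage/parents?blockHash=0x..."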
  3. Check freshness and SLAs
    • Compare the returned block height with the freshness guarantees in Access & SLA.
    • For discrepancies exceeding SLA, gather the verify responses, lineage snapshots, and timestamps before escalating.
Heads-up: Conflicts across verify endpoints (e.g., Verify Overview) often indicate partial propagation after a re-org. Wait one confirmation cycle and re-run proofs.

Escalation paths

  1. Gather diagnostics
    • Capture request IDs, timestamps, endpoint paths, and relevant payloads (redacting secrets).
    • Attach outputs from verification commands and lineage diffs to speed up triage.
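    • Before sharing, scrub bearer tokens from captured traffic (a sketch; adjust the pattern to your log format):
sed -E 's/(Authorization: Bearer )[^" ]+/\1REDACTED/' request.log > request.redacted.log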
  2. Self-service checks
    • Review the real-time status page linked from Access & SLA.
    • Search the incident feed for active maintenance windows affecting verify or lineage endpoints.
  3. Contact support tiers
    • Standard: Email support with the diagnostic bundle and impact assessment.
    • Premium: Page the on-call channel via the escalation hotline; include your contract ID and a summary of rate limit adjustments attempted.
    • Enterprise: Engage your TAM with call notes and immediate mitigation steps taken (e.g., disabled retries or traffic shaping).
  4. Document resolution
    • Update the internal runbook with remediation steps, new log signatures, or necessary rule changes.
    • If product changes were required, file follow-up issues referencing the affected verify and lineage APIs for visibility.
Following this workflow keeps post-mortems short and ensures customers see consistent resolution steps across the support team.