## Error taxonomy

Assign each incoming report to one of the following buckets to focus your debugging effort:

| Category | Typical HTTP codes | Symptoms | Primary checks |
|---|---|---|---|
| Authentication and authorization | 401, 403 | Missing or expired credentials | Token validity, project scope, request signing |
| Rate limiting and quotas | 429 | Spiky error rates tied to throughput | Usage metrics, retry headers, adaptive backoff |
| Data integrity mismatches | 409, 422, 5xx | Hash, state, or balance conflicts | Chain re-orgs, data freshness, verification endpoints |
| Platform availability | 500, 502, 503 | Broad failure across multiple endpoints | Access & SLA dashboard status, incident comms |
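As a quick triage aid, the table above can be sketched as a status-code lookup. This is an illustrative helper, not part of the product: the bucket names come from the table, and because 5xx appears under both integrity and availability, the sketch defaults 5xx to platform availability and lets symptoms override.

```shell
# Hypothetical helper: map an HTTP status code to the triage bucket
# from the error taxonomy table. 5xx defaults to platform availability.
triage_bucket() {
  case "$1" in
    401|403) echo "authentication and authorization" ;;
    429)     echo "rate limiting and quotas" ;;
    409|422) echo "data integrity mismatch" ;;
    5??)     echo "platform availability" ;;
    *)       echo "unclassified" ;;
  esac
}
```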
## Authentication issues

- Validate credentials
  - Confirm that the API key or bearer token is present and matches an active project in the dashboard.
  - Check `X-Project-Id` headers when scoped access is enforced.
- Reproduce with a clean request
  - Run a simple call that requires no optional parameters.
  - A `401` indicates invalid or missing credentials; a `403` signals that the token exists but lacks scope for verify endpoints such as Verify Receipt Root or Verify Logs Bloom.
- Reset or rotate secrets
  - If an integration recently rotated keys, make sure the new secret is deployed to all environments.
  - Advise customers to revoke leaked keys immediately and regenerate them via the dashboard.
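The clean-request check above can be sketched as follows. The `$BASE_URL`, the `/verify/receipt-root` path, and the environment variables are placeholders to adapt to your deployment:

```shell
# Hypothetical probe: call a verify endpoint with no optional parameters
# and report only the HTTP status code. Path and variables are placeholders.
auth_probe() {
  curl -s -o /dev/null -w '%{http_code}' \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "X-Project-Id: ${PROJECT_ID}" \
    "${BASE_URL}/verify/receipt-root"
}

# Interpret the status per the guidance above: 401 = bad credentials,
# 403 = valid token without verify scope.
explain_auth_status() {
  case "$1" in
    401) echo "invalid or missing credentials" ;;
    403) echo "token valid but lacks verify scope" ;;
    2??) echo "credentials OK" ;;
    *)   echo "not an authentication problem (status $1)" ;;
  esac
}
```

Run `explain_auth_status "$(auth_probe)"` after exporting the placeholder variables.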
Tip: Authentication errors after a long-running session often trace back to cached tokens. Force a refresh and retest before escalating.
## Rate limiting

- Inspect response headers
  - `X-RateLimit-Limit`, `X-RateLimit-Remaining`, and `Retry-After` provide the primary diagnostic signals.
  - Graph usage trends from logs to confirm that the limit being hit aligns with the contract.
- Implement adaptive retries
  - Back off exponentially on 429 responses and respect the provided retry window.
  - A resilient fetch can be triaged quickly with `curl` and `sleep`.
  - The lineage record endpoint is documented in Lineage Record and shares quota with other lineage APIs, so tune retries accordingly.
- Request higher limits
  - Collect peak RPS, average RPS, and burst duration before filing a quota increase request.
  - Reference contract terms in the Access and SLA guide when escalating to Customer Success.
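The adaptive-retry guidance above can be sketched with `curl` and `sleep`. The URL you pass in is a placeholder for the lineage endpoint; the loop honors a server-sent `Retry-After` when present and otherwise falls back to doubling delays:

```shell
# Default exponential backoff (seconds) before retry N: 1, 2, 4, 8...
backoff_delay() { echo $(( 1 << $1 )); }

# Fetch with retries on 429, preferring the server's Retry-After window
# over our own schedule. $API_TOKEN and the URL are placeholders.
fetch_with_backoff() {
  local url="$1" attempt=0 max_attempts=5
  while [ "$attempt" -lt "$max_attempts" ]; do
    local headers status wait
    headers=$(mktemp)
    status=$(curl -s -D "$headers" -o /dev/null -w '%{http_code}' \
      -H "Authorization: Bearer ${API_TOKEN}" "$url")
    if [ "$status" != "429" ]; then
      rm -f "$headers"; echo "$status"; return 0
    fi
    # Use Retry-After if the server provided one.
    wait=$(awk 'tolower($1)=="retry-after:" {print $2+0}' "$headers")
    rm -f "$headers"
    sleep "${wait:-$(backoff_delay "$attempt")}"
    attempt=$((attempt + 1))
  done
  echo "still 429 after $max_attempts attempts" >&2; return 1
}
```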
## Data integrity mismatches

- Confirm chain context
  - Ensure the request targets the correct chain, height, and environment (e.g., mainnet vs. testnet).
  - Cross-validate block numbers and timestamps against the Lineage Overview datasets.
- Verify on-chain proof artifacts
  - Use the verification suite to compare BlockDB responses against canonical on-chain data.
  - If the bloom filter matches but downstream services still disagree, fetch the lineage parents via `/lineage/parents` to spot re-orgs or pruning events.
- Check freshness and SLAs
  - Compare the returned block height with the freshness guarantees in Access & SLA.
  - For discrepancies exceeding the SLA, gather the verify responses, lineage snapshots, and timestamps before escalating.
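The proof-artifact check above can be sketched as a root comparison. The comparison itself is generic; the endpoint path, JSON field, and `jq` wiring shown in the comments are assumptions to adapt:

```shell
# Compare a receipt root reported by BlockDB with one from a canonical
# source; print "match" or a diagnostic mismatch line for the ticket.
compare_receipt_roots() {
  local blockdb_root="$1" canonical_root="$2"
  if [ "$blockdb_root" = "$canonical_root" ]; then
    echo "match"
  else
    echo "mismatch: blockdb=$blockdb_root canonical=$canonical_root"
  fi
}

# Hypothetical wiring (uncomment and adapt to your endpoints):
# blockdb_root=$(curl -s "$BASE_URL/verify/receipt-root?block=$HEIGHT" | jq -r '.receiptRoot')
# canonical_root=$(query_canonical_node "$HEIGHT")
# compare_receipt_roots "$blockdb_root" "$canonical_root"
```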
Heads-up: Conflicts across verify endpoints (e.g., Verify Overview) often indicate partial propagation after a re-org. Wait one confirmation cycle and re-run proofs.
## Escalation paths

- Gather diagnostics
  - Capture request IDs, timestamps, endpoint paths, and relevant payloads (redacting secrets).
  - Attach outputs from verification commands and lineage diffs to speed up triage.
- Self-service checks
  - Review the real-time status page linked from Access & SLA.
  - Search the incident feed for active maintenance windows affecting verify or lineage endpoints.
- Contact support tiers
  - Standard: Email support with the diagnostic bundle and impact assessment.
  - Premium: Page the on-call channel via the escalation hotline; include your contract ID and a summary of rate limit adjustments attempted.
  - Enterprise: Engage your TAM with call notes and immediate mitigation steps taken (e.g., disabled retries or traffic shaping).
- Document resolution
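The secret-redaction step under Gather diagnostics can be sketched as a filter. The two patterns below are illustrative only; extend them to match your payload schema:

```shell
# Strip bearer tokens and API-key values from captured payloads before
# attaching them to a support ticket. Patterns are examples, not exhaustive.
redact_secrets() {
  sed -E \
    -e 's/(Authorization: Bearer )[A-Za-z0-9._-]+/\1[REDACTED]/g' \
    -e 's/("api_key" *: *")[^"]+/\1[REDACTED]/g'
}
```

Typical use: `redact_secrets < request.log > request.redacted.log`.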