Graph Node on Kakarot zkEVM — AWS infrastructure, Docker orchestration, production indexing.
Kakarot’s EVM-compatible L2 needed dependable ERC-20 transfer history and a Graph Node footprint that could ride RPC upgrades: not only a WASM mapping, but infrastructure that survives devnet-style churn.
An AssemblyScript subgraph indexing Transfer(address indexed, address indexed, uint256) events from block 1000 onward; Graph Node on AWS with hardened Docker Compose, health checks, RPC failover, and pinned subgraph manifests for reproducible deploys.
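The mapping side of this can be sketched as a minimal AssemblyScript handler. This is a hypothetical reconstruction, not the project's actual code: it assumes codegen bindings for a datasource named `Token` and a `Transfer` entity carrying tx metadata; it compiles via `graph codegen` / `graph build`, not a plain TypeScript toolchain.

```typescript
// Hypothetical mapping.ts sketch, assuming generated bindings for a
// datasource called "Token" and a Transfer entity in schema.graphql.
import { Transfer as TransferEvent } from "../generated/Token/Token";
import { Transfer } from "../generated/schema";

export function handleTransfer(event: TransferEvent): void {
  // One entity per log: txHash-logIndex is a stable, unique id.
  let id = event.transaction.hash.toHexString() + "-" + event.logIndex.toString();
  let transfer = new Transfer(id);
  transfer.from = event.params.from;
  transfer.to = event.params.to;
  transfer.value = event.params.value;
  // Tx metadata stored alongside the transfer for downstream consumers.
  transfer.blockNumber = event.block.number;
  transfer.timestamp = event.block.timestamp;
  transfer.txHash = event.transaction.hash;
  transfer.save();
}
```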
Bootstrap indexing from an early block with sparse history, keep Graph Node's head aligned with a moving Kakarot RPC, and containerize on AWS with enough observability that outages were obvious before subgraph consumers noticed.
Define a Transfer entity schema with transaction metadata, starting from block 1000 on Kakarot Sepolia.
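The entity and start block described here can be sketched as a schema plus manifest fragment. Field names, the network identifier, and the contract address are illustrative assumptions; `startBlock: 1000` matches the chosen start block.

```graphql
# Hypothetical schema.graphql: an immutable Transfer entity with tx metadata.
type Transfer @entity(immutable: true) {
  id: ID!
  from: Bytes!
  to: Bytes!
  value: BigInt!
  blockNumber: BigInt!
  timestamp: BigInt!
  txHash: Bytes!
}
```

```yaml
# subgraph.yaml fragment: startBlock skips the empty early history.
dataSources:
  - kind: ethereum
    name: Token
    network: kakarot-sepolia      # assumed network name
    source:
      address: "0x0000000000000000000000000000000000000000"  # placeholder
      abi: Token
      startBlock: 1000
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities: [Transfer]
      abis:
        - name: Token
          file: ./abis/Token.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```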
Compose Graph Node, IPFS, and Postgres with persistent volumes and restart policies.
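A compose file along these lines is a reasonable sketch of the stack, assuming the stock `graphprotocol/graph-node` image; the RPC URL and credentials are placeholders, not the project's actual values.

```yaml
# Sketch of docker-compose.yml for Graph Node + IPFS + Postgres.
version: "3.8"
services:
  graph-node:
    image: graphprotocol/graph-node
    restart: unless-stopped
    depends_on: [ipfs, postgres]
    ports:
      - "8000:8000"   # GraphQL queries
      - "8020:8020"   # admin / deploy
      - "8030:8030"   # index-node status
      - "8040:8040"   # Prometheus metrics
    environment:
      postgres_host: postgres
      postgres_user: graph
      postgres_pass: graph            # use a secret store in production
      postgres_db: graph-node
      ipfs: "ipfs:5001"
      ethereum: "kakarot-sepolia:https://rpc.example.org"  # assumed RPC URL
      GRAPH_LOG: info
  ipfs:
    image: ipfs/kubo:latest
    restart: unless-stopped
    volumes: [ipfs-data:/data/ipfs]
  postgres:
    image: postgres:15
    restart: unless-stopped
    environment:
      POSTGRES_USER: graph
      POSTGRES_PASSWORD: graph
      POSTGRES_DB: graph-node
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"  # graph-node needs C locale
    volumes: [pg-data:/var/lib/postgresql/data]
volumes:
  ipfs-data:
  pg-data:
```

Persistent volumes keep Postgres and IPFS state across container restarts; `restart: unless-stopped` handles transient crashes without masking deliberate shutdowns.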
Configure failover RPC endpoints, reconnect backoff, and alerts that fire when head lag exceeds the SLO.
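The failover policy reduces to a few small decisions, sketched here as pure functions. This is a minimal illustration under assumed names (`headLag`, `backoffDelayMs`, `shouldFailover`), not the production code.

```typescript
// How far the indexer trails the chain head, in blocks.
export function headLag(chainHead: number, indexedHead: number): number {
  return Math.max(0, chainHead - indexedHead);
}

// Capped exponential backoff between reconnect attempts.
export function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Rotate to the next RPC endpoint once lag breaches the SLO.
export function shouldFailover(lagBlocks: number, sloBlocks: number): boolean {
  return lagBlocks > sloBlocks;
}
```

The same lag number drives both alerting and failover, so operators and the rotation logic always agree on what "behind" means.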
Run smoke queries against known token contracts; document the upgrade path for new RPC URLs.
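A smoke query along these lines would exercise the deployed endpoint; the field names assume a Transfer entity with from/to/value and block metadata, and Graph Node's auto-pluralized `transfers` collection.

```graphql
# Hypothetical smoke query: newest transfers first, non-empty result expected
# once indexing has passed the start block.
{
  transfers(first: 5, orderBy: blockNumber, orderDirection: desc) {
    id
    from
    to
    value
    blockNumber
  }
}
```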
Indexing plus operational deployment for Kakarot Sepolia-era infrastructure — subgraph correctness and node uptime treated as one deliverable because dead GraphQL with a healthy chain is still a failed integration.
Early L2 RPCs change often; owning the indexer stack meant we could pause/resync without vendor tickets.
Docker-compose parity between staging and prod cut debugging time when IPFS pinning acted up.
Scope stayed narrow to ship reliable history before expanding to complex protocols.
Explicit start block avoided reprocessing genesis noise on young chains.
We take on a small number of projects at a time. If the problem is hard, we're interested.