Impact Study: HyperEVM Deployment

Full chain coverage in under 2 hours. It used to take weeks.

How agent-driven deployment turned a multi-week engineering project into a Tuesday morning.

2 hrs
Full deployment
20+
Chains deployed
42
Agent skills
100+
DeFi integrations
01 / The Situation

The work is well-understood. It's also enormous.

Every EVM chain runs through the same pipeline: ingest blocks and transactions, decode contracts, build token tables, wire up prices, label wallets. We've done this over 20 times. The work is well-understood. It's also enormous.

Two years ago, deploying a new chain meant 2–4 weeks of dedicated engineering time. An engineer would manually configure infrastructure, write ingestion logic, build and test every model, chase down edge cases. Careful, competent work that took as long as it took.

HyperEVM was the latest. We curated it in under 2 hours.

"The work is well-understood. It's also enormous."
02 / The Challenge

The problem was never just skill. It was time.

Every deployment followed the same pattern, but each step waited on a human to execute it, verify it, and move on. RPC endpoint testing. Block rate sampling. Ingestion configuration. Schema validation against a block explorer. ABI collection. Model building. Price feeds. Balance snapshots. Quality checks. Automated workflow setup.
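To make one of those steps concrete: block rate sampling reduces to reading the timestamps of two sampled block headers and dividing by the block-height span. A minimal sketch, assuming standard `eth_getBlockByNumber`-style header fields; the `average_block_time` helper is illustrative, not Flipside's actual tooling:

```python
def average_block_time(first_block, last_block):
    """Estimate seconds per block from two sampled block headers.

    Each block is a dict with 'number' and 'timestamp' (unix seconds),
    the shape a JSON-RPC eth_getBlockByNumber call returns.
    """
    span = last_block["number"] - first_block["number"]
    if span <= 0:
        raise ValueError("blocks must be sampled in ascending order")
    return (last_block["timestamp"] - first_block["timestamp"]) / span

# Two blocks sampled 1,000 blocks and 2,000 seconds apart imply
# roughly a 2-second chain, which drives ingestion batch sizing.
rate = average_block_time(
    {"number": 100_000, "timestamp": 1_700_000_000},
    {"number": 101_000, "timestamp": 1_700_002_000},
)
print(rate)  # 2.0
```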

An experienced engineer knows every step. But knowing what to do and having the hours to do it are different problems.

The deployment pipeline evolved in three stages:

  1. Manual: 2–4 weeks. Engineers hand-built everything. Siloed repos, manual cloud provisioning, knowledge in people's heads.
  2. Shared EVM codebase: 2–4 days. A standardized package that Flipside built for all EVM chains cut redundant work, but deployment still required manual orchestration across dozens of steps.
  3. Agent skills: 1–2 hours. The full playbook encoded into structured skills. Agents orchestrate everything, engineers review and approve.

Stage 01

2–4 weeks

Manual

Stage 02

2–4 days

FSC-EVM Package

Stage 03 · Now

1–2 hours

Agent Skills

"Knowing what to do and having the hours to do it are different problems."
03 / The Solution

42 skills. 20+ deployments. One system.

We encoded what our engineers know. 42 agent skills capture what the team learned across 20+ chain deployments: the decision trees, validation checks, and recovery patterns that experienced engineers carry in their heads. Three internal tools make it work:

  1. Streamline: per-chain infrastructure isolation. Each chain gets its own AWS environment. Issues on one chain never cascade into another.
  2. LiveQuery: real-time node access from Snowflake. UDFs that call RPC nodes on demand directly from SQL, the agent's primary tool for probing chain characteristics.
  3. FSC-EVM: one codebase, every EVM chain. A single dbt package powering all chains through a 3-layer variable system. 70+ DeFi protocol integrations built in.
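For a sense of what those on-demand probes look like at the wire level, here is a sketch of the standard Ethereum JSON-RPC exchange a LiveQuery-style UDF wraps. The helper names are hypothetical, and the real UDFs run inside Snowflake rather than Python; the method name and payload shape follow the Ethereum JSON-RPC spec:

```python
import json

def rpc_request(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 payload of the kind a node-probing UDF
    posts to an RPC endpoint (method names are standard Ethereum RPC)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params or [],
    })

def parse_block_number(response_body):
    """Decode the hex-encoded result of an eth_blockNumber reply."""
    result = json.loads(response_body)["result"]
    return int(result, 16)

payload = rpc_request("eth_blockNumber")
# A healthy endpoint replies with something like:
reply = '{"jsonrpc":"2.0","id":1,"result":"0x10d4f"}'
print(parse_block_number(reply))  # 68943
```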

The deployment runs in four phases, with verification gates between every step:

  1. Pre-deployment: repo and secrets setup, Streamline deploy, RPC testing, block rate sampling.
  2. Core ingestion: LiveQuery and UDFs, Streamline invocation, bronze data landing. Validated against the block explorer.
  3. Core models and ABIs: silver/gold models, ABI collection, token metadata reads.
  4. Decoded and prices: decoded logs, token transfers, hourly prices, balances and chain stats. DQ checks pass, GHA workflows start.
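The gate pattern behind those four phases is simple: run a phase, verify it, and halt before the next phase starts if verification fails. A toy sketch under that assumption; the phase names and checks are illustrative stand-ins, not the production pipeline:

```python
def run_pipeline(phases):
    """Run deployment phases in order. Each phase carries a
    verification gate that must pass before the next phase runs."""
    completed = []
    for name, execute, verify in phases:
        execute()
        if not verify():
            return completed, f"halted: {name} failed verification"
        completed.append(name)
    return completed, "deployed"

# Toy state standing in for bronze-layer row counts.
state = {"bronze_rows": 0}

phases = [
    ("core_ingestion",
     lambda: state.update(bronze_rows=1_000),
     lambda: state["bronze_rows"] > 0),   # validated against the explorer
    ("decoded_and_prices",
     lambda: None,
     lambda: True),                       # DQ checks pass
]

print(run_pipeline(phases))
```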

The engineer reviews and authorizes. The agent executes and verifies. Pair-programming, not autopilot.


EVM Chain Deployer

Deploys full EVM data pipelines end-to-end. Provisions infrastructure, ingests blocks, builds models, wires up prices and labels.

Tools: streamline, livequery, snowflake_admin

Skills: rpc_testing, block_sampling, abi_collection, model_validation, dq_checks, +37 more

Sources: FSC-EVM, Block Explorers, Node Vault


"The engineer reviews and authorizes. The agent executes and verifies."
04 / The Outcome

HyperEVM went from zero to full production coverage.

Full coverage. Blocks, transactions, decoded events, token transfers, prices, balances, labeled wallets. The same schema that runs on Ethereum, Base, Arbitrum, and every other EVM chain in the platform.

100+ DeFi protocol integrations inherited from the shared codebase. 700M+ labeled wallets carried over automatically. An analyst querying HyperEVM gets the same table structure and data quality as any chain we've supported for years.

Each deployment also feeds back into the skill library. Infrastructure quirks, provider-specific edge cases, recovery patterns from partial failures, all captured. The twenty-first deployment is faster and more resilient than the twentieth.

"The twenty-first deployment is faster and more resilient than the twentieth."
05 / The Takeaway

Speed matters, but it's not the real story.

The real story is what happens at the edges. Infrastructure goes down. RPC providers have quirks. Ingestion hits partial state. In the manual era, an engineer handled each of these by pattern-matching against experience. Now that experience is encoded. The agent handles graceful recovery the same way the engineer would, because the engineer taught it how.

That's what compounding returns look like in data infrastructure. Faster deploys, yes, but also deeper memory. Every chain we add makes the system better at adding the next one.

Next Steps

7 trillion+ rows. 20+ chains. Hours to deploy the next one.

The agent-driven pipeline is how we curate chains now. See how our standardized schemas, institutional knowledge, and automated deployment work across every EVM chain we support.