Raw blocks, transactions, and logs become structured, labeled, analysis-ready datasets.
Flipside transforms your chain's raw data into curated tables covering DeFi, staking, bridges, stablecoins, and 70+ protocol integrations. Delivered through Snowflake and Flipspace. EVM chains go live in weeks, not months.
Flipside turns this...
0xa9059cbb00000000000000007a250d56...dacb4c659f2488d
topics: [0xddf252ad...] data: 0x0000000000...
trace_address: [0,1,3] call_type: delegatecall
Hex-encoded logs, nested traces, no labels, no USD values
...into this
ez_dex_swaps
WETH → USDC | $47,291 | Uniswap V3
dim_labels
Wintermute | Market Maker | CEX
ez_bridge_activity
Base → Arbitrum | 500K USDC | Across
Decoded tables, entity labels, USD values, ready to query
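To make the before/after concrete, here is a minimal sketch (not Flipside's actual pipeline code) of the kind of decoding the curation layer performs at scale: turning a raw ERC-20 Transfer log into structured fields. The topic hash is the real Transfer event signature; the addresses and amount are illustrative.

```python
# Keccak-256 hash of "Transfer(address,address,uint256)" — the topic0 of
# every ERC-20 Transfer event, as shown in the raw log above.
TRANSFER_SIG = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

def decode_transfer(topics: list[str], data: str) -> dict:
    """Decode a raw ERC-20 Transfer event log into structured fields."""
    assert topics[0] == TRANSFER_SIG, "not a Transfer event"
    # Indexed params live in topics; addresses are the low 20 bytes
    # of each 32-byte topic word.
    sender = "0x" + topics[1][-40:]
    recipient = "0x" + topics[2][-40:]
    # The non-indexed amount is a 32-byte big-endian integer in `data`.
    raw_amount = int(data, 16)
    return {"from": sender, "to": recipient, "raw_amount": raw_amount}

# Illustrative log: 100 tokens (at 18 decimals) transferred between two addresses.
log = {
    "topics": [
        TRANSFER_SIG,
        "0x0000000000000000000000007a250d5630b4cf539739df2c5dacb4c659f2488d",
        "0x000000000000000000000000ab5801a7d398351b8be11c439e05c5b3259aec9b",
    ],
    "data": "0x0000000000000000000000000000000000000000000000056bc75e2d63100000",
}

decoded = decode_transfer(log["topics"], log["data"])
print(decoded["raw_amount"] // 10**18)  # → 100
```

Curated tables like ez_dex_swaps are the result of running this kind of decoding, plus label joins and USD pricing, across every log on the chain.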
01 / Two paths to curated data
EVM chains benefit from a shared codebase that makes onboarding fast. Non-EVM chains get dedicated engineering with the same end result: structured, queryable data in Snowflake.
Standardized infrastructure
Flipside runs one unified pipeline for every EVM chain — same macros, same templates, same output. When a new EVM chain comes online, it inherits all existing protocol integrations and curated models automatically. No custom engineering per chain. That consistency is why EVM onboarding is measured in weeks, not months.
Currently live on 12+ EVM chains
Ethereum, Polygon, Arbitrum, Optimism, Base, BSC, Avalanche, Gnosis, Flow EVM, Ink, Aurora, and more
Custom-built pipelines
Solana, Tron, Hyperliquid, Flow Cadence — each has unique data structures, transaction models, and protocol behaviors. No shared package works here. Flipside builds dedicated curation pipelines, often partnering with external data providers to accelerate base data ingestion.
Complexity factors
Chain age, historical data volume, transaction throughput, and non-standard data models all affect timeline and cost
02 / What you get
Every curated chain ships with the same layers of data. The goal is simple: an analyst with SQL access should be able to answer any question about your chain's activity without building their own pipeline.
Blocks, transactions, traces, event logs, and decoded contract calls. The foundation everything else builds on.
Swaps, lending, staking, bridges, stablecoins, TVL, NFT activity, and governance. EVM chains include 70+ protocol integrations.
700M+ addresses classified across chains. Know whether a wallet belongs to an exchange, a DeFi protocol, a fund, or an individual.
Every transaction table includes USD amounts at time of execution. No external price oracle stitching. Decimal adjustments handled.
Your chain's data fits into Flipside's crosschain schema. Analysts can compare activity across 20+ chains using the same table structures and column names.
Data shares, marketplace listings, and S3 pipelines. Enterprise teams access your chain's data through the tools they already use.
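The decimal adjustment and USD pricing mentioned above are exactly the stitching analysts otherwise do by hand. A minimal sketch of the conversion, with hypothetical example values:

```python
def to_usd(raw_amount: int, decimals: int, usd_price: float) -> float:
    """Convert a raw on-chain integer amount to a USD value.

    On-chain token amounts are stored as integers scaled by the token's
    decimals; curated tables apply this adjustment and the price at time
    of execution so the analyst never has to.
    """
    return (raw_amount / 10**decimals) * usd_price

# USDC uses 6 decimals, so a raw amount of 500_000_000_000 is 500,000 USDC.
print(to_usd(500_000_000_000, 6, 1.00))  # → 500000.0
```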
03 / How it works
The process depends on whether your chain is EVM-compatible, but the milestones are the same.
We review your chain's architecture (EVM vs. non-EVM), data volume, key protocols, and priority use cases. For EVM chains this is fast; we've done it 12+ times. For non-EVM chains we map the unique data structures and identify the right ingestion approach.
Streamline (Flipside's ingestion platform) spins up per-chain, per-source pipelines in isolated AWS environments. Core data (blocks, transactions, traces, logs) starts flowing. For EVM chains, this takes hours. For non-EVM, days to weeks depending on historical depth.
Raw data gets transformed into curated tables: DeFi swaps decoded, entity labels applied, USD values computed, protocol-specific models built. EVM chains inherit 70+ integrations from the shared pipeline automatically. Non-EVM chains get equivalent coverage built custom.
Curated data goes live through Snowflake data shares and marketplace listings. It's also accessible in Flipspace for AI-powered analysis, automated reports, and monitoring. Your ecosystem's analysts, researchers, and institutions can start querying immediately.
04 / Protocol-specific curation
Beyond full-chain curation, Flipside builds dedicated data models for specific protocols. Every metric queryable in seconds through standard SQL.
A protocol like Marinade Finance comes to Flipside and says: we need to analyze our staking activity, track whale behavior, and understand our user base. Flipside builds dedicated tables for that protocol — not generic chain-level tables, but models specifically designed around that protocol's contracts and activity patterns.
Need something beyond standard chain or protocol curation? Flipside also builds and manages custom data models for enterprise teams: simplified views, custom aggregations, or proprietary data pipelines delivered as private Snowflake shares. Your SQL, our infrastructure.
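To show the "standard SQL" claim in miniature, here is a local sqlite3 stand-in, not the real Snowflake schema. The table and column names mirror the style of Flipside's ez_ tables but are hypothetical for this example:

```python
import sqlite3

# Local stand-in for a protocol-specific curated table; in production this
# query would run against Snowflake or Flipspace with the same SQL.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE ez_staking_actions (
        staker_address TEXT, action TEXT, amount_usd REAL
    )
""")
con.executemany(
    "INSERT INTO ez_staking_actions VALUES (?, ?, ?)",
    [
        ("0xwhale1", "stake", 1_200_000.0),
        ("0xretail1", "stake", 450.0),
        ("0xwhale1", "unstake", 300_000.0),
    ],
)

# Whale analysis in plain SQL: total staked per address, largest first.
rows = con.execute("""
    SELECT staker_address, SUM(amount_usd) AS total_usd
    FROM ez_staking_actions
    WHERE action = 'stake'
    GROUP BY staker_address
    ORDER BY total_usd DESC
""").fetchall()
print(rows)  # → [('0xwhale1', 1200000.0), ('0xretail1', 450.0)]
```

No proprietary query language, no SDK: the curated model does the decoding, and the analysis is one GROUP BY away.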
05 / Automated analysis
Curated data is the starting point. Agents are what make it useful without anyone logging in.
Once your chain's data is curated, Flipside can build AI agents that run on a schedule and deliver analysis to Slack, email, or Discord. Each agent handles one job. An ecosystem health agent doesn't also do whale tracking. That specialization is what makes the output reliable enough to act on.
Daily or weekly summaries of active wallets, transaction volume, TVL changes, and protocol-level activity across your chain. Drops into your Slack every morning.
Watch specific wallets or user cohorts. Get alerts when whales move, when retention drops, or when a new protocol starts pulling your users.
Track competitor chains across the same curated schema. Compare TVL growth, new protocol launches, and user migration patterns side by side.
Every chain has unique questions. We build agents around yours: grant recipient monitoring, bridge flow analysis, staking economics, or anything specific to your roadmap.
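The single-job design described above can be sketched in a few lines. This is a hypothetical formatting function, not Flipside's agent code; a real agent would query the curated tables on a schedule and deliver the result via a Slack, email, or Discord integration:

```python
def daily_ecosystem_summary(metrics: dict) -> str:
    """Format one day's chain-health metrics as a chat-ready digest.

    One agent, one job: this function only summarizes ecosystem health.
    Whale tracking or competitor monitoring would be separate agents.
    Metric names and values here are hypothetical.
    """
    lines = ["Daily ecosystem summary"]
    for name, (value, change_pct) in metrics.items():
        arrow = "▲" if change_pct >= 0 else "▼"
        lines.append(f"{name}: {value:,} ({arrow} {abs(change_pct):.1f}%)")
    return "\n".join(lines)

report = daily_ecosystem_summary({
    "Active wallets": (84_120, 3.2),
    "Transactions": (1_904_332, -1.1),
})
print(report)
```

Keeping each agent this narrow is what makes its output predictable enough to act on without a human re-checking the query.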
Agents deliver where your team already works
Scheduled reports and real-time alerts go to Slack, email, or Discord. Your team doesn't need to learn a new tool or check a dashboard. The intelligence comes to them.
Chains live: each inheriting 70+ protocol integrations
EVM onboarding: from first call to full curated data
Protocol integrations: DeFi, NFTs, governance, staking, bridges
Labeled addresses: crosschain entity classification
06 / Why Flipside
Flipside has been curating blockchain data since 2017. The infrastructure, the team, and the protocol relationships all grew out of that single focus.
Agent-driven automation has compressed the pipeline-deployment stage of EVM onboarding from weeks to hours. Shared macros and templates mean each new chain makes the next one faster. Fixes and improvements roll out to every chain at once.
Your chain's data immediately joins the crosschain schema. Analysts can compare your chain's activity against 20+ others using the same SQL patterns.
Once curated, your chain's data is accessible to every Flipside user: researchers, institutions, protocols, and analysts already querying 20+ other chains. Your ecosystem gains an active community of data consumers on day one.
Contract upgrades, new protocol launches, schema changes. Flipside handles ongoing maintenance. Your data stays current without your team managing pipelines.
Flipside's data engineering team was named a Top 50 Data & Analytics Team by the OnCon Icon Awards. The same team that curates data for 20+ chains is the team that will curate yours.
For EVM-compatible chains, core data (blocks, transactions, traces, event logs) is typically available within days. Full curated tables, including DeFi swaps, lending, staking, bridges, and 70+ protocol integrations, go live in 3–4 weeks. The initial pipeline deployment takes 1–2 hours thanks to Flipside's standardized EVM pipeline, which uses shared macros and templates across every chain.
Non-EVM chains like Solana, Tron, or Hyperliquid require custom-built pipelines since each has unique data structures and transaction models. Full indexing and curation takes up to 6 weeks, with older blockchains (1+ year of history) toward the longer end. Partnerships with external data providers can reduce this to roughly 1 month for base data.
The standard output includes decoded contracts, DeFi activity (swaps, lending, staking), bridge transfers, stablecoin flows, TVL calculations, NFT events, and governance data. EVM chains inherit 70+ protocol integrations automatically through the shared pipeline. Non-EVM chains get the same coverage through dedicated engineering. All data includes entity labels, USD pricing, and crosschain address mapping.
All curated data is delivered through Snowflake via data shares, marketplace listings, or S3 pipelines. It's also accessible through Flipspace for AI-powered analysis, reporting, and automated monitoring. Standard SQL access, no proprietary query language or SDK required.
Yes. Beyond full-chain curation, Flipside builds protocol-specific curated tables. For example, Marinade Finance has dedicated tables that make it easy to analyze their users, whale activity, transaction volumes, and staking dynamics — all queryable with standard SQL through Snowflake or Flipspace.
EVM chain infrastructure runs approximately $15K–$40K per year depending on throughput. Non-EVM chains are higher and vary significantly based on transaction volume, historical data depth, and complexity of the chain's data model. Contact us for a specific estimate based on your chain.
Tell us about your chain. We'll walk you through the curation process, timeline, and what your ecosystem's data will look like when it's live.