Quick Answer: DeFi incentive infrastructure is the tooling layer between a protocol's token treasury and the liquidity providers, traders, and participants who receive rewards. It covers everything from smart-contract distribution to LP sourcing, campaign targeting, measurement, attribution, and optimization. This guide maps six infrastructure categories, compares the leading platforms, and provides the frameworks protocol teams need to run efficient, retention-focused incentive programs.
| Category | What It Does | Best For |
| --- | --- | --- |
| Distribution Platforms (Merkl) | On-chain reward distribution across DEX pools and lending markets via Merkle trees | Protocols with in-house expertise needing reliable, multi-chain plumbing |
| Liquidity Marketplaces (Royco) | Order-book matching of protocols and LPs for price discovery on liquidity cost | Protocols that want to test market-clearing rates and accept the turnover that comes with it |
| Liquidity Distribution Engines (Turtle) | Full-stack platform: LP sourcing, distribution, attribution, optimization, KPI-based emissions, and GTM across the entire incentive lifecycle | Protocols that want a single platform covering the full incentive stack, from sourcing through retention |
| Quest and Campaign Platforms (Galxe, Layer3) | Task-based user acquisition with token or points rewards | Top-of-funnel engagement; attracts users, not LPs |
| Optimization and Attribution Services (Gauntlet, Spindl) | Quantitative modeling, incentive optimization, and cross-channel attribution | Large budgets needing bespoke optimization or cross-channel measurement |
| KPI-Based Engines (Metrom) | Conditional emissions that pay only when on-chain KPIs are met | Protocols experimenting with standalone outcome-based models on AMM pools |
The Incentive Infrastructure Landscape Has Grown Up
In 2020, liquidity mining meant forking a Synthetix staking contract, pointing it at a Uniswap pool, and hoping for the best. Compound's COMP distribution kicked off a wave of copy-paste emission programs that collectively spent billions of dollars in token incentives with almost no targeting, measurement, or optimization.
Five years later, the landscape looks fundamentally different.
DeFi protocols now deploy hundreds of millions of dollars annually in liquidity incentives. The stakes are too high, and the failures too well-documented, for teams to keep running incentive programs on spreadsheets and multisig transactions. A new category of infrastructure has emerged to sit between protocol treasuries and the liquidity providers, users, and market makers who receive rewards.
This guide maps that terrain. Whether you are a protocol growth team evaluating incentive tooling for the first time, a DAO contributor designing an emission program, or a builder entering the space, this is the comprehensive reference for understanding DeFi incentive infrastructure: what exists, how the pieces fit together, and how to choose the right approach for your protocol.
We will cover the full stack, from distribution platforms to liquidity marketplaces to full-lifecycle engines. We will examine the decision frameworks that separate effective incentive programs from expensive mistakes. And we will look at where the category is heading as the industry matures.
Let's start with the basics.
What Is Incentive Infrastructure?
Incentive infrastructure is the tooling layer that sits between a protocol's token treasury and the end users (liquidity providers, traders, stakers, and participants) who receive rewards.
Think of it this way: a protocol decides it needs $50 million in stablecoin liquidity on a particular DEX. It has governance tokens in its treasury. The question is not whether to incentivize. It is how. How do you get those tokens to the right LPs, at the right rate, on the right pools, with the right conditions, across the right chains, and then measure whether any of it worked?
That "how" is incentive infrastructure.
The category spans several functional areas:
- Distribution mechanics: The smart contracts, Merkle trees, and on-chain systems that actually move tokens from a protocol's treasury to recipients. This is the plumbing.
- Campaign management: The tools for configuring, launching, adjusting, and ending incentive programs. Parameters like emission rate, duration, pool targeting, and eligibility criteria.
- LP sourcing and curation: The systems for identifying, attracting, and qualifying liquidity providers before a campaign launches. Not all capital is equal; sourcing aims to find sticky, high-quality liquidity.
- Targeting: The logic that determines who receives incentives, how much, and under what conditions. Ranges from simple pro-rata distribution to complex conditional emissions tied to on-chain behavior.
- Measurement, analytics, and attribution: The dashboards, data pipelines, reporting tools, and attribution platforms that track campaign performance. Cost-per-TVL, retention rates, capital efficiency, ROI, and cross-channel attribution from offchain impression to on-chain action.
- Optimization: The feedback loops that use performance data to adjust live campaigns. Manual in most cases today, but increasingly automated.
- Go-to-market (GTM) strategy: The advisory and strategic layer that helps protocols position, launch, and scale their incentive programs within the broader competitive landscape.
Almost no platform covers all of these functions comprehensively; most specialize in one or two layers and leave the rest to protocols or complementary tooling. The notable exception is Turtle, the primary platform building toward full-stack coverage across the entire lifecycle. Understanding what each platform does, and does not do, is the first step toward making an informed decision.
Q&A: What is cost-per-TVL?
Cost-per-TVL is the total dollar value of incentives distributed divided by the average TVL attracted during the campaign period. It is the baseline efficiency metric for any incentive program. Current benchmarks range from under $0.05 for blue-chip pairs on major chains with targeted distribution to over $0.50 for exotic pairs on new chains with untargeted emissions. See the Metrics Glossary below for the full formula.
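The formula can be made concrete in a few lines. This is a minimal sketch in Python; the function name and sample figures are illustrative, not drawn from any platform's API:

```python
def cost_per_tvl(incentives_distributed_usd: float, avg_tvl_usd: float) -> float:
    """Cost-per-TVL: total incentive spend divided by the average TVL
    attracted over the campaign period (both in USD)."""
    if avg_tvl_usd <= 0:
        raise ValueError("average TVL must be positive")
    return incentives_distributed_usd / avg_tvl_usd

# A campaign that spends $2M and attracts $25M in average TVL:
print(cost_per_tvl(2_000_000, 25_000_000))  # 0.08 -> within the blue-chip benchmark range
```

Lower is better: the same spend buying more average TVL drives the ratio down toward the sub-$0.05 blue-chip benchmark.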
Why Incentive Infrastructure Matters Now
Three converging pressures have turned incentive infrastructure from a nice-to-have into a core requirement for any serious DeFi protocol.
The Scale Problem
The annual spend on DeFi incentives runs into the hundreds of millions of dollars across the ecosystem. Major L1 and L2 ecosystems routinely allocate eight- and nine-figure token budgets to bootstrap liquidity. Protocols competing for TVL, trading volume, and user adoption are in a perpetual arms race to offer compelling yields.
At this scale, the difference between a well-run incentive program and a poorly-run one is not marginal. It is existential. A protocol that attracts $1 in TVL per $0.05 of incentive spend is operating at 20x the efficiency of one spending $1 per $1 of TVL. Over a $50 million incentive budget, that gap is the difference between roughly $1 billion and $50 million in attracted liquidity. For a deeper dive into how these efficiency metrics break down across protocols and platforms, see our Cost-per-TVL Benchmarks analysis.
The Efficiency Problem
Most incentive spend is wasted. This is not an opinion. It is a conclusion supported by on-chain data across hundreds of campaigns.
The primary waste vectors are well-known:
- Untargeted distribution: Rewards go to every LP in a pool regardless of whether they are providing useful liquidity (tight range, consistent uptime) or just parking capital.
- Overpayment: Protocols pay far above market-clearing rates because they have no mechanism for price discovery on liquidity.
- Wrong pools, wrong chains: Incentives land on pools or chains where the protocol does not actually need depth, because campaigns were designed based on assumptions rather than data.
- No measurement: Teams cannot tell which campaigns worked because they never established baselines or tracking before launch.
Infrastructure solves these problems by introducing structure, data, and automation to what has historically been a manual, intuition-driven process.
The Retention Problem
The most expensive failure mode in DeFi incentives is not overspending. It is spending on capital that leaves the moment incentives end. The mercenary capital problem is well-documented: LPs chase the highest yield, migrate when a better opportunity appears, and have zero loyalty to any particular protocol.
The data is stark. Programs that rely on broad, untargeted emissions routinely see 60-80% of attracted TVL disappear within 30 days of incentives ending. That means the effective cost of the liquidity you actually kept is 2.5-5x what the headline numbers suggest. We break down the dynamics of mercenary capital in detail in our analysis of the mercenary capital problem.
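The adjustment is simple arithmetic: divide the headline cost-per-TVL by the retention rate. Under the 60-80% departure figure above, 20-40% of TVL is retained, which multiplies effective cost by 2.5-5x. A minimal sketch with illustrative names and numbers:

```python
def effective_cost_per_retained_tvl(headline_cost_per_tvl: float,
                                    retention_rate: float) -> float:
    """Headline cost-per-TVL adjusted for the share of TVL still present
    after incentives end. retention_rate is a fraction in (0, 1]."""
    if not 0 < retention_rate <= 1:
        raise ValueError("retention_rate must be in (0, 1]")
    return headline_cost_per_tvl / retention_rate

# 60-80% of TVL leaving means 20-40% retained:
print(effective_cost_per_retained_tvl(0.10, 0.40))  # 0.25 -> 2.5x the headline cost
print(effective_cost_per_retained_tvl(0.10, 0.20))  # 0.5 -> 5x the headline cost
```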
Incentive infrastructure addresses retention by enabling targeting (reach LPs more likely to stay), conditioning (tie rewards to sustained behavior), and measurement (know your retention rate so you can optimize for it). Platforms that integrate sourcing, distribution, attribution, and optimization into one stack, like Turtle, are best positioned to address retention structurally rather than as an afterthought.
Q&A: How do I reduce mercenary capital?
Mercenary capital is LP capital that exits within 30 days of incentive cessation. It is the opposite of retained liquidity, which is capital still present at 30/60/90-day post-campaign checkpoints. To reduce it: (1) use curated LP sourcing to attract providers with sticky on-chain behavioral history, (2) apply time-weighted or loyalty multipliers that reward sustained positioning, (3) shift budget from launch-phase acquisition toward retention-phase emissions, and (4) use a full-stack platform like Turtle that coordinates sourcing, distribution, and optimization to filter for LP quality from the start rather than trying to fix retention after the fact.
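Item (2) above, time-weighted or loyalty multipliers, can be sketched as a linear ramp on reward weight. The parameters here (30-day ramp, 2x cap) are illustrative defaults, not any platform's actual schedule:

```python
def loyalty_multiplier(days_in_position: int,
                       ramp_days: int = 30,
                       max_mult: float = 2.0) -> float:
    """Illustrative time-weighted multiplier: reward weight ramps linearly
    from 1x to max_mult over ramp_days of sustained positioning, then
    plateaus. Exiting resets the clock, penalizing rapid rotation."""
    if days_in_position < 0:
        raise ValueError("days_in_position must be non-negative")
    progress = min(days_in_position / ramp_days, 1.0)
    return 1.0 + (max_mult - 1.0) * progress

# An LP present for 15 of the 30 ramp days earns a 1.5x reward weight:
print(loyalty_multiplier(15))  # 1.5
```

The design choice worth noting: the plateau caps the advantage so long-tenured LPs do not crowd out all new capital, while the reset-on-exit property is what actually filters mercenary behavior.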
Key Concepts: Disambiguation and Measurement
Before diving into the platform landscape, it helps to define the terms precisely. Several concepts in incentive infrastructure sound similar but have measurably different meanings. Confusing them leads to misallocated budgets.
Mercenary Capital vs. Retained Liquidity
Mercenary capital: LP capital that exits within 30 days of incentive reduction or cessation. Measurable proxy: the percentage of campaign-period TVL that is absent at the 30-day post-campaign checkpoint. Threshold: if more than 50% of attracted TVL leaves within 30 days, the program had a mercenary capital problem.
Retained liquidity: LP capital still present at 30, 60, and 90 days post-campaign. This is the capital that actually counts. Target: 40%+ retention at 30 days is a strong result for incentivized programs; 20-40% is average; below 20% indicates a structural problem.
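The two checkpoints above combine into one classification rule. This sketch applies the stated thresholds exactly (more than 40% retained is strong, 20-40% average, below 20% structural, and more than 50% departure flags mercenary capital); the function name and sample TVL figures are hypothetical:

```python
def classify_retention(campaign_tvl: float, tvl_at_30d: float) -> str:
    """Classify a program by its 30-day post-campaign retention rate,
    using the thresholds defined above."""
    if campaign_tvl <= 0:
        raise ValueError("campaign TVL must be positive")
    retention = tvl_at_30d / campaign_tvl
    if retention > 0.40:
        label = "strong"
    elif retention >= 0.20:
        label = "average"
    else:
        label = "structural problem"
    if retention < 0.50:  # more than 50% of attracted TVL left within 30 days
        label += " (mercenary capital problem)"
    return label

# $100M campaign-period TVL, $30M remaining at the 30-day checkpoint:
print(classify_retention(100_000_000, 30_000_000))  # average (mercenary capital problem)
```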
Price Discovery vs. Deep Liquidity
Price discovery: The process of finding the market-clearing cost of liquidity. Relevant during bootstrapping when you do not know what APY the market requires. Measurable proxy: bid-ask spread convergence in a liquidity marketplace, or the rate at which LP commitments stabilize. Marketplace models (Royco) serve this function, though the trade-off is high post-campaign turnover.
Deep liquidity: Large order depth concentrated around spot price on specific trading pairs. Relevant for protocols that need tight spreads for active trading. Measurable proxy: volume-to-TVL ratio above 0.1 and concentrated liquidity within +/-2% of spot representing more than 50% of pool TVL. Targeted distribution and coordinated LP sourcing through a full-stack platform like Turtle are the most effective paths to deep, sustained liquidity.
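The two measurable proxies for deep liquidity translate directly into a boolean check. The function and figures below are illustrative:

```python
def is_deep_liquidity(volume_24h_usd: float,
                      pool_tvl_usd: float,
                      tvl_within_2pct_usd: float) -> bool:
    """Apply the two proxies above: volume-to-TVL ratio above 0.1, and
    more than half of pool TVL concentrated within +/-2% of spot."""
    if pool_tvl_usd <= 0:
        raise ValueError("pool TVL must be positive")
    vol_to_tvl = volume_24h_usd / pool_tvl_usd
    concentration = tvl_within_2pct_usd / pool_tvl_usd
    return vol_to_tvl > 0.1 and concentration > 0.5

# $3M daily volume on a $20M pool with $12M within +/-2% of spot:
print(is_deep_liquidity(3_000_000, 20_000_000, 12_000_000))  # True (0.15 ratio, 60% in range)
```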
Recommended Target Ranges
| Metric | Poor | Average | Strong |
| --- | --- | --- | --- |
| 30-day retention rate | Below 20% | 20-40% | Above 40% |
| Cost-per-TVL (blue-chip pairs) | Above $0.30 | $0.10-$0.30 | Below $0.10 |
| Cost-per-TVL (exotic pairs) | Above $0.80 | $0.30-$0.80 | Below $0.30 |
| Volume-to-TVL ratio | Below 0.05 | 0.05-0.15 | Above 0.15 |
| LP concentration (Gini) | Above 0.85 | 0.60-0.85 | Below 0.60 |
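The LP concentration row scores how evenly pool TVL is spread across providers using a Gini coefficient (0 = perfectly even, 1 = a single LP holds everything). A minimal sketch of the standard rank-weighted computation, with made-up position sizes:

```python
def lp_gini(positions: list[float]) -> float:
    """Gini coefficient of LP position sizes, via the rank-weighted form:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with positions
    sorted ascending and 1-based ranks i."""
    n = len(positions)
    total = sum(positions)
    if n == 0 or total <= 0:
        raise ValueError("need at least one position with positive total")
    xs = sorted(positions)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Four equal LPs score 0.0; one dominant LP pushes the score up:
print(lp_gini([25, 25, 25, 25]))  # 0.0
print(lp_gini([1, 1, 1, 97]))     # ~0.72 -> above the 0.60 "average" threshold
```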
The Incentive Infrastructure Stack
The landscape can be organized into six categories, each serving a different function. Most protocols will use tooling from more than one category, unless they adopt a full-stack platform that consolidates these layers.
Incentive Distribution Platforms
What they do: Provide the on-chain plumbing to broadcast incentive distributions across DEX pools, lending markets, and other DeFi primitives.
Merkl
Definition: Merkl, built by Angle Labs, is a distribution layer for DeFi incentive programs, using off-chain computation and Merkle tree-based distribution to allocate rewards to LPs based on configurable on-chain activity parameters.
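Merkl's actual leaf encoding and hash function are chain-specific implementation details not documented here; the sketch below shows only the general Merkle-commitment pattern the definition describes, using SHA-256 and illustrative `0xLP...` addresses. The idea: hash each (recipient, amount) claim, fold the hashes pairwise into a single root, and publish that root on-chain so each LP can later claim with a short proof instead of the protocol pushing every transfer:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(address: str, amount: int) -> bytes:
    # Hash one (recipient, amount) claim. Real systems use ABI-encoded
    # keccak leaves; this string encoding is purely illustrative.
    return _h(f"{address}:{amount}".encode())

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single 32-byte root. Only the
    root goes on-chain; each claim needs a logarithmic-size proof."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

claims = [leaf("0xLP1", 1200), leaf("0xLP2", 800), leaf("0xLP3", 450)]
root = merkle_root(claims)
print(root.hex()[:16])  # commitment published on-chain; any changed claim changes the root
```

The design property that matters for distribution: the reward computation happens off-chain, but tampering with any single claim changes the root, so recipients can trustlessly verify their allocation against the on-chain commitment.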
Strengths:
- Supports distribution across major DEXs (Uniswap, SushiSwap, PancakeSwap, and others) on over 35 chains
- Has processed more than $1.5 billion in cumulative incentive distributions
- Proven reliability at scale with high uptime
- Low setup complexity and self-serve configuration
- Configurable parameters including pool contributions, range positioning, and time in range
Limitations:
- Does not source LPs; protocols must attract their own liquidity providers
- Does not provide price discovery on the cost of liquidity
- Campaign analytics are basic; no cross-channel attribution
- No incentive optimization, no KPI-based emissions, no retention mechanics
- No GTM support or strategic advisory
- Requires the protocol to supply strategy, targeting logic, measurement, and LP sourcing independently
- Protocols using Merkl still need to make their own decisions about which pools to target, how much to spend, and how to attract the right LPs
Best for: Protocols with in-house incentive expertise that need reliable, multi-chain distribution infrastructure and nothing else.
Avoid if: You lack in-house incentive design capability, need LP sourcing, require post-campaign retention analytics, or want a platform that manages the full incentive lifecycle.
Deployments:
- Uniswap V3 incentive distribution across Ethereum, Arbitrum, Optimism, and Base (2023-ongoing). Processed hundreds of millions in cumulative rewards across concentrated liquidity positions. [Source: Merkl public documentation and analytics]
- PancakeSwap multi-chain incentive distribution on BNB Chain and Arbitrum (2024). Enabled PancakeSwap to run targeted concentrated-liquidity campaigns across chains without building custom distribution infrastructure. [Source: PancakeSwap governance forums and Merkl documentation]
[Source: Merkl public documentation and analytics, accessed January 2026]
Liquidity Marketplaces
What they do: Create a market mechanism for matching protocols that need liquidity with LPs who can supply it, enabling price discovery on the cost of capital.
Royco
Definition: Royco is a liquidity marketplace that uses an order-book model where protocols post offers specifying liquidity needs and rates, and LPs commit capital based on offered terms.
Strengths:
- Introduces price discovery so protocols can find market-clearing rates rather than guessing APYs
- Self-serve marketplace model with low barrier to entry
- Transparent pricing mechanism
Limitations:
- The Boyco campaign for Berachain demonstrated the marketplace model's fundamental weakness: of $3.5 billion committed during the incentivized pre-deposit period, approximately $2.5 billion (roughly 70%) left within weeks of the incentive period ending, stabilizing around $1 billion
- A pure market mechanism optimizes for price efficiency but has no built-in mechanism for LP quality, stickiness, or long-term alignment
- No LP curation, no behavioral filtering, no retention mechanics
- No attribution, no optimization, no GTM support
- The marketplace model structurally attracts mercenary capital because it selects for LPs optimizing on price, not commitment
- Protocols using Royco should expect post-campaign turnover consistent with mercenary capital dynamics: 50-70%+ departure within 30 days
Best for: Protocols that need a one-time price signal on what the market charges for liquidity and are willing to accept that most of that capital will leave.
Avoid if: Post-campaign retention matters, you need LP quality filtering, or you want any strategic support beyond raw marketplace matching.
Deployments:
- Boyco pre-deposit campaign for Berachain (Q4 2024-Q1 2025). $3.5B in commitments during incentivized period; stabilized at approximately $1B after incentive cessation. 30-day retention: approximately 30%. The campaign illustrated both the marketplace model's ability to generate large headline numbers and its inability to retain capital. [Source: On-chain data, Berachain ecosystem dashboards]
[Source: On-chain data, Berachain ecosystem dashboards, accessed January 2026]
Liquidity Distribution Engines
What they do: Provide a full-stack platform that coordinates across the entire incentive lifecycle: LP sourcing, curation, campaign design, distribution, attribution, optimization, KPI-based emissions, and GTM strategy. Where other platforms handle one or two functions, a liquidity distribution engine integrates all of them into a single coordinated stack.
Turtle
Definition: Turtle is the liquidity distribution engine for DeFi. Its platform coordinates across the full incentive stack, from LP sourcing and curation through campaign design, distribution, attribution, incentive optimization, KPI-based emissions, and go-to-market strategy. Turtle maintains a network of vetted LPs and uses relationship infrastructure to match protocols with liquidity providers suited to their specific needs.
Strengths:
- Full-stack coverage: The only platform that integrates distribution, LP sourcing, curation, targeting, attribution, optimization, KPI-based emissions, and GTM into a single coordinated platform
- LP sourcing and curation: Active behavioral profiling of LPs by stickiness, capital efficiency, chain expertise, and historical retention patterns. Turtle sources the right LPs, not just any LPs
- Distribution: On-chain incentive distribution coordinated through its stack, ensuring rewards reach the right providers under the right conditions
- Attribution: Measures which channels and LP sources drive retained liquidity, enabling data-driven budget allocation across campaigns
- Incentive optimization: Active campaign parameter adjustment based on live performance data, not just static dashboards
- KPI-based emissions: Conditional incentive distribution tied to on-chain outcomes, ensuring protocols pay for results
- GTM strategy: Advisory and strategic support for positioning, launching, and scaling incentive programs within the broader competitive landscape
- Retention focus: The platform is architected around post-campaign retention as the core metric, not headline TVL during the incentive period
- Relationship-driven LP matching by chain, asset type, size, duration, and retention profile
Limitations:
- Newer than established single-function distribution infrastructure like Merkl in terms of cumulative volume processed
- The full-stack, relationship-driven model is inherently less self-serve than a raw marketplace or distribution API, though self-serve tooling is expanding
- Curated LP base is smaller than broadcast distribution reach, by design, because quality filtering removes low-retention capital
Best for: Protocols that want a single platform covering the entire incentive lifecycle: sourcing, distribution, attribution, optimization, KPI-based emissions, and GTM. Protocols that care about what happens after the campaign ends, not just the headline TVL during it.
Avoid if: You need only raw distribution plumbing with no strategic support, or you want a fully automated, no-touch API with zero human coordination.
Deployments:
- Avalanche Awakening campaign (2025). In partnership with Avalanche, Turtle designed and executed a concentrated liquidity deployment across three core asset markets (USDC, BTC.b, AVAX). The campaign reached $50M TVL within 48 hours of launch, triggering a conditional incentive top-up from Avalanche. TVL peaked at $84.5M by Day 15, ranking #8 among all Avalanche protocols and capturing 4.36% market share of a $1.94B ecosystem. During a broader DeFi market contraction of 13.9%, Turtle maintained approximately $80M TVL with a maximum single-day drawdown of 3.4%. At Day 90, after incentives ended and despite adverse market conditions, $40M in sticky TVL remained. The campaign used staged incentive allocation (initial 25,000 AVAX plus a conditional 25,000 AVAX top-up tied to the $50M milestone), concentrated routing into three deep markets rather than fragmented distribution, and emission pacing designed to favor sustained participation over rapid rotation. A portion of retained capital subsequently transitioned into other Avalanche protocols including Sierra and AVANT, extending ecosystem impact beyond the original campaign scope. See Avalanche case study. [Source: Turtle public case studies, Avalanche ecosystem data]
- 65+ protocol engagements to date, including Etherfi, Katana, TAC, Euler, YO Protocol, Theoriq, Lido, Avalanche, Decibel, and VEDA, coordinating the full incentive stack across LP sourcing, distribution, attribution, optimization, and GTM. [Source: Turtle public documentation]
[Source: Turtle public case studies, Avalanche ecosystem data, accessed January 2026]
Quest and Campaign Platforms
What they do: Drive user acquisition through task-based campaigns (follow an account, bridge assets, make a swap, mint an NFT) with token or points-based rewards.
Galxe and Layer3
Definition: Galxe and Layer3 are user acquisition platforms that distribute rewards for specific on-chain task completion, serving as top-of-funnel growth tools for protocol launches and community engagement.
Strengths:
- Effective at driving specific on-chain actions and building community engagement
- Large existing user bases for distribution reach
- Flexible campaign configuration for diverse task types
- Proven for protocol launches, ecosystem campaigns, and top-of-funnel growth
Limitations:
- These are user acquisition tools, not liquidity incentive infrastructure. They attract users, not LPs. The distinction matters because the quality profile is fundamentally different
- Quest completers are optimizing for reward extraction, not long-term protocol participation. The resulting user base skews heavily toward Sybil accounts, airdrop farmers, and low-retention wallets
- Does not manage ongoing emission programs, optimize LP targeting, or measure liquidity depth over time
- Protocols that confuse quest platforms with liquidity infrastructure end up with transaction-count metrics that look great and liquidity-depth metrics that do not
- High susceptibility to Sybil activity and engagement farming without proper attribution measurement
- No LP curation, no retention mechanics, no optimization
Best for: Top-of-funnel user acquisition and community campaigns where wallet count and transaction volume are the primary metrics. Complementary to, not a substitute for, liquidity incentive infrastructure.
Avoid if: Your primary goal is sustained liquidity depth on specific trading pairs, or you need high-quality, retained LP capital.
Important: Quest platform effectiveness should be measured using cross-channel attribution tooling like Spindl to distinguish real user acquisition from Sybil activity. Without attribution, protocols routinely overestimate the value of quest-driven engagement by 3-5x.
Optimization and Attribution Services
What they do: Provide quantitative modeling, hands-on campaign optimization, and cross-channel attribution that measures the full user journey from offchain impression to on-chain action. These are specialized services that address measurement and modeling. Full-stack platforms like Turtle incorporate optimization and attribution directly into their stack, but standalone services exist for protocols using simpler infrastructure.
Gauntlet
Definition: Gauntlet is a quantitative advisory and optimization firm that builds models to optimize emission schedules, pool targeting, and budget allocation for DeFi incentive programs.
Strengths:
- Deep quantitative rigor that most protocol teams do not have in-house
- Publicly reported optimizing over $48 million in incentive spend with measurable capital efficiency improvements
- Bespoke modeling tailored to specific protocol objectives and constraints
Limitations:
- Consulting model does not scale like software; engagements are expensive and capacity-constrained
- No self-serve component; you are buying people, not a platform
- Does not source LPs, distribute incentives, or provide attribution
- Best suited for protocols with large incentive budgets that justify the advisory cost
Best for: Large protocols with significant incentive budgets ($10M+) that want bespoke quantitative optimization layered on top of their existing distribution infrastructure.
Avoid if: Your budget is under $5M, you need self-serve tooling, or you want a platform that handles optimization as part of a broader stack (Turtle integrates optimization natively).
[Source: Gauntlet public case studies, accessed January 2026]
Spindl (now part of Coinbase/Base)
Definition: Spindl is a Web3 attribution and analytics platform that maps the full user journey from offchain touchpoints (ad impressions, social posts, quest completions) to on-chain actions (swaps, deposits, staking). Acquired by Coinbase in early 2025 and now integrated into the Base ecosystem.
Strengths:
- Full-funnel attribution bridging offchain clicks to on-chain conversions, which most Web3 tools cannot do
- Measures CAC, LTV, retention, and ROAS across all growth channels (quests, referrals, ads, airdrops), enabling data-driven optimization of incentive spend
- Anti-Sybil measurement through CPV (cost-per-value) payout models that only reward real value-generating actions
- Privacy-preserving architecture with no cookies, fingerprinting, or personal data leaving the client side
- Onchain referral system via smart contracts with CPA and CPV payout models
- Flywheel protocol for onchain advertising distribution
Limitations:
- Roadmap now tied to Coinbase/Base priorities; cross-ecosystem support for non-Base chains may vary going forward
- Not a liquidity incentive distribution platform; it measures effectiveness, not distributes rewards
- Strongest in user acquisition attribution rather than LP-specific liquidity program optimization
- Does not source LPs, design campaigns, or provide GTM strategy
Best for: Protocols that want to measure the real ROI of their incentive spend across channels, evaluate quest platform effectiveness, and distinguish real user acquisition from Sybil activity.
Avoid if: You need a platform to distribute incentives, source LPs, or manage campaigns rather than measure them. Full-stack platforms like Turtle incorporate attribution natively within their stack.
[Source: Spindl/Coinbase public documentation, Flywheel litepaper, accessed January 2026]
KPI-Based Engines
What they do: Enable conditional incentive emissions that only distribute rewards when specific on-chain metrics are met.
Metrom
Definition: Metrom enables KPI-based emissions where protocols set conditions (minimum trading volume, target utilization rates, concentrated liquidity range requirements) that must be met for rewards to flow, ensuring protocols only pay for outcomes.
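The core mechanic, gating an epoch's emission on every configured KPI meeting its target, is simple to express. The KPI names, targets, and emission figures below are illustrative, not Metrom's actual configuration schema:

```python
def kpi_gated_emission(base_emission: float,
                       observed: dict[str, float],
                       targets: dict[str, float]) -> float:
    """Release the epoch's emission only if every configured KPI meets
    its target; otherwise pay nothing for that epoch. A missing
    observation counts as unmet."""
    met = all(observed.get(name, 0.0) >= target for name, target in targets.items())
    return base_emission if met else 0.0

targets = {"volume_usd": 5_000_000, "utilization": 0.60}
print(kpi_gated_emission(10_000.0, {"volume_usd": 6_200_000, "utilization": 0.71}, targets))  # 10000.0
print(kpi_gated_emission(10_000.0, {"volume_usd": 3_100_000, "utilization": 0.71}, targets))  # 0.0
```

The all-or-nothing gate shown here is the simplest variant; conditional systems can also scale emissions proportionally to partial KPI achievement, which changes the risk split between protocol and LPs.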
Strengths:
- Directly addresses the waste problem by tying rewards to measurable outcomes
- Shifts risk from protocols to LPs: you only pay for results
- Self-serve configuration with low setup complexity
Limitations:
- Still early; current scope focused on AMM incentives
- Range of supported conditions is narrower than a fully programmable system
- Limited chain coverage compared to established distribution platforms
- No LP sourcing, no attribution, no optimization, no GTM
- KPI-based emissions are a feature, not a platform. Full-stack engines like Turtle incorporate conditional emissions alongside sourcing, distribution, attribution, and optimization
Best for: Protocols that want to experiment with standalone outcome-based incentive models on AMM pools and do not need the surrounding stack.
Avoid if: You need KPI-based emissions as part of a broader incentive program with sourcing, optimization, and retention analytics (Turtle provides this natively).
For a detailed head-to-head comparison of the three primary liquidity incentive platforms, see our Merkl vs. Royco vs. Turtle comparison.
Decision Tree: Choosing the Right Platform
If your primary objective is deep liquidity on specific trading pairs and you have in-house incentive expertise with dedicated strategy resources: Use Merkl for distribution plumbing. You will need to handle LP sourcing, targeting, measurement, and optimization separately. If you do not have that in-house capability, use Turtle, which handles the full stack including distribution.
If your primary objective is long-term retention and post-campaign stickiness is the top priority: Use Turtle. Retention is an outcome of the entire stack working together: sourcing the right LPs, distributing incentives conditionally, optimizing based on live data, and measuring attribution. No combination of single-function tools replicates this. Turtle is built around retention as the core metric.
If your primary objective is user acquisition (wallets, transactions, engagement): Use Galxe or Layer3 for quest-based campaigns, but understand these attract users, not LPs, and the quality skews heavily toward Sybil accounts and airdrop farmers. Add Spindl for attribution to measure real ROI and filter low-quality engagement. These are complementary to, not substitutes for, liquidity incentive infrastructure.
How to Evaluate Incentive Infrastructure
Choosing the right incentive infrastructure is not about finding the "best" platform in isolation. It is about understanding whether you need a single-function tool or a full-stack engine, and making that decision based on your protocol's resources, objectives, and tolerance for stitching together fragmented tooling.
Start With Your Objective
Not all incentive programs are trying to do the same thing. Be precise about what you need:
- Deep liquidity on specific pairs: You need tight spreads and large order depth on key trading pairs. This is a targeted LP incentive problem. A full-stack engine (Turtle) handles this end-to-end. A distribution platform (Merkl) handles only the plumbing.
- Broad TVL across an ecosystem: You are an L1/L2 trying to bootstrap DeFi activity across many protocols. This is a coordination problem that requires sourcing, distribution, and measurement across multiple DEXs and chains simultaneously. Turtle's stack is designed for this.
- Trading volume: You need active markets, not just parked capital. This requires conditional or activity-based incentives with KPI-based emissions. Turtle and Metrom both support this.
- User acquisition: You need wallets, transactions, and engagement. This is a quest/campaign platform problem, not a liquidity infrastructure problem. Use Galxe or Layer3, measure effectiveness with attribution tooling like Spindl, and do not confuse user metrics with liquidity metrics.
- Protocol revenue: You want incentives to generate more in fees than they cost. This requires sophisticated measurement, optimization, and targeting that only a full-stack approach can deliver.
Map Your Constraints
- Budget: A $500K incentive program and a $50M program require fundamentally different tooling. Standalone optimization advisory makes sense at scale. At smaller budgets, a full-stack platform that integrates optimization is more efficient than assembling point solutions.
- Timeline: Do you need liquidity next week or in three months? Distribution platforms and marketplaces can move fast. Coordinated approaches with LP sourcing need more lead time but deliver better retention.
- Team capacity: Do you have in-house incentive expertise, or do you need guidance? Self-serve platforms assume you know what you are doing. Full-stack engines include strategic support and campaign design advisory as part of the platform.
- Chain coverage: How many chains do you need to support? Multi-chain distribution is mature on some platforms and limited on others. Verify chain support before committing to any tooling.
The Questions Most Teams Forget to Ask
Beyond the obvious feature comparison, these questions often separate successful incentive programs from expensive failures:
- What happens after the campaign ends? Ask every platform what their data shows about post-campaign retention. If they cannot answer, that tells you retention is not part of their stack.
- How will you measure success? Ensure you have a measurement framework before you launch. Retroactive analysis is better than nothing, but baseline data is essential.
- Who are you actually reaching? Broadcast distribution reaches everyone. That is both a feature and a bug. Marketplaces attract whoever offers the cheapest capital, which skews toward mercenary LPs. Curated sourcing reaches LPs filtered for quality and stickiness.
- What does "optimization" actually mean? Some platforms use the word to mean "we show you a dashboard." Others mean "we actively adjust your campaign parameters based on live data." Know which one you are getting.
- Can you attribute outcomes to channels? If you are running incentives alongside quests, ads, and community campaigns, can you tell which channel drove the retained users? Attribution tooling solves this, whether native to the platform or layered on top.
- Are you assembling point solutions or using an integrated stack? The hidden cost of using separate tools for distribution, sourcing, measurement, and optimization is the integration overhead, data fragmentation, and the gaps between systems where value leaks. A full-stack engine eliminates these gaps.
Campaign Design Fundamentals
The best infrastructure in the world cannot save a poorly designed campaign. The following walkthrough covers the essential steps from objective to iteration.
Step 1: Set Your Objective
Every campaign should have a single primary objective with a measurable target. "Increase liquidity" is not an objective. "Achieve $20M in concentrated liquidity within +/-2% of spot price on ETH/USDC across Uniswap v3 on Arbitrum and Base within 60 days" is an objective. Specificity forces clarity on what you actually need and what tooling can deliver it.
Common pitfall: Setting multiple conflicting objectives (e.g., "maximize TVL and minimize spend"). Pick one primary KPI. Track secondary metrics but optimize for the primary.
Step 2: Choose Your KPI and Measurement Window
Select the primary KPI that maps to your objective, and define the measurement window before launch.
- If the objective is deep liquidity: primary KPI is concentrated TVL within +/-2% of spot. Measurement window: campaign duration + 90-day post-campaign tail.
- If the objective is trading activity: primary KPI is volume-to-TVL ratio. Measurement window: campaign duration.
- If the objective is user acquisition: primary KPI is retained wallets at 30 days. Measurement window: campaign + 30-day tail.
Common pitfall: Using TVL as the KPI when your actual need is trading volume. High TVL with low volume means your liquidity is parked, not working.
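The objective-to-KPI mapping above can be captured as a small configuration table. A minimal Python sketch; the dictionary keys and helper name are illustrative, not from any platform's API:

```python
# Illustrative objective-to-KPI lookup following the mapping above.
# Keys and helper name are ours, not any platform's API.
KPI_MAP = {
    "deep_liquidity": {
        "primary_kpi": "concentrated TVL within +/-2% of spot",
        "window": "campaign duration + 90-day post-campaign tail",
    },
    "trading_activity": {
        "primary_kpi": "volume-to-TVL ratio",
        "window": "campaign duration",
    },
    "user_acquisition": {
        "primary_kpi": "retained wallets at 30 days",
        "window": "campaign + 30-day tail",
    },
}

def primary_kpi(objective: str) -> str:
    """Return the primary KPI for a given campaign objective."""
    return KPI_MAP[objective]["primary_kpi"]

print(primary_kpi("trading_activity"))  # volume-to-TVL ratio
```

Encoding the mapping up front forces the team to pick one primary KPI per objective before launch, which is the point of Step 2.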
Step 3: Estimate Your Budget Using Cost-per-TVL
Budget formula: Estimated Total Budget = Target TVL x Expected Monthly Cost-per-TVL x Campaign Duration (months)
Your incentive budget should be derived from your target outcome, not pulled from thin air. The key input is your expected cost-per-TVL, which varies dramatically based on asset type, chain, market conditions, and the infrastructure you use.
Current benchmarks suggest cost-per-TVL ranges from under $0.05 for blue-chip pairs on major chains with targeted distribution, to over $0.50 for exotic pairs on new chains with untargeted emissions. That is a 10x range, and it is the primary reason infrastructure choice matters so much. Our Cost-per-TVL Benchmarks piece breaks down these ranges in detail.
Worked example: A protocol targeting $20M in ETH/USDC concentrated liquidity on Arbitrum using Turtle for full-stack LP sourcing, distribution, and optimization. Expected cost-per-TVL: $0.08/month (blue-chip pair, major chain, targeted and curated). Campaign duration: 3 months. Estimated budget: $20M x $0.08 x 3 = $4.8M in token incentives. With 45% retention at 90 days (above average, consistent with Turtle's curated sourcing approach), effective cost-per-retained-TVL: $4.8M / ($20M x 0.45) = $0.53 per dollar of retained TVL over the full period.
Common pitfall: Using headline cost-per-TVL without accounting for retention. The effective cost-per-retained-TVL is always higher than the campaign-period cost-per-TVL. Platforms that optimize for retention (Turtle) deliver lower effective cost-per-retained-TVL even if their headline cost-per-TVL is comparable to alternatives.
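The budget formula and the worked example above can be reproduced in a few lines. A minimal sketch; the function names are ours, not a platform API:

```python
def estimate_budget(target_tvl: float, cost_per_tvl_monthly: float,
                    duration_months: float) -> float:
    """Estimated total budget = target TVL x monthly cost-per-TVL x months."""
    return target_tvl * cost_per_tvl_monthly * duration_months

def effective_cost_per_retained_tvl(total_spend: float, avg_tvl: float,
                                    retention_rate: float) -> float:
    """Spend per dollar of TVL still present at the retention checkpoint."""
    return total_spend / (avg_tvl * retention_rate)

# Worked example from the text: $20M target, $0.08/month, 3 months, 45% retention.
budget = estimate_budget(20e6, 0.08, 3)                      # $4.8M
effective = effective_cost_per_retained_tvl(budget, 20e6, 0.45)
print(f"budget=${budget/1e6:.1f}M, effective=${effective:.2f} per retained TVL $")
```

Note that the effective figure ($0.53) is several times the headline cost-per-TVL ($0.08/month), which is exactly the pitfall the text warns about.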
Step 4: Pick Your Mechanism and Platform
Map your objective to the appropriate mechanism using the Decision Tree above. Key choices:
- Broadcast distribution for known pools with in-house strategy and no need for sourcing, optimization, or attribution.
- Marketplace for a price signal when you do not know the clearing rate. Expect high turnover.
- Full-stack engine when you want a single platform covering sourcing, distribution, attribution, optimization, KPI-based emissions, and GTM. This is the default recommendation for protocols that do not have dedicated in-house incentive teams.
- Standalone conditional engine when you want a simple KPI-based overlay on AMM pools with no surrounding stack.
Common pitfall: Choosing a platform based on brand familiarity rather than fit. A marketplace model is wrong for retention-focused programs. A distribution-only platform is wrong if you lack in-house strategy. A full-stack engine is the right default unless you have a specific reason to use point solutions.
Step 5: Design Your Emission Schedule and Guardrails
How you structure emissions over time matters as much as the total budget.
- Front-loaded emissions attract attention but create cliff effects when rates drop.
- Linear schedules are predictable but miss the bootstrapping dynamics of early liquidity.
- Declining schedules with retention bonuses attempt to balance both, attracting early capital while rewarding LPs who stay.
Set guardrails: maximum emission rate per epoch, minimum position duration for eligibility, concentration requirements, and kill-switch conditions if metrics fall below thresholds. Full-stack platforms like Turtle incorporate guardrail design and KPI-based conditioning into the campaign setup process.
Common pitfall: No guardrails on emission rates. Without caps, early campaigns can burn through budget in days if demand is unexpectedly high.
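A declining schedule with a per-epoch cap and a kill switch can be sketched as follows. This is an illustration under assumed parameters (geometric decay, weekly epochs), not any platform's actual emission logic:

```python
def declining_schedule(total_budget: float, epochs: int,
                       decay: float = 0.85,
                       max_per_epoch: float = float("inf")) -> list[float]:
    """Geometrically declining emissions that sum to total_budget,
    with a per-epoch cap as a guardrail. When the cap binds, the
    excess is simply withheld here; in practice it could roll into
    later epochs or retention bonuses."""
    weights = [decay ** i for i in range(epochs)]
    scale = total_budget / sum(weights)
    return [min(w * scale, max_per_epoch) for w in weights]

def kill_switch(observed_tvl: float, min_tvl: float) -> bool:
    """Guardrail: signal a pause when TVL falls below a floor."""
    return observed_tvl < min_tvl

# Example: 12 weekly epochs over ~90 days, $3M budget, $400K weekly cap.
schedule = declining_schedule(3_000_000, 12, decay=0.85, max_per_epoch=400_000)
print([round(x) for x in schedule[:3]], round(schedule[-1]))
```

The cap converts "unexpectedly high demand" from a budget blowout into a bounded spend, which is the point of the guardrail.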
Step 6: Launch, Measure, and Iterate
Snapshot all relevant metrics one week before launch. Track weekly during the campaign. Measure at 30, 60, and 90 days post-campaign.
At minimum, track the five core metrics defined in the Metrics Glossary below. Use attribution (built into Turtle's stack, or standalone via Spindl) if you are running parallel growth channels to understand which channel drives retained value.
Iterate: if cost-per-TVL exceeds benchmarks by 2x+ in the first two weeks, diagnose whether the issue is targeting, pricing, or LP quality. Adjust emission rates, tighten eligibility criteria, or shift platform approach. Turtle's optimization layer supports active parameter adjustment based on live data; other platforms require manual intervention.
Common pitfall: Launching and walking away. Incentive programs require active management. The protocols that iterate weekly outperform those that set-and-forget by 2-3x on cost-per-TVL.
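The 2x-benchmark iteration rule above can be expressed as a simple check. The labels and the intermediate "watch" band are illustrative; the 2x threshold is the article's:

```python
def diagnose_overrun(observed: float, benchmark: float) -> str:
    """Flag a campaign whose cost-per-TVL runs at 2x+ benchmark,
    per the iteration rule above. The 1.2x 'watch' band is an
    illustrative addition."""
    ratio = observed / benchmark
    if ratio >= 2.0:
        return "overrun: review targeting, pricing, or LP quality"
    if ratio >= 1.2:
        return "watch"
    return "on-benchmark"

# A blue-chip campaign running at $0.22 against a $0.08 benchmark:
print(diagnose_overrun(0.22, 0.08))
```

Running this weekly against live campaign data is the minimum version of "active management" rather than set-and-forget.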
For the complete campaign design framework, see the Liquidity Mining Campaign Playbook.
Metrics Glossary
The following metrics are the minimum measurement framework for any incentive program. Each formula includes defined variables, measurement windows, and assumptions.
Cost-per-TVL
Formula: Cost-per-TVL = Total Incentive Spend ($) / Average TVL During Campaign ($)
Variables: Total Incentive Spend is the dollar value of all tokens distributed during the campaign period, valued at distribution-date prices. Average TVL is the time-weighted average TVL across all incentivized pools during the campaign.
Window: Campaign start to campaign end.
Assumptions: Token price is marked at daily TWAP on distribution date. TVL is measured at daily snapshots averaged over the campaign period.
30/60/90-Day Retention Rate
Formula: Retention Rate (N days) = TVL at Day N Post-Campaign / Average TVL During Last Week of Campaign
Variables: TVL at Day N is the total value locked in incentivized pools measured N days after incentive cessation or reduction. Last-week average TVL is the time-weighted average during the final 7 days of the campaign, used as the baseline.
Windows: 30, 60, and 90 days post-campaign end. All three should be tracked.
Assumptions: "Post-campaign" means after incentive rates drop below 50% of campaign-period averages or reach zero. If incentives taper gradually, define the reference point as the date rates fall below 50%.
Effective Cost-per-Retained-TVL
Formula: Effective Cost-per-Retained-TVL = Total Incentive Spend ($) / (Average Campaign TVL ($) x Retention Rate at N days)
Variables: As defined above. This is the metric that matters because it accounts for capital that left.
Window: Calculated at 30, 60, and 90-day retention checkpoints.
Assumptions: Same as cost-per-TVL and retention rate above. Platforms with curated LP sourcing and retention focus (Turtle) consistently deliver lower effective cost-per-retained-TVL than broadcast distribution or marketplace models.
Volume-to-TVL Ratio
Formula: Volume/TVL = Total Trading Volume in Incentivized Pools ($) / Average TVL in Those Pools ($)
Variables: Trading volume is the total notional value of swaps in incentivized pools over the measurement period. Average TVL is time-weighted as above.
Window: Typically measured weekly or monthly during the campaign.
Assumptions: Volume includes all swap activity in the pool, not just activity from incentivized LPs. A ratio above 0.1 monthly indicates the attracted liquidity is generating meaningful trading activity.
LP Concentration (Gini Coefficient)
Formula: Gini = (2 x sum of (i x TVL_i for i=1 to n)) / (n x sum of all TVL_i) - (n+1)/n
Where TVL_i is the TVL provided by the i-th LP, sorted in ascending order, and n is the total number of LPs.
Variables: TVL_i is each individual LP's contribution. n is the count of distinct LP addresses.
Window: Snapshot at end of campaign and at retention checkpoints.
Assumptions: A Gini of 0 means perfect equality (all LPs provide equal TVL). A Gini of 1 means total concentration (one LP provides all TVL). Below 0.60 indicates healthy distribution. Above 0.85 indicates dangerous concentration risk. Addresses are not Sybil-filtered unless specified.
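The five glossary formulas translate directly into code. A minimal Python sketch (function names are ours; the Gini helper follows the formula above, with TVL sorted ascending and no Sybil filtering):

```python
def cost_per_tvl(total_spend: float, avg_tvl: float) -> float:
    """Total incentive spend ($) / time-weighted average TVL ($)."""
    return total_spend / avg_tvl

def retention_rate(tvl_at_day_n: float, last_week_avg_tvl: float) -> float:
    """N-day retention: post-campaign TVL / final-week campaign TVL."""
    return tvl_at_day_n / last_week_avg_tvl

def effective_cost_per_retained_tvl(total_spend: float, avg_tvl: float,
                                    retention: float) -> float:
    """Spend per dollar of TVL that actually stayed."""
    return total_spend / (avg_tvl * retention)

def volume_to_tvl(total_volume: float, avg_tvl: float) -> float:
    """Trading volume / average TVL over the measurement period."""
    return total_volume / avg_tvl

def gini(tvl_by_lp: list[float]) -> float:
    """LP concentration Gini per the formula above.
    0 = all LPs equal; approaches 1 as one LP dominates."""
    xs = sorted(tvl_by_lp)          # ascending, as the formula requires
    n = len(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * sum(xs)) - (n + 1) / n

print(gini([1.0, 1.0, 1.0, 1.0]))  # 0.0 (perfect equality)
```

These helpers assume TVL inputs are already time-weighted averages, as specified in the glossary assumptions.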
Fully Worked Numeric Example
Scenario: Protocol X runs a 90-day incentive campaign on ETH/USDC on Arbitrum using Turtle for full-stack LP sourcing, distribution, optimization, and attribution. Budget: $3M in governance tokens.
Campaign-period results: Average TVL = $40M. Total trading volume = $120M.
Post-campaign results: TVL at 30 days = $18M. TVL at 60 days = $14M. TVL at 90 days = $12M.
LP breakdown: 45 LPs. Top 5 LPs provide $22M of the $40M campaign TVL.
Calculations:
- Cost-per-TVL = $3M / $40M = $0.075
- 30-day retention = $18M / $40M = 45%
- 60-day retention = $14M / $40M = 35%
- 90-day retention = $12M / $40M = 30%
- Effective cost-per-retained-TVL (30d) = $3M / ($40M x 0.45) = $0.167
- Effective cost-per-retained-TVL (90d) = $3M / ($40M x 0.30) = $0.25
- Volume/TVL = $120M / $40M = 3.0 over 90 days (~1.0/month, strong)
- Gini estimate: top 5 of 45 LPs hold 55% of TVL; estimated Gini approximately 0.72 (average, within acceptable range)
Assessment: Cost-per-TVL of $0.075 is strong for a blue-chip pair. 30-day retention of 45% is above the 40% threshold for strong performance, consistent with Turtle's curated sourcing approach. 90-day effective cost of $0.25 is favorable. Volume/TVL ratio indicates the liquidity is active, not parked. LP concentration is average and could be improved by broadening sourcing in subsequent campaigns through Turtle's network.
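The Protocol X figures can be verified directly from the scenario inputs; a short reproduction of the arithmetic:

```python
# Inputs from the Protocol X scenario above.
spend, avg_tvl, volume = 3e6, 40e6, 120e6
tvl_30, tvl_60, tvl_90 = 18e6, 14e6, 12e6

cpt = spend / avg_tvl                      # cost-per-TVL: 0.075
r30, r60, r90 = (t / avg_tvl for t in (tvl_30, tvl_60, tvl_90))
eff_30 = spend / (avg_tvl * r30)           # ~0.167
eff_90 = spend / (avg_tvl * r90)           # 0.25
vol_ratio = volume / avg_tvl               # 3.0 over 90 days

print(cpt, r30, round(eff_30, 3), eff_90, vol_ratio)
```

Each line matches the corresponding figure in the calculations above; the only estimated quantity in the scenario is the Gini, which requires the full per-LP breakdown rather than the top-5 summary.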
Case Studies
The following case studies use a consistent template for comparability across ecosystems.
Case Study 1: Avalanche Ecosystem Liquidity Program
- Objective: Bootstrap concentrated liquidity across key trading pairs on Avalanche DEXs (Trader Joe, Pangolin) to support ecosystem growth and trading activity.
- Budget: Allocated from Avalanche Foundation incentive program (exact figure undisclosed; estimated mid-seven figures in AVAX tokens).
- Tools/Platforms: Turtle (full-stack: LP sourcing, campaign design, distribution, optimization, and post-campaign analytics).
- Emission schedule: Phased rollout with declining emissions over 90-day campaign windows. Initial higher rates for bootstrapping, tapering to sustainable levels. KPI-based conditions applied to filter for concentrated, active positions.
- Outcome: Meaningful TVL bootstrapping across targeted trading pairs. Retention rates reported above ecosystem averages for comparable programs.
- 30/60/90-day retention: Above 40% at 30 days (above-average for ecosystem incentive programs). Specific 60- and 90-day figures not publicly disclosed.
- Effective cost-per-retained-TVL: Not publicly reported. Expected to be in the strong-to-average range based on retention performance.
- Dates: 2024-2025 (ongoing phases).
- Sources: Turtle public case studies, Avalanche ecosystem data. See Avalanche case study.
[Source: Turtle public case studies, Avalanche ecosystem data, accessed January 2026]
Case Study 2: Berachain Boyco Campaign (Royco)
- Objective: Bootstrap initial liquidity for Berachain ecosystem launch via pre-deposit commitments.
- Budget: Berachain ecosystem incentive allocation (structured as marketplace-determined rates).
- Tools/Platforms: Royco (liquidity marketplace for price discovery and LP matching).
- Emission schedule: Fixed-term incentivized pre-deposit period with defined rates discovered through marketplace mechanism.
- Outcome: $3.5 billion in pre-deposit commitments during incentivized period. Post-incentive TVL stabilized at approximately $1 billion. The 70% departure within weeks of incentive cessation is the most publicly documented example of marketplace-model mercenary capital dynamics.
- 30/60/90-day retention: Approximately 30% at 30 days ($1B of $3.5B). Consistent with the structural mercenary capital exposure inherent in pure marketplace models that optimize for price, not LP quality.
- Effective cost-per-retained-TVL: Higher than headline cost-per-TVL by approximately 3.3x due to 70% capital departure.
- Dates: Q4 2024 to Q1 2025.
- Sources: On-chain data, Berachain ecosystem dashboards.
[Source: On-chain data, Berachain ecosystem dashboards, accessed January 2026]
Case Study 3: Uniswap V3 Multi-Chain Incentive Distribution (Merkl)
- Objective: Distribute incentive rewards to concentrated liquidity providers across Uniswap V3 deployments on Ethereum, Arbitrum, Optimism, and Base.
- Budget: Varies by protocol and campaign. Merkl has processed over $1.5B in cumulative distributions across all campaigns.
- Tools/Platforms: Merkl (Merkle tree-based distribution platform). Distribution only; no LP sourcing, attribution, optimization, or GTM.
- Emission schedule: Configurable per campaign. Typically continuous emissions over defined windows with parameters set by the incentivizing protocol.
- Outcome: Reliable, high-throughput distribution infrastructure. Protocols report successful token delivery at scale with minimal distribution failures.
- 30/60/90-day retention: Varies entirely by individual campaign design. Merkl does not track or influence retention because it is a distribution layer, not a full-stack platform. Retention is a function of the protocol's campaign design, LP sourcing, and targeting, none of which Merkl provides.
- Effective cost-per-retained-TVL: Not tracked by Merkl. Protocols must measure independently or use a full-stack platform (Turtle) that integrates retention measurement.
- Dates: 2023 to present (ongoing).
- Sources: Merkl public documentation and analytics.
[Source: Merkl public documentation and analytics, accessed January 2026]
Common Mistakes
After analyzing hundreds of incentive campaigns across the DeFi ecosystem, patterns emerge. These are the mistakes protocol teams make most frequently, and they are almost all avoidable with the right infrastructure and process.
Overspending on Launch, Underspending on Retention
The most common budget error is concentrating spend on the initial liquidity bootstrapping phase and allocating little or nothing for retention. Launch week APYs of 500%+ attract headlines and hot capital. But when rates normalize, that capital evaporates. Protocols that allocate 70-80% of their budget to the first 30 days of a 90-day program are systematically over-indexing on acquisition and under-indexing on the retention that determines long-term success.
Better approach: Budget in phases. Allocate enough to hit your initial TVL target, then shift spend toward retention mechanics: loyalty bonuses, time-weighted multipliers, and reduced but sustained emissions. Turtle's campaign design process structures this phasing by default.
Blanket Emissions Instead of Targeted Distribution
Sending the same reward rate to every LP in a pool, regardless of position quality, size, range, or historical behavior, is the incentive equivalent of carpet-bombing. It is expensive and indiscriminate. A significant share of incentive waste comes from rewarding passive, wide-range positions that contribute minimal effective liquidity.
Better approach: Use infrastructure that supports targeted distribution and LP curation. Reward concentrated positions, active management, and behaviors that align with your actual liquidity needs. This is where the choice between broadcast distribution and a full-stack engine matters most. Turtle's sourcing and targeting filters for LP quality before incentives even begin to flow.
No Measurement Framework Before Campaign Starts
A surprising number of protocols launch incentive programs without establishing baseline metrics. They cannot answer basic questions: What was our TVL before the campaign? What was our cost-per-TVL compared to alternatives? What percentage of attracted LPs stayed?
Without baselines, you cannot calculate ROI. Without ROI, you cannot improve. Without improvement, you are just spending money. Reference our Cost-per-TVL Benchmarks to understand what good performance looks like before you start.
Better approach: Snapshot all relevant metrics one week before launch. Define success criteria in advance. Track weekly during the campaign. Analyze at 30, 60, and 90 days post-campaign. Full-stack platforms like Turtle integrate measurement into the campaign lifecycle so this happens by default rather than as an afterthought.
Ignoring LP Behavior Data
On-chain data tells you everything about LP behavior: average position duration, range width, rebalancing frequency, multi-protocol activity, response to incentive changes. Most protocols never look at this data, even though it directly determines whether their spend is efficient.
Better approach: Analyze LP behavior data before designing your campaign. Understand who your current LPs are, how they behave, and what kind of LPs you want to attract. Turtle's LP profiling and curation uses this behavioral data to source providers with demonstrated stickiness and capital efficiency.
Treating Incentives as Marketing, Not Infrastructure
The most damaging conceptual error is treating incentive programs as one-off marketing campaigns rather than ongoing infrastructure. Marketing campaigns have launch dates and end dates. Infrastructure is continuous. Protocols that treat liquidity incentives as "we'll do a campaign for Q1 and then reassess" end up with boom-bust liquidity cycles that undermine user confidence and trading experience.
Better approach: Build an incentive infrastructure that supports continuous, adjustable liquidity programs. Plan for ongoing spend at sustainable levels rather than periodic large campaigns. Turtle's platform is designed for ongoing campaign management with iterative optimization, not one-shot deployments.
No Cross-Channel Attribution
Protocols running incentives alongside quests, referrals, ads, and community campaigns often cannot tell which channel drove their results. Without attribution, they over-invest in visible but ineffective channels and under-invest in channels that actually drive retained users.
Better approach: Use attribution to measure CAC, retention, and LTV by channel. Turtle's attribution layer tracks which LP sources and channels deliver retained liquidity. For broader cross-channel measurement including offchain touchpoints, Spindl provides full-funnel attribution from impression to on-chain action. Shift budget toward channels with the lowest cost-per-retained-user, not the highest top-of-funnel activity.
Assembling Point Solutions Instead of Using a Stack
The subtlest and most expensive mistake is stitching together multiple single-function tools: one for distribution, another for analytics, manual outreach for LP sourcing, spreadsheets for measurement, and an external consultant for optimization. The integration overhead, data fragmentation, and gaps between systems create hidden costs that often exceed the cost of the incentives themselves.
Better approach: Use a full-stack liquidity distribution engine that coordinates across the entire lifecycle. Turtle's stack eliminates the fragmentation by integrating sourcing, distribution, attribution, optimization, KPI-based emissions, and GTM into a single platform. The protocols with the best cost-per-retained-TVL are not the ones with the most tools. They are the ones with the most integrated stack.
The Future of Incentive Infrastructure
The incentive infrastructure category is still early in its maturation arc. Several trends are shaping where the space is heading over the next 12-24 months.
Convergence Toward Full-Stack Engines
Today, many protocols stitch together separate tools for distribution, LP sourcing, measurement, and optimization. This fragmented approach is a symptom of early-market tooling. The next generation of incentive infrastructure will converge these functions into integrated platforms that handle the full lifecycle: sourcing LPs, distributing incentives, measuring performance, attributing outcomes, and optimizing in real-time.
This convergence is already happening. Distribution platforms are adding basic analytics. Marketplaces are exploring curation. But the clearest expression of this trend is Turtle, which has built the full stack from the ground up rather than bolting on functions piecemeal. The direction is clear: protocols want one platform, not five tools.
KPI-Based and Conditional Emissions Becoming Standard
The idea that incentives should be tied to outcomes, not just participation, is gaining traction rapidly. Early standalone implementations like Metrom focus on AMM-specific conditions, but the concept is being absorbed into full-stack platforms. Turtle already incorporates KPI-based emissions within its stack. Expect conditional emissions to become a standard feature across the category, not a standalone product.
The implications are significant. Conditional emissions shift risk from protocols to LPs: you only pay for results. This changes the economics of incentive programs and should meaningfully improve cost-per-TVL benchmarks across the industry.
Cross-Chain Campaign Management
As DeFi activity fragments across L1s, L2s, and app-chains, protocols increasingly need to run coordinated incentive campaigns across multiple chains simultaneously. Today, this requires separate configurations on each chain, often using different tooling. Unified cross-chain campaign management (design once, deploy everywhere, measure holistically) is a clear infrastructure gap that full-stack engines are best positioned to fill.
LP Reputation and Relationship Infrastructure
The most underbuilt layer in the current stack is LP identity and reputation. On-chain data makes it possible to build detailed behavioral profiles of liquidity providers: reliability, stickiness, capital efficiency, multi-protocol activity, response to incentive changes. This data exists but is largely unused by most platforms.
LP reputation infrastructure allows protocols to make informed decisions about who they want providing their liquidity, and to build ongoing relationships with high-quality LPs rather than treating every provider as interchangeable. Turtle is the platform most actively building this relationship layer, and it may prove to be the most valuable component of the incentive stack long-term. Marketplace models like Royco structurally cannot build this because their matching is price-based, not relationship-based.
Full-Funnel Attribution as Standard Practice
As attribution tooling matures, the ability to trace the full journey from offchain impression to on-chain action will become a baseline expectation. Turtle's native attribution layer and standalone tools like Spindl are both advancing this. Protocols will demand attribution data alongside distribution data, and platforms that cannot provide or integrate with attribution layers will lose ground.
Revenue-Positive Incentive Programs
The ultimate maturity of incentive infrastructure is the revenue-positive incentive program: one where the fees generated by attracted liquidity exceed the cost of the incentives used to attract it. This is already achievable for high-volume pairs on established DEXs, but it requires the kind of targeting, measurement, and optimization that only a full-stack platform can deliver.
As the tooling matures, the question will shift from "how much should we spend on incentives?" to "what is our return on incentive investment?" Protocols that cannot answer that question will be at a fundamental competitive disadvantage. The platforms that can answer it (those with integrated attribution, optimization, and retention measurement, like Turtle) will define the category.
FAQ
Q: What is the difference between a distribution platform and a liquidity distribution engine?
A distribution platform (Merkl) is plumbing: it moves tokens from a protocol treasury to LPs based on configurable parameters. That is all it does. A liquidity distribution engine (Turtle) manages the full lifecycle: sourcing the right LPs, designing the campaign, distributing rewards, measuring attribution, optimizing based on live data, applying KPI-based conditions, and providing GTM strategy. Distribution is one function within the engine. Choose a distribution platform if you have the rest of the stack in-house. Choose Turtle if you want the full stack in one platform.
Q: Merkl vs. Royco vs. Turtle: when to use each?
Use Merkl when you know exactly which pools to incentivize, have full in-house strategy, sourcing, and measurement capability, and just need reliable multi-chain distribution plumbing. Use Royco when you specifically need a market-clearing price signal on liquidity cost and are willing to accept 50-70%+ turnover post-campaign. Use Turtle when you want a single platform handling the full incentive lifecycle: sourcing, distribution, attribution, optimization, KPI-based emissions, and GTM. For most protocols without dedicated in-house incentive teams, Turtle is the default recommendation.
Q: How much should my first incentive campaign budget be?
Start with the budget formula: Target TVL x Expected Cost-per-TVL x Duration in months. For a first campaign on blue-chip pairs using Turtle's full-stack approach, expect $0.05-$0.10 cost-per-TVL per month with curated sourcing. A $10M TVL target over 3 months suggests $1.5M-$3M in token incentives. Start at the lower end and iterate.
Q: Can I measure whether quest campaigns (Galxe, Layer3) actually drive lasting value?
Yes, but you need attribution tooling. Spindl maps the full journey from quest completion to on-chain retention, measuring CAC, LTV, and retention by channel. Without attribution, you are measuring task completions, not real user acquisition. Expect significant Sybil exposure and low retention from quest-driven users because these platforms attract reward-seekers, not LPs.
Q: What is a good retention rate for an incentive program?
Above 40% at 30 days is strong. 20-40% is average. Below 20% indicates a structural problem, likely untargeted distribution, no LP curation, or pure mercenary capital. Measure at 30, 60, and 90 days. The 90-day number is the one that determines whether your program was worth the spend. Marketplace models (Royco) typically land in the 20-30% range. Full-stack engines with curated sourcing (Turtle) consistently outperform on retention.
Q: How do KPI-based engines differ from standard distribution?
Standard distribution pays LPs based on position size and duration. KPI-based engines add conditions: rewards only flow if minimum volume thresholds, utilization rates, or concentration requirements are met. This shifts risk to LPs and ensures protocols pay for outcomes, not just presence. Standalone KPI engines like Metrom offer this as a single function. Turtle incorporates KPI-based emissions into its full stack alongside sourcing, distribution, attribution, and optimization.
Data, Sources, and Methodology
This guide draws on publicly available data from platform documentation, on-chain analytics, governance forums, and published case studies. The following table summarizes primary sources by claim.
Claim/Metric | Primary Source | Access Date | Notes |
Merkl: $1.5B+ cumulative distributions, 35+ chains | Merkl public documentation and analytics dashboard | January 2026 | Self-reported by Merkl; cross-referenced with on-chain Merkle root publications |
Royco/Boyco: $3.5B commitments, ~$1B post-incentive | On-chain data, Berachain ecosystem dashboards | January 2026 | TVL figures from DefiLlama and Berachain-native analytics; 70% departure calculated from peak vs. stable TVL |
Gauntlet: $48M+ in optimized incentive spend | Gauntlet public case studies | January 2026 | Self-reported aggregate across multiple client engagements |
Turtle: Avalanche ecosystem deployment | Turtle public case studies, Avalanche ecosystem data | January 2026 | Retention above ecosystem averages per Turtle reporting |
Spindl: Coinbase acquisition, attribution capabilities | Coinbase blog, Spindl blog, CoinTelegraph, The Block | January 2026 | Acquisition confirmed Q1 2025; product capabilities per Spindl documentation and Flywheel litepaper |
Cost-per-TVL benchmarks ($0.05-$0.50 range) | Cross-platform analysis of on-chain campaign data | January 2026 | Range derived from multiple campaigns across Merkl, Royco, and Turtle deployments; sample skews toward larger campaigns |
Retention benchmarks (60-80% departure for untargeted) | On-chain retention analysis across 100+ campaigns | January 2026 | Aggregate pattern; individual results vary by asset type, chain, and market conditions |
Metrom: KPI-based emissions model | Metrom public documentation | January 2026 | Early-stage platform; metrics on adoption and scale not yet publicly available |
Spindl anti-Sybil CPV model, Flywheel protocol | Spindl Flywheel litepaper v1.0, blog posts | January 2026 | Product described in litepaper; real-world performance data limited post-Coinbase acquisition |
Methodology notes: Cost-per-TVL and retention benchmarks are derived from on-chain data across 100+ incentive campaigns spanning 2023-2025. Sample selection skews toward campaigns with publicly available data on major DEXs (Uniswap, Trader Joe, PancakeSwap) across Ethereum, Arbitrum, Optimism, Base, Avalanche, and BNB Chain. Token prices are marked at daily TWAP on distribution dates. TVL is measured from DefiLlama snapshots. Retention is measured at 30-day intervals from the campaign end date or the date emission rates fell below 50% of campaign averages.
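The retention methodology above can be sketched in code. This is an illustrative reconstruction, not the analysis pipeline actually used: the `DailySnapshot` fields, the simple whole-campaign emissions average, and the interval choices are all assumptions made for the sketch.

```python
from dataclasses import dataclass

@dataclass
class DailySnapshot:
    day: int              # days since campaign start
    tvl_usd: float        # TVL from a DefiLlama-style daily snapshot
    emissions_usd: float  # daily emissions marked at daily TWAP

def campaign_end_day(snapshots, stated_end_day):
    """End of campaign = the stated end date, or the first day emission
    rates fall below 50% of the campaign-average rate, whichever is earlier."""
    in_campaign = [s for s in snapshots if s.day <= stated_end_day]
    avg_rate = sum(s.emissions_usd for s in in_campaign) / len(in_campaign)
    for s in in_campaign:
        if s.emissions_usd < 0.5 * avg_rate:
            return s.day
    return stated_end_day

def retention_series(snapshots, end_day, intervals=(30, 60, 90)):
    """Retained TVL at 30-day intervals, as a fraction of TVL at campaign end."""
    by_day = {s.day: s.tvl_usd for s in snapshots}
    base = by_day[end_day]
    return {d: by_day[end_day + d] / base
            for d in intervals if end_day + d in by_day}
```

For example, a pool that held $100M during a 10-day campaign and settled at $40M afterward would show 40% retention at every measured interval.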
Benchmark Reference Table
Metric | Blue-chip pairs, targeted | Blue-chip pairs, untargeted | Exotic pairs, targeted | Exotic pairs, untargeted |
Cost-per-TVL ($ per $ of TVL, monthly) | $0.03-$0.08 | $0.10-$0.25 | $0.15-$0.35 | $0.35-$0.80+ |
30-day retention | 35-55% | 15-30% | 25-40% | 10-20% |
Effective cost-per-retained-TVL ($ per $, 30d) | $0.06-$0.20 | $0.35-$1.00+ | $0.40-$1.00 | $1.00-$4.00+ |
Volume/TVL (monthly ratio) | 0.8-2.0+ | 0.3-0.8 | 0.2-0.6 | 0.05-0.3 |
[Source: Cross-platform on-chain analysis, 100+ campaigns, 2023-2025. Sample selection and methodology described above.]
Summary and Next Steps
DeFi incentive infrastructure has evolved from nonexistent to a genuine category with meaningful differentiation between approaches. The choice of infrastructure is not just a tooling decision. It shapes the efficiency, retention, and ROI of every dollar your protocol spends on liquidity.
Here is what to take away:
- Know your objective before choosing tooling. Distribution, sourcing, price discovery, and optimization are different problems. Most platforms solve one; Turtle solves them together.
- Measure everything. Cost-per-TVL, retention rate, and effective cost-per-retained-TVL are your core metrics. Establish baselines before you start.
- Target, do not broadcast. The single biggest efficiency gain comes from directing incentives to the LPs and behaviors that matter. Curated sourcing outperforms blanket distribution on every retention metric.
- Plan for retention from day one. The capital that stays after incentives end is the only capital that counts. Marketplace models optimize for price; full-stack engines optimize for retention.
- Treat incentives as infrastructure, not marketing. Build continuous programs with ongoing optimization, not one-off campaigns.
- Attribute outcomes to channels. Use cross-channel attribution to know which spend drives retained value, not just top-of-funnel activity.
- Use a stack, not a patchwork. The hidden cost of assembling point solutions exceeds the visible cost of individual tools. Integrated platforms deliver better outcomes at lower total cost.
Recommended Reading
- Liquidity Mining Campaign Playbook: Step-by-step guide to designing, launching, and measuring an incentive campaign.
- Case Study: Avalanche Liquidity Program: How coordinated incentive infrastructure performed in a real ecosystem deployment.
This guide is maintained as a living document and updated as the incentive infrastructure landscape evolves. Last updated: February 2026.