Interpretation of the Incentive Mechanism

We interpret the incentive mechanism through the dual problem

One of the benefits of a convex program is that it can be read through its dual problem, which flips the primal problem into its mirror-image representation. Intuitively, the dual problem identifies which rules are actually blocking optimality, and how valuable it would be to relax them. The dual form assigns a “shadow price” to each constraint: the exchange rate between bending a rule and gaining more of the outcome. These shadow prices are called “duals,” and they give us pinpoint control over our incentive mechanism.
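
In standard convex-duality terms this has a precise meaning (a generic statement with placeholder symbols $f$, $g_k$, $b_k$, $p^\star$, not the subnet's own notation): the shadow price of a constraint is the derivative of the optimal value with respect to that constraint's limit.

$$
\max_x\; f(x)\quad \text{s.t.}\quad g_k(x)\le b_k
\qquad\Longrightarrow\qquad
\lambda_k^\star=\frac{\partial p^\star(b)}{\partial b_k},
$$

where $p^\star(b)$ is the optimal objective as a function of the limits $b$, and $\lambda_k^\star$ is the optimal dual of constraint $k$ (valid wherever $p^\star$ is differentiable).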

Think of the duals as a mirror image of the optimization problem. They allow the subnet to look at the activity from all angles and make sure the incentive mechanism is putting its best face forward, with no emissions being lost or leaking to the wrong miners.

To compute the duals we form a special function called a Lagrangian, which folds each constraint into the objective, weighted by its multiplier. The important output is the dual form of the Phase 1 primal, which is the aforementioned mirror image.
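
The exact dual depends on the Phase 1 primal's definitions; schematically, using the same placeholder symbols as above with one multiplier $\lambda_k \ge 0$ per constraint family (budget, rate $\kappa$, diversity, ramp, bounds, eligibility), it looks like:

$$
\mathcal{L}(x,\lambda)=f(x)-\sum_k \lambda_k\big(g_k(x)-b_k\big),
\qquad
d(\lambda)=\sup_x\,\mathcal{L}(x,\lambda),
$$

and the dual problem is to minimize $d(\lambda)$ over $\lambda\ge 0$. Its optimal multipliers $\lambda^\star$ are the duals read off below.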

Although intimidating at first glance, these relationships can be reduced cleanly to the optimal dual values:

  • Optimal budget dual
  • Optimal rate (κ) dual
  • Optimal diversity dual
  • Optimal ramp duals
  • Optimal bounds duals
  • Optimal eligibility dual
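
In the placeholder notation above, this tight-versus-zero pattern is the complementary-slackness condition:

$$
\lambda_k^\star\,\big(g_k(x^\star)-b_k\big)=0\quad\text{for every }k,
$$

so a dual can be nonzero only when its constraint holds with equality at the optimum $x^\star$.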

The optimal duals are the positive multipliers attached to whichever constraints are tight in the final allocation; everything else is zero. That’s what makes the “scoreboard” of duals so readable: each dual (number) corresponds directly to a rule, and it only shows up (nonzero) when that rule is the bottleneck. The dual’s magnitude tells us the marginal benefit of relaxing the specific constraint when it’s pressed up against the guardrail just a little. Examples (a numeric sketch follows this list):

  • If you gave the system a bit more budget, how much more total volume T could you route?

  • If you loosened κ slightly, how much more flow would come through?

  • If you allowed a miner’s share to exceed the diversity cap by a sliver, how much more would the objective improve?
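
The questions above can be answered numerically on a toy model. The sketch below is illustrative only: a simple linear allocation with an invented cost vector `c`, budget `B`, rate cap `kappa`, and diversity cap `delta`, not the subnet's actual Phase 1 program. It uses cvxpy, whose `constraint.dual_value` attribute exposes the optimal multipliers after a solve.

```python
import cvxpy as cp
import numpy as np

# Toy data (invented for illustration): three miners with per-unit
# costs c, a total budget B, a per-miner rate cap kappa, and a
# diversity cap delta on any single miner's share of total volume.
c = np.array([1.0, 2.0, 3.0])
B = 10.0
kappa = 4.0
delta = 0.5

x = cp.Variable(3, nonneg=True)        # volume routed to each miner
budget = c @ x <= B                    # budget constraint
rate = x <= kappa                      # per-miner rate cap
diversity = x <= delta * cp.sum(x)     # no miner exceeds delta of total

prob = cp.Problem(cp.Maximize(cp.sum(x)), [budget, rate, diversity])
prob.solve()
print("total volume T:", prob.value)
print("budget dual (shadow price):", budget.dual_value)
print("rate duals:", rate.dual_value)
print("diversity duals:", diversity.dual_value)

# Sanity check: the budget dual should predict the gain in T from
# a little extra budget (the first bullet's question).
eps = 0.1
prob_eps = cp.Problem(cp.Maximize(cp.sum(x)),
                      [c @ x <= B + eps, x <= kappa, x <= delta * cp.sum(x)])
prob_eps.solve()
print("dT/dB estimate:", (prob_eps.value - prob.value) / eps)
```

The final print compares the budget dual against a finite-difference estimate of dT/dB: only the duals of binding constraints come out nonzero, and their magnitudes match the marginal gains described above.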
