Reward Rate Formula
The foundation of our system is a dynamic reward rate (ρ) that adjusts based on market activity and total budget allocation. While we use the following formula as a guiding principle, please note that specific parameters such as β₀, the window used for the trading-volume moving average, and the adjustment factor h are initially chosen by the Mangrove DAO Council and may vary across epochs.
Reward Rate Formula:
ρ(t) = ρ₀ / (1 + (V₂₄/V₀)ʰ) × (1 - β(t)/β₀)
Where:
- ρ₀ is the base reward rate (initially set by the Mangrove DAO Council)
- V₂₄ is the 24-hour moving average of trading volume (the 24-hour window is an example and may change per epoch)
- V₀ is our reference volume (e.g. $1M) at which the reward rate halves
- h is an adjustment factor that controls how quickly rewards decrease with volume
- β(t) is the total MGV distributed up to time t
- β₀ is the maximum MGV allocation for the epoch (set by the Mangrove DAO Council)
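A minimal sketch of how ρ could be computed from these inputs. The parameter values used here (ρ₀ = 100, V₀ = $1M, h = 1, β₀ = 500,000 MGV) are illustrative placeholders, not values set by the DAO Council:

```python
def reward_rate(v24: float, beta_t: float,
                rho0: float = 100.0,      # base reward rate (illustrative)
                v0: float = 1_000_000,    # reference volume at which the rate halves (illustrative)
                h: float = 1.0,           # adjustment factor for the volume term (illustrative)
                beta0: float = 500_000):  # maximum MGV allocation for the epoch (illustrative)
    """Dynamic reward rate: rho(t) = rho0 / (1 + (V24/V0)^h) * (1 - beta(t)/beta0)."""
    volume_adjustment = rho0 / (1 + (v24 / v0) ** h)
    budget_control = max(0.0, 1 - beta_t / beta0)  # clamp so the rate never goes negative
    return volume_adjustment * budget_control
```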
This formula has two main components:
Volume Adjustment: ρ₀ / (1 + (V₂₄/V₀)ʰ)
Reduces rewards as trading volume increases
Helps distribute rewards more evenly across time
Encourages activity during lower-volume periods
Budget Control: (1 - β(t)/β₀)
Gradually reduces rewards as we approach the epoch's budget limit
Ensures we don't exceed the planned token distribution
Provides a smooth transition as the epoch's allocation is consumed
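For intuition, evaluating the sketch above at two hypothetical points in an epoch (using the same illustrative parameters) shows how the two components interact:

```python
# Low volume, nothing distributed yet: both terms are near their maximum
print(reward_rate(v24=250_000, beta_t=0))         # 80.0  (volume term 0.8 × budget term 1.0)

# High volume, half the epoch budget already distributed: both terms shrink the rate
print(reward_rate(v24=4_000_000, beta_t=250_000)) # 10.0  (volume term 0.2 × budget term 0.5)
```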
Important Considerations:
Parameter selection:
β₀, the averaging window for V₂₄, and h are selected by the Mangrove DAO Council for each epoch to align with platform goals and the incentives program's principles.
Consequently, the value of ρ will be communicated directly during the first epochs.
Principle that will be applied:
When trading volume is high, the incentives earned per dollar traded will decrease.
When platform volume is low, the incentives available will be higher to bootstrap activity.
This mechanism is designed to favor genuine activity on the platform and reward truly active participants.
This approach ensures that:
Token distribution remains within planned limits:
By controlling β₀, we manage the total incentives distributed during an epoch.
Rewards decrease gradually:
The reward rate reduces smoothly rather than stopping abruptly, providing predictability.
Users can anticipate how rewards might change as the epoch progresses.
Encourages genuine activity:
By adjusting rewards based on volume, we incentivize real, meaningful engagement on the platform.
Higher rewards during low-volume periods encourage participation when it benefits the network most.