REP-54: Network operation rewards distribution adjustment

This is a proposal to revamp network rewards distribution.

The current 8:2 reward ratio (Staking : Operation) and the taxation of Operation Rewards as Operators’ income do not adequately cover the operational costs of running a Node, particularly for multi-worker deployments.

Multiple Operators have approached us to express this concern.

To address this, it is necessary to revamp the way Operation Rewards are calculated and distributed:

  1. The calculation should be linked to more quantifiable metrics, such as the number of workers, the amount of storage used (the main cost driver), or the Node’s coverage. It is currently linked to the number of requests, which is not always fair and is vulnerable to manipulation, since requests are not billed at the moment.
  2. Operation Rewards should be distributed to a dedicated Operation Pool. By separating Operation Rewards from the Staking Pool, Operators retain full control over their earnings, which improves cost recovery for running Nodes and makes multi-worker deployments financially viable (and thereby incentivizes them).

Is it possible to adjust the reward ratio from 8:2 to 9:1, so that Operation Rewards make up 10% of the total reward? 20% seems a little high for Node operation, and it means users receive even less.

A method can be designed to calculate the potential contribution of Nodes in order to allocate Operation Rewards. This calculation method could include factors such as the total Node stake, whether it is a public good Node, uptime duration, the number of valid requests, and the number of networks and workers it supports. This would allow for a more comprehensive evaluation of the Node’s potential contribution to the network.
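As a rough sketch of the factors listed above, they could be collected into a single per-Node record. The class and field names here are hypothetical, purely to make the suggestion concrete:

```python
from dataclasses import dataclass

@dataclass
class NodeFactors:
    """Hypothetical per-Node factors for evaluating potential contribution."""
    total_stake: float     # total Node stake
    is_public_good: bool   # whether it is a public good Node
    uptime_hours: float    # continuous uptime duration
    valid_requests: int    # number of valid requests served
    network_count: int     # number of networks supported
    worker_count: int      # number of workers supported
```

How these factors are weighted and combined is exactly what the rest of the thread discusses.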

u sound like a GPT bot lol

Based on the white paper’s description, the Operation Rewards Calculation should focus more on the Node’s contribution to the network. To align with this, I’ve further categorized and named the key factors as follows:

  • Request Distribution (measuring processing capacity and efficiency):
  1. Valid request count (valid_count)
  2. Potential invalid request count (invalid_count)

  • Data Indexing (measuring network coverage and load capacity):
  1. Number of supported networks (network_count)
  2. Worker count (worker_count)
  3. Activity count (activity_count)

  • Stability (measuring reliability and continuous uptime):
  1. Node version (version)
  2. Continuous uptime (uptime)

These metrics provide a detailed framework to evaluate a Node’s contribution accurately and ensure the rewards are distributed fairly based on performance and reliability.

Formula Design

The total score for Node i is represented as S_i, with the formula:

S_i = W_1 \cdot R_i + W_2 \cdot D_i + W_3 \cdot E_i

Where:

  • R_i: Request distribution score
  • D_i: Data indexing score
  • E_i: Stability score
  • W_1, W_2, W_3: Weights for each dimension, satisfying W_1 + W_2 + W_3 = 1

1. Request Distribution Score (R_i)

R_i = \frac{\text{valid_count}_i}{\max(\text{valid_count})} - \alpha \cdot \frac{\text{invalid_count}_i}{\max(\text{invalid_count})}

  • \alpha: Penalty coefficient for invalid requests, where 0 < \alpha < 1.
  • Higher valid requests increase the score, while more invalid requests decrease the score.
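The penalty mechanic can be sketched in a few lines. This is a minimal illustration of the R_i formula above, assuming the network-wide maxima are already known:

```python
def request_score(valid, invalid, max_valid, max_invalid, alpha=0.5):
    """Request distribution score R_i: normalized valid requests minus
    a penalty (0 < alpha < 1) on normalized invalid requests."""
    return valid / max_valid - alpha * (invalid / max_invalid)

# A node with 80% of the top valid count and 20% of the top invalid count:
# 0.8 - 0.5 * 0.2 = 0.7
score = request_score(valid=80, invalid=10, max_valid=100, max_invalid=50)
```

Note that R_i can go negative for a node whose invalid requests dominate, which may or may not be desirable depending on how the score feeds into reward allocation.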

2. Data Indexing Score (D_i)

D_i = \beta_1 \cdot \frac{\text{network_count}_i}{\max(\text{network_count})} + \beta_2 \cdot \frac{\text{worker_count}_i}{\max(\text{worker_count})} + \beta_3 \cdot \frac{\text{activity_count}_i}{\max(\text{activity_count})}

  • \beta_1, \beta_2, \beta_3: Weights for sub-metrics, satisfying \beta_1 + \beta_2 + \beta_3 = 1.
  • Data indexing evaluates the Node’s network coverage, worker coverage, and activity indexing quantity.

3. Stability Score (E_i)

E_i = \gamma_1 \cdot \frac{\text{uptime}_i}{\max(\text{uptime})} + \gamma_2 \cdot \text{version_score}_i

  • \text{version_score}_i = 1 if the node uses the latest version, otherwise 0.
  • \gamma_1, \gamma_2: Weights for sub-metrics, satisfying \gamma_1 + \gamma_2 = 1.
  • Stability combines uptime and the Node’s software version.

Weight Assignments

To reflect the importance of each dimension:

  • W_1 = 0.6, W_2 = 0.3, W_3 = 0.1
  • \alpha = 0.5
  • Data indexing weights: \beta_1 = 0.3, \beta_2 = 0.6, \beta_3 = 0.1
  • Stability weights: \gamma_1 = 0.7, \gamma_2 = 0.3

Final Formula

\begin{aligned} S_i = & 0.6 \cdot \left(\frac{\text{valid_count}_i}{\max(\text{valid_count})} - 0.5 \cdot \frac{\text{invalid_count}_i}{\max(\text{invalid_count})}\right) \\ & + 0.3 \cdot \left(0.3 \cdot \frac{\text{network_count}_i}{\max(\text{network_count})} + 0.6 \cdot \frac{\text{worker_count}_i}{\max(\text{worker_count})} + 0.1 \cdot \frac{\text{activity_count}_i}{\max(\text{activity_count})}\right) \\ & + 0.1 \cdot \left(0.7 \cdot \frac{\text{uptime}_i}{\max(\text{uptime})} + 0.3 \cdot \text{version_score}_i\right) \end{aligned}
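The final formula above can be sketched end-to-end as follows. The dict keys and the convention that `maxima["version"]` holds the latest release string are assumptions for illustration; the weights match the proposed assignments:

```python
def node_score(m, maxima,
               w=(0.6, 0.3, 0.1),     # W_1, W_2, W_3
               alpha=0.5,             # invalid-request penalty
               beta=(0.3, 0.6, 0.1),  # data indexing sub-weights
               gamma=(0.7, 0.3)):     # stability sub-weights
    """Total score S_i for Node i, per the final formula.

    `m` holds the Node's metrics; `maxima` holds the network-wide
    maximum of each metric, used for normalization.
    """
    def norm(key):
        return m[key] / maxima[key] if maxima[key] else 0.0

    r = norm("valid_count") - alpha * norm("invalid_count")
    d = (beta[0] * norm("network_count")
         + beta[1] * norm("worker_count")
         + beta[2] * norm("activity_count"))
    version_score = 1.0 if m["version"] == maxima["version"] else 0.0
    e = gamma[0] * norm("uptime") + gamma[1] * version_score
    return w[0] * r + w[1] * d + w[2] * e
```

A node that leads every metric, has zero invalid requests, and runs the latest version scores exactly 1.0, since the weights in each dimension sum to 1.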

If you have any questions or better suggestions, feel free to leave a comment for discussion.

Hi folks, the REP draft has been prepared.

Need some time to grind through these formulas :rofl:
