At its core, the Sportstensor scoring mechanism operates on the principle that true predictive edge manifests through consistent outperformance across diverse market conditions. The scoring function evaluates miner contributions through a rigorous set of metrics including prediction accuracy, market impact, and cross-sectional performance relative to the network's collective intelligence. This creates a natural selection pressure where only strategies delivering genuine value survive and thrive.
Critical to the mechanism's integrity is Bittensor's validation architecture. Every prediction is independently verified by our network of validators, with performance metrics permanently recorded on-chain. This distributed consensus approach makes manipulation statistically infeasible while creating an immutable track record of model performance.
Core Components
1. Statistical Significance (ρ)
Purpose
Statistical significance (also known as rho) measures how consistently a miner participates in the prediction network relative to required thresholds. This component ensures that miners maintain a steady stream of predictions rather than making sporadic contributions.
ρ = 1 / (1 + e^(−α ⋅ (x − threshold)))
Parameters
x: Number of miner predictions in the evaluation period
threshold: Required prediction threshold for the league
α: Sensitivity parameter (typically between 0.1 and 0.5)
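As a sketch of how ρ behaves, the logistic form above can be implemented directly. The helper name compute_significance_score mirrors the one used in the examples later in this document; the threshold and α values below are illustrative, not league-specific constants:

```python
import math

def compute_significance_score(num_predictions: int, threshold: int, alpha: float) -> float:
    """Logistic curve: rho approaches 1 as predictions exceed the threshold."""
    return 1 / (1 + math.exp(-alpha * (num_predictions - threshold)))

# Illustrative values: league threshold of 100 predictions, alpha = 0.2
print(compute_significance_score(100, 100, 0.2))  # 0.5 exactly at the threshold
print(compute_significance_score(130, 100, 0.2))  # ≈ 0.998, well above
print(compute_significance_score(80, 100, 0.2))   # ≈ 0.018, well below
```

Note how the curve saturates: once a miner is comfortably past the threshold, additional predictions barely change ρ, while falling short rapidly drives it toward zero.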
2. Incentive Score (v)
Purpose
The incentive score v combines a prediction's time component with its CLV component. The component values below carry over from examples earlier in the full document.
# Early prediction with good CLV
time_comp = 0.056  # from earlier example
clv_comp = 0.723   # from earlier example
v_early_good = time_comp + (1 - time_comp) * clv_comp  # 0.737
# Late prediction with poor CLV
time_comp_late = 0.886
clv_comp_poor = 0.412
v_late_poor = time_comp_late + (1 - time_comp_late) * clv_comp_poor  # 0.459
3. Closing Line Value (CLV)
Purpose
CLV measures the value captured between prediction odds and closing odds, indicating a miner's ability to identify market inefficiencies.
clv = closing_odds − prediction_odds
Detailed Example:
# Scenario 1: Value Captured
prediction = {
    'team': 'TeamA',
    'prediction_odds': 2.00,  # Implied probability 50%
    'closing_odds': 2.50,     # Implied probability 40%
    'actual_winner': 'TeamA'
}
clv = 2.50 - 2.00  # 0.50 (positive value captured)

# Scenario 2: Value Lost
prediction = {
    'team': 'TeamB',
    'prediction_odds': 2.20,  # Implied probability 45.5%
    'closing_odds': 1.80,     # Implied probability 55.6%
    'actual_winner': 'TeamB'
}
clv = 1.80 - 2.20  # -0.40 (negative value lost)
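The odds-to-probability conversion used in the comments above can be sketched with two small helpers (names are illustrative, not taken from the subnet code):

```python
def implied_probability(decimal_odds: float) -> float:
    """Convert decimal odds to the probability they imply."""
    return 1.0 / decimal_odds

def closing_line_value(prediction_odds: float, closing_odds: float) -> float:
    """Positive CLV means the line moved toward the miner's pick after the prediction."""
    return closing_odds - prediction_odds

print(implied_probability(2.00))       # 0.5
print(closing_line_value(2.00, 2.50))  # 0.5 (Scenario 1: value captured)
print(closing_line_value(2.20, 1.80))  # ≈ -0.4 (Scenario 2: value lost)
```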
4. Gaussian Filter
Purpose
The Gaussian filter prevents gaming of the system by suppressing scores for predictions that deviate significantly from market consensus.
Sigma (σ):
σ represents the standard deviation in the Gaussian filter and is used to adjust the sensitivity of the scoring system. This ensures that the influence of predictions aligns with the market consensus, suppressing extreme deviations.
σ = log(1 / closing_odds²)
Weight (w):
w is a threshold parameter that defines the acceptable range of deviations from market consensus. It depends on the closing_odds and their logarithmic value. The term (closing_odds - 1.0) scales the sensitivity, while the logarithmic factor provides a diminishing impact as odds grow, ensuring predictions are not overly penalized for slight deviations.
w = (closing_odds − 1.0) ⋅ log(closing_odds) / 2
Absolute Difference (diff)
diff represents the absolute difference between the closing_odds (market consensus) and the implied odds derived from the prediction probability (1 / prediction_probability). It quantifies how far the prediction deviates from the market's expected outcome. A smaller diff indicates closer alignment with market expectations, while larger values highlight significant deviations.
diff = |closing_odds − 1 / prediction_probability|
Filter:
The filter applies a scoring adjustment based on diff. If diff is within the threshold w, no penalty is applied (filter = 1.0). For larger deviations, an exponential penalty exp(−diff² / (4σ²)) is applied, which reduces the score as the deviation grows. This discourages predictions that significantly diverge from market consensus.
filter = 1.0 if diff ≤ w, otherwise exp(−diff² / (4σ²))
Example Calculations:
import math

# Conservative prediction close to market
closing_odds = 1.90
prediction_prob = 0.54  # implied odds ≈ 1.85
sigma = math.log(1 / closing_odds**2)
w = (closing_odds - 1.0) * math.log(closing_odds) / 2  # ≈ 0.289
diff = abs(closing_odds - 1 / prediction_prob)         # ≈ 0.048, within w
filter_conservative = 1.0  # Within acceptable range

# Aggressive prediction far from market
prediction_prob_aggressive = 0.80  # implied odds = 1.25
diff_aggressive = abs(closing_odds - 1 / prediction_prob_aggressive)  # 0.65, beyond w
filter_aggressive = math.exp(-diff_aggressive**2 / (4 * sigma**2))    # ≈ 0.94
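Putting the pieces of this section together, a self-contained version of the filter might look like the following. This is a sketch built from the σ, w, and diff definitions above, not the subnet's exact implementation:

```python
import math

def gaussian_filter(prediction_prob: float, closing_odds: float) -> float:
    """Score multiplier in (0, 1]: 1.0 inside the acceptable band, decaying outside it."""
    sigma = math.log(1 / closing_odds**2)
    w = (closing_odds - 1.0) * math.log(closing_odds) / 2
    diff = abs(closing_odds - 1 / prediction_prob)
    if diff <= w:
        return 1.0  # within the acceptable band: no penalty
    return math.exp(-diff**2 / (4 * sigma**2))

print(gaussian_filter(0.54, 1.90))  # 1.0: close to market consensus
print(gaussian_filter(0.80, 1.90))  # < 1.0: penalized for deviating from consensus
```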
5. Return On Investment (ROI)
Purpose
ROI sheds light on an additional dimension of prediction quality and is the primary metric most investors and participants will look at.
roi = earned_payouts / (predictions ⋅ bet_amount)
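As a quick numeric sketch with hypothetical numbers: a miner whose winning predictions net +12.4 units and whose losses cost 8.0 units across 40 one-unit predictions earns an 11% return.

```python
# Hypothetical: 40 predictions at a 1.0-unit stake each.
earned_payouts = 12.4 - 8.0  # net payouts: wins pay (odds - 1), losses cost the stake
num_predictions = 40
bet_amount = 1.0
roi = earned_payouts / (num_predictions * bet_amount)
print(roi)  # ≈ 0.11, i.e. an 11% return
```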
6. Incremental Return On Investment (Incr. ROI)
Incremental ROI uses the same calculation as ROI above, except it only accounts for a percentage of the miner's most recent predictions.
Purpose
Incremental ROI indicates how well the miner has performed over the most recent subset of predictions. We use it to determine whether to penalize their overall ROI score if they are copy trading the market (backing favorites) too closely.
7. Minimum Rho
Purpose
Minimum Rho is the smallest Statistical Significance (ρ) value a miner must reach to earn emissions. This effectively requires a minimum number of predictions, calculated per league from the league's rolling prediction threshold and its rho alpha value, before a miner is eligible for rewards.
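Because ρ follows the logistic curve from component 1, the implied minimum prediction count can be recovered by inverting it. The sketch below is illustrative: the threshold, α, and MIN_RHO values are assumptions, not the subnet's configured constants.

```python
import math

def min_predictions_for_rho(threshold: int, alpha: float, min_rho: float) -> int:
    """Smallest count x with 1 / (1 + e^(-alpha * (x - threshold))) >= min_rho."""
    x = threshold + math.log(min_rho / (1 - min_rho)) / alpha
    return math.ceil(x)

# Illustrative: threshold = 100, alpha = 0.2, MIN_RHO = 0.95
print(min_predictions_for_rho(100, 0.2, 0.95))  # 115
```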
Score Calculation Process
1. Individual Prediction Edge Score
Each prediction's edge score is calculated by combining all components:
def calculate_prediction_score(prediction, match_data):
    # Calculate components
    v = calculate_incentive_score(
        delta_t=get_time_delta(prediction.time, match_data.start_time),
        clv=calculate_clv(prediction, match_data)
    )
    sigma = calculate_closing_edge(prediction, match_data)
    gfilter = apply_gaussian_filter(prediction, match_data)
    # Combine for final prediction score
    return v * sigma * gfilter
2. Prediction and Market ROI
Each prediction's ROI is calculated and aggregated for the miner.
Each prediction's market ROI is then calculated and aggregated as well. This is the ROI the market itself would have earned, that is, the ROI from always backing the pre-match favorite.
Example Calculations:
# Calculate ROI for the prediction
ROI_BET_AMOUNT = 1.0
league_roi_counts[league][index] += 1
# If prediction was correct, update aggregate with positive earned payout
if pwmd.prediction.get_predicted_team() == pwmd.get_actual_winner():
    league_roi_payouts[league][index] += ROI_BET_AMOUNT * \
        (pwmd.get_actual_winner_odds() - 1)
# If prediction was incorrect, update aggregate with negative bet amount
else:
    league_roi_payouts[league][index] -= ROI_BET_AMOUNT

# Calculate the market ROI for the prediction: pay out only when the favorite won
if pwmd.actualHomeTeamScore > pwmd.actualAwayTeamScore and pwmd.homeTeamOdds < pwmd.awayTeamOdds:
    league_roi_market_payouts[league][index] += ROI_BET_AMOUNT * (pwmd.get_actual_winner_odds() - 1)
elif pwmd.actualAwayTeamScore > pwmd.actualHomeTeamScore and pwmd.awayTeamOdds < pwmd.homeTeamOdds:
    league_roi_market_payouts[league][index] += ROI_BET_AMOUNT * (pwmd.get_actual_winner_odds() - 1)
elif pwmd.actualHomeTeamScore == pwmd.actualAwayTeamScore and pwmd.drawOdds < pwmd.homeTeamOdds and pwmd.drawOdds < pwmd.awayTeamOdds:
    league_roi_market_payouts[league][index] += ROI_BET_AMOUNT * (pwmd.get_actual_winner_odds() - 1)
else:
    league_roi_market_payouts[league][index] -= ROI_BET_AMOUNT
3. Base ROI Score - Miner vs Market
To calculate the base ROI score, the difference between the overall miner and market ROI is calculated. If the miner is not beating the market, they are not eligible for emissions.
Additionally, if the miner is beating the market, but has a negative ROI, their base ROI score is penalized by the percent distance from 0.
Example Calculations:
ROI_BET_AMOUNT = 1.0
# Calculate market ROI
market_roi = league_roi_market_payouts[league][index] / (league_roi_counts[league][index] * ROI_BET_AMOUNT) if league_roi_counts[league][index] > 0 else 0.0
# Calculate final ROI score
roi = league_roi_payouts[league][index] / (league_roi_counts[league][index] * ROI_BET_AMOUNT) if league_roi_counts[league][index] > 0 else 0.0
# Calculate the difference between the miner's ROI and the market ROI
roi_diff = roi - market_roi
# Base ROI score requires the miner is beating the market
base_roi_score = round(rho * ((roi_diff if roi_diff>0 else 0)*100), 4)
# If ROI is less than 0, but greater than market ROI, penalize the ROI score by its distance from 0
if roi < 0 and roi_diff > 0:
    base_roi_score = base_roi_score + (base_roi_score * roi)
4. Incremental ROI Adjustment Factor
Each prediction has its ROI calculated and aggregated for the miner. Once the number of ROI-calculated predictions reaches the incremental ROI threshold, the aggregates are snapshotted and used to calculate the adjustment factor applied in the final ROI score calculation.
If the miner's prediction count makes them eligible for incremental ROI analysis, we calculate the difference between the miner's incremental ROI and the market favorite's incremental ROI. If the absolute difference is between 0 and MAX_INCR_ROI_DIFF_PERCENTAGE, we apply exponential decay scaling to determine a penalty between 0 and 0.99.
Example Calculations:
ROI_INCR_PRED_COUNT_PERCENTAGE = 0.24
MAX_INCR_ROI_DIFF_PERCENTAGE = 0.10

# Snapshot the ROI aggregates once the incremental prediction count is reached
if league_roi_counts[league][index] == round(vali.ROLLING_PREDICTION_THRESHOLD_BY_LEAGUE[league] * ROI_INCR_PRED_COUNT_PERCENTAGE, 0):
    league_roi_incr_counts[league][index] = league_roi_counts[league][index]
    league_roi_incr_payouts[league][index] = league_roi_payouts[league][index]
    league_roi_incr_market_payouts[league][index] = league_roi_market_payouts[league][index]

if league_roi_incr_counts[league][index] == round(vali.ROLLING_PREDICTION_THRESHOLD_BY_LEAGUE[league] * ROI_INCR_PRED_COUNT_PERCENTAGE, 0) and final_roi_score > 0:
    market_roi_incr = league_roi_incr_market_payouts[league][index] / (league_roi_incr_counts[league][index] * ROI_BET_AMOUNT) if league_roi_incr_counts[league][index] > 0 else 0.0
    roi_incr = league_roi_incr_payouts[league][index] / (league_roi_incr_counts[league][index] * ROI_BET_AMOUNT) if league_roi_incr_counts[league][index] > 0 else 0.0
    roi_incr_diff = roi_incr - market_roi_incr
    # If incremental ROI is within the difference threshold of the market's, calculate the penalty
    if abs(roi_incr_diff) <= MAX_INCR_ROI_DIFF_PERCENTAGE:
        # Exponential decay scaling
        k = 30  # Decay constant; increase for steeper decay
        # Scale the penalty factor to max out at 0.99
        penalty_factor = 0.99 * np.exp(-k * abs(roi_incr_diff))
        incr_roi_adjustment_factor = 1 - penalty_factor
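With k = 30 and the 0.99 cap, the adjustment factor rises quickly as a miner's incremental ROI separates from the market's. The values below follow directly from the penalty formula above:

```python
import math

K = 30      # decay constant from the example above
CAP = 0.99  # maximum penalty factor

for roi_incr_diff in (0.00, 0.02, 0.05, 0.10):
    penalty_factor = CAP * math.exp(-K * roi_incr_diff)
    adjustment = 1 - penalty_factor
    print(f"diff={roi_incr_diff:.2f} -> adjustment factor ≈ {adjustment:.3f}")
```

A miner tracking the market almost exactly (diff ≈ 0) keeps only about 1% of their ROI score, while a miner at the 0.10 boundary keeps roughly 95% of it.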
5. League Score Calculation
Purpose
League scores are the sum of the total aggregated prediction edge score and the total ROI score, applying league-specific requirements and weightings as well as the respective weightings for the edge and ROI scores.
Both the edge and ROI scores have rho (ρ), the statistical significance, applied.
# Sample prediction scores for a miner
predictions = {
    'match_1': {
        'market_odds_for_favorite': 1.54,
        'odds': 2.85,
        'timestamp': '2024-01-01 15:00:00',
        'correct': True
    },
    'match_2': {
        'market_odds_for_favorite': 1.44,
        'odds': 3.32,
        'timestamp': '2024-01-02 20:00:00',
        'correct': False
    },
    'match_3': {
        'market_odds_for_favorite': 1.64,
        'odds': 1.64,
        'timestamp': '2024-01-03 19:30:00',
        'correct': True
    }
}
ROI_BET_AMOUNT = 1.0
roi_sum, market_roi_sum = 0.0, 0.0

# Calculate components; the market's bet is always the pre-match favorite, so the
# favorite's result is inferred here from the miner's pick and its outcome
# (a two-outcome simplification for this example)
for p in predictions.values():
    picked_favorite = p['odds'] <= p['market_odds_for_favorite']
    favorite_won = p['correct'] if picked_favorite else not p['correct']
    if favorite_won:
        market_roi_sum += ROI_BET_AMOUNT * (p['market_odds_for_favorite'] - 1)
    else:
        market_roi_sum -= ROI_BET_AMOUNT
    if p['correct']:
        roi_sum += ROI_BET_AMOUNT * (p['odds'] - 1)
    else:
        roi_sum -= ROI_BET_AMOUNT

# roi_sum        == 1.0*(2.85-1) - 1.0 + 1.0*(1.64-1) == 1.49
# market_roi_sum == -1.0 + 1.0*(1.44-1) + 1.0*(1.64-1) == 0.08
num_predictions = len(predictions)  # 3
rho = compute_significance_score(num_predictions, threshold, alpha=0.2)  # 0.731
roi_diff = roi_sum - market_roi_sum  # 1.41
base_roi_score = round(rho * ((roi_diff if roi_diff > 0 else 0) * 100), 4)  # 103.071
# If ROI is less than 0, but greater than market ROI, penalize the ROI score by its distance from 0
roi = roi_sum / (num_predictions * ROI_BET_AMOUNT)
if roi < 0 and roi_diff > 0:
    base_roi_score += base_roi_score * roi
# Apply incremental ROI adjustment factor (see #4 above for details)
base_roi_score *= incr_roi_adjustment_factor
6. Score Normalization and Minimum Rho with Score Weighting
Purpose
Brings edge and ROI scores to the same scale and applies scoring weights. We also check whether the miner has reached the minimum rho value, and rho is applied again after normalizing and combining the scores.
Normalization Calculation Example:
# Normalize edge scores
min_edge, max_edge = min(league_scores[league]), max(league_scores[league])
normalized_edge = [
    (score - min_edge) / (max_edge - min_edge)
    if score > 0 and (max_edge - min_edge) > 0 else 0
    for score in league_scores[league]
]
# Normalize ROI scores
min_roi, max_roi = min(league_roi_scores[league]), max(league_roi_scores[league])
normalized_roi = [
    (score - min_roi) / (max_roi - min_roi)
    if (max_roi - min_roi) > 0 else 0
    for score in league_roi_scores[league]
]
Score Weighting with Minimum Rho Check Example:
# Apply weights, combine, and apply rho to set final league scores
ROI_SCORING_WEIGHT = 0.5  # 50/50 weighting between edge and ROI scores
league_scores[league] = [
((1-ROI_SCORING_WEIGHT) * e + ROI_SCORING_WEIGHT * r) * rho
if r > 0 and e > 0 and rho >= MIN_RHO else 0
for e, r, rho in zip(normalized_edge, normalized_roi, league_rhos[league])
]
7. Overall Score Aggregation
Purpose
Combines scores across different leagues while respecting league weights.
Mathematical Definition
overall_score = ∑ (league_score ⋅ league_weight)
Implementation
Calculate total positive scores for each league:
league_totals = {league: 0.0 for league in ACTIVE_LEAGUES}
for league in ACTIVE_LEAGUES:
    league_totals[league] = sum(score for score in league_scores[league] if score > 0)
Scale scores within each league to match allocation percentage:
scaled_scores_per_league = {league: [0.0] * len(all_uids) for league in ACTIVE_LEAGUES}
for league in ACTIVE_LEAGUES:
    total_league_score = league_totals[league]
    allocation = LEAGUE_SCORING_PERCENTAGES[league] * 100  # Convert to percentage
    if total_league_score > 0:
        scaling_factor = allocation / total_league_score  # Factor to scale league scores
        scaled_scores_per_league[league] = [
            (score * scaling_factor if score > 0 else 0) for score in league_scores[league]
        ]
Aggregate scaled scores across all leagues:
for i in range(len(all_uids)):
    all_scores[i] = sum(scaled_scores_per_league[league][i] for league in ACTIVE_LEAGUES)
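As a toy check of the three steps above — two hypothetical leagues, three UIDs, and assumed allocation percentages and scores — the scaled scores within each league sum to exactly that league's allocation:

```python
# Assumed illustrative inputs (not real league allocations)
ACTIVE_LEAGUES = ['NBA', 'EPL']
LEAGUE_SCORING_PERCENTAGES = {'NBA': 0.6, 'EPL': 0.4}
league_scores = {'NBA': [3.0, 1.0, 0.0], 'EPL': [2.0, 2.0, 0.0]}
all_uids = [0, 1, 2]

# Total positive scores per league
league_totals = {lg: sum(s for s in league_scores[lg] if s > 0) for lg in ACTIVE_LEAGUES}

# Scale each league's scores to its allocation
scaled = {lg: [0.0] * len(all_uids) for lg in ACTIVE_LEAGUES}
for lg in ACTIVE_LEAGUES:
    allocation = LEAGUE_SCORING_PERCENTAGES[lg] * 100
    if league_totals[lg] > 0:
        factor = allocation / league_totals[lg]
        scaled[lg] = [s * factor if s > 0 else 0 for s in league_scores[lg]]

# Aggregate across leagues per UID
all_scores = [sum(scaled[lg][i] for lg in ACTIVE_LEAGUES) for i in range(len(all_uids))]
print(all_scores)  # [65.0, 35.0, 0] — per-league sums are 60 and 40, matching allocations
```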
Penalty System
1. League Commitment Penalties
Purpose
Ensures miners maintain active participation across leagues by penalizing those without a league commitment.
Implementation
LeagueCommitmentRequests are sent to miners every 15 minutes. For every consecutive request that is missed, an accumulating penalty of -0.1 is calculated and applied to a miner's final score.
As soon as a miner properly responds with a league commitment, the penalty resets to 0.
Additionally, if a miner has failed to respond to a LeagueCommitmentRequest for 24 hours, their final score will be set to 0 until they properly commit to a league again or their UID is deregistered.
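The accumulation and 24-hour cutoff described above can be sketched as follows (function and constant names are illustrative, not from the subnet code):

```python
REQUEST_INTERVAL_MINUTES = 15
MISSES_FOR_24_HOURS = 24 * 60 // REQUEST_INTERVAL_MINUTES  # 96 consecutive misses

def apply_commitment_penalty(final_score: float, consecutive_missed: int) -> float:
    """-0.1 per consecutive missed LeagueCommitmentRequest; score zeroed after 24 hours."""
    if consecutive_missed >= MISSES_FOR_24_HOURS:
        return 0.0  # unresponsive for a full day: score forced to 0
    return final_score - 0.1 * consecutive_missed

print(apply_commitment_penalty(1.0, 0))   # 1.0 — responsive miner, no penalty
print(apply_commitment_penalty(1.0, 3))   # ≈ 0.7 — three consecutive missed requests
print(apply_commitment_penalty(1.0, 96))  # 0.0 — 24 hours unresponsive
```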
2. No-Response Penalties
Purpose
Ensures miners respond to prediction requests in a timely manner.
Implementation
A validator performs a scoring step every 30 minutes. For each 30-minute window leading up to a scoring step, a miner who fails to respond to a MatchPredictionRequest accrues a penalty of -0.1.
For example, a miner who fails to respond to 6 MatchPredictionRequests from a validator will have a total of -0.6 points applied to their final score in the next scoring step.
After the scoring step has completed, the no-response penalties are reset to 0.
A miner has 15 seconds to respond to a validator request before it is considered a no-response.
Final Score Distribution
1. Pareto Distribution Application
Purpose
Transforms final scores to maintain competitive differentiation while preventing extreme outliers.