At its core, the Sportstensor scoring mechanism operates on the principle that true predictive edge manifests through consistent outperformance across diverse market conditions. The scoring function evaluates miner contributions through a rigorous set of metrics including prediction accuracy, market impact, and cross-sectional performance relative to the network's collective intelligence. This creates a natural selection pressure where only strategies delivering genuine value survive and thrive.
Critical to the mechanism's integrity is Bittensor's validation architecture. Every prediction is independently verified by our network of validators, with performance metrics permanently recorded on-chain. This distributed consensus approach makes manipulation statistically infeasible while creating an immutable track record of model performance.
Core Components
1. Statistical Significance (ρ)
Purpose
Statistical significance measures how consistently a miner participates in the prediction network relative to required thresholds. This component ensures that miners maintain a steady stream of predictions rather than making sporadic contributions.
ρ = 1 / (1 + e^(−α(x − threshold)))
Parameters
x: Number of miner predictions in the evaluation period
threshold: Required prediction threshold for the league
α: Sensitivity parameter (typically between 0.1 and 0.5)
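A minimal worked example of the sigmoid above; the α, threshold, and x values are illustrative only:
import math

alpha = 0.25       # sensitivity parameter
threshold = 100    # required predictions for the league
x = 120            # predictions made in the evaluation period

# rho approaches 1 once the miner clears the threshold and 0 when far below it
rho = 1 / (1 + math.exp(-alpha * (x - threshold)))  # ≈ 0.993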
# Early prediction with good CLV
time_comp = 0.056 # from earlier example
clv_comp = 0.723 # from earlier example
v_early_good = time_comp + (1 - time_comp) * clv_comp # 0.737
# Late prediction with poor CLV
time_comp_late = 0.886
clv_comp_poor = 0.412
v_late_poor = time_comp_late + (1 - time_comp_late) * clv_comp_poor # 0.459
3. Closing Line Value (CLV)
Purpose
CLV measures the value captured between prediction odds and closing odds, indicating a miner's ability to identify market inefficiencies.
Detailed Example:
# Scenario 1: Value Captured
prediction = {
    'team': 'TeamA',
    'prediction_odds': 2.00,  # Implied probability 50%
    'closing_odds': 2.50,     # Implied probability 40%
    'actual_winner': 'TeamA'
}
clv = 2.50 - 2.00  # 0.50 (positive value captured)
# Scenario 2: Value Lost
prediction = {
    'team': 'TeamB',
    'prediction_odds': 2.20,  # Implied probability ≈ 45.5%
    'closing_odds': 1.80,     # Implied probability ≈ 55.6%
    'actual_winner': 'TeamB'
}
clv = 1.80 - 2.20  # -0.40 (value lost)
4. Gaussian Filter
Purpose
The Gaussian filter prevents gaming of the system by suppressing scores for predictions that deviate significantly from market consensus.
Sigma (σ):
σ represents the standard deviation in the Gaussian filter and is used to adjust the sensitivity of the scoring system. This ensures that the influence of predictions aligns with the market consensus, suppressing extreme deviations.
Weight (w):
w is a threshold parameter that defines the acceptable range of deviations from market consensus. It depends on the closing_odds and their logarithm: the term (closing_odds - 1.0) scales the sensitivity, while the logarithmic factor has a diminishing impact as odds grow, ensuring predictions are not overly penalized for slight deviations.
Absolute Difference (diff):
diff is the absolute difference between the closing_odds (market consensus) and the odds implied by the prediction (1 / prediction_probability). It quantifies how far the prediction deviates from the market's pricing of the outcome. A smaller diff indicates closer alignment with market expectations, while larger values indicate significant deviations.
Filter:
The filter applies a scoring adjustment based on diff. If diff is within the threshold w, no penalty is applied (filter = 1.0). For larger deviations, the score is multiplied by an exponential penalty, exp(-diff² / (4σ²)), which sharply reduces it as the deviation increases. This discourages predictions that significantly diverge from market consensus.
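A minimal sketch of the piecewise filter described above, combining the w, diff, and σ definitions from this section; the function name and signature are illustrative, not the actual implementation:
import math

def gaussian_filter(closing_odds, prediction_prob, sigma):
    w = (closing_odds - 1.0) * math.log(closing_odds) / 2  # acceptable deviation
    diff = abs(closing_odds - 1 / prediction_prob)         # deviation in odds space
    if diff <= w:
        return 1.0                                         # no penalty
    return math.exp(-diff**2 / (4 * sigma**2))             # exponential penalty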
Example Calculations:
import math

# Conservative prediction close to market
closing_odds = 1.90
prediction_prob = 0.54  # implied odds ≈ 1.85
sigma = math.log(1/1.90**2)
w = (1.90 - 1.0) * math.log(1.90) / 2
diff = abs(1.90 - 1/0.54)  # ≈ 0.05, within the threshold w
filter_conservative = 1.0  # within acceptable range, no penalty

# Aggressive prediction far from market
closing_odds = 1.90
prediction_prob = 0.80  # implied odds = 1.25
diff_aggressive = abs(1.90 - 1/0.80)  # 0.65, exceeds the threshold w
filter_aggressive = math.exp(-diff_aggressive**2 / (4 * sigma**2))  # ≈ 0.342
5. Return On Investment (ROI)
Purpose
ROI measures the profitability of a miner's predictions when each is treated as a flat one-unit bet. It captures an additional aspect of prediction quality and is a primary metric most investors and participants will look at.
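A short worked example under the flat one-unit staking shown later in this section; treating ROI as net payout divided by total amount staked is an assumption for illustration:
ROI_BET_AMOUNT = 1.0

# Illustrative record: 6 correct predictions at decimal odds 2.0, 4 incorrect
net_payout = 6 * ROI_BET_AMOUNT * (2.0 - 1)  # +6.0 from correct predictions
net_payout -= 4 * ROI_BET_AMOUNT             # -4.0 from incorrect predictions
total_staked = 10 * ROI_BET_AMOUNT
roi = net_payout / total_staked              # 0.20, i.e. a 20% return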
Score Calculation Process
1. Individual Prediction Edge Score
Each prediction's edge score is calculated by combining all components:
def calculate_prediction_score(prediction, match_data):
    # Calculate components
    v = calculate_incentive_score(
        delta_t=get_time_delta(prediction.time, match_data.start_time),
        clv=calculate_clv(prediction, match_data)
    )
    sigma = calculate_closing_edge(prediction, match_data)
    gfilter = apply_gaussian_filter(prediction, match_data)

    # Combine for final prediction score
    return v * sigma * gfilter
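An illustrative combination of the three components; the closing edge value below is assumed, not taken from the project code:
v = 0.737        # incentive score from the earlier example
sigma = 0.15     # assumed closing edge, for illustration only
gfilter = 1.0    # prediction fell within the Gaussian filter threshold
prediction_score = v * sigma * gfilter  # ≈ 0.111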
2. Individual Prediction ROI
Purpose
Each prediction has its ROI calculated and aggregated for the miner.
Example Calculations:
# Calculate ROI for the prediction
ROI_BET_AMOUNT = 1.0
league_roi_counts[league][index] += 1

# If prediction was correct, update aggregate with positive earned payout
if pwmd.prediction.get_predicted_team() == pwmd.get_actual_winner():
    league_roi_payouts[league][index] += ROI_BET_AMOUNT * \
        (pwmd.get_actual_winner_odds() - 1)
# If prediction was incorrect, update aggregate with negative bet amount
else:
    league_roi_payouts[league][index] -= ROI_BET_AMOUNT
3. League Score Calculation
Purpose
League scores combine the total aggregated prediction edge score and the total ROI score, taking into account league-specific requirements and weightings as well as the respective weightings for the edge and ROI scores.
Both the edge and ROI scores have rho (ρ), the statistical significance component, applied.
4. Score Normalization and Weighting
Purpose
Brings the edge and ROI scores onto the same scale and applies the scoring weights. Rho is applied again after the scores are normalized and combined.
Normalization Calculation Example:
# Normalize edge scores
min_edge, max_edge = min(league_scores[league]), max(league_scores[league])
normalized_edge = [
    (score - min_edge) / (max_edge - min_edge) if score > 0 else 0
    for score in league_scores[league]
]

# Normalize ROI scores
min_roi, max_roi = min(league_roi_scores[league]), max(league_roi_scores[league])
normalized_roi = [
    (score - min_roi) / (max_roi - min_roi) if (max_roi - min_roi) > 0 else 0
    for score in league_roi_scores[league]
]
Score Weighting Example:
# Apply weights, combine, and apply rho to set final league scores
ROI_SCORING_WEIGHT = 0.5  # 50/50 weighting between edge and ROI scores
league_scores[league] = [
    ((1 - ROI_SCORING_WEIGHT) * e + ROI_SCORING_WEIGHT * r) * rho if r > 0 else 0
    for e, r, rho in zip(normalized_edge, normalized_roi, league_rhos[league])
]
5. Overall Score Aggregation
Purpose
Combines scores across different leagues while respecting league weights.
Mathematical Definition
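Restating the implementation below: each league's positive scores are scaled so that they sum to that league's allocation, and a miner's overall score is the sum of its scaled league scores.
scaling_factor(league) = LEAGUE_SCORING_PERCENTAGES[league] × 100 / Σⱼ positive league_score(j, league)
overall_score(i) = Σ over leagues of league_score(i, league) × scaling_factor(league), counting only positive league scores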
Implementation
Calculate total positive scores for each league:
league_totals = {league: 0.0 for league in ACTIVE_LEAGUES}
for league in ACTIVE_LEAGUES:
    league_totals[league] = sum(score for score in league_scores[league] if score > 0)
Scale scores within each league to match the allocation percentage:
scaled_scores_per_league = {league: [0.0] * len(all_uids) for league in ACTIVE_LEAGUES}
for league in ACTIVE_LEAGUES:
    total_league_score = league_totals[league]
    allocation = LEAGUE_SCORING_PERCENTAGES[league] * 100  # convert to percentage
    if total_league_score > 0:
        scaling_factor = allocation / total_league_score  # factor to scale league scores
        scaled_scores_per_league[league] = [
            (score * scaling_factor if score > 0 else 0) for score in league_scores[league]
        ]
Aggregate scaled scores across all leagues:
for i in range(len(all_uids)):
    all_scores[i] = sum(scaled_scores_per_league[league][i] for league in ACTIVE_LEAGUES)
Penalty System
1. League Commitment Penalties
Purpose
Ensures miners maintain active participation across leagues by penalizing those without a league commitment.
Implementation
LeagueCommitmentRequests are sent to miners every 15 minutes. For every consecutive request that is missed, an accumulating penalty of -0.1 is calculated and applied to a miner's final score.
As soon as a miner properly responds with a league commitment, the penalty resets to 0.
Additionally, if a miner has failed to respond to a LeagueCommitmentRequest for 24 hours, their final score will be set to 0 until they properly commit to a league again or their UID is deregistered.
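A minimal sketch of the accumulating penalty described above; the constant and function names are illustrative and not taken from the validator code:
COMMITMENT_PENALTY = -0.1
REQUESTS_PER_DAY = 24 * 60 // 15  # 96 LeagueCommitmentRequests per 24 hours

def apply_commitment_penalty(final_score: float, consecutive_missed: int) -> float:
    """Apply the accumulating league commitment penalty to a miner's final score."""
    if consecutive_missed >= REQUESTS_PER_DAY:
        # No commitment for 24 hours: final score is set to 0 until recommitment
        return 0.0
    # consecutive_missed returns to 0 once the miner responds, resetting the penalty
    return final_score + COMMITMENT_PENALTY * consecutive_missed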
2. No-Response Penalties
Purpose
Ensures miners respond to prediction requests in a timely manner.
Implementation
A validator performs a scoring step every 30 minutes. For each 30-minute window leading up to a scoring step, a miner that fails to respond to a MatchPredictionRequest accrues a penalty of -0.1.
For example, if a miner fails to respond to 6 MatchPredictionRequests from a validator, that miner will have a total of -0.6 points applied to their final score in the next scoring step.
After the scoring step has completed, the no-response penalties are reset to 0.
A miner has 15 seconds to respond to a validator request before it is considered a no-response.
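A minimal sketch of the per-window accumulation described above, mirroring the -0.6 example; variable names are illustrative only:
NO_RESPONSE_PENALTY = -0.1

# MatchPredictionRequests missed in the 30-minute windows since the last scoring step
missed_requests = 6
penalty = NO_RESPONSE_PENALTY * missed_requests  # -0.6 applied at the next scoring step

# After the scoring step completes, the accumulated penalty resets
missed_requests = 0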
Final Score Distribution
1. Pareto Distribution Application
Purpose
Transforms final scores to maintain competitive differentiation while preventing extreme outliers.