Comprehensive Technical Docs About the Scoring Mechanism

# Table of Contents
1. Core Components
2. Score Calculation Process
3. Penalty System
4. Final Score Distribution
5. Implementation Details
6. System Monitoring and Maintenance

# Core Components

## 1. Statistical Significance (ρ)
**Purpose**  
Statistical significance measures how consistently a miner participates in the prediction network relative to required thresholds. This component ensures that miners maintain a steady stream of predictions rather than making sporadic contributions.

### Mathematical Definition
```math
ρ = 1 / (1 + e^(-α(x - threshold)))
```

**Parameters**

- x: Number of miner predictions in the evaluation period
- threshold: Required prediction threshold for the league
- α: Sensitivity parameter (typically between 0.1 and 0.5)

### Example Calculation

```python
import math

# Example scenario
miner_predictions = 45
league_threshold = 40
alpha = 0.2

# Calculate ρ
difference = miner_predictions - league_threshold  # 5
exponent = -alpha * difference  # -1
denominator = 1 + math.exp(exponent)  # 1.368
rho = 1 / denominator  # 0.731
```
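The league score example later in this document calls a `compute_significance_score` helper. A minimal sketch of that helper based on the formula above (the production implementation may differ):

```python
import math

def compute_significance_score(num_predictions: int, threshold: int, alpha: float = 0.2) -> float:
    """Sigmoid significance: rho = 1 / (1 + e^(-alpha * (x - threshold)))."""
    exponent = -alpha * (num_predictions - threshold)
    return 1.0 / (1.0 + math.exp(exponent))

rho = compute_significance_score(45, 40, alpha=0.2)  # 0.731, matching the worked example above
```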

### Impact Analysis

How ρ changes with different prediction counts (threshold = 40, α = 0.2):

| Predictions | Threshold | ρ Value |
|---|---|---|
| 20 | 40 | 0.018 |
| 30 | 40 | 0.119 |
| 40 | 40 | 0.500 |
| 50 | 40 | 0.881 |
| 60 | 40 | 0.982 |

This sigmoid curve (reproduced in the snippet below) ensures that:

- Prediction counts below the threshold are heavily penalized.
- Counts near the threshold receive a moderate weight.
- Returns diminish for counts far exceeding the threshold.
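The table values can be reproduced directly from the formula; a short verification snippet, using the same α = 0.2 as the worked example:

```python
import math

alpha, threshold = 0.2, 40
for predictions in (20, 30, 40, 50, 60):
    rho = 1.0 / (1.0 + math.exp(-alpha * (predictions - threshold)))
    print(predictions, round(rho, 3))  # 0.018, 0.119, 0.5, 0.881, 0.982
```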

## 2. Incentive Score (v)

**Purpose**  
The incentive score combines timing and market-value components to reward predictions that are both early and capture market inefficiencies.

### Components

#### A. Time Component

```math
time_component = exp(-γ * Δt)
```

**Parameters**

- γ: Time decay parameter (typically 0.001 to 0.005)
- Δt: Minutes between prediction and match start

Example:

```python
import math

# Early prediction (24 hours before)
gamma = 0.002
delta_t = 24 * 60  # 1440 minutes
time_score_early = math.exp(-gamma * delta_t)  # 0.056

# Late prediction (1 hour before)
delta_t_late = 60  # 60 minutes
time_score_late = math.exp(-gamma * delta_t_late)  # 0.886
```

#### B. CLV Component

```math
clv_component = (1 - 2β) / (1 + exp(κ * clv)) + β
```

**Parameters**

- clv: Closing Line Value (difference between prediction odds and closing odds)
- κ: Transition parameter (typically 1 to 5)
- β: Extremis parameter (typically 0.1 to 0.3)

Example:

```python
import math

kappa = 2
beta = 0.2

# Favorable CLV scenario
clv = 0.15  # 15% better than closing odds
favorable_clv = (1 - (2 * beta)) / (1 + math.exp(kappa * clv)) + beta  # 0.723

# Unfavorable CLV scenario
clv = -0.10  # 10% worse than closing odds
unfavorable_clv = (1 - (2 * beta)) / (1 + math.exp(kappa * clv)) + beta  # 0.412
```

#### Combined Incentive Score

```math
v = time_component + (1 - time_component) * clv_component
```

Example Full Calculation:

```python
# Early prediction with good CLV
time_comp = 0.056  # from the earlier example
clv_comp = 0.723   # from the earlier example
v_early_good = time_comp + (1 - time_comp) * clv_comp  # 0.737

# Late prediction with poor CLV
time_comp_late = 0.886
clv_comp_poor = 0.412
v_late_poor = time_comp_late + (1 - time_comp_late) * clv_comp_poor  # 0.459
```
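The prediction-scoring code later in this document calls a `calculate_incentive_score(delta_t, clv)` helper. A minimal sketch of how it could combine the two components above; the γ, κ, and β defaults are taken from the typical ranges listed earlier and are assumptions, not the production settings:

```python
import math

def calculate_incentive_score(delta_t: float, clv: float,
                              gamma: float = 0.002, kappa: float = 2.0, beta: float = 0.2) -> float:
    """v = time_component + (1 - time_component) * clv_component."""
    time_component = math.exp(-gamma * delta_t)
    clv_component = (1 - 2 * beta) / (1 + math.exp(kappa * clv)) + beta
    return time_component + (1 - time_component) * clv_component

# Early prediction (24 hours out) with a CLV of 0.15
v = calculate_incentive_score(delta_t=24 * 60, clv=0.15)
```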

## 3. Closing Line Value (CLV)

**Purpose**  
CLV measures the value captured between prediction odds and closing odds, indicating a miner's ability to identify market inefficiencies.

### Calculation

```math
clv = prediction_odds - closing_odds
```

Detailed Example:

```python
# Scenario 1: Value captured
prediction = {
    'team': 'TeamA',
    'prediction_odds': 2.50,  # Implied probability 40%
    'closing_odds': 2.00,     # Implied probability 50%
    'actual_winner': 'TeamA'
}
clv = 2.50 - 2.00  # 0.50 (positive: value captured)

# Scenario 2: Value lost
prediction = {
    'team': 'TeamB',
    'prediction_odds': 1.80,  # Implied probability 55.6%
    'closing_odds': 2.20,     # Implied probability 45.5%
    'actual_winner': 'TeamB'
}
clv = 1.80 - 2.20  # -0.40 (negative: value lost)
```
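The prediction-scoring code below calls `calculate_clv`; a minimal sketch, assuming predictions and match data are dicts shaped like the scenarios above (the field names are illustrative):

```python
def calculate_clv(prediction: dict, match_data: dict) -> float:
    """Closing Line Value: prediction odds minus closing odds."""
    return prediction['prediction_odds'] - match_data['closing_odds']

clv = calculate_clv({'prediction_odds': 2.50}, {'closing_odds': 2.00})  # 0.50, as in Scenario 1
```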

## 4. Gaussian Filter

**Purpose**  
The Gaussian filter prevents gaming of the system by suppressing scores for predictions that deviate significantly from the market consensus.

### Mathematical Definition

```math
sigma = log(1 / closing_odds²)
w = (closing_odds - 1.0) * log(closing_odds) / 2
diff = abs(closing_odds - 1 / prediction_probability)
filter = 1.0 if diff <= w, else exp(-diff² / (4 * sigma²))
```

Example Calculations:

```python
import math

# Conservative prediction close to market
closing_odds = 1.90
prediction_prob = 0.54  # implied odds ≈ 1.85
sigma = math.log(1 / closing_odds**2)
w = (closing_odds - 1.0) * math.log(closing_odds) / 2
diff = abs(closing_odds - 1 / prediction_prob)
filter_conservative = 1.0  # within the acceptable range (diff <= w)

# Aggressive prediction far from market
prediction_prob = 0.80  # implied odds = 1.25
diff_aggressive = abs(closing_odds - 1 / prediction_prob)
filter_aggressive = math.exp(-diff_aggressive**2 / (4 * sigma**2))  # ≈ 0.342
```
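`apply_gaussian_filter` is referenced in the prediction-score function below; a minimal sketch following the definition above, assuming the prediction carries its implied probability and the match data carries the closing odds (field names are illustrative):

```python
import math

def apply_gaussian_filter(prediction: dict, match_data: dict) -> float:
    """Suppress predictions that deviate too far from the closing market line."""
    closing_odds = match_data['closing_odds']
    sigma = math.log(1 / closing_odds ** 2)
    w = (closing_odds - 1.0) * math.log(closing_odds) / 2
    diff = abs(closing_odds - 1 / prediction['probability'])
    return 1.0 if diff <= w else math.exp(-diff ** 2 / (4 * sigma ** 2))

# The conservative prediction above stays inside the band and is not suppressed
gfilter = apply_gaussian_filter({'probability': 0.54}, {'closing_odds': 1.90})  # 1.0
```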

# Score Calculation Process

## 1. Individual Prediction Score

Each prediction's score is calculated by combining all of the components above:

```python
def calculate_prediction_score(prediction: dict, match_data: dict) -> float:
    # Calculate components
    v = calculate_incentive_score(
        delta_t=get_time_delta(prediction['time'], match_data['start_time']),
        clv=calculate_clv(prediction, match_data)
    )
    sigma = calculate_closing_edge(prediction, match_data)
    gfilter = apply_gaussian_filter(prediction, match_data)

    # Combine for the final prediction score
    return v * sigma * gfilter
```

Example Calculation:

```python
prediction_data = {
    'time': '2024-01-01 12:00:00',
    'odds': 2.50,
    'probability': 0.40
}
match_data = {
    'start_time': '2024-01-02 15:00:00',
    'closing_odds': 2.00,
    'actual_winner': 'TeamA'
}

score = calculate_prediction_score(prediction_data, match_data)
```
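The `get_time_delta` helper used above can be sketched as follows, assuming timestamps are strings in the `YYYY-MM-DD HH:MM:SS` format shown in the example and that Δt is measured in minutes before kickoff:

```python
from datetime import datetime

def get_time_delta(prediction_time: str, match_start_time: str) -> float:
    """Minutes between the prediction and the match start."""
    fmt = '%Y-%m-%d %H:%M:%S'
    prediction_dt = datetime.strptime(prediction_time, fmt)
    start_dt = datetime.strptime(match_start_time, fmt)
    return (start_dt - prediction_dt).total_seconds() / 60

delta_t = get_time_delta('2024-01-01 12:00:00', '2024-01-02 15:00:00')  # 1620.0 minutes
```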

## 2. League Score Calculation

**Purpose**  
League scores aggregate individual prediction scores while considering league-specific requirements and weightings.

### Mathematical Definition

```math
league_score = ρ * Σ(prediction_scores)
```

Detailed Example:

```python
# Sample prediction scores for a miner in the Premier League
predictions = {
    'match_1': {
        'score': 0.85,
        'timestamp': '2024-01-01 15:00:00',
        'correct': True
    },
    'match_2': {
        'score': -0.32,
        'timestamp': '2024-01-02 20:00:00',
        'correct': False
    },
    'match_3': {
        'score': 0.64,
        'timestamp': '2024-01-03 19:30:00',
        'correct': True
    }
}

# Calculate components
prediction_sum = sum(p['score'] for p in predictions.values())  # 1.17
num_predictions = len(predictions)  # 3
threshold = ROLLING_PREDICTION_THRESHOLD_BY_LEAGUE['PREMIER_LEAGUE']  # e.g., 5
rho = compute_significance_score(num_predictions, threshold, alpha=0.2)  # 0.731

league_score = rho * prediction_sum  # 0.855
```
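A minimal wrapper that pulls the two steps together (the helper name is illustrative; it re-implements the sigmoid from the statistical significance section, and the production aggregation code may differ):

```python
import math

def calculate_league_score(prediction_scores: list, threshold: int, alpha: float = 0.2) -> float:
    """league_score = rho * sum(prediction_scores)."""
    rho = 1.0 / (1.0 + math.exp(-alpha * (len(prediction_scores) - threshold)))
    return rho * sum(prediction_scores)

league_score = calculate_league_score([0.85, -0.32, 0.64], threshold=5)
```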

## 3. Overall Score Aggregation

**Purpose**  
Combines scores across different leagues while respecting league importance weights.

### Mathematical Definition

```math
overall_score = Σ(league_score * league_weight)
```

Detailed Example:

```python
# League weights
LEAGUE_WEIGHTS = {
    'PREMIER_LEAGUE': 0.35,
    'LA_LIGA': 0.25,
    'BUNDESLIGA': 0.20,
    'SERIE_A': 0.20
}

# Sample league scores for a miner
league_scores = {
    'PREMIER_LEAGUE': 0.855,
    'LA_LIGA': 0.623,
    'BUNDESLIGA': 0.741,
    'SERIE_A': 0.512
}

# Calculate weighted score
weighted_score = sum(
    score * LEAGUE_WEIGHTS[league]
    for league, score in league_scores.items()
)

# Example calculation:
# (0.855 * 0.35) + (0.623 * 0.25) + (0.741 * 0.20) + (0.512 * 0.20)
# = 0.299 + 0.156 + 0.148 + 0.102
# = 0.705
```

# Penalty System

## 1. League Commitment Penalties

**Purpose**  
Ensures miners maintain active participation across leagues by penalizing those without sufficient league commitments.

### Implementation

```python
from typing import List


class LeaguePenaltySystem:
    def __init__(self):
        self.NO_LEAGUE_COMMITMENT_PENALTY = -0.25
        self.accumulated_penalties = {}

    def calculate_penalty(self, miner_id: int, active_leagues: List[str]) -> float:
        if not active_leagues:
            # Accumulate penalty while the miner has no league commitments
            current_penalty = self.accumulated_penalties.get(miner_id, 0)
            new_penalty = current_penalty + self.NO_LEAGUE_COMMITMENT_PENALTY
            self.accumulated_penalties[miner_id] = new_penalty
            return new_penalty
        else:
            # Reset penalty once the miner commits to at least one league
            self.accumulated_penalties[miner_id] = 0
            return 0
```

Example Usage:

```python
penalty_system = LeaguePenaltySystem()

# Scenario 1: Miner with no leagues
miner_1_penalty = penalty_system.calculate_penalty(1, [])  # -0.25
miner_1_penalty = penalty_system.calculate_penalty(1, [])  # -0.50 (accumulated)

# Scenario 2: Miner with active leagues
miner_2_penalty = penalty_system.calculate_penalty(2, ['PREMIER_LEAGUE'])  # 0
```

## 2. No-Response Penalties

**Purpose**  
Ensures miners respond to prediction requests in a timely manner.

### Implementation

```python
class ResponsePenaltySystem:
    def __init__(self):
        self.NO_RESPONSE_PENALTY = -0.15
        self.PENALTY_DECAY = 0.95
        self.penalties = {}

    def update_penalty(self, miner_id: int, responded: bool) -> float:
        current_penalty = self.penalties.get(miner_id, 0)

        if not responded:
            # Accumulate penalty
            new_penalty = current_penalty + self.NO_RESPONSE_PENALTY
        else:
            # Decay penalty
            new_penalty = current_penalty * self.PENALTY_DECAY

        self.penalties[miner_id] = new_penalty
        return new_penalty
```

Example Scenarios:

```python
response_system = ResponsePenaltySystem()

# Scenario 1: Missing responses
day1_penalty = response_system.update_penalty(1, False)  # -0.15
day2_penalty = response_system.update_penalty(1, False)  # -0.30

# Scenario 2: Recovery behavior
day3_penalty = response_system.update_penalty(1, True)   # -0.285
day4_penalty = response_system.update_penalty(1, True)   # -0.271
```

# Final Score Distribution

## 1. Pareto Distribution Application

**Purpose**  
Transforms final scores to maintain competitive differentiation while preventing extreme outliers.

### Mathematical Implementation

```python
from typing import List

import numpy as np


def apply_pareto(scores: List[float], mu: float, alpha: int) -> np.ndarray:
    """
    Apply the Pareto distribution to scores.

    Parameters:
        scores: Raw scores
        mu: Minimum value parameter
        alpha: Shape parameter
    """
    scores_array = np.array(scores)
    positive_mask = scores_array > 0
    positive_scores = scores_array[positive_mask]

    transformed_scores = np.zeros_like(scores_array)

    if len(positive_scores) > 0:
        # Transform positive scores: shift so the smallest maps to 1, then apply mu * x^alpha
        min_score = np.min(positive_scores)
        range_transformed = (positive_scores - min_score) + 1
        transformed_positive = mu * np.power(range_transformed, alpha)
        transformed_scores[positive_mask] = transformed_positive

    return transformed_scores
```

Example Scenario:

```python
raw_scores = [0.705, 0.432, 0.891, 0.156, 0.543]
mu = 0.1
alpha = 2

transformed_scores = apply_pareto(raw_scores, mu, alpha)

# Example output analysis:
# Raw scores:  [0.705, 0.432, 0.891, 0.156, 0.543]
# Transformed: [1.247, 0.583, 2.103, 0.112, 0.891]
```

## 2. Score Normalization

**Purpose**  
Ensures final scores are properly scaled and distributed for network consensus.

### Implementation

```python
from typing import List

import numpy as np


def normalize_scores(scores: List[float], target_sum: float = 1.0) -> np.ndarray:
    """
    Normalize scores to sum to the target value while preserving relative differences.
    """
    scores_array = np.array(scores)
    score_sum = np.sum(scores_array)

    if score_sum == 0:
        return scores_array

    normalized = (scores_array / score_sum) * target_sum
    return normalized
```

Example Scenario:

```python
transformed_scores = [1.247, 0.583, 2.103, 0.112, 0.891]
final_scores = normalize_scores(transformed_scores)

# Example output:
# [0.252, 0.118, 0.425, 0.023, 0.182]
# Sum = 1.0
```

# Implementation Details

## 1. Score Update Mechanism

```python
from typing import List

import torch


class ScoringSystem:
    def __init__(self, num_miners: int, device='cpu'):
        self.device = device
        self.scores = torch.zeros(num_miners).to(device)
        self.ema_alpha = 0.2  # Exponential moving average parameter

    def update_scores(self, new_scores: torch.FloatTensor, uids: List[int]):
        """
        Update miner scores using an exponential moving average.
        """
        uids_tensor = torch.tensor(uids).to(self.device)

        # Handle NaN values
        new_scores = torch.nan_to_num(new_scores, 0)

        # Apply EMA update
        current_scores = self.scores[uids_tensor]
        updated_scores = (
            self.ema_alpha * new_scores +
            (1 - self.ema_alpha) * current_scores
        )

        # Write the updated values back at the miners' UIDs
        self.scores = self.scores.scatter(
            0, uids_tensor, updated_scores
        )

        return self.scores
```

Example Usage:

```python
import torch

scoring_system = ScoringSystem(num_miners=5)
new_scores = torch.tensor([0.252, 0.118, 0.425, 0.023, 0.182])
uids = [0, 1, 2, 3, 4]
updated_scores = scoring_system.update_scores(new_scores, uids)
```
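Because the score vector starts at zero, each EMA round only moves the stored scores 20% of the remaining gap toward the new values. A short sketch repeating the setup above to show two consecutive rounds with identical inputs (illustrative only):

```python
import torch

scoring_system = ScoringSystem(num_miners=5)
new_scores = torch.tensor([0.252, 0.118, 0.425, 0.023, 0.182])
uids = [0, 1, 2, 3, 4]

round_1 = scoring_system.update_scores(new_scores, uids).clone()  # 0.20 * new_scores
round_2 = scoring_system.update_scores(new_scores, uids).clone()  # 0.36 * new_scores (0.2 + 0.8 * 0.2)
```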

## 2. Performance Monitoring

```python
from datetime import datetime
from typing import Dict

import numpy as np


class PerformanceMonitor:
    def __init__(self):
        self.score_history = []
        self.penalty_history = {}
        self.participation_rates = {}

    def log_scores(self, scores: Dict[int, float]):
        self.score_history.append({
            'timestamp': datetime.now(),
            'scores': scores,
            'statistics': {
                'mean': np.mean(list(scores.values())),
                'std': np.std(list(scores.values())),
                'min': min(scores.values()),
                'max': max(scores.values())
            }
        })

    def generate_report(self):
        """Generate a performance report."""
        return {
            'score_trends': self.analyze_score_trends(),
            'penalty_analysis': self.analyze_penalties(),
            'participation_metrics': self.analyze_participation()
        }
```
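The analysis methods called by `generate_report` are not defined in this document; a minimal sketch of what `analyze_score_trends` might look like, assuming it only summarizes the logged snapshot means (purely illustrative):

```python
# A possible implementation inside PerformanceMonitor:
def analyze_score_trends(self):
    """Summarize how the mean network score moves across the logged snapshots."""
    means = [entry['statistics']['mean'] for entry in self.score_history]
    if not means:
        return {'snapshots': 0}
    return {
        'snapshots': len(means),
        'latest_mean': float(means[-1]),
        'change_since_first': float(means[-1] - means[0]),
    }
```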

# System Monitoring and Maintenance

## 1. Parameter Optimization

```python
from typing import Dict


def optimize_parameters(
    historical_data: Dict,
    current_params: Dict
) -> Dict[str, float]:
    """
    Optimize system parameters based on historical performance.
    """
    # Example parameter optimization logic
    new_params = {
        'gamma': analyze_time_decay(historical_data),
        'kappa': optimize_clv_impact(historical_data),
        'beta': optimize_extremis(historical_data),
        'alpha': optimize_sensitivity(historical_data)
    }

    return new_params
```

## 2. Health Checks

```python
def system_health_check():
    """
    Perform system health checks and parameter validation.
    """
    checks = {
        'score_distribution': check_score_distribution(),
        'penalty_rates': check_penalty_rates(),
        'participation_levels': check_participation_levels(),
        'parameter_ranges': check_parameter_ranges()
    }

    return checks
```
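The individual checks are not defined here. A minimal sketch of `check_parameter_ranges`, using the typical ranges quoted earlier in this document (α 0.1–0.5, γ 0.001–0.005, κ 1–5, β 0.1–0.3) as the acceptable bounds; unlike the call above, it takes the current parameters explicitly (illustrative only):

```python
def check_parameter_ranges(params: dict) -> dict:
    """Flag any scoring parameter that has drifted outside its documented range."""
    ranges = {
        'alpha': (0.1, 0.5),      # significance sensitivity
        'gamma': (0.001, 0.005),  # time decay
        'kappa': (1.0, 5.0),      # CLV transition
        'beta': (0.1, 0.3),       # CLV extremis
    }
    return {
        name: low <= params.get(name, float('nan')) <= high
        for name, (low, high) in ranges.items()
    }

check_parameter_ranges({'alpha': 0.2, 'gamma': 0.002, 'kappa': 2, 'beta': 0.2})
# -> {'alpha': True, 'gamma': True, 'kappa': True, 'beta': True}
```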

# Conclusion

This scoring mechanism creates a comprehensive system for evaluating sports predictions that:

1. Rewards accurate and timely predictions
2. Maintains fair competition
3. Prevents gaming and manipulation
4. Adapts to changing market conditions
5. Provides clear incentives for participation

Regular monitoring and maintenance ensure the system remains effective and balanced over time.
