Metrics
policy_arena.metrics
compute_cooperation_rate(model)
Fraction of all actions this round that were COOPERATE.
Source code in src/policy_arena/metrics/cooperation.py
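The source is collapsed here; as a rough sketch of the computation (the attribute name `_round_all_actions` is borrowed from `compute_strategy_entropy` below and is an assumption for this function):

```python
def cooperation_rate_sketch(model) -> float:
    # Assumed attribute: model._round_all_actions, a flat list of the
    # action strings played this round (borrowed from the entropy metric).
    actions = model._round_all_actions
    if not actions:
        return 0.0  # guard for an empty round is an assumption
    return sum(1 for a in actions if a == "COOPERATE") / len(actions)
```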
compute_strategy_entropy(model)
Shannon entropy over actions this round (reads model._round_all_actions).
Source code in src/policy_arena/metrics/entropy.py
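The docstring names the attribute it reads; a plausible sketch is a thin wrapper over `shannon_entropy` (documented next), though the delegation itself is an assumption:

```python
def strategy_entropy_sketch(model) -> float:
    # model._round_all_actions is named in the docstring; delegating to
    # shannon_entropy is an assumption based on the shared entropy module.
    return shannon_entropy(model._round_all_actions)
```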
shannon_entropy(values)
Shannon entropy H = -sum(p * log2(p)) over the distribution of values.
Returns 0.0 for empty sequences or single-value sequences.
Source code in src/policy_arena/metrics/entropy.py
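The formula and edge cases in the docstring pin the behavior down fairly well; a minimal self-contained sketch:

```python
from collections import Counter
import math

def shannon_entropy_sketch(values) -> float:
    """H = -sum(p * log2(p)); 0.0 for empty or single-valued input."""
    values = list(values)
    if len(values) == 0 or len(set(values)) == 1:
        return 0.0
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

assert shannon_entropy_sketch(["C", "D", "C", "D"]) == 1.0  # two equiprobable actions = 1 bit
```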
gini_coefficient(values)
Compute the Gini coefficient of a list of values.
Returns a value in [0, 1], where 0 = perfect equality and 1 = maximum inequality. Returns 0.0 for empty or all-zero inputs.
Source code in src/policy_arena/metrics/gini.py
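A minimal sketch using the standard sorted-values formula (the real implementation may differ in form but should agree numerically):

```python
def gini_sketch(values) -> float:
    """Gini coefficient in [0, 1]; 0.0 for empty or all-zero input."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i starting at 1.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

assert gini_sketch([1, 1, 1]) == 0.0  # perfect equality
```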
compute_nash_distance(model)
Fraction of this round's pairwise interactions that deviate from the Nash equilibrium.
Reads model._round_actions: list of (agent_i_id, agent_j_id, action_i, action_j).
Source code in src/policy_arena/metrics/nash_distance.py
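The docstring gives the tuple layout of `model._round_actions` but not the equilibrium itself; a sketch assuming a prisoner's-dilemma stage game, where mutual DEFECT is the unique one-shot Nash equilibrium:

```python
def nash_distance_sketch(model) -> float:
    # model._round_actions: list of (agent_i_id, agent_j_id, action_i, action_j),
    # per the docstring. The (DEFECT, DEFECT) equilibrium is an assumption.
    pairs = model._round_actions
    if not pairs:
        return 0.0
    deviating = sum(1 for _i, _j, a_i, a_j in pairs
                    if (a_i, a_j) != ("DEFECT", "DEFECT"))
    return deviating / len(pairs)
```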
reciprocity_index(history_a, history_b)
Compute reciprocity between two agents' action histories.
Measures how often agents match each other's previous action. Returns a value in [-1, 1]:
+1 = perfect reciprocity (always copies opponent's last move)
0 = no correlation
-1 = perfect anti-reciprocity (always does the opposite)
Only meaningful for histories of length >= 2.
Source code in src/policy_arena/metrics/reciprocity.py
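The docstring fixes the range and the endpoint interpretations but not the exact scoring; one sketch consistent with it scores each "copied the opponent's last move" event as +1 and each mismatch as -1, averaged over both directions:

```python
def reciprocity_sketch(history_a, history_b) -> float:
    """Reciprocity in [-1, 1]; the +1/-1 scoring rule is an assumption."""
    n = min(len(history_a), len(history_b))
    if n < 2:
        return 0.0  # docstring: only meaningful for histories of length >= 2
    score = 0
    for t in range(1, n):
        # Did A copy B's previous move, and did B copy A's?
        score += 1 if history_a[t] == history_b[t - 1] else -1
        score += 1 if history_b[t] == history_a[t - 1] else -1
    return score / (2 * (n - 1))

assert reciprocity_sketch(["C", "D", "C"], ["D", "C", "D"]) == 1.0  # mutual tit-for-tat
```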
compute_individual_regret(agent, payoff_matrix)
Compute regret for a single agent over its full interaction history.
Args:
    agent: The agent whose regret to compute.
    payoff_matrix: Maps (my_action, opponent_action) -> (my_payoff, opponent_payoff).
Returns:
    Non-negative regret value (0 = played optimally in hindsight).
Source code in src/policy_arena/metrics/regret.py
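The docstring doesn't show how the history is stored on the agent; a sketch of standard external regret over parallel action histories (the hypothetical my_actions/opp_actions arguments stand in for whatever the agent actually records, and the payoff values below are illustrative):

```python
def individual_regret_sketch(my_actions, opp_actions, payoff_matrix) -> float:
    """External regret: best fixed action in hindsight minus actual payoff."""
    # payoff_matrix maps (my_action, opponent_action) -> (my_payoff, opponent_payoff),
    # per the docstring; [0] selects my side of each payoff pair.
    actual = sum(payoff_matrix[(m, o)][0] for m, o in zip(my_actions, opp_actions))
    my_choices = {mine for (mine, _theirs) in payoff_matrix}
    best_fixed = max(
        sum(payoff_matrix[(a, o)][0] for o in opp_actions) for a in my_choices
    )
    return max(0.0, best_fixed - actual)

pd = {("COOPERATE", "COOPERATE"): (3, 3), ("COOPERATE", "DEFECT"): (0, 5),
      ("DEFECT", "COOPERATE"): (5, 0), ("DEFECT", "DEFECT"): (1, 1)}
# Cooperating against a defector leaves regret: defecting would have paid 1, not 0.
print(individual_regret_sketch(["COOPERATE"], ["DEFECT"], pd))  # 1.0
```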
compute_social_welfare(model)
Total payoff this round as a fraction of the theoretical maximum.
Reads model._round_total_payoff and model._round_max_payoff.
Source code in src/policy_arena/metrics/social_welfare.py
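Both attribute names come straight from the docstring, so the sketch is nearly mechanical (only the division-by-zero guard is an assumption):

```python
def social_welfare_sketch(model) -> float:
    # model._round_total_payoff and model._round_max_payoff are named in
    # the docstring; guarding against a zero maximum is an assumption.
    if model._round_max_payoff == 0:
        return 0.0
    return model._round_total_payoff / model._round_max_payoff
```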