Detailed API

StrategicGames.VerbosityType
Verbosity

Many functions accept a verbosity parameter.

Choose between: NONE, LOW, STD [default], HIGH and FULL.

Under the default verbosity (STD), no output is printed unless something unexpected (although not necessarily an error) is detected.
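
Example (a minimal, illustrative sketch; nash_cp, documented below, accepts verbosity as a keyword argument):

julia> using StrategicGames
julia> payoff_array = expand_dimensions([(3,4) (1,5); (4,2) (2,3)]); # prisoner's dilemma
julia> eq = nash_cp(payoff_array, verbosity=NONE);                   # NONE suppresses any output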

source
StrategicGames.batchMethod

batch(n,bsize;sequential=false,rng)

Return a vector of vectors of size bsize, containing the indices from 1 to n, assigned at random unless the optional parameter sequential is set to true.

Example:

julia> Utils.batch(6,2,sequential=true)
3-element Array{Array{Int64,1},1}:
 [1, 2]
 [3, 4]
 [5, 6]

source
StrategicGames.best_responseMethod
best_response(payoff_array,strategy_profile,nplayer;solver,verbosity)

Return (possibly one of many) the best strategy and corresponding expected payoff for a given player.

Parameters:

  • payoff_array: the Nplayers+1 dimensional array of payoffs
  • strategy_profile: the vector of vectors defining the strategies for the N players (i.e. vectors of the probabilities of each action for each player). The strategy of the player for which the best response is computed is used as the initial value in the inner optimisation
  • nplayer: the index of the player for which we want to compute the best_response (e.g. 1 or 3)
  • solver: currently either "GLPK" or "HiGHS" [default]
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Returns:

  • A named tuple with: expected_payoff, optimal_strategy, status (of the underlying optimisation)

Example:

julia> using StrategicGames
julia> payoff_array  = [(3,4) (1,5); (4,2) (2,3)] # prisoner's dilemma
2×2 Matrix{Tuple{Int64, Int64}}:
 (3, 4)  (1, 5)
 (4, 2)  (2, 3)
julia> best_response(expand_dimensions(payoff_array),[[0.5,0.5],[0.5,0.5]],2)
(expected_payoff = 4.0, optimal_strategy = [0.0, 1.0], status = MathOptInterface.OPTIMAL)
source
StrategicGames.dominated_actionsMethod
dominated_actions(payoff;iterated=true,domrem_strict,domrem_mixedactions,domrem_tol,domrem_solver,verbosity)

Implements the "Iterated [by default] Removal of Strictly [by default] Dominated Strategies" (IRSDS) algorithm, returning a vector (one per player) of vectors of the action positions that, for the given player, are dominated by at least one of their other actions or by a mixture of them. This function is iterative (recursive) by default.

Parameters

  • payoff: the Nplayers+1 dimensional array of payoffs for the N players
  • domrem_strict: whether to look for strictly dominated actions [def: true]
  • domrem_mixedactions: whether to look only for dominating pure actions (cheap) or also for any possible mixed-action domination (requires solving a linear programming problem) [def: true]
  • domrem_tol: tolerance for the optimization problem. Used only if domrem_mixedactions is true [def: 1e-08]
  • domrem_solver: the solver to use for the optimization problem, either "HiGHS", "GLPK" or "Ipopt". Used only if domrem_mixedactions is true [def: "HiGHS"]
  • iterated: whether to look for dominated actions iteratively [def: true]
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Notes

  • This function is also available as dominated_actions(payoff,player;strict), returning a vector of dominated actions for a given player (computed non-iteratively)
  • The function also has two arguments that are used internally: dominated, a vector of vectors of actions that can be skipped in the check as already deemed dominated (defaults to empty vectors), and support, a vector of indices representing the support profile to consider (defaults to all actions of all players).
  • The former is used by this same function to recursively look up dominated actions; the latter matters when the function is called by algorithms that work over a specific support. For the actions of the other players the two arguments work similarly (actions that are either dominated or outside their supports can be skipped), while for the player's own actions the difference is that dominated or out-of-support actions can be skipped in the check, but actions outside the specific support under consideration can still be used in the comparison as potential dominating actions.
  • To get the list of retrieved dominated actions at each iteration, use a verbosity level higher than STD

Example

julia> using StrategicGames
julia> payoff = [(13,3) (1,4) (7,3); (4,1) (3,3) (6,2); (-1,9) (2,8) (8,-1)]
3×3 Matrix{Tuple{Int64, Int64}}:
 (13, 3)  (1, 4)  (7, 3)
 (4, 1)   (3, 3)  (6, 2)
 (-1, 9)  (2, 8)  (8, -1)
julia> payoff_array = expand_dimensions(payoff);
julia> dominated_player2 = dominated_actions(payoff_array,2)
1-element Vector{Int64}:
 3
julia> dominated = dominated_actions(payoff_array,verbosity=HIGH)
Dominated strategies at step 1: [Int64[], [3]]
Dominated strategies at step 2: [[3], [3]]
Dominated strategies at step 3: [[3], [3, 1]]
Dominated strategies at step 4: [[3, 1], [3, 1]]
Dominated strategies at step 5: [[3, 1], [3, 1]]
2-element Vector{Vector{Int64}}:
 [3, 1]
 [3, 1]
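julia> dominated_actions(payoff_array,iterated=false) # illustrative single-pass call; result matches step 1 of the trace above
2-element Vector{Vector{Int64}}:
 []
 [3]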
source
StrategicGames.expected_valueMethod
expected_value(v::Array{Real,N},p::Vector{Vector{Real}}) --> Real

Compute the expected value (scalar) of a N-dimensional value array given a vector of N probability vectors, one per each dimension of the value array.
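
Example (a minimal sketch; the result is just the probability-weighted sum of all entries of v):

julia> using StrategicGames
julia> v = [1.0 2.0; 3.0 4.0];                 # 2-dimensional value array
julia> expected_value(v,[[0.5,0.5],[0.5,0.5]]) # uniform probabilities on both dimensions
2.5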

source
StrategicGames.is_best_responseMethod
is_best_response(payoff_array,strategy_profile,nplayer;atol=1e-07,rtol=1e-07,solver,verbosity=STD)

Determine if a given strategy for player nplayer is a best response to a given payoff array and the strategies of the other players.

Parameters:

  • payoff_array: the Nplayers+1 dimensional array of payoffs
  • strategy_profile: the vector of vectors defining the strategies for the N players
  • nplayer: the index of the player for which we want to verify whether its strategy is a best_response (e.g. 1 or 3)
  • atol: absolute tolerance in comparing the expected payoff from the given strategy and that from the optimal one [def: 1e-07]
  • rtol: relative tolerance in comparing the expected payoff from the given strategy and that from the optimal one [def: 1e-07]
  • solver: currently either "GLPK" or "HiGHS"
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Example:

julia> using StrategicGames
julia> payoff_array = [(3,4) (1,5); (4,2) (2,3)] # prisoner's dilemma
2×2 Matrix{Tuple{Int64, Int64}}:
 (3, 4)  (1, 5)
 (4, 2)  (2, 3)
julia> is_best_response(expand_dimensions(payoff_array),[[0,1],[0.5,0.5]],1)
true
julia> is_best_response(expand_dimensions(payoff_array),[[0,1],[0.5,0.5]],2)
false
source
StrategicGames.is_nashMethod
is_nash(payoff_array,strategy_profile;atol=1e-07,rtol=1e-07,solver,verbosity)

Determine if a strategy profile is a Nash equilibrium for a given payoff array, i.e. if all strategies are (at least weak) best responses.

Parameters:

  • payoff_array: the Nplayers+1 dimensional array of payoffs
  • strategy_profile: the vector of vectors defining the strategies for the N players
  • atol: absolute tolerance in comparing the expected payoffs from the given strategies and those from the optimal ones [def: 1e-07]
  • rtol: relative tolerance in comparing the expected payoffs from the given strategies and those from the optimal ones [def: 1e-07]
  • solver: currently either "GLPK" or "HiGHS"
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Example:

julia> using StrategicGames
julia> payoff_array  = [(3,4) (1,5); (4,2) (2,3)] # prisoner's dilemma
2×2 Matrix{Tuple{Int64, Int64}}:
 (3, 4)  (1, 5)
 (4, 2)  (2, 3)
julia> is_nash(expand_dimensions(payoff_array),[[0,1],[0,1]])
true
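julia> is_nash(expand_dimensions(payoff_array),[[1,0],[1,0]]) # illustrative counter-check: both players gain by deviating from action 1, so not an equilibrium
false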
source
StrategicGames.nash_cpMethod
nash_cp(payoff_array;init,verbosity)

Find a Nash Equilibrium for N-player simultaneous games when mixed strategies are allowed, using the Complementarity Problem formulation and implementing iterated removal of dominated strategies.

Parameters

  • payoff_array: the Nplayers+1 dimensional array of payoffs for the N players
  • init: a vector of vectors of mixed strategies (i.e. PMFs), one per player, used to start the algorithm. Different init points may reach different equilibrium points [def: equal probabilities for each available action of each player]
  • domrem_strict: whether to look for strictly dominated actions in the presolver [def: true]
  • domrem_mixedactions: whether to look only for dominating pure actions (cheap) or also for any possible mixed-action domination (requires solving a linear programming problem) [def: true]
  • domrem_tol: tolerance for the optimization problem of dominated actions removal. Used only if domrem_mixedactions is true [def: 1e-08]
  • domrem_solver: the solver to use for the optimization problem of dominated action removal, either "HiGHS", "GLPK" or "Ipopt". Used only if domrem_mixedactions is true [def: "HiGHS"]
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Notes

  • This function uses a complementarity formulation. For N <= 2 the problem, except the complementarity equation, is linear and known as LCP (Linear Complementarity Problem)
  • This implementation uses the JuMP modelling language with the Ipopt solver engine (and hence it uses an interior point method instead of the pivotal approach used in the original Lemke-Howson [1964] algorithm)
  • There is no guarantee on timing, or even that the algorithm converges to an equilibrium. Different Nash equilibria may be reached by setting different initial points
  • By default the iterated removal of dominated strategies concerns only strictly dominated ones. For some games where the algorithm fails to computationally find a Nash equilibrium, you can often succeed by setting the algorithm to remove also weakly dominated actions, although the opposite is also true, and removing weakly dominated actions may remove a Nash equilibrium.

Returns

  • A named tuple with the following elements: status,equilibrium_strategies,expected_payoffs

Example

julia> using StrategicGames
julia> payoff = [(-1,-1) (-3,0); (0, -3) (-2, -2)] # prisoner's dilemma
2×2 Matrix{Tuple{Int64, Int64}}:
 (-1, -1)  (-3, 0)
 (0, -3)   (-2, -2)
julia> eq     = nash_cp(expand_dimensions(payoff));
julia> eq_strategies = eq.equilibrium_strategies
2-element Vector{Vector{Float64}}:
 [-4.049752569180346e-11, 1.0000000000404976]
 [-4.0497525691839856e-11, 1.0000000000404976]
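julia> round.(eq.expected_payoffs, digits=2) # illustrative: payoffs at the (defect,defect) equilibrium, rounded as the solver returns only approximate values
2-element Vector{Float64}:
 -2.0
 -2.0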
source
StrategicGames.nash_on_supportFunction
nash_on_support(payoff_array,support;init,verbosity)

Find (if it exists) a Nash equilibrium for N-player simultaneous games when mixed strategies are allowed, on a specific support.

Parameters

  • payoff_array: the Nplayers+1 dimensional array of payoffs for the N players
  • support: vector of vectors of the action positions that are in the tested support for each player [def: full support]
  • init: a vector of vectors of mixed strategies (i.e. PMFs), one per player, used to start the algorithm. Different init points may reach different equilibrium points [def: equal probabilities for each available action of each player]
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Notes

  • This implementation uses the JuMP modelling language with the Ipopt solver engine

Returns

  • A named tuple with the following elements: status,equilibrium_strategies,expected_payoffs, solved

Example

julia> using StrategicGames
julia> payoff = [(-1,-1) (-3,0); (0, -3) (-2, -2)] # prisoner's dilemma. Only Nash eq is [[0,1],[0,1]]
2×2 Matrix{Tuple{Int64, Int64}}:
 (-1, -1)  (-3, 0)
 (0, -3)   (-2, -2)
julia> payoff_array = expand_dimensions(payoff);
julia> nash_on_support(payoff_array,[[1,2],[1,2]]).solved # false
false
julia> nash_on_support(payoff_array,[[2],[2]]).solved     # true
true
source
StrategicGames.nash_on_support_2pFunction
nash_on_support_2p(payoff_array,support;init,verbosity)

Find (if it exists) a Nash equilibrium for 2-player simultaneous games when mixed strategies are allowed, on a specific support. This is a specialised 2-player version of the nash_on_support function.

Parameters

  • payoff_array: the 3 dimensional array of payoffs for the 2 players
  • support: vector of vectors of the action positions that are in the tested support for each player [def: full support]
  • init: a vector of vectors of mixed strategies (i.e. PMFs), one per player, used to start the algorithm. Different init points may reach different equilibrium points [def: equal probabilities for each available action of each player]
  • solver: the linear solver to use. Currently only "HiGHS" [default] is supported, as "GLPK" seems to have problems because this function is multithreaded
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Notes

  • This implementation uses the JuMP modelling language with the HiGHS solver engine

Returns

  • A named tuple with the following elements: status,equilibrium_strategies,expected_payoffs, solved

Example

julia> using StrategicGames
julia> payoff = [(-1,-1) (-3,0); (0, -3) (-2, -2)] # prisoner's dilemma. Only Nash eq is [[0,1],[0,1]]
2×2 Matrix{Tuple{Int64, Int64}}:
 (-1, -1)  (-3, 0)
 (0, -3)   (-2, -2)
julia> payoff_array = expand_dimensions(payoff);
julia> nash_on_support_2p(payoff_array,[[1,2],[1,2]]).solved # false
false
julia> nash_on_support_2p(payoff_array,[[2],[2]]).solved     # true
true
source
StrategicGames.nash_seMethod
nash_se(payoff_array; allow_mixed=true, max_samples=1, isolated_eq_only=true, mt=true, verbosity=STD)

Compute max_samples (default one) Nash equilibria for an N-player generic game in normal form using the support enumeration method.

Parameters

  • payoff_array: the Nplayers+1 dimensional array of payoffs for the N players
  • allow_mixed: whether to also look for and report mixed strategies (default) or to look only for pure strategies (if any)
  • max_samples: number of found sample Nash equilibria needed to stop the algorithm [def: 1]. Set it to Inf to look for all the possible isolated equilibria of the game
  • mt: whether to use multithreading [def: true]. Note that currently multithreading is always disabled for a single-equilibrium search due to performance issues
  • isolated_eq_only: whether to look only for isolated equilibria [def: true]
  • domrem_strict: whether to look for strictly dominated actions in the presolver [def: true]
  • domrem_mixedactions: whether to look only for dominating pure actions (cheap) or also for any possible mixed-action domination (requires solving a linear programming problem) [def: true]
  • domrem_tol: tolerance for the optimization problem of dominated actions removal. Used only if domrem_mixedactions is true [def: 1e-08]
  • domrem_solver: the solver to use for the optimization problem of dominated action removal, either "HiGHS", "GLPK" or "Ipopt". Used only if domrem_mixedactions is true [def: "HiGHS"]
  • verbosity: either NONE, LOW, STD [default], HIGH or FULL

Notes

  • This function uses a support enumeration method to avoid the complementarity conditions and solve simpler problems conditional to a specific support. More specifically, we use the heuristic of Porter-Nudelman-Shoham (2008) and a dominance check, although not recursively as in Turocy (2007)
  • To reduce computational costs, in 2-player games by default this function returns only (and all) the existing isolated Nash equilibria that are unique for a given support. To retrieve a "sample" of equilibria that are not isolated but instead represent a continuum in degenerate 2-player games, you can set the isolated_eq_only option to false. This allows the search for equilibria within supports of different sizes for the two players. This option is always false for more than 2 players.
  • The algorithm implements the iterated removal of dominated strategies. However, this is run for every possible support. If it is taking too long, you may want to consider setting domrem_mixedactions to false in order to look only for pure dominating strategies.

Returns

  • A vector of named tuples (even for the default single max_samples) with the following information: equilibrium_strategies, expected_payoffs, supports

Example

julia> using StrategicGames
julia> payoff = expand_dimensions([(3,3) (3,2);
                                   (2,2) (5,6);
                                   (0,3) (6,1)]);
julia> eqs = nash_se(payoff,max_samples=Inf)
3-element Vector{NamedTuple{(:equilibrium_strategies, :expected_payoffs, :supports), Tuple{Vector{Vector{Float64}}, Vector{Float64}, Vector{Vector{Int64}}}}}:
 (equilibrium_strategies = [[0.9999999999999999, 0.0, 0.0], [0.9999999999999999, 0.0]], expected_payoffs = [2.9999999999999516, 2.9999999999999516], supports = [[1], [1]])
 (equilibrium_strategies = [[0.8, 0.2, 0.0], [0.6666666666666666, 0.33333333333333337]], expected_payoffs = [3.0, 2.8000000000000003], supports = [[1, 2], [1, 2]])
 (equilibrium_strategies = [[0.0, 0.33333333333333337, 0.6666666666666666], [0.33333333333333315, 0.6666666666666669]], expected_payoffs = [4.000000000000001, 2.6666666666666665], supports = [[2, 3], [1, 2]])
source
StrategicGames.restrict_payoffMethod
restrict_payoff(payoff_n, player;dominated,support)

Compose a sub-payoff considering only non-dominated actions and, for the other players, only the actions in the specified support (for the given player the support is ignored).

source
StrategicGames.unstack_payoffMethod
unstack_payoff(x::AbstractMatrix)

Unstack a payoff encoded in long format, where the first half of the columns are the action positions for each player and the second half are the payoffs for the various players, into the Nplayers+1 dimensional array used in the library.

Example

julia> # 2 players with 2 and 3 actions respectively
       long_payoff = [
            1 1 0.1 0.3;
            1 2 4 6;
            1 3 4 2;
            2 2 4 5;
            2 1 0.1 0.3;
            2 3 1.3 2;];
julia> unstack_payoff(long_payoff)
2×3×2 Array{Float64, 3}:
[:, :, 1] =
 0.1  4.0  4.0
 0.1  4.0  1.3
[:, :, 2] =
 0.3  6.0  2.0
 0.3  5.0  2.0
source