Gambit: Software Tools for Game Theory

User guide#

Example: One-shot trust game with binary actions#

[Kre90] introduced a game commonly referred to as the trust game. We will build a one-shot version of this game using pygambit’s game transformation operations.

There are two players, a Buyer and a Seller. The Buyer moves first and has two actions, Trust or Not trust. If the Buyer chooses Not trust, then the game ends, and both players receive payoffs of 0. If the Buyer chooses Trust, then the Seller has a choice with two actions, Honor or Abuse. If the Seller chooses Honor, both players receive payoffs of 1; if the Seller chooses Abuse, the Buyer receives a payoff of -1 and the Seller receives a payoff of 2.

We create a game with an extensive representation using Game.new_tree():

In [1]: import pygambit as gbt

In [2]: g = gbt.Game.new_tree(players=["Buyer", "Seller"],
   ...:                       title="One-shot trust game, after Kreps (1990)")
   ...: 

The tree of the game contains just a root node, with no children:

In [3]: g.root
Out[3]: Node(game=Game(title='One-shot trust game, after Kreps (1990)'), path=[])

In [4]: g.root.children
Out[4]: NodeChildren(parent=Node(game=Game(title='One-shot trust game, after Kreps (1990)'), path=[]))

To extend a game from an existing terminal node, use Game.append_move():

In [5]: g.append_move(g.root, "Buyer", ["Trust", "Not trust"])

In [6]: g.root.children
Out[6]: NodeChildren(parent=Node(game=Game(title='One-shot trust game, after Kreps (1990)'), path=[]))

We can then also add the Seller’s move in the situation after the Buyer chooses Trust:

In [7]: g.append_move(g.root.children[0], "Seller", ["Honor", "Abuse"])

Now that we have the moves of the game defined, we add payoffs. Payoffs are associated with an Outcome; each Outcome has a vector of payoffs, one for each player, and optionally an identifying text label. First we add the outcome associated with the Seller proving themselves trustworthy:

In [8]: g.set_outcome(g.root.children[0].children[0], g.add_outcome([1, 1], label="Trustworthy"))

Next, the outcome associated with the scenario where the Buyer trusts but the Seller does not return the trust:

In [9]: g.set_outcome(g.root.children[0].children[1], g.add_outcome([-1, 2], label="Untrustworthy"))

And, finally the outcome associated with the Buyer opting out of the interaction:

In [10]: g.set_outcome(g.root.children[1], g.add_outcome([0, 0], label="Opt-out"))

Nodes without an outcome attached are assumed to have payoffs of zero for all players. Therefore, adding the outcome to this latter terminal node is not strictly necessary in Gambit, but it is useful to be explicit for readability.
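As a quick check of the finished tree, we could enumerate its pure-strategy Nash equilibria with pygambit.nash.enumpure_solve(); a minimal sketch, continuing the session above:

# Enumerate pure-strategy Nash equilibria of the completed trust game.
# This should report the unique pure equilibrium, in which the Buyer plays
# "Not trust" and the Seller plays "Abuse": anticipating abuse, the Buyer opts out.
result = gbt.nash.enumpure_solve(g)
print(result.equilibria)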

[Kre90]

Kreps, D. (1990) “Corporate Culture and Economic Theory.” In J. Alt and K. Shepsle, eds., Perspectives on Positive Political Economy, Cambridge University Press.

Example: A one-card poker game with private information#

To illustrate games in extensive form, [Mye91] presents a one-card poker game. A version of this game also appears in [RUW08], as a classroom game under the name “stripped-down poker”. This is perhaps the simplest interesting game with imperfect information.

In our version of the game, there are two players, Alice and Bob. There is a deck of cards, with equal numbers of King and Queen cards. The game begins with each player putting $1 in the pot. One card is dealt at random to Alice; Alice observes her card but Bob does not. After Alice observes her card, she can choose either to Raise or to Fold. If she chooses to Fold, Bob wins the pot and the game ends. If she chooses to Raise, she adds another $1 to the pot. Bob then chooses either to Meet or Pass. If he chooses to Pass, Alice wins the pot and the game ends. If he chooses to Meet, he adds another $1 to the pot. There is then a showdown, in which Alice reveals her card. If she has a King, then she wins the pot; if she has a Queen, then Bob wins the pot.

We can build this game using the following script:

g = gbt.Game.new_tree(players=["Alice", "Bob"],
                      title="One card poker game, after Myerson (1991)")
g.append_move(g.root, g.players.chance, ["King", "Queen"])
for node in g.root.children:
    g.append_move(node, "Alice", ["Raise", "Fold"])
g.append_move(g.root.children[0].children[0], "Bob", ["Meet", "Pass"])
g.append_infoset(g.root.children[1].children[0],
                 g.root.children[0].children[0].infoset)
alice_winsbig = g.add_outcome([2, -2], label="Alice wins big")
alice_wins = g.add_outcome([1, -1], label="Alice wins")
bob_winsbig = g.add_outcome([-2, 2], label="Bob wins big")
bob_wins = g.add_outcome([-1, 1], label="Bob wins")
g.set_outcome(g.root.children[0].children[0].children[0], alice_winsbig)
g.set_outcome(g.root.children[0].children[0].children[1], alice_wins)
g.set_outcome(g.root.children[0].children[1], bob_wins)
g.set_outcome(g.root.children[1].children[0].children[0], bob_winsbig)
g.set_outcome(g.root.children[1].children[0].children[1], alice_wins)
g.set_outcome(g.root.children[1].children[1], bob_wins)

All extensive games have a chance (or nature) player, accessible as Game.players.chance. Moves belonging to the chance player can be added in the same way as for personal players. At any new move created for the chance player, the action probabilities default to uniform randomization over the actions at the move.
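The default probabilities at a chance move can be replaced using Game.set_chance_probs(), described further below. A minimal sketch, supposing the deck instead contained twice as many Kings as Queens:

# Override the default uniform deal: King with probability 2/3, Queen with 1/3.
g.set_chance_probs(g.root.infoset, ["2/3", "1/3"])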

In this game, the information structure is important. Alice knows her card, so the two nodes at which she has the move are part of different information sets. The loop:

for node in g.root.children:
    g.append_move(node, "Alice", ["Raise", "Fold"])

causes each of the newly-appended moves to be in new information sets. In contrast, Bob does not know Alice’s card, and therefore cannot distinguish between the two nodes at which he has the decision. This is implemented in the following lines:

g.append_move(g.root.children[0].children[0], "Bob", ["Meet", "Pass"])
g.append_infoset(g.root.children[1].children[0],
                 g.root.children[0].children[0].infoset)

A call to Game.append_infoset() adds a move at a terminal node as part of an existing information set (represented in pygambit by an Infoset).
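Recent versions of pygambit also allow Game.append_move() to take a list of nodes, creating the new moves as a single information set in one call; a sketch, assuming that multi-node signature is available in your version:

# Add Bob's move at both nodes at once, placing them in one information set.
g.append_move([g.root.children[0].children[0], g.root.children[1].children[0]],
              "Bob", ["Meet", "Pass"])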

[Mye91]

Myerson, Roger B. (1991) Game Theory: Analysis of Conflict. Cambridge: Harvard University Press.

[RUW08]

Reiley, David H., Michael B. Urbancic and Mark Walker. (2008) “Stripped-down poker: A classroom game with signaling and bluffing.” The Journal of Economic Education 39(4): 323-341.

Building a strategic game#

Games in strategic form, also referred to as normal form, are represented solely by a collection of payoff tables, one per player. The most direct way to create a strategic game is via Game.from_arrays(). This function takes one n-dimensional array per player, where n is the number of players in the game. The arrays can be any object that can be indexed like an n-times-nested Python list; so, for example, numpy arrays can be used directly.

For example, to create a standard prisoner’s dilemma game in which the cooperative payoff is 8, the betrayal payoff is 10, the sucker payoff is 2, and the noncooperative payoff is 5:

In [11]: import numpy as np

In [12]: m = np.array([[8, 2], [10, 5]])

In [13]: g = gbt.Game.from_arrays(m, np.transpose(m))

In [14]: g
Out[14]: Game(title='Untitled strategic game')

The arrays passed to Game.from_arrays() are all indexed in the same sense: the top-level index is the choice of the first player, the second-level index that of the second player, and so on. Therefore, to create a two-player symmetric game, as in this example, the payoff matrix for the second player is transposed before being passed to Game.from_arrays().
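For a game that is not symmetric, each player's array is simply specified independently, with every array indexed row-player-first. A minimal sketch with hypothetical payoff values:

# Hypothetical asymmetric 2x2 game; both arrays are indexed with the
# first player's choice as the first (row) index.
a = np.array([[3, 0], [5, 1]])   # player 1's payoffs
b = np.array([[3, 5], [0, 1]])   # player 2's payoffs
g2 = gbt.Game.from_arrays(a, b, title="An asymmetric 2x2 game")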

Representation of numerical data of a game#

Payoffs to players and probabilities of actions at chance information sets are specified as numbers. Gambit represents the numerical values in a game in exact precision, using either decimal or rational representations.

To illustrate, we consider a trivial game which just has one move for the chance player:

In [15]: import pygambit as gbt

In [16]: g = gbt.Game.new_tree()

In [17]: g.append_move(g.root, g.players.chance, ["a", "b", "c"])

In [18]: [act.prob for act in g.root.infoset.actions]
Out[18]: [Rational(1, 3), Rational(1, 3), Rational(1, 3)]

The default when creating a new move for chance is that all actions are chosen with equal probability. These probabilities are represented as rational numbers, using pygambit’s Rational class, which is derived from Python’s fractions.Fraction. Numerical data can be set as rational numbers:

In [19]: g.set_chance_probs(g.root.infoset,
   ....:                    [gbt.Rational(1, 4), gbt.Rational(1, 2), gbt.Rational(1, 4)])
   ....: 

In [20]: [act.prob for act in g.root.infoset.actions]
Out[20]: [Rational(1, 4), Rational(1, 2), Rational(1, 4)]

They can also be explicitly specified as decimal numbers:

In [21]: g.set_chance_probs(g.root.infoset,
   ....:                    [gbt.Decimal(".25"), gbt.Decimal(".50"), gbt.Decimal(".25")])
   ....: 

In [22]: [act.prob for act in g.root.infoset.actions]
Out[22]: [Decimal('0.25'), Decimal('0.50'), Decimal('0.25')]

Although the two representations above are mathematically equivalent, pygambit remembers the format in which the values were specified.

Expressing rational or decimal numbers as above is verbose and tedious. pygambit offers a more concise way to express numerical data in games: when setting numerical game data, pygambit will attempt to convert text strings to their rational or decimal representation. The above can therefore be written more compactly using string representations:

In [23]: g.set_chance_probs(g.root.infoset, ["1/4", "1/2", "1/4"])

In [24]: [act.prob for act in g.root.infoset.actions]
Out[24]: [Rational(1, 4), Rational(1, 2), Rational(1, 4)]

In [25]: g.set_chance_probs(g.root.infoset, [".25", ".50", ".25"])

In [26]: [act.prob for act in g.root.infoset.actions]
Out[26]: [Decimal('0.25'), Decimal('0.50'), Decimal('0.25')]

As a further convenience, pygambit will accept Python int and float values. int values are always interpreted as Rational values. pygambit attempts to render float values as an appropriate Decimal equivalent. In the majority of cases, this creates no problems. For example,

In [27]: g.set_chance_probs(g.root.infoset, [.25, .50, .25])

In [28]: [act.prob for act in g.root.infoset.actions]
Out[28]: [Decimal('0.25'), Decimal('0.5'), Decimal('0.25')]
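int values likewise are stored exactly; a minimal sketch using a degenerate distribution:

# ints are read exactly as Rational values.
g.set_chance_probs(g.root.infoset, [1, 0, 0])

after which the action probabilities would be reported as Rational(1, 1), Rational(0, 1), and Rational(0, 1).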

However, rounding can cause difficulties when attempting to use float values to represent numbers which do not have an exact decimal representation:

In [29]: g.set_chance_probs(g.root.infoset, [1/3, 1/3, 1/3])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[29], line 1
----> 1 g.set_chance_probs(g.root.infoset, [1/3, 1/3, 1/3])

File src/pygambit/game.pxi:632, in pygambit.gambit.Game.set_chance_probs()

ValueError: set_chance_probs(): must specify non-negative probabilities that sum to one

This behavior can be slightly surprising, especially in light of the fact that in Python,

In [30]: 1/3 + 1/3 + 1/3
Out[30]: 1.0

In checking whether these probabilities sum to one, pygambit first converts each of the probabilities to a Decimal representation, via the following conversion:

In [31]: gbt.Decimal(str(1/3))
Out[31]: Decimal('0.3333333333333333')

and the sum-to-one check then fails because

In [32]: gbt.Decimal(str(1/3)) + gbt.Decimal(str(1/3)) + gbt.Decimal(str(1/3))
Out[32]: Decimal('0.9999999999999999')

Setting payoffs for players also follows the same rules. Representing probabilities and payoffs exactly is essential because pygambit offers, in particular for two-player games, the ability to compute equilibria exactly: the Nash equilibria of any two-player game with rational payoffs and chance probabilities can themselves be expressed exactly in rational numbers.

It is therefore advisable always to specify the numerical data of games either in terms of Decimal or Rational values, or their string equivalents. It is safe to use int values, but float values should be used with some care to ensure the values are recorded as intended.
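For instance, to specify probabilities of exactly one-third, either of the following is safe; a minimal sketch:

# Exact representations of 1/3, avoiding the rounding of the float 1/3.
g.set_chance_probs(g.root.infoset, ["1/3", "1/3", "1/3"])
g.set_chance_probs(g.root.infoset, [gbt.Rational(1, 3)] * 3)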

Reading a game from a file#

Games stored in existing Gambit savefiles can be loaded using Game.read_game():

In [33]: g = gbt.Game.read_game("e02.nfg")

In [34]: g
Out[34]: Game(title='Selten (IJGT, 75), Figure 2, normal form')

Computing Nash equilibria#

Interfaces to algorithms for computing Nash equilibria are provided in pygambit.nash.

Method             Python function
gambit-enumpure    pygambit.nash.enumpure_solve()
gambit-enummixed   pygambit.nash.enummixed_solve()
gambit-lp          pygambit.nash.lp_solve()
gambit-lcp         pygambit.nash.lcp_solve()
gambit-liap        pygambit.nash.liap_solve()
gambit-logit       pygambit.nash.logit_solve()
gambit-simpdiv     pygambit.nash.simpdiv_solve()
gambit-ipa         pygambit.nash.ipa_solve()
gambit-gnm         pygambit.nash.gnm_solve()

We take as an example the one-card poker game. This is a two-player, constant sum game, and so all of the equilibrium-finding methods can be applied to it.

For two-player games, lcp_solve() can compute Nash equilibria directly on the extensive representation. Assuming that g refers to the one-card poker game constructed above:

In [35]: result = gbt.nash.lcp_solve(g)

In [36]: result
Out[36]: NashComputationResult(method='lcp', rational=True, use_strategic=False, equilibria=[[[[Rational(1, 1), Rational(0, 1)], [Rational(1, 3), Rational(2, 3)]], [[Rational(2, 3), Rational(1, 3)]]]], parameters={'stop_after': 0, 'max_depth': 0})

In [37]: len(result.equilibria)
Out[37]: 1

The result of the calculation is returned as a NashComputationResult object. The set of equilibria found is reported in NashComputationResult.equilibria; in this case, this is a list of mixed behavior profiles. A mixed behavior profile specifies, for each information set, the probability distribution over actions at that information set. Indexing a MixedBehaviorProfile by a player gives a MixedBehavior, which specifies probability distributions at each of the player’s information sets:

In [38]: eqm = result.equilibria[0]

In [39]: eqm["Alice"]
Out[39]: [[Rational(1, 1), Rational(0, 1)], [Rational(1, 3), Rational(2, 3)]]

In this case, at Alice’s first information set, the one at which she has the King, she always raises. At her second information set, where she has the Queen, she sometimes bluffs, raising with probability one-third. The probability distribution at an information set is represented by a MixedAction. MixedBehavior.mixed_actions() iterates over these for the player:

In [40]: for infoset, mixed_action in eqm["Alice"].mixed_actions():
   ....:     print(infoset)
   ....:     print(mixed_action)
   ....: 
Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), number=0)
[Rational(1, 1), Rational(0, 1)]
Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), number=1)
[Rational(1, 3), Rational(2, 3)]

So we could extract Alice’s probabilities of raising at her respective information sets like this:

In [41]: {infoset: mixed_action["Raise"] for infoset, mixed_action in eqm["Alice"].mixed_actions()}
Out[41]: 
{Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), number=0): Rational(1, 1),
 Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), number=1): Rational(1, 3)}

In larger games, labels may not always be the most convenient way to refer to specific actions. We can also index profiles directly with Action objects. So an alternative way to extract the probabilities of playing “Raise” would be by iterating Alice’s list of actions:

In [42]: {action.infoset: eqm[action] for action in g.players["Alice"].actions if action.label == "Raise"}
Out[42]: 
{Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), number=0): Rational(1, 1),
 Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), number=1): Rational(1, 3)}

Looking at Bob’s strategy,

In [43]: eqm["Bob"]
Out[43]: [[Rational(2, 3), Rational(1, 3)]]

Bob meets Alice’s raise two-thirds of the time. The label “Raise” is used in more than one information set for Alice, so in the above we had to specify information sets when indexing. When there is no ambiguity, we can specify action labels directly. So for example, because Bob has only one action named “Meet” in the game, we can extract the probability that Bob plays “Meet” by:

In [44]: eqm["Bob"]["Meet"]
Out[44]: Rational(2, 3)

Moreover, this is the only action with that label in the game, so we can index the profile directly using the action label without any ambiguity:

In [45]: eqm["Meet"]
Out[45]: Rational(2, 3)

Because this is an equilibrium, the fact that Bob randomizes must mean he is indifferent between the two actions at his information set. MixedBehaviorProfile.action_value() returns the expected payoff of taking an action, conditional on reaching that action’s information set:

In [46]: {action: eqm.action_value(action) for action in g.players["Bob"].infosets[0].actions}
Out[46]: 
{Action(infoset=Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Bob'), number=0), label='Meet'): Rational(-1, 1),
 Action(infoset=Infoset(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Bob'), number=0), label='Pass'): Rational(-1, 1)}

Bob’s indifference between his actions arises because of his beliefs given Alice’s strategy. MixedBehaviorProfile.belief() returns the probability of reaching a node, conditional on its information set being reached:

In [47]: {node: eqm.belief(node) for node in g.players["Bob"].infosets[0].members}
Out[47]: 
{Node(game=Game(title='One card poker game, after Myerson (1991)'), path=[0, 0]): Rational(3, 4),
 Node(game=Game(title='One card poker game, after Myerson (1991)'), path=[0, 1]): Rational(1, 4)}

Bob believes that, conditional on Alice raising, there is a 75% chance that she has the King; therefore, the expected payoff to meeting is in fact -1 as computed. MixedBehaviorProfile.infoset_prob() returns the probability that an information set is reached:

In [48]: eqm.infoset_prob(g.players["Bob"].infosets[0])
Out[48]: Rational(2, 3)

The corresponding probability that a node is reached in the play of the game is given by MixedBehaviorProfile.realiz_prob(), and the expected payoff to a player conditional on reaching a node is given by MixedBehaviorProfile.node_value().

In [49]: {node: eqm.node_value("Bob", node) for node in g.players["Bob"].infosets[0].members}
Out[49]: 
{Node(game=Game(title='One card poker game, after Myerson (1991)'), path=[0, 0]): Rational(-5, 3),
 Node(game=Game(title='One card poker game, after Myerson (1991)'), path=[0, 1]): Rational(1, 1)}
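For completeness, a sketch querying the realization probabilities of the same two nodes; with the equilibrium above these work out to 1/2 for the node where Alice holds the King and 1/6 for the node where she holds the Queen, summing to the information set probability of 2/3 computed earlier:

# Unconditional probability of reaching each of Bob's decision nodes.
{node: eqm.realiz_prob(node) for node in g.players["Bob"].infosets[0].members}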

The overall expected payoff to a player given the behavior profile is returned by MixedBehaviorProfile.payoff():

In [50]: eqm.payoff("Alice")
Out[50]: Rational(1, 3)

In [51]: eqm.payoff("Bob")
Out[51]: Rational(-1, 3)

The equilibrium computed expresses probabilities in rational numbers. Because the numerical data of games in Gambit are represented exactly, methods which are specialized to two-player games, lp_solve(), lcp_solve(), and enummixed_solve(), can report exact probabilities for equilibrium strategy profiles. This is enabled by default for these methods.
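These methods also accept a flag to compute in floating-point arithmetic instead; a sketch, assuming the rational keyword of lcp_solve():

# Compute the same equilibrium in floating point rather than exact rationals.
result = gbt.nash.lcp_solve(g, rational=False)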

When a game has an extensive representation, equilibrium-finding methods default to computing on that representation. It is also possible to compute using the strategic representation. pygambit transparently computes the reduced strategic form of an extensive game:

In [52]: [s.label for s in g.players["Alice"].strategies]
Out[52]: ['11', '12', '21', '22']

In the strategic form of this game, Alice has four strategies. The generated strategy labels list the action numbers taken at each of her information sets. We can therefore apply a method which operates on a strategic game to any game with an extensive representation:

In [53]: result = gbt.nash.gnm_solve(g)

In [54]: result
Out[54]: NashComputationResult(method='gnm', rational=False, use_strategic=True, equilibria=[[[0.33333333333866677, 0.6666666666613335, 0.0, 0.0], [0.6666666666559997, 0.3333333333440004]]], parameters={'perturbation': [[1.0, 0.0, 0.0, 0.0], [1.0, 0.0]], 'end_lambda': -10.0, 'steps': 100, 'local_newton_interval': 3, 'local_newton_maxits': 10})

gnm_solve() can be applied to any game with any number of players, and uses a path-following process in floating-point arithmetic, so it returns profiles with probabilities expressed as floating-point numbers. This method operates on the strategic representation of the game, so the returned results are of type MixedStrategyProfile, and specify, for each player, a probability distribution over that player’s strategies. Indexing a MixedStrategyProfile by a player gives the probability distribution over that player’s strategies only.

In [55]: eqm = result.equilibria[0]

In [56]: eqm["Alice"]
Out[56]: [0.33333333333866677, 0.6666666666613335, 0.0, 0.0]

In [57]: eqm["Bob"]
Out[57]: [0.6666666666559997, 0.3333333333440004]

The expected payoff to a strategy is provided by MixedStrategyProfile.strategy_value():

In [58]: {strategy: eqm.strategy_value(strategy) for strategy in g.players["Alice"].strategies}
Out[58]: 
{Strategy(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), label='11'): 0.33333333334400045,
 Strategy(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), label='12'): 0.33333333332799997,
 Strategy(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), label='21'): -0.9999999999839995,
 Strategy(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Alice'), label='22'): -1.0}

In [59]: {strategy: eqm.strategy_value(strategy) for strategy in g.players["Bob"].strategies}
Out[59]: 
{Strategy(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Bob'), label='1'): -0.33333333333066656,
 Strategy(player=Player(game=Game(title='One card poker game, after Myerson (1991)'), label='Bob'), label='2'): -0.3333333333386667}

The overall expected payoff to a player is returned by MixedStrategyProfile.payoff():

In [60]: eqm.payoff("Alice")
Out[60]: 0.33333333333333354

In [61]: eqm.payoff("Bob")
Out[61]: -0.33333333333333354

When a game has an extensive representation, we can convert freely between MixedStrategyProfile and the corresponding MixedBehaviorProfile representation of the same strategies using MixedStrategyProfile.as_behavior() and MixedBehaviorProfile.as_strategy().

In [62]: eqm.as_behavior()
Out[62]: [[[1.0, 0.0], [0.3333333333386667, 0.6666666666613333]], [[0.6666666666559997, 0.3333333333440004]]]

In [63]: eqm.as_behavior().as_strategy()
Out[63]: [[0.3333333333386667, 0.6666666666613333, 0.0, 0.0], [0.6666666666559997, 0.3333333333440004]]

Acceptance criteria for Nash equilibria#

Some methods for computing Nash equilibria operate using floating-point arithmetic and/or generate candidate equilibrium profiles using methods which involve some form of successive approximations. The outputs of these methods therefore are in general \(\varepsilon\)-equilibria, for some positive \(\varepsilon\).

To provide a uniform interface across methods, where relevant Gambit provides a parameter maxregret, which specifies the acceptance criterion for labeling the output of the algorithm as an equilibrium. This parameter is interpreted proportionally to the range of payoffs in the game. Any profile returned as an equilibrium is guaranteed to be an \(\varepsilon\)-equilibrium, for \(\varepsilon\) no more than maxregret times the difference of the game’s maximum and minimum payoffs.
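In symbols: any profile returned as an equilibrium is an \(\varepsilon\)-equilibrium with \(\varepsilon \leq \textit{maxregret} \cdot (u_{\max} - u_{\min})\), where \(u_{\max}\) and \(u_{\min}\) are the game’s maximum and minimum payoffs.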

As an example, consider solving the standard one-card poker game using logit_solve(). The range of the payoffs in this game is 4 (from +2 to -2).

In [64]: g = gbt.Game.read_game("poker.efg")

In [65]: g.max_payoff, g.min_payoff
Out[65]: (Rational(2, 1), Rational(-2, 1))

logit_solve() is a globally-convergent method, in that it computes a sequence of profiles which is guaranteed to have a subsequence that converges to a Nash equilibrium. The default value of maxregret for this method is set at \(10^{-8}\):

In [66]: result = gbt.nash.logit_solve(g, maxregret=1e-8)

In [67]: result.equilibria
Out[67]: [[[[1.0, 0.0], [0.33333338649882943, 0.6666666135011707]], [[0.6666667065407631, 0.3333332934592369]]]]

In [68]: result.equilibria[0].max_regret()
Out[68]: 3.987411578698641e-08

The value of MixedBehaviorProfile.max_regret() of the computed profile exceeds \(10^{-8}\) measured in payoffs of the game. However, when considered relative to the scale of the game’s payoffs, we see it is less than \(10^{-8}\) of the payoff range, as requested:

In [69]: result.equilibria[0].max_regret() / (g.max_payoff - g.min_payoff)
Out[69]: 9.968528946746602e-09

In general, for globally-convergent methods especially, there is a tradeoff between precision and running time. Some methods may be slow to converge on some games, and it may be useful instead to get a more coarse approximation to an equilibrium. We could instead ask only for an \(\varepsilon\)-equilibrium with a (scaled) \(\varepsilon\) of no more than \(10^{-4}\):

In [70]: result = gbt.nash.logit_solve(g, maxregret=1e-4)

In [71]: result.equilibria[0]
Out[71]: [[[1.0, 0.0], [0.3338351656285656, 0.666164834417892]], [[0.6670407651644306, 0.3329592348608147]]]

In [72]: result.equilibria[0].max_regret()
Out[72]: 0.00037581039824041707

In [73]: result.equilibria[0].max_regret() / (g.max_payoff - g.min_payoff)
Out[73]: 9.395259956010427e-05

The convention of expressing maxregret relative to the scale of the game’s payoffs standardizes the behavior of methods across games. For example, consider solving the poker game instead using liap_solve().

In [74]: result = gbt.nash.liap_solve(g.mixed_behavior_profile(), maxregret=1.0e-4)

In [75]: result.equilibria[0]
Out[75]: [[[0.9999979852809777, 2.0147190221904606e-06], [0.3333567614695495, 0.6666432385304505]], [[0.6668870457043463, 0.33311295429565374]]]

In [76]: result.equilibria[0].max_regret()
Out[76]: 0.00022039452688993322

In [77]: result.equilibria[0].max_regret() / (g.max_payoff - g.min_payoff)
Out[77]: 5.5098631722483304e-05

If, instead, we double all payoffs, the profile returned is unchanged; the absolute regret doubles, but the scaled regret is identical.

In [78]: for outcome in g.outcomes:
   ....:     outcome["Alice"] = outcome["Alice"] * 2
   ....:     outcome["Bob"] = outcome["Bob"] * 2
   ....: 

In [79]: result = gbt.nash.liap_solve(g.mixed_behavior_profile(), maxregret=1.0e-4)

In [80]: result.equilibria[0]
Out[80]: [[[0.9999979852809777, 2.0147190221904606e-06], [0.3333567614695495, 0.6666432385304505]], [[0.6668870457043463, 0.33311295429565374]]]

In [81]: result.equilibria[0].max_regret()
Out[81]: 0.00044078905377986644

In [82]: result.equilibria[0].max_regret() / (g.max_payoff - g.min_payoff)
Out[82]: 5.5098631722483304e-05

Generating starting points for algorithms#

Some methods for computation of Nash equilibria take as an initial condition a MixedStrategyProfile or MixedBehaviorProfile which is used as a starting point. The equilibria found will depend on which starting point is selected. To facilitate generating starting points, Game provides methods Game.random_strategy_profile() and Game.random_behavior_profile(), to generate profiles which are drawn from the uniform distribution on the product of simplices.

As an example, we consider a three-player game from McKelvey and McLennan (1997), in which each player has two strategies. This game has nine equilibria in total, and in particular has two totally mixed Nash equilibria, which is the maximum possible number of regular totally mixed equilibria in games of this size.

We first consider finding Nash equilibria in this game using liap_solve(). If we run this method starting from the centroid (uniform randomization across all strategies for each player), liap_solve() finds one of the totally-mixed equilibria.

In [83]: g = gbt.Game.read_game("2x2x2.nfg")

In [84]: gbt.nash.liap_solve(g.mixed_strategy_profile())
Out[84]: NashComputationResult(method='liap', rational=False, use_strategic=True, equilibria=[[[0.4000000330601743, 0.5999999669398257], [0.499999791913678, 0.5000002080863221], [0.3333334438355959, 0.6666665561644041]]], parameters={'start': [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]], 'maxregret': 0.0001, 'maxiter': 1000})

Which equilibrium is found depends on the starting point. With a different starting point, we can find, for example, one of the pure-strategy equilibria.

In [85]: gbt.nash.liap_solve(g.mixed_strategy_profile([[.9, .1], [.9, .1], [.9, .1]]))
Out[85]: NashComputationResult(method='liap', rational=False, use_strategic=True, equilibria=[[[0.9999999952144591, 4.785540892853837e-09], [0.9999999998451463, 1.5485358776552024e-10], [0.9999999988047714, 1.1952285809728638e-09]]], parameters={'start': [[0.9, 0.1], [0.9, 0.1], [0.9, 0.1]], 'maxregret': 0.0001, 'maxiter': 1000})

To search for more equilibria, we can instead generate strategy profiles at random.

In [86]: gbt.nash.liap_solve(g.random_strategy_profile())
Out[86]: NashComputationResult(method='liap', rational=False, use_strategic=True, equilibria=[[[0.5000000003195658, 0.49999999968043424], [0.5000000000897917, 0.4999999999102082], [0.9999962549019348, 3.7450980652243227e-06]]], parameters={'start': [[0.9575199198933635, 0.04248008010663638], [0.4834432377589795, 0.5165567622410203], [0.8866991264791747, 0.11330087352082523]], 'maxregret': 0.0001, 'maxiter': 1000})

Note that methods which take starting points record the starting point used in the returned result object. However, the random profiles generated will differ across runs of a program. To make the generation of random strategy profiles reproducible, and for finer-grained control over their generation if desired, Game.random_strategy_profile() and Game.random_behavior_profile() optionally take a numpy.random.Generator object, which is used as the source of randomness for creating the profile.

In [87]: import numpy as np

In [88]: gen = np.random.default_rng(seed=1234567890)

In [89]: p1 = g.random_strategy_profile(gen=gen)

In [90]: p1
Out[90]: [[0.8359110871883912, 0.16408891281160876], [0.7422931336795922, 0.25770686632040796], [0.977833473596193, 0.02216652640380706]]

In [91]: gen = np.random.default_rng(seed=1234567890)

In [92]: p2 = g.random_strategy_profile(gen=gen)

In [93]: p2
Out[93]: [[0.8359110871883912, 0.16408891281160876], [0.7422931336795922, 0.25770686632040796], [0.977833473596193, 0.02216652640380706]]

In [94]: p1 == p2
Out[94]: True

When creating profiles in which probabilities are represented as floating-point numbers, Game.random_strategy_profile() and Game.random_behavior_profile() internally sample from the Dirichlet distribution on each simplex, which gives correctly uniform sampling over probabilities. However, some applications call for random profiles with probabilities expressed as rational numbers. For example, simpdiv_solve() takes such a starting point, because it operates by successively refining a triangulation over the space of mixed strategy profiles. Game.random_strategy_profile() and Game.random_behavior_profile() both take an optional parameter denom which, if specified, produces a profile drawn uniformly from the grid in each simplex on which all probabilities have denominator denom.

In [95]: gen = np.random.default_rng(seed=1234567890)

In [96]: g.random_strategy_profile(denom=10, gen=gen)
Out[96]: [[Rational(1, 2), Rational(1, 2)], [Rational(7, 10), Rational(3, 10)], [Rational(0, 1), Rational(1, 1)]]

In [97]: g.random_strategy_profile(denom=10, gen=gen)
Out[97]: [[Rational(1, 10), Rational(9, 10)], [Rational(3, 5), Rational(2, 5)], [Rational(3, 5), Rational(2, 5)]]

These can then be used in conjunction with simpdiv_solve() to search for equilibria from different starting points.

In [98]: gbt.nash.simpdiv_solve(g.random_strategy_profile(denom=10, gen=gen))
Out[98]: NashComputationResult(method='simpdiv', rational=True, use_strategic=True, equilibria=[[[Rational(1, 1), Rational(0, 1)], [Rational(1, 1), Rational(0, 1)], [Rational(1, 1), Rational(0, 1)]]], parameters={'start': [[Rational(7, 10), Rational(3, 10)], [Rational(4, 5), Rational(1, 5)], [Rational(0, 1), Rational(1, 1)]], 'maxregret': Rational(1, 10000000), 'refine': 2, 'leash': None})

In [99]: gbt.nash.simpdiv_solve(g.random_strategy_profile(denom=10, gen=gen))
Out[99]: NashComputationResult(method='simpdiv', rational=True, use_strategic=True, equilibria=[[[Rational(1, 1), Rational(0, 1)], [Rational(0, 1), Rational(1, 1)], [Rational(0, 1), Rational(1, 1)]]], parameters={'start': [[Rational(4, 5), Rational(1, 5)], [Rational(1, 5), Rational(4, 5)], [Rational(0, 1), Rational(1, 1)]], 'maxregret': Rational(1, 10000000), 'refine': 2, 'leash': None})

In [100]: gbt.nash.simpdiv_solve(g.random_strategy_profile(denom=10, gen=gen))
Out[100]: NashComputationResult(method='simpdiv', rational=True, use_strategic=True, equilibria=[[[Rational(1, 1), Rational(0, 1)], [Rational(1, 1), Rational(0, 1)], [Rational(1, 1), Rational(0, 1)]]], parameters={'start': [[Rational(1, 2), Rational(1, 2)], [Rational(1, 1), Rational(0, 1)], [Rational(1, 2), Rational(1, 2)]], 'maxregret': Rational(1, 10000000), 'refine': 2, 'leash': None})

Estimating quantal response equilibria#

Alongside computing quantal response equilibria, Gambit can also perform maximum likelihood estimation, computing the QRE which best fits an empirical distribution of play.

As an example we consider an asymmetric matching pennies game studied in [Och95] and analyzed in [McKPal95] using QRE.

In [101]: g = gbt.Game.from_arrays(
   .....:       [[1.1141, 0], [0, 0.2785]],
   .....:       [[0, 1.1141], [1.1141, 0]],
   .....:       title="Ochs (1995) asymmetric matching pennies as transformed in McKelvey-Palfrey (1995)"
   .....: )
   .....: 

In [102]: data = g.mixed_strategy_profile([[128*0.527, 128*(1-0.527)], [128*0.366, 128*(1-0.366)]])

Estimation of QRE is done using fit_fixedpoint().

In [103]: fit = gbt.qre.fit_fixedpoint(data)

The returned LogitQREMixedStrategyFitResult object contains the results of the estimation. The results replicate those reported in [McKPal95], including the estimated value of lambda, the QRE profile probabilities, and the log-likelihood. Because data contains the empirical counts of play, and not just frequencies, the resulting log-likelihood is correct for use in likelihood-ratio tests. [1]

In [104]: print(fit.lam)
1.8456097536855864

In [105]: print(fit.profile)
[[0.615651314427859, 0.3843486855721409], [0.3832909400456291, 0.616709059954371]]

In [106]: print(fit.log_like)
-174.76453191087444
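The reported log-likelihood is the standard \(\sum_i n_i \log p_i\), with counts \(n_i\) taken from data and probabilities \(p_i\) from the fitted profile. A sketch recomputing it by hand, assuming this is the quantity fit_fixedpoint() reports:

import math

# Sum of count * log(fitted probability) over every strategy in the game.
loglik = sum(data[strategy] * math.log(fit.profile[strategy])
             for player in g.players for strategy in player.strategies)
print(loglik)  # should agree with fit.log_like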

Footnotes