Level 9: Intransitive Mechanics

Readings/Playings

See “additional resources” at the end of this blog post for further reading.

This Week

Welcome back! Today we’re going to learn about how to balance intransitive mechanics. As a reminder, “intransitive” is just a geeky way of saying “games like Rock-Paper-Scissors” – that is, games where there is no single dominant strategy, because everything can be beaten by something else.

We see intransitive mechanics in games all the time. In fighting games, a typical pattern is that normal attacks are defeated by blocks, blocks are defeated by throws, and throws are defeated by attacks. In real-time strategy games, a typical pattern is that you have fliers that can destroy infantry, infantry that works well against archers, and archers that are great at bringing down fliers. Turn-based strategy games often have some units that work well against others, an example pattern being that heavy tanks lose to anti-tank infantry, which loses to normal infantry, which loses to heavy tanks. First-person shooters sometimes have an intransitive relationship between different weapons or vehicles, like rocket launchers being good against tanks (since they’re slow and easy to hit), which are good against light vehicles (which are destroyed by the tank’s fast rate of fire once they get in range), which in turn are good against rocket launchers (since they can dodge and weave around the slow incoming rockets). MMOs and tabletop RPGs often have some character classes that are particularly good at fighting against other classes, as well. So you can see that intransitive mechanics are in all kinds of places.

Some of these relationships might not be immediately obvious. For example, consider a game where one kind of unit has long-range attacks, which is defeated by a short-range attacker who can turn invisible; this in turn is defeated by a medium-range attacker with radar that reveals invisible units; and the medium-range attacker is of course weak against the long-range attacker.

Sometimes it’s purely mathematical; in Magic: the Gathering, a 1/3 creature will lose in combat to a 3/2 creature, which loses to a 2/1 First Strike creature, which in turn loses to the original 1/3 creature. Within the metagame of a CCG you often have three or four dominant decks, each one designed to beat one or more of the other ones. These kinds of things aren’t even necessarily designed with the intention of being intransitive, but that is what ends up happening.

Solutions to intransitive mechanics

Today we’re going to get our hands pretty dirty with some of the mathiest math we’ve done so far, borrowing when needed from the tools of algebra, linear algebra, and game theory. In the process we’ll learn how to solve intransitive mechanics, so that we can learn more about how these work within our game and what we can expect from player behavior at the expert level.

What does a “solution” look like here? It can’t be a cost curve, because each choice wins sometimes and loses sometimes. Instead it’s a ratio of how often you choose each available option, and how often you expect your opponent to choose each of their options. For example, building an army of 30% archers, 50% infantry, 20% fliers (or 3:5:2) might be a solution to an intransitive game featuring those units, under certain conditions.

As a game designer, you might desire certain game objects to be used more or less frequently than others, and by changing the relative costs and availability of each object you can change the optimal mix of objects that players will use in play. By designing your game specifically to have one or more optimal strategies of your choosing, you will know ahead of time how the game is likely to develop. For example, you might want certain things to only happen rarely during normal play but be spectacular when they do, and if you understand how your costs affect relative frequencies, you can design a game to be like that intentionally. (Or, if it seems in playtesting like your players are using one thing a lot more than another, this kind of analysis may be able to shed light on why that is.)

Who Cares?

It may be worth asking, if all intransitive mechanics are just glorified versions of Rock-Paper-Scissors, what’s the appeal? Few people play Rock-Paper-Scissors for fun, so why should they enjoy a game that just uses the same mechanics and dresses them differently?

For one thing, an intransitive game is at least more interesting than one with a single dominant strategy (“Rock-Rock-Rock”) because you will see more variety in play. For another, an intransitive mechanic embedded in a larger game may still allow players to change or modify their strategies in mid-game. Players may make certain choices in light of what they observe other players doing now (in real-time), particularly in action-based games where you must react to your opponent’s reaction to your reaction to their action in the space of a few milliseconds.

In games with bluffing mechanics, players may make choices based on what they’ve observed other players doing in the past, trying to use that to infer their future moves, which is particularly interesting in games of hidden information (like Poker). So, hopefully you can see that just because a game has an intransitive mechanic does not mean it’s as dull as Rock-Paper-Scissors.

Additionally, intransitive mechanics serve as a kind of “emergency brake” on runaway dominant strategies. Even if you don’t know exactly what the best strategy in your game is, if all strategies have an intransitive relationship, you can at least know that there will not be a single dominant strategy that invalidates all of the others, because it will be weak against at least one other counter-strategy. Even if the game itself is unbalanced, intransitive mechanics allow for a metagame correction – not an ideal thing to rely on exclusively (such a thing would be very lazy design), but better to have a safety net than not if you’re releasing a game where major game balance changes can’t be easily made after the fact.

So, if I’ve managed to convince you that intransitive mechanics are worth including for at least some kinds of games, get ready and let’s learn how to solve them!

Solving the basic RPS game

Let’s start by solving the basic game of Rock-Paper-Scissors to see how this works. Since each throw is theoretically as good as any other, we would expect the ratio to be 1:1:1, meaning you choose each throw equally often. And that is what we’ll find, but it’s important to understand how to get there so that we can solve more complex problems.

First, let’s look at the outcomes. Let’s call our opponent’s throws r, p and s, and our throws R, P and S (we get the capital letters because we’re awesome). Since winning and losing are equal and opposite (that is, one win + one loss balances out) and draws are right in the middle, let’s call a win +1 point, a loss -1 point, and a draw 0 points. The math here would actually work for any point values really, but these numbers make it easiest. We now construct a table of results:

      r    p    s
R     0   -1   +1
P    +1    0   -1
S    -1   +1    0

Of course, this is from our perspective – for example, if we throw (R)ock and opponent throws (s)cissors, we win, for a net +1 to our score. Our opponent’s table would be the reverse.

Let’s re-frame this a little bit, by calling r, p and s probabilities that the opponent will make each respective throw. For example, suppose you know ahead of time that your opponent is using a strategy of r=0.5, p=s=0.25 (that is, they throw 2 rock for every paper or scissors). What’s the best counter-strategy?

To answer that question, we can construct a set of three equations that tells you your payoffs for each throw:

  • Payoff for R = 0r + (-1)p + 1s = s-p
  • Payoff for P = 1r + 0p + (-1)s = r-s
  • Payoff for S = (-1)r + 1p + 0s = p-r

So based on the probabilities, you can calculate the payoffs. In the case of our rock-heavy opponent, the payoffs are R=0, P=0.25, S=-0.25. Since P has the best payoff of all three throws, assuming the opponent doesn’t vary their strategy at all, our best counter-strategy is to throw Paper every time, and we expect that we will gain 0.25 per throw – that is, out of every four throws, we’ll win one more game than we lose. In fact, we’ll find that if our opponent merely throws rock the tiniest, slightest bit more often than the others, the net payoff for P will be better than the others, and our best strategy is still to throw Paper 100% of the time, until our opponent modifies their strategy. This is significant; it tells us that an intransitive mechanic is very fragile, and that even a slight imbalance on the player’s part can lead to a completely dominant strategy on the part of the opponent.
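If you want to experiment with this yourself, here’s a quick sketch of the payoff calculation in Python (the numbers are just the table and example above; the variable names are mine):

import numpy as np

# Rows are our throws (R, P, S); columns are the opponent's (r, p, s).
payoffs = np.array([
    [ 0, -1, +1],   # R
    [+1,  0, -1],   # P
    [-1, +1,  0],   # S
])

# The rock-heavy opponent from the example: r=0.5, p=0.25, s=0.25.
opponent_mix = np.array([0.50, 0.25, 0.25])

# Expected payoff of each of our throws is a matrix-vector product.
for throw, value in zip("RPS", payoffs @ opponent_mix):
    print(throw, value)   # R 0.0, P 0.25, S -0.25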

Of course, against a human opponent who notices we’re always throwing P, their counter-strategy would be to throw a greater proportion of s, which then forces us to throw some R, which then causes them to throw p, which makes us throw S, which makes them throw r, and around and around we go. If we’re both constantly adjusting our strategies to counter each other, do we ever reach any point where both of us are doing the best we can? Over time, do we tend towards a stable state of some kind?

Some Math Theorems

Before answering that question, there are a couple of things I’m going to ask you to trust me on; people smarter than me have actually proved these mathematically, but this isn’t a course in math proofs so I’m handwaving over that part of things. I hope you’ll forgive me for that.

First is that if the game mechanics are symmetric (that is, both players have exactly the same set of options and they work the same way), the solution will end up being the same for both players; the opponent’s probability of choosing Rock is the same as our probability.

Second is that each payoff must be the same as the other payoffs; that is, R = P = S; if any strategy is worth choosing at all, it will provide the same payoff as all other valid strategies, because if the payoff were instead any less than the others it would no longer be worth choosing (you’d just take something else with a higher payoff), and if it were any higher than the others you’d choose it exclusively and ignore the others. Thus, all potential moves that are worth taking have the same payoff.

Lastly, in symmetric zero-sum games specifically, the payoff for everything must be zero (because the payoffs are going to be the same for both players due to symmetry, and the only way for the payoffs to sum to zero and still be equal is if they’re both zero).

To summarize:

  • All payoffs that are worth taking at all, give an equal payoff to each other.
  • Symmetric zero-sum games have all payoffs equal to zero.
  • Symmetric games have the same solution for all players.

Finishing the RPS Solution

Let’s go back to our equations. Rock-Paper-Scissors is a symmetric zero-sum game, so:

  • R = P = S = 0.

Since the opponent must select exactly one throw, we also know the probabilities of their throw add up to 100%:

  • r + p + s = 1

From here we can solve the system of equations by substitution:

  • R = 0 = s-p, therefore p=s
  • P = 0 = r-s, therefore r=s
  • S = 0 = p-r, therefore p=r
  • r+p+s = r+r+r = 1, therefore r=1/3
  • Since r=p=s, p=1/3, s=1/3

So our solution is that the opponent should throw r, p and s each with probabilities of 1/3. This suggests that against a completely random opponent it doesn’t matter what we choose, our odds of winning are the same no matter what. Of course, the opponent knows this too, so if we choose an unbalanced strategy they can alter their throw ratio to beat us; our best strategy is also to choose each throw with 1/3 probability.

Note that in actual play, this does not mean that the best strategy is to actually play randomly (say, by rolling a die secretly before each throw)! As I’ve said before, when humans try to play randomly, they tend to not do a very good job of it. In the real world, the best strategy is still to play each throw about as often as any other, but which throw you choose at any given moment depends on your ability to detect and exploit patterns in your opponent’s play while masking any apparent patterns in your own. So our solution of 1:1:1 does not say which throw you must choose at any given time (that is in fact where the skill of the game comes in), just that over time we expect the optimal strategy to be a 1:1:1 ratio (because any deviation from that hands your opponent a strategy that wins more often over you until you readjust your strategy back to 1:1:1).

Solving RPS with Unequal Scoring

The previous example is all fine and good for Rock-Paper-Scissors, but how can we apply this to something a little more interesting? As our next step, let’s change the scoring mechanism. For example, in fighting games there’s a common intransitive system that attacks beat throws, throws beat blocks, and blocks beat attacks, but each of these does a different amount of damage, so they tend to have different results in the sense that each choice puts a different amount at risk. How does Rock-Paper-Scissors change when we mess with the costs?

Here’s an example. Suppose I make a new rule: every win using Rock counts double. You could just as easily frame it like this: in a fighting game, attacks do normal damage, blocks do the same amount of damage as an attack (let’s say that a successful block allows for a counterattack), and throws do twice as much damage as an attack or block. But let’s just say “every win with Rock counts double” for simplicity here. How does that affect our probabilities?

Again we start with a payoff table:

      r    p    s
R     0   -1   +2
P    +1    0   -1
S    -2   +1    0

We then use this to construct our three payoff equations:

  • R = 2s-p
  • P = r-s
  • S = p-2r

Again, the game is zero-sum and symmetric, and both we and our opponent must choose exactly one throw, so we still have:

  • R = P = S = 0
  • r+p+s = 1

Again we solve:

  • R = 0 = 2s-p, therefore 2s = p
  • P = 0 = r-s, therefore r = s
  • S = 0 = p-2r, therefore 2r = p
  • r+p+s = r+2r+r = 1, therefore r=1/4
  • r=s, therefore s=1/4
  • 2r=p, therefore p=1/2

So here we get a surprising result: if we double the wins for Rock, the end result is that Paper gets chosen half of the time, while Rock and Scissors each get chosen a quarter of the time! This is an answer you’d be unlikely to come up with on your own without doing the math, but in retrospect it makes sense: since Scissors is such a risky play, players are less likely to choose it. If you know your opponent is not likely to play Scissors, Paper is more likely to either draw or win, so it is actually Paper (and not Rock) that is played more frequently.

So if you had a fighting game where a successful throw does twice as much damage as a successful attack or block, and attacks and blocks do equal damage to each other, then you’d actually expect to see twice as many attack attempts as throws or blocks!
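If you’d rather let a computer do the algebra, here’s a minimal sketch in Python: since a symmetric zero-sum game has every payoff equal to zero, we just ask for a probability mix that zeroes out every row of the payoff table and sums to 1.

import numpy as np

# "Wins with Rock count double" payoff table (rows R, P, S).
A = np.array([
    [ 0, -1, +2],
    [+1,  0, -1],
    [-2, +1,  0],
])

# Solve A @ mix = 0 together with sum(mix) = 1. The stacked system
# is overdetermined, so use least squares (the solution is exact).
M = np.vstack([A, np.ones(3)])
b = np.array([0, 0, 0, 1])
mix, *_ = np.linalg.lstsq(M, b, rcond=None)
print(mix)   # [0.25 0.5 0.25] -> r=1/4, p=1/2, s=1/4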

Solving RPS with Incomplete Wins

Suppose we factor resource costs into this. Fighting games typically don’t have a “cost” associated with performing a move (other than time, perhaps), but RTS games usually have actual resource costs to produce units.

Let’s take a simple RTS game where you have knights that beat archers, archers beat fliers, and fliers beat knights. Let’s say further that if you send one type of unit against the same type, they kill each other mutually so there is no net gain or loss on either side, but that it’s a little different with winners. Let’s say that when knights attack archers, they win, but they still lose 20% of their health to the initial arrow volley before they can close the distance. And let’s say against fliers, archers lose 40% of their health to counterattacks. But against knights, fliers take no damage at all, because the knights can’t do anything other than stand there and take it (their swords don’t work too well against enemies a hundred feet up, dropping rocks on them from above). Finally, let’s say that knights cost 50 gold, archers cost 75, and fliers cost 100. Now how does this work?

We start with the payoff table:

     k                       a                       f
K    50-50 = 0               (-50*0.2)+75 = +65      -50
A    -75+(0.2*50) = -65      75-75 = 0               (-75*0.4)+100 = +70
F    +50                     -100+(75*0.4) = -70     100-100 = 0

To explain: if we both take the same unit it ends up being zero, that’s just common sense, but really what’s going on is that we’re both paying the same amount and both lose the unit. So we both actually have a net loss, but relative to each other it’s still zero-sum (for example, with Knight vs Knight, we gain +50 Gold relative to the opponent by defeating their Knight, but also lose -50 Gold because our own Knight dies as well, and adding those results together we end up with a net gain of zero).

What about when our Knight meets an enemy Archer? We kill their Archer, which is worth a 75-gold advantage, but they also reduced our Knight’s HP by 20%, so you could say we lost 20% of our Knight cost of 50, which means we lost an equivalent of 10 gold in the process. So the actual outcome is we’re up by 65 gold.

When our Knight meets an enemy Flier, we lose the Knight so we’re down 50 gold. It didn’t hurt the opponent at all. Where does the Flier cost of 100 come in? In this case it doesn’t, really – the opponent still has a Flier after the exchange, so they still have 100 gold worth of Flier in play, they’ve lost nothing… at least, not yet!

So in the case of different costs or incomplete victories, the hard part is just altering your payoff table. From there, the process is the same:

  • K = 0k + 65a + (-50)f = 65a-50f
  • A = (-65)k + 0a + 70f = 70f-65k
  • F = 50k + (-70)a + 0f = 50k-70a
  • K = A = F = 0
  • k+a+f = 1

Solving, we find:

  • K = 0 = 65a-50f, therefore 65a = 50f
  • A = 0 = 70f-65k, therefore 70f = 65k, therefore f = (13/14)k
  • F = 0 = 50k-70a, therefore 50k = 70a, therefore a = (10/14)k
  • k+a+f = k + (10/14)k + (13/14)k = (37/14)k = 1, therefore k = 14/37
  • f = (13/14)k = (13/14)(14/37), therefore f = 13/37
  • a = (10/14)k = (10/14)(14/37), therefore a = 10/37

In this case you’d actually see a pretty even mix of units, with knights being a little more common and archers a little less. If you wanted fliers to be more rare you could play around with their costs, or allow knights to do a little bit of damage to them, or something.
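As a sanity check, we can build this payoff table straight from the costs and damage fractions and solve it numerically. A sketch (the unit names and numbers are just the example above):

import numpy as np

K_COST, A_COST, F_COST = 50, 75, 100   # knights, archers, fliers

# Net gold swing for each matchup, from the row player's perspective:
# value of the enemy unit killed, minus the gold-equivalent of the
# damage the winning unit takes (cost times fraction of health lost).
payoffs = np.array([
    [0,                        A_COST - 0.2 * K_COST,    -K_COST],                # K
    [-(A_COST - 0.2 * K_COST), 0,                        F_COST - 0.4 * A_COST],  # A
    [K_COST,                   -(F_COST - 0.4 * A_COST), 0],                      # F
])

# Symmetric zero-sum: zero out every row, probabilities sum to 1.
M = np.vstack([payoffs, np.ones(3)])
b = np.array([0, 0, 0, 1])
mix, *_ = np.linalg.lstsq(M, b, rcond=None)
print(mix)   # ~[0.378 0.270 0.351] = 14/37 knights, 10/37 archers, 13/37 fliers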

Solving RPS with Asymmetric Scoring

So far we’ve assumed a game that’s symmetric: we both have the exact same set of throws, and we both win or lose the same amount according to the same set of rules. But not all intransitive games are perfectly symmetric. For example, suppose I made a Rock-Paper-Scissors variant where each round, I flip up a new card that alters the win rewards. This round, my card says that my opponent gets two points for a win with Rock, but I don’t (I would just score normally). How does this change things?

It actually complicates the situation a great deal, because now both players must figure out the probabilities of their opponents’ throws, and those probabilities may not be the same anymore! Let’s say that Player A has the double-Rock-win bonus, and Player B does not. What’s the optimal strategy for both players? And how much of an advantage does this give to Player A, if any? Let’s find out by constructing two payoff tables.

Player A’s payoff table looks like this:

      rB   pB   sB
RA     0   -1   +2
PA    +1    0   -1
SA    -1   +1    0

Player B’s payoff table looks like this:

      rA   pA   sA
RB     0   -1   +1
PB    +1    0   -1
SB    -2   +1    0

Here we can assume that RA=PA=SA and RB=PB=SB, and also that rA+pA+sA = rB+pB+sB = 1. However, we cannot assume that RA=PA=SA=RB=PB=SB=0, because we don’t actually know that the payoffs for players A and B are equal; in fact, intuition tells us they probably aren’t! We now have this intimidating set of equations:

  • RA = 2sB – pB
  • PA = rB – sB
  • SA = pB – rB
  • RB = sA – pA
  • PB = rA – sA
  • SB = pA – 2rA
  • RA = PA = SA
  • RB = PB = SB
  • rA + pA + sA = 1
  • rB + pB + sB = 1

We could do this the hard way through substitution, but an easier way is to use matrices. Here’s how it works: we rewrite the payoff tables as matrices. Here’s the first one:

RA [  0   -1   +2 ]
PA [ +1    0   -1 ]
SA [ -1   +1    0 ]

Here, the left column represents the left side of the first three equations above; the first column of numbers holds the coefficients of rB, the second column pB, and the third column sB. Two changes for clarity: first, let’s move the leftmost labels inside as a rightmost column, which will make it easier to work with; and second, since RA=PA=SA, let’s just replace them all with a single variable X, which represents the net payoff for Player A:

[  0   -1   +2   X ]
[ +1    0   -1   X ]
[ -1   +1    0   X ]

This is just a shorthand way of writing down these three equations, omitting the variable names but keeping them all lined up in the same order so that each column represents a different variable:

 0rB  -1pB  +2sB  = X
+1rB  +0pB  -1sB  = X
-1rB  +1pB  +0sB  = X

Algebra tells us we can multiply everything in an equation by a constant and it’s still true (which means we could multiply any row of the matrix by any value and it’s still valid, as long as we multiply all four entries in the row by the same amount). Algebra also tells us that we can add both sides of two equations together and the result is still true, meaning we could add each entry of two rows together and the resulting row is still a valid entry (which we could use to add to the rows already there, or even replace an existing row with the new result). And we can also rearrange the rows, because all of them are still true no matter what order we put them in. What we want to do here is put this matrix in what’s called triangular form, that is, of the form where everything under the diagonal is zeros, and the diagonals themselves (marked here with an asterisk) have to be non-zero:

[ *   ?   ?   ? ]
[ 0   *   ?   ? ]
[ 0   0   *   ? ]

So, first we reorder them by swapping the top and middle rows:

-1         +1        0          X

[           0          -1         +2        X         ]

+1        0          -1         X

To eliminate the +1 in the bottom row, we add the top and bottom rows together and replace the bottom row with that:

   -1   +1    0   X
+  +1    0   -1   X
=   0   +1   -1   2*X

Our matrix is now:

[ -1   +1    0   X   ]
[  0   -1   +2   X   ]
[  0   +1   -1   2*X ]

Now we want to eliminate the +1 on the bottom row, so we add the middle and bottom rows together and replace the bottom row with the result:

[ -1   +1    0   X   ]
[  0   -1   +2   X   ]
[  0    0   +1   3*X ]

Now we can write these in the standard equation forms and solve, going from the bottom up, using substitution:

  • +1(sB) = 3*X, therefore sB = 3*X
  • -1(pB) +2(sB) = X, therefore -1(pB)+2(3*X) = X, therefore pB = 5*X
  • -1(rB) + 1(pB) = X, therefore rB = 4*X

At this point we don’t really need to know what X is, but we do know that the ratio for Player B is 3 Scissors to 5 Paper to 4 Rock. Since sB+pB+rB = 1, this means:

rB = 4/12, pB = 5/12, sB = 3/12
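Instead of doing the row reduction by hand, you can also treat X as a fourth unknown and hand the whole system to a linear solver. A sketch in Python:

import numpy as np

# Player A's payoff table (rows RA, PA, SA vs. columns rB, pB, sB).
A = np.array([
    [ 0, -1, +2],
    [+1,  0, -1],
    [-1, +1,  0],
])

# Unknowns are [rB, pB, sB, X]: each row of A gives "payoff = X",
# i.e. A @ mix - X = 0, and the probabilities sum to 1.
M = np.zeros((4, 4))
M[:3, :3] = A
M[:3, 3] = -1      # move X to the left-hand side
M[3, :3] = 1       # rB + pB + sB = 1
b = np.array([0.0, 0, 0, 1])

rB, pB, sB, X = np.linalg.solve(M, b)
print(rB, pB, sB, X)   # 4/12, 5/12, 3/12, and X = 1/12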

We can use the same technique with the second set of equations to figure out the optimal ratio for Player A. Again, the payoff table is:

      rA   pA   sA
RB     0   -1   +1
PB    +1    0   -1
SB    -2   +1    0

This becomes the following matrix:

[  0   -1   +1   RB ]
[ +1    0   -1   PB ]
[ -2   +1    0   SB ]

Again we reorganize, and since RB=PB=SB, let’s call these all a new variable Y (we don’t use X to avoid confusion with the previous X; remember that the payoff for one player may be different from the other here). Let’s swap the bottom and top this time, along with replacing the payoffs by Y:

[ -2   +1    0   Y ]
[ +1    0   -1   Y ]
[  0   -1   +1   Y ]

To eliminate the +1 in the center row, we have to multiply the center row by 2 before adding it to the top row (or, multiply the top row by 1/2, but I find it easier to multiply by whole numbers than fractions).

   -2     +1     0    Y
+  +1*2  +0*2  -1*2   Y*2
=   0     +1    -2    Y*3

Our matrix is now:

[ -2   +1    0   Y   ]
[  0   +1   -2   Y*3 ]
[  0   -1   +1   Y   ]

Adding second and third rows to eliminate the -1 in the bottom row we get:

[ -2   +1    0   Y   ]
[  0   +1   -2   Y*3 ]
[  0    0   -1   Y*4 ]

Again working backwards and substituting:

  • sA = -Y*4
  • pA – 2sA = Y*3, therefore pA = -Y*5
  • -2rA + pA = Y, therefore -2rA = 6Y, therefore rA = -Y*3

Now, it might seem kind of strange that we get a bunch of negative numbers here when we got positive ones before. All this means is that Y itself is negative (Player B’s average payoff is a loss, as we’ll confirm shortly), so the probabilities still come out positive; in any case it all factors out, because we just care about the relative ratio of Rock to Paper to Scissors. For Player A, this is 3 Rock to 4 Scissors to 5 Paper:

rA = 3/12, pA = 5/12, sA = 4/12

This is slightly different from Player B’s optimal mix:

rB = 4/12, pB = 5/12, sB = 3/12

Now, we can use this to figure out the actual advantage for Player A. We could do this through actually making a 12×12 chart and doing all 144 combinations and counting them up using probability, or we could do a Monte Carlo simulation, or we could just plug these values into our existing equations. For me that last one is the easiest, because we already have a couple of equations from earlier that directly relate these together:

sA = -Y*4, therefore Y = -1/12

rB = X*4, therefore X = +1/12

We know that RA=PA=SA and RB=PB=SB, so this means the payoff for Player A is +1/12 and for Player B it’s -1/12. This makes a lot of sense and acts as a sanity check: since this is still a zero-sum game, we know that the payoff for A must be equal to the negative payoff for B. In a symmetric game both would have to be zero, but this is not symmetric. That said, it turns out that if both players play optimally, the advantage is surprisingly small: only one extra win out of every 12 games if both play optimally!

Solving Extended RPS

So far all of the relationships we’ve analyzed have had only three choices. Can we use the same technique with more? Yes, it just means we do the same thing but more of it.

Let’s analyze the game Rock-Paper-Scissors-Lizard-Spock. In this game, Rock beats Scissors and Lizard; Paper beats Rock and Spock; Scissors beats Paper and Lizard; Lizard beats Paper and Spock; and Spock beats Rock and Scissors. Our payoff table is (with ‘k’ for Spock since there’s already an ‘s’ for Scissors, and ‘z’ for Lizard so it doesn’t look like the number one):

      r    p    s    z    k
R     0   -1   +1   +1   -1
P    +1    0   -1   -1   +1
S    -1   +1    0   +1   -1
Z    -1   +1   -1    0   +1
K    +1   -1   +1   -1    0

We also know r+p+s+z+k=1, and R=P=S=Z=K=0. We could solve this by hand as well, but there’s another way to do this using Excel which makes things slightly easier sometimes.

First, you would enter the above matrix in a 5×5 grid of cells somewhere. You’d also need to add a 5×1 column of all 1s (or any non-zero number, really) to represent the variable X (the payoff) to the right of your 5×5 grid. Then, select a new 5×1 column that’s blank (just click and drag), and then enter this formula in the formula bar:

=MMULT(MINVERSE(A1:E5),F1:F5)

For the MINVERSE parameter, put the top left and lower right cells of your 5×5 grid (I use A1:E5 if the grid is in the extreme top left corner of your worksheet). For the final parameter (I use F1:F5 here), give the 5×1 column of all 1s. Finally, and this is important, press Ctrl+Shift+Enter when you’re done typing in the formula (not just Enter). This propagates the formula to all five cells that you’ve highlighted and treats them as a unified array, which is necessary.

One warning is that this method does not always work; in particular, if there are no solutions or infinite solutions, it will give you #NUM! as the result instead of an actual number. In fact, if you enter in the payoff table above, it will give you this error; by setting one of the entries to something very slightly different (say, changing one of the +1s to +0.999999), you will generate a unique solution that is only off by a tiny fraction, so round it to the nearest few decimal places for the “real” answer. Another warning is that anyone who actually knows a lot about math will wince when you do this, because it’s kind of cheating and you’re really not supposed to solve a matrix like that.

Excel gives us a solution of 0.2 for each of the five variables, meaning that it is equally likely that the opponent will choose any of the five throws. We can then verify that yes, in fact, R=P=S=Z=K=0, so it doesn’t matter which throw we choose; any will do just as well as any other if the opponent plays randomly with equal chances of each throw.
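If Excel’s #NUM! error annoys you, the same calculation works in Python without the 0.999999 hack: append the “probabilities sum to 1” equation as an extra row and use a least-squares solver, which doesn’t mind the singular matrix. A sketch:

import numpy as np

# Rock-Paper-Scissors-Lizard-Spock payoffs (rows/columns R, P, S, Z, K).
A = np.array([
    [ 0, -1, +1, +1, -1],
    [+1,  0, -1, -1, +1],
    [-1, +1,  0, +1, -1],
    [-1, +1, -1,  0, +1],
    [+1, -1, +1, -1,  0],
])

# MINVERSE fails because this matrix is singular; least squares with
# the sum-to-1 constraint row appended gets the answer directly.
M = np.vstack([A, np.ones(5)])
b = np.array([0, 0, 0, 0, 0, 1])
mix, *_ = np.linalg.lstsq(M, b, rcond=None)
print(mix)   # [0.2 0.2 0.2 0.2 0.2]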

Solving Extended RPS with Unequal Relationships

Not all intransitive mechanics are equally balanced. In some cases, even without weighted costs, some throws are just better than other throws. For example, let’s consider the unbalanced game of Rock-Paper-Scissors-Dynamite. The idea is that with this fourth throw, Dynamite beats Rock (by explosion), and Scissors beats Dynamite (by cutting the wick). People will argue which should win in a contest between Paper and Dynamite, but for our purposes let’s say Dynamite beats Paper. In theory this makes Dynamite and Scissors seem like really good choices, because they both beat two of the three other throws. It also makes Rock and Paper seem like poor choices, because they both lose to two of the other three throws. What does the actual math say?

Our payoff table looks like this:

      r    p    s    d
R     0   -1   +1   -1
P    +1    0   -1   -1
S    -1   +1    0   +1
D    +1   +1   -1    0

Before we go any further, we run into a problem: if you look closely, you’ll see that Dynamite is better than or equal to Paper in every situation. For every entry in the P row, the payoff is equal to or less than the corresponding entry in the D row (and likewise, every entry in the p column is worse than or equal to the d column). Both Paper and Dynamite lose to Scissors, both beat Rock, but against each other Dynamite wins. In other words, there is no logical reason to ever take Paper, because whenever you’d think about it, you would take Dynamite instead! In game theory terms, we say that Paper is dominated by Dynamite.

If we tried to solve this matrix mathematically like we did earlier, we would end up with some very strange answers, and we’d quickly find it was unsolvable (or that the answers made no sense, like a probability for r, p, s or d that was less than zero or greater than one). The reason it wouldn’t work is that at some point we would make the assumption that R=P=S=D, but in this case that isn’t true – the payoff for Paper must be less than the payoff for Dynamite, so it is an invalid assumption. To fix this, before proceeding, we must systematically eliminate all choices that are dominated. In other words, remove Paper as a choice.

The new payoff table becomes:

      r    s    d
R     0   +1   -1
S    -1    0   +1
D    +1   -1    0

We check again to see if, after the first set of eliminations, any other strategies are now dominated (sometimes a row or column isn’t strictly dominated by another, until you cross out some other dominated choices, so you do have to perform this procedure repeatedly until you eliminate everything). Again, to check for dominated strategies, you must compare every pair of rows to see if one dominates another, and then every pair of columns in the same way. Yes, this means a lot of comparisons if you give each player ten or twelve choices!
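That repeated crossing-out is easy to automate. Here’s a rough Python sketch of iterated elimination of weakly dominated rows and columns; the function and variable names are mine:

import numpy as np

def eliminate_dominated(payoffs, rows, cols):
    """Repeatedly delete weakly dominated strategies.

    `payoffs` is from the row player's perspective: row i is dominated
    if some other row is >= everywhere; column i is dominated (for the
    column player) if some other column is <= everywhere.
    """
    changed = True
    while changed:
        changed = False
        for i in range(len(rows)):
            if any(j != i and np.all(payoffs[j] >= payoffs[i])
                   for j in range(len(rows))):
                payoffs = np.delete(payoffs, i, axis=0)
                rows = rows[:i] + rows[i + 1:]
                changed = True
                break
        if not changed:
            for i in range(len(cols)):
                if any(j != i and np.all(payoffs[:, j] <= payoffs[:, i])
                       for j in range(len(cols))):
                    payoffs = np.delete(payoffs, i, axis=1)
                    cols = cols[:i] + cols[i + 1:]
                    changed = True
                    break
    return payoffs, rows, cols

rpsd = np.array([
    [ 0, -1, +1, -1],
    [+1,  0, -1, -1],
    [-1, +1,  0, +1],
    [+1, +1, -1,  0],
])
print(eliminate_dominated(rpsd, list("RPSD"), list("rpsd")))
# Paper's row and column drop out, leaving the 3x3 R/S/D game.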

In this case eliminating Paper was all that was necessary, and in fact we’re back to the same exact payoff table as with the original Rock-Paper-Scissors, but with Paper being “renamed” to Dynamite. And now you know, mathematically, why it never made sense to add Dynamite as a fourth throw.

Another Unequal Relationship

What if instead we created a new throw that wasn’t weakly dominated, but that worked a little differently than normal? For example, something that was equivalent to Scissors except it worked in reverse order, beating Rock but losing to Paper? Let’s say… Construction Vehicle (C), which bulldozes (wins against) Rock, is given a citation by (loses against) Paper, and draws with Scissors because neither of the two can really interact much. Now our payoff table looks like this:

      r    p    s    c
R     0   -1   +1   -1
P    +1    0   -1   +1
S    -1   +1    0    0
C    +1   -1    0    0

Here, no single throw is strictly better than any other, so we start solving. We know r+p+s+c=1, and the payoffs R=P=S=C=0. Our matrix becomes:

[  0   -1   +1   -1   0 ]
[ +1    0   -1   +1   0 ]
[ -1   +1    0    0   0 ]
[ +1   -1    0    0   0 ]

Rearranging the rows to get non-zeros along the diagonal, we get this by reversing the order from top to bottom:

[ +1   -1    0    0   0 ]
[ -1   +1    0    0   0 ]
[ +1    0   -1   +1   0 ]
[  0   -1   +1   -1   0 ]

Zeroing the first column by adding the first two rows, and subtracting the third from the first, we get:

[ +1   -1    0    0   0 ]
[  0    0    0    0   0 ]
[  0   -1   +1   -1   0 ]
[  0   -1   +1   -1   0 ]

Curious! The second row is all zeros (which gives us absolutely no useful information, as it’s just telling us that zero equals zero), and the bottom two rows are exactly the same as one another (which means the last row is redundant and again tells us nothing extra). We are left with only two rows of useful information. In other words, we have two equations (three if you count r+p+s+c=1) and four unknowns.

What this means is that there is actually more than one valid solution here, potentially an infinite number of solutions. We figure out the solutions by hand:

  • r-p=0, therefore r=p
  • -p+s-c=0, therefore c=s-p

Substituting into r+p+s+c=1, we get:

  • p+p+s+(s-p)=1, therefore p+2s=1, therefore p=1-2s (and therefore, r=1-2s).

Substituting back into c=s-p, we get c=s-1+2s, therefore c=3s-1.

We have thus managed to put all three other variables in terms of s:

  • p=1-2s
  • r=1-2s
  • c=3s-1

So it would seem at first that there are in fact an infinite number of solutions: choose any value for s, then that will give you the corresponding values for p, r, and c. But we can narrow down the ranges even further.

How? By remembering that all of these variables are probabilities, meaning they must all be in the range of 0 (if they never happen) to 1 (if they always happen). Probabilities can never be less than zero or greater than 1. This lets us limit the range of s. For one thing, we know it must be between 0 and 1.

From the equation c=3s-1, we know that s must be at least 1/3 (otherwise c would be negative) and s can be at most 2/3 (otherwise c would be greater than 100%). Looking instead at p and r, we know s can range from 0 up to 1/2. Combining the two ranges, s must be between 1/3 and 1/2. This is interesting: it shows us that no matter what, Scissors is an indispensable part of all ideal strategies, being used somewhere between a third and half of the time.

At the lower boundary condition (s=1/3), we find that p=1/3, r=1/3, c=0, which is a valid strategy. At the upper boundary (s=1/2), we find p=0, r=0, c=1/2. And we could also opt for any strategy in between, say s=2/5, p=1/5, r=1/5, c=1/5.
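You can verify the whole family of solutions in a few lines of Python; every mix in the family should zero out all four payoffs:

import numpy as np

# Rock-Paper-Scissors-Construction Vehicle payoffs (rows R, P, S, C).
A = np.array([
    [ 0, -1, +1, -1],
    [+1,  0, -1, +1],
    [-1, +1,  0,  0],
    [+1, -1,  0,  0],
])

# The family of solutions: r = p = 1-2s, c = 3s-1, for s in [1/3, 1/2].
for s in (1/3, 0.4, 0.45, 0.5):
    r = p = 1 - 2*s
    c = 3*s - 1
    print(s, A @ np.array([r, p, s, c]))   # all four payoffs come out ~0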

Are any of these strategies “better” than the others, such that a single one would win more than the others? That unfortunately requires a bit more game theory than I wanted to get into today, but I can tell you the answer is “it depends” based on certain assumptions about how rational your opponents are, whether the players are capable of making occasional mistakes when implementing their strategy, and how much the players know about how their opponents play, among other things. For our purposes, we can say that any of these is as good as any other, although I’m sure professional game theorists could philosophically argue the case for certain values over others.

Also, for our purposes, we could say that Construction Vehicle is probably not a good addition to the core game of Rock-Paper-Scissors, as it allows one winning strategy where the throw of C can be completely ignored, and another winning strategy where both P and R are ignored, making us wonder why we’re wasting development resources on implementing throws that may never even see play once the players are sufficiently skilled!

Solving the Game of Malkav

So far we’ve systematically done away with each of our basic assumptions: that a game has a symmetric payoff, that it’s zero-sum, that there are exactly three choices. There’s one other thing that we haven’t covered in the two-player case, and that’s what happens if the players have a different selection of choices – not just an asymmetric payoff, but an asymmetric game. If we rely on there being exactly as many throws for one player as the other, what happens when one player has, say, six different throws while their opponent has only five? It would seem such a problem would have no unique solution (there are six unknowns and only five equations, right?), but in fact it turns out we can use a more powerful technique to solve such a game uniquely, in some cases.

Let us consider a card called “Game of Malkav” from an obscure CCG that most of you have probably never heard of. It works like this: all players secretly and simultaneously choose a number. The player who played this card chooses between 1 and 6, while all other players choose between 1 and 5. Each player gains as much life as the number they choose… unless another player chose a number exactly one less, in which case they lose that much life instead. So for example if you choose 5, you gain 5 life, unless any other player chose 4. If anyone else chose 4, you lose 5 life… and they gain 4, unless someone else also chose 3, and so on. This can get pretty complicated with more players, so let’s simply consider the two-player case. Let’s also make the simplifying assumption that the game is zero-sum, and that you gaining 1 life is equivalent in value to your opponent losing 1 life (I realize this is not necessarily valid, and this will vary based on relative life totals, but at least it’s a starting point for understanding what this card is actually worth).

We might wonder: what is the expected payoff of playing this card, overall? Does the additional option of playing 6, when your opponent can only play up to 5, confer any real advantage? What is the best strategy, and what is the expected end result? In short, is the card worth playing… and if so, when you play it, how do you decide what to choose?

As usual, we start with a payoff table. Let’s call the choices P1-P6 (for the Player who played the card), and O1-O5 (for the Opponent):

      O1   O2   O3   O4   O5
P1     0   +3   -2   -3   -4
P2    -3    0   +5   -2   -3
P3    +2   -5    0   +7   -2
P4    +3   +2   -7    0   +9
P5    +4   +3   +2   -9    0
P6    +5   +4   +3   +2  -11

We could try to solve this, and there do not appear to be any dominated picks for either player, but we will quickly find that the numbers get very hairy very fast… and also that it ends up being unsolvable for reasons that you will find if you try. Basically, with 6 equations and 5 unknowns, there is redundancy… except in this case, no rows cancel, and instead you end up with at least two equations that contradict each other. So there must actually be some dominated strategies here… it’s just that they aren’t immediately obvious, because there are a set of rows or columns that are collectively dominated by another set, which is much harder to just find by looking. How do we find them?

We start by finding the best move for each player, if they knew what the opponent was doing ahead of time. For example, if the opponent knows we will throw P1, their best move is O5 (giving them a net +4 and us a net -4). But then we continue by reacting to their reaction: if the player knows the opponent will choose O5, their best move is P4. But against P4, the best move is O3. Against O3, the best move is P2. Against P2, there are two equally good moves: O1 and O5, so we consider both options:

  • Against O5, the best response is P4, as before (and we continue around in the intransitive sequence O5->P4->O3->P2->O5 indefinitely).
  • Against O1, the best response is P6. Against P6, the best response is O5, which again brings us into the intransitive sequence O5->P4->O3->P2->O1->P6->O5.

What if we start at a different place, say by initially throwing P3? Then the opponent’s best counter is O2, our best answer to that is P6, which then leads us into the O5->P4->O3->P2->O1->P6->O5 loop. If we start with P5, best response is O4, which gets the response P3, which we just covered. What if we start with O1, O2, O3, O4, O5, P2, P4 or P6? All of them are already accounted for in earlier sequences, so there’s nothing more to analyze.

Thus, we see that no matter what we start with, eventually after repeated play only a small subset of moves actually end up being part of the intransitive nature of this game because they form two intransitive loops (O5/P4/O3/P2, and O5/P4/O3/P2/O1/P6). Looking at these sequences, the only choices ever used by either player are O1, O3, O5 and P2, P4, P6. Any other choice ends up being strictly inferior: for example, at any point where it is advantageous to play P6 (that is, you are expecting a positive payoff), there is no reason you would prefer P5 instead (even if you expect your opponent to play O5, your best response is not P5, but rather P4).
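This kind of best-response chasing is mechanical enough to script. A sketch that follows the chain from any starting move (note that NumPy’s argmin breaks the P2 tie by picking O1, so it follows the longer loop):

import numpy as np

# Game of Malkav payoffs from Player P's perspective (rows P1..P6,
# columns O1..O5).
M = np.array([
    [ 0, +3, -2, -3,  -4],
    [-3,  0, +5, -2,  -3],
    [+2, -5,  0, +7,  -2],
    [+3, +2, -7,  0,  +9],
    [+4, +3, +2, -9,   0],
    [+5, +4, +3, +2, -11],
])

def best_response_chain(p_move, steps=8):
    """Alternate best responses, starting from one of P's moves."""
    chain = [f"P{p_move + 1}"]
    for _ in range(steps):
        o_move = int(np.argmin(M[p_move]))     # O minimizes P's payoff
        chain.append(f"O{o_move + 1}")
        p_move = int(np.argmax(M[:, o_move]))  # P counters O's pick
        chain.append(f"P{p_move + 1}")
    return chain

print(best_response_chain(0))
# P1 -> O5 -> P4 -> O3 -> P2 -> O1 -> P6 -> O5 -> ... (the loop)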

By using this technique to find intransitive loops, you can often reduce a larger number of choices to a smaller set of viable ones… or at worst, you can prove that all of the larger set are in fact viable. Occasionally you will find a game (Prisoner’s Dilemma being a famous example, if you’ve heard of that) where there are one or more locations in the table that are equally advantageous for both players, so that after repeated play we expect all players to be drawn to those locations; game theorists call these Nash equilibria, after the mathematician who first wrote about them. Not that you need to care.

So in this case, we can reduce the table to the set of meaningful values:

      O1   O3   O5
P2    -3   +5   -3
P4    +3   -7   +9
P6    +5   +3  -11

From there we solve, being aware that this is not symmetric. Therefore, we know that O1=O3=O5 and P2=P4=P6, but we do not know if they are all equal to zero or if one is the negative of the other. (Presumably, P2 is positive and O1 is negative, since we would expect the person playing this card to have an advantage, but we’ll see.)

We construct a matrix, using X to stand for the Payoff for P2, P4 and P6:

[ -3   +5    -3   X ]
[ +3   -7    +9   X ]
[ +5   +3   -11   X ]

This can be reduced to triangular form and then solved, the same as earlier problems. Feel free to try it yourself! I give the answer below.

Now, solving that matrix gets you the probabilities O1, O3 and O5, but in order to learn the probabilities of choosing P2, P4 and P6 you have to flip the matrix across the diagonal so that the Os are all on the left and the Ps are on the top (this is called a transpose). In this case we’d also need to make all the numbers negative, since such a matrix is from Player O’s perspective and therefore has the opposite payoffs:

[ +3   -3    -5   Y ]
[ -5   +7    -3   Y ]
[ +3   -9   +11   Y ]

This, too, can be solved normally. If you’re curious, the final answers are roughly:

P2:P4:P6 = 49% : 37% : 14%

O1:O3:O5 = 35% : 41% : 24%

Expected payoff to Player P (shown as “X” above): 0.31, and the payoff for Player O (shown as “Y”) is the negative of X: -0.31.

In other words, in the two-player case of this game, when both players play optimally, the player who initiated this card gets ahead by an average of less than one-third of a life point – so while we confirm that playing the card and having the extra option of choosing 6 is in fact an advantage, it turns out to be a pretty small one. On the other hand, the possibility of sudden large swings may make it worthwhile in actual play (or maybe not), depending on the deck you’re playing. And of course, the game gets much more complicated in multi-player situations that we haven’t considered here.
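If you’d rather not grind through the two matrices by hand, here’s a compact Python version of the whole calculation; solve_side is my own helper, not anything standard:

import numpy as np

# Reduced Game of Malkav (rows P2, P4, P6 vs. columns O1, O3, O5).
A = np.array([
    [-3, +5,  -3],
    [+3, -7,  +9],
    [+5, +3, -11],
])

def solve_side(payoffs):
    """Solve payoffs @ mix = value, sum(mix) = 1 for [mix..., value]."""
    n = len(payoffs)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = payoffs
    M[:n, n] = -1     # move the unknown payoff to the left-hand side
    M[n, :n] = 1      # probabilities sum to 1
    b = np.zeros(n + 1)
    b[n] = 1
    return np.linalg.solve(M, b)

*o_mix, X = solve_side(A)      # O's mix, and Player P's payoff X
*p_mix, Y = solve_side(-A.T)   # P's mix, and Player O's payoff Y
print(o_mix, X)   # ~[0.345, 0.414, 0.241], X = 9/29 ~ +0.31
print(p_mix, Y)   # ~[0.494, 0.368, 0.138], Y = -9/29 ~ -0.31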

Solving Three-Player RPS

So far we’ve covered just about every possible case for a two-player game, and you can combine the different methods as needed for just about any application, for any kind of two-player game. Can we extend this kind of analysis to multiple players? After all, a lot of these games involve more than just a single head-to-head, they may involve teams or free-for-all environments.

Teams are straightforward, if there’s only two teams: just treat each team as a single “player” for analysis purposes. Free-for-all is a little harder because you have to manage multiple opponents, and as we’ll see the complexity tends to explode with each successive player. Three-player games are obnoxious but still quite possible to solve; four-player games are probably the upper limit of what I’d ever attempt by hand using any of the methods I’ve mentioned today. If you have a six-player free-for-all intransitive game where each player has a different set of options and a massive payoff matrix that gives payoffs to each player for each combination… well, let’s just say it can be done, probably requiring the aid of a computer and a professional game theorist, but at this point you wouldn’t want to. One thing that game theorists have learned is that the more complex the game, the longer it tends to take human players in a lab to converge on the optimal strategies… which means for a highly complex game, playtesting will give you a better idea of how the game actually plays “in the field” than doing the math to prove optimal solutions, because the players probably won’t find the optimal solutions anyway. Thus, for a complicated system like that, you’re better off playtesting… or more likely, you’re better off simplifying your mechanics!

Let’s take a simple multi-player case: three-player Rock-Paper-Scissors. We define the rules like this: if all players make the same throw or if all players each choose different throws, we call it a draw. If two players make the same throw and the third player chooses a different one (“odd man out”), then whoever throws the winning throw gets a point from each loser. So if two players throw Rock and the third throws Scissors, each of the Rock players get +1 point and the unfortunate Scissors player loses 2 points. Or if it’s reversed, one player throwing Rock while two throw Scissors, the one Rock player gets +2 points while the other two players lose 1 point each. (The idea behind these numbers is to keep the game zero-sum, for simplicity, but you could use this method to solve for any other scoring mechanism.)

Of course, we know because of symmetry that the answer to this is 1:1:1, just like the two-player version. So let’s throw in the same wrinkle as before: wins with Rock count double (which also means, since this is zero-sum, that losses with Scissors count double). In the two-player case we found the solution of Rock=Scissors=1/4, Paper=1/2. Does this change at all in the three-player case, since there are now two opponents which make it even more dangerous to throw Scissors (and possibly even more profitable to throw Rock)?

The trick we need to use here to make this solvable is to look at the problem from a single player’s perspective, and treat all the opponents collectively as a single opponent. In this case, we end up with a payoff table that looks like this:

      rr   rp   rs   pp   ps   ss
R      0   -1   +2   -2    0   +4
P     +2   +1    0    0   -1   -2
S     -4    0   -2   +2   +1    0

You might say: wait a minute, there are only three equations here but six unknowns (two ‘r’s, two ‘p’s and two ‘s’s, one for each opponent), which means this isn’t uniquely solvable. But the good news is that this game is symmetric, so we actually can solve it, because both opponents use the same probabilities, and those probabilities get multiplied together (recall that we multiply probabilities when we need two independent things to happen at the same time). One thing to be careful of: there are actually nine possibilities for the opponents, not six, but some of them are duplicated. The actual table is like this:

      rr   rp   pr   rs   sr   pp   ps   sp   ss
R      0   -1   -1   +2   +2   -2    0    0   +4
P     +2   +1   +1    0    0    0   -1   -1   -2
S     -4    0    0   -2   -2   +2   +1   +1    0

All this means is that when using the original matrix and writing it out in longhand form, we have to remember to multiply rp, rs and ps by 2 each, since there are two ways to get each of them (rp and pr, for example). Note that I haven’t mentioned which of the two opponents is which; as I said earlier, it doesn’t matter because this game is symmetric, so the probability of any player throwing Rock or Scissors is the same as that of the other players.

This payoff table doesn’t present so well in matrix form since we’re dealing with two variables rather than one. One way to do this would be to actually split this into three mini-matrices, one for each of the first opponent’s choices, and then comparing each of those to the second opponent’s choice… then solving each matrix individually, and combining the three solutions into one at the end. That’s a lot of work, so let’s try to solve it algebraically instead, writing it out in longhand form and seeing if we can isolate anything by combining like terms:

  • Payoff for R = -2rp+4rs-2pp+4ss = 0
  • Payoff for P = 2rr+2rp-2sp-2ss = 0
  • Payoff for S = -4rr-4rs+2pp+2sp = 0
  • r+s+p=1 (as usual)

The “=0” at the end is because we know this game is symmetric and zero sum.

Where do you start with something like this? A useful starting place is usually to use r+s+p=1 to eliminate one of the variables by putting it in terms of the other, then substituting into the three Payoff equations above. Eliminating Rock (r=1-s-p) and substituting, after multiplying everything out and combining terms, we get:

  • -2ps-2p+4s = 0
  • -2p-4s+2 = 0
  • -2pp-2ps+8p+4s-4 = 0

We could isolate either p or s in the first or last equations by using the Quadratic Formula (you know, “minus b plus or minus the square root of b squared minus 4ac, all divided by 2a”). This would yield two possible solutions, although in most cases you’ll find you can eliminate one as it strays outside the bounds of 0 to 1 (which r, p and s must all lie within, as they are all probabilities).

However, the middle equation above makes our lives much easier, as we can solve for p or s in terms of the other:

  • p = 1-2s

Substituting that into the other two equations gives us the same result, which lets us know we’re probably on the right track since the equations don’t contradict each other:

  • 4ss+6s-2 = 0, which we can divide through by 2 to get 2ss+3s-1 = 0

Here we do have to use the dreaded Quadratic Formula. Multiplying everything out, we find s=(-3+/-√17)/4… that is, s is about 28.1%, or s is about -178%. Are both of these valid solutions? To find out, we evaluate p=1-2s and any other equation with r.

The negative root is clearly invalid (a probability can never be below zero), so we’re left with only a single valid solution: s≈28.1%, which gives p≈43.8% and r≈28.1% – roughly a 2:3:2 mix.

It turns out that having multiple players does have an effect on the “rock wins count double” problem, but it might not be the result we expected; with three players the mix is actually closer to 1:1:1 than the 1:2:1 we found with two players! Perhaps it’s because the likelihood of drawing, with one player choosing Rock, one choosing Paper and one choosing Scissors, makes Scissors less risky than it would be in a two-player game: even if one opponent chooses Rock, the other might choose Paper and turn your double-loss into a draw.
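Given how easy it is to slip up in the algebra here, it’s worth plugging the answer back into the three payoff equations; every payoff should come out to zero. A quick check in Python:

import numpy as np

def payoffs(r, p, s):
    """Expected payoffs for R, P, S against two opponents who each
    independently mix (r, p, s), with Rock wins counting double."""
    R = -2*r*p + 4*r*s - 2*p*p + 4*s*s
    P = 2*r*r + 2*r*p - 2*p*s - 2*s*s
    S = -4*r*r - 4*r*s + 2*p*p + 2*p*s
    return R, P, S

s = r = (np.sqrt(17) - 3) / 4   # ~0.2808
p = 1 - r - s                   # ~0.4384
print(payoffs(r, p, s))         # (~0.0, ~0.0, ~0.0)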

Summary

This week we looked at how to evaluate intransitive mechanics using math. It’s probably the most complicated thing we’ve done, as it brings together the cost curves of transitive mechanics, probability, and statistics, which is why I’m doing it at the very end of the course, only after covering those! To solve these, you go through this process:

  • Make a payoff table.
  • Eliminate all dominated choices from both players (by comparing all combinations of rows and columns and seeing if any pair contains one row or column that is strictly better or equal to another). Keep doing that until all remaining choices are viable.
  • Find all intransitive “loops” through finding the best opposing response to each player’s initial choice.
  • Calculate the payoffs of each choice for one of the players, setting the payoffs equal to the same variable X. In a zero-sum game, X for one player will be the negative of the X for the other player. In a symmetric game, X is zero, so just set all the payoffs to zero instead.
  • Add one more equation, that the probabilities of all choices sum to 1.
  • Using algebraic substitution, triangular-form matrices, Excel, or any other means you have at your disposal, solve for as many variables as you can. If you manage to learn the value of X, it tells you the expected gain (or loss) for that player. Summing all players’ X values tells you if the game is zero-sum (X1+X2+…=0), positive-sum (>0) or negative-sum (<0), and by how much overall.
  • If you can find a unique value for each choice that is between 0 and 1, those are the optimal probabilities with which you should choose each throw. For asymmetric games, you’ll need to do this individually for each player. This is your solution.
  • For games with more than two players each making a simultaneous choice, choose one player’s payoffs as your point of reference, and treat all other players as a single combined opponent. The math gets much harder for each player you add over two. After all, with two players all equations are strictly linear; with three players you have to solve quadratic equations, with four players there are cubic equations, with five players you see quartic equations, and so on.

I should also point out that the field of game theory is huge, and it covers a wide variety of other games we haven’t covered here. In particular, it’s also possible to analyze games where players choose sequentially rather than simultaneously, and games where players can negotiate ahead of time (making pleas or threats, coordinating their moves, and so on), as might be found in positive-sum games where two players can trade or otherwise cooperate to get ahead of their opponents. These are beyond the scope of this course, but if you’re interested, I’ll give a couple of references at the end.

If You’re Working on a Game Now…

Think about your game and whether it features any intransitive mechanics. If not, ask yourself whether there are any opportunities or reasons to convert some transitive mechanics into intransitive ones. For example, if you’re working on an RPG, instead of just having a sequence of weapons where each is strictly better than the previous, perhaps there’s an opportunity at one point in the game to offer the player a choice of several weapons that are all equally good overall, but each better than another in different situations.

If you do have any intransitive mechanics in your game, find the one that is the most prominent, and analyze it as we did today. Of the choices you offer the player, are any of them dominant or dominated choices? What is the expected ratio of how frequently the player should choose each of the available options, assuming optimal play? Is it what you expected? Is it what you want?

Homework

For practice, feel free to start by doing the math by hand to confirm the problems I solved here today, just to get the hang of it. When you’re comfortable, here’s a game derived from a mini-game I once saw in one of the games in the Suikoden series of RPGs (I forget which one). The actual game there used 13 cards, but for simplicity I’m going to use a 5-card deck for this problem. Here are the rules:

  • Players: 2
  • Setup: Each player takes five cards, numbered 1 through 5. A third stack of cards numbered 1 through 5 is shuffled and placed face-down as a draw pile.
  • Progression of play: At the beginning of each round, a card from the draw pile is flipped face up; the round is worth a number of points equal to the face value of that card. Both players then choose one of their own cards, and play simultaneously. Whoever played the higher card gets the points for that round; in case of a tie, no one gets the points. Both players set aside the cards they chose to play in that round; those cards may not be used again.
  • Resolution: The game ends after all five rounds have been played, or after one player reaches 8 points. Whoever has the most points wins.

It is easy to see that there is no single dominant strategy. If the opponent plays completely randomly (20% chance of playing each card), you come out far ahead by simply playing the number in your hand that matches the points each round is worth (so play your 3 if a 3 is flipped, play your 4 on a 4, etc.). You can demonstrate this in Excel by shuffling the opponent’s hand so that they are playing randomly, and comparing that strategy to the “point matching” strategy I’ve described here, and you’ll quickly find that point matching wins the vast majority of the time. (You could also compute the odds exhaustively for this, as there are only 120 ways to rearrange 5 cards, if you wanted.)
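If you’d rather skip Excel, here is a minimal Python sketch of that exhaustive check. It assumes games always run the full five rounds (ignoring the “first to 8 points” early ending), and it fixes the flip order at 1 through 5, since only the pairing of flips to opponent cards matters when you always play the matching card:

```python
from itertools import permutations

# "Point matching" vs. a randomly-playing opponent, over all 120 pairings
# of the five flips (1..5) with the opponent's five cards.
wins = losses = ties = 0
for opp_hand in permutations(range(1, 6)):
    my_score = opp_score = 0
    for flip, opp_card in zip(range(1, 6), opp_hand):
        if flip > opp_card:          # I played the matching card: flip itself
            my_score += flip
        elif opp_card > flip:
            opp_score += flip
        # equal cards tie; nobody scores
    if my_score > opp_score:
        wins += 1
    elif opp_score > my_score:
        losses += 1
    else:
        ties += 1
print(f"matching wins {wins}, loses {losses}, ties {ties} (of 120 games)")
```

Run it and you should see point matching winning the large majority of the 120 possible games, as claimed.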

Does that mean that “matching points” is the dominant strategy? Certainly not. If I know my opponent is playing this strategy, I can trounce them by playing one higher than matching on all cards, and playing my 1 card on the 5-point round. I’ll lose 5 points, but I’ll capture the other 10 points for the win. Does the “one higher” strategy dominate? No, playing “two higher” will beat “one higher”… and “three higher” will beat “two higher”, “four higher” will beat “three higher”, and “matching points” beats “four higher” – an intransitive relationship. Essentially, the goal of this game is to guess what your opponent will play, and then play one higher than that (or if you think your opponent is playing their 5, play your 1 on that).

Since each strategy is just as good as any other if choosing between those five, you might think that means you can do no worse than choosing one of those strategies at random… except that as we saw, if you play randomly, “matching points” beats you! So it is probably true that the optimal strategy is not 1:1:1:1:1, but rather some other ratio. Figure out what it is.

If you’re not sure where to start, think of it this way: on the first round there are only five strategies available: matching, one higher, two higher, three higher, or four higher. Figure out the payoff table for following each strategy across all five cards. You may shift strategies from round to round, just like rock-paper-scissors, but with no other information on the first round you only have those five choices, and each may help or hurt you depending on what your opponent does. Therefore, for the first play at least, you would start with this payoff table:

(Rows are your strategy, columns your opponent’s; each entry is your net point differential summed over the five flips.)

         matching  match+1  match+2  match+3  match+4
M            0        -5       +3       +9      +13
M+1         +5         0       -7       -1       +3
M+2         -3        +7        0       -9       -5
M+3         -9        +1       +9        0      -11
M+4        -13        -3       +5      +11        0
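If you want to sanity-check this table (or rebuild it for the 13-card version), the entries are easy to regenerate. Here is a short sketch, assuming each “N higher” strategy wraps around the deck (so, for example, “two higher” plays its 1 on the 4-point flip and its 2 on the 5-point flip):

```python
N = 5  # deck size; the original Suikoden mini-game would use 13

def card_played(flip, shift):
    # The card the "shift higher" strategy plays on a given flip, wrapping
    # around the deck (assumed convention: card = (flip - 1 + shift) % N + 1).
    return (flip - 1 + shift) % N + 1

for i in range(N):                    # my strategy: "match+i"
    row = []
    for j in range(N):                # opponent's strategy: "match+j"
        diff = 0
        for flip in range(1, N + 1):
            mine, theirs = card_played(flip, i), card_played(flip, j)
            if mine > theirs:
                diff += flip          # I take the points this round
            elif theirs > mine:
                diff -= flip          # my opponent takes them
        row.append(diff)
    print(f"M+{i}:", row)
```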

References

Here are a pair of references that I found helpful when putting together today’s presentation.

“Game Architecture and Design” (Rollings & Morris), Chapters 3 and 5. This is where I first heard of the idea of using systems of equations to solve intransitive games. I’ve tried to take things a little farther today than the authors did in this book, which of course means the book is a bit simpler and probably more accessible than what I’ve done here. And there is, you know, the whole rest of the book dealing with all sorts of other topics. I’m still in the middle of reading it, so I can’t give it a definite stamp of personal approval at this time, but neither can I say anything bad about it, so take a look and decide for yourself.

“Game Theory: A Critical Text” (Heap & Varoufakis). I found this to be a useful and fairly accessible introduction to game theory. My one warning is that, in the interest of brevity, the authors tend to define acronyms and then use them liberally in the remainder of the text. This makes it difficult to skip ahead, as you’re likely to miss a few key definitions and then run into sentences with more unrecognizable acronyms than actual words!
