Level 10: Final Boss

September 8, 2010

This Week

Welcome to the final week of the season. This week I didn’t know ahead of time what to do, so I intentionally left it as an unknown on the syllabus, a catch-all for anything interesting that might have come up over the summer. As it turns out, there are four main topics I wanted to cover today, making this the longest post of them all, so if you have limited time I suggest bookmarking this and coming back later. First I’d like to talk a bit about economic systems in games, and how to balance a system that the players themselves control through manual wealth generation and trading. Then, I’ll talk about some common multiplayer game balance problems that just didn’t fit anywhere else in the previous nine weeks. Third, I’ll get a bit technical and share a few tips and tricks in Excel. Lastly, I’ll return to last summer and this whole concept of “fun,” and how this topic of game balance fits into the bigger picture of game design, because for all of the depth we’ve gone into, it still feels like a pretty narrow topic sometimes.

Economic Systems

What is an economic system?

First, we use the word “economy” a lot, even in everyday life, so I should define it to be clear. In games, I’ll use the word “economy” to describe any in-game resource. That’s a pretty broad term, as you could take it to mean that the pieces in Chess are a “piece economy” – and I’d argue that yes, they could be thought of that way, it’s just not a particularly interesting economy because the resources can’t really be created, destroyed or transferred in any meaningful way. Most economies that we think of as such, though, have one or more of these mechanics:

  • Resource generation, where players craft or receive resources over time.
  • Resource destruction, where players either burn resources for some use in the game, or convert one type of resource to another.
  • Resource trading, where players can transfer resources among themselves, usually involving some kind of negotiation or haggling.
  • Limited zero-sum resources, so that one player generating a resource for themselves reduces the available pool of resources for everyone else.

Still, any of those elements individually might be missing and we’d still think of it as an economy.

Like the creation of level design tools, tabletop RPGs, or metrics, creating an economic system in your game is a third-order design activity, which can make it pretty challenging. You’re not just creating a system that your players experience. You’re creating a system that influences player behavior, but then the players themselves are creating another social system within your economic system, and it is the combination of the two that the players actually experience. For example, in Settlers of Catan, players are regularly trading resources between them, but the relative prices of each resource are always fluctuating based on what each individual player needs at any given time (and usually, all the players need different things, with different levels of desperation and different ability to pay higher prices). The good news is that with economies, at least, a lot of human behavior can be predicted. The other good news is that in-game economies have a lot of little “design knobs” for us designers to change to modify the game experience, so we have a lot of options. I’ll be going over those options today.

Supply and Demand

First, a brief lesson from Economics 101 that we need to be aware of is the law of supply and demand, which some of you have probably heard of. We assume the simplest case possible: an economy with one resource, a really large population of people who produce the resource and want to sell it, and another really large population of people who consume the resource and want to buy it. We’ll also assume for our purposes that any single unit of the resource is identical to any other, so consumers don’t have to worry about choosing between different “brands” or anything.

The sellers each have a minimum price at which they’re willing to part with their goods. Maybe some of them have lower production costs or lower costs of living than others, so they can accept a lower price and still stay in business. Maybe others have a more expensive storefront, or they’re just greedy, so they demand a higher minimum price. At any rate, we can draw a supply curve on a graph that says that for any given price (on the x-axis), a certain number or percentage of sellers are willing to sell at that price (on the y-axis). So, maybe at $1, only two sellers in the world can part with their goods at that price, but at $5 maybe there are ten sellers, and at $20 you’ve got a thousand sellers, and eventually if you go up to $100 every single seller would be willing to sell at that price. Basically, the only thing you need to know about supply curves is that as the price increases, the supply increases; if ten people would sell their good at $5, then at $5.01 you know that at least those ten sellers would still accept (if they sold at $5 then they would clearly sell at $5.01), and you might have some more sellers who finally break down and say, okay, for that extra penny we’re in.

Now, on the other side, it works the same but in reverse. The consumers all have a maximum price that they’re willing (or able) to pay, for whatever reason. And we can draw a demand curve on the same graph that shows for any given price, how many people are willing to buy at that price. And unlike the supply curve, the demand curve is always decreasing; if ten people would buy a good at $5, then at $5.01 you might keep all ten people if you’re lucky, or some of them might drop out and say that’s too rich for their blood, but you certainly aren’t going to find anyone who wouldn’t buy at a lower price but would buy at a more expensive price.

Now of course, in the real world, these assumptions aren’t always true. More teenagers would rather buy $50 shoes than $20 shoes, because price has social cred. And some sellers might not be willing to sell at exorbitantly high prices because they’d consider that unethical, and they’d rather sell for less (or go out of business) than bleed their customers dry. But for our purposes, we can assume that most of the time in our games, supply curves will increase and demand curves will decrease as the price gets more expensive. And here’s the cool part: wherever the two curves cross will generally turn out to be the actual market price that the players all somehow collectively agree on. Even if the players don’t know the curves, the market price will go there as if by magic. It won’t happen instantly if the players have incomplete information, but it does happen pretty fast, because players who sell at below the current market price will start seeing other people selling at higher prices (because they have to) and say, hey, if they can sell for more than I should be able to also! And likewise, if a consumer pays a lot for something and then sees the guy sitting next to them who paid half what they did for the same resource, they’re going to demand to pay a whole lot less next time.
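
If you want to see this in action without rounding up a few hundred playtesters, it’s easy enough to simulate. Here’s a minimal sketch in Python (the curve shapes are invented for illustration): scan upward through prices until supply catches up with demand, and that crossover is roughly where the market price should settle.

```python
# Minimal sketch: find where hypothetical supply and demand curves cross.
# The curve shapes are made up for illustration; in a real game you'd
# derive them from player data.

def sellers_willing(price):
    """Sellers willing to sell at this price (increases with price)."""
    return max(0, 2 * price - 5)

def buyers_willing(price):
    """Buyers willing to buy at this price (decreases with price)."""
    return max(0, 100 - 3 * price)

# Scan upward until supply meets or exceeds demand; that crossover is
# roughly where the market price settles.
price = 1
while sellers_willing(price) < buyers_willing(price):
    price += 1

print(f"Curves cross near a price of {price}")
print(f"  supply: {sellers_willing(price)}, demand: {buyers_willing(price)}")
```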

Now, this can be interesting in online games that have resource markets. If you play an online game where players can sell or trade in-game items for in-game money, see if either the developer or a fansite maintains a historical list of selling prices (not unlike a stock market price ticker). If so, you’ll notice that the prices change over time slightly. So you might wonder what the deal is: why do prices fluctuate? And the answer is that the supply and demand are changing slightly over time. Supply changes as players craft items and put them on sale, and that supply is constantly changing; and demand also changes, because at any given time a different set of players is going to be online shopping for any given item. You can see this with other games that have any kind of resource buying and selling. And because the player population isn’t infinite, these things aren’t perfectly efficient, so you get unequal amounts of each item being produced and consumed over time.

Now, that points us to another interesting thing about economies: the fewer the players, the more we’ll tend to see prices fluctuate, because a single player controls more and more of the production or consumption. This is why the prices you’ll see for one Clay in the Catan games change a lot from game to game (or even within a single game) relative to the price of some piece of epic loot in World of Warcraft.

Now, this isn’t really something you can control directly as the game designer, but at least you can predict it. It also means that if you’re designing a trading board game for 3 to 6 players, you can expect to see more drastic price fluctuations with fewer players, and you might decide to add some extra rules at the lower player counts to account for that, if a stable market is important to the functioning of your game.
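
If you want to convince yourself of this, here’s a toy simulation (all quantities invented): with fewer producers, each one’s random output swings the total supply more, so the price swings more.

```python
# Toy model: price volatility shrinks as the number of producers grows.
# All quantities here are invented for illustration.
import random
import statistics

def price_swing(num_players, rounds=1000):
    """Standard deviation of price across many rounds of random production."""
    prices = []
    for _ in range(rounds):
        # Each producer makes a random amount of the good this round.
        supply = sum(random.randint(1, 10) for _ in range(num_players))
        demand = 20 * num_players  # total demand scales with player count
        prices.append(demand / supply)
    return statistics.stdev(prices)

random.seed(42)
for n in (3, 6, 100):
    print(f"{n:>3} players: price std dev ~ {price_swing(n):.2f}")
```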

Multiple resources

Things get more interesting when we have multiple goods, because the demand curves can affect one another. For example, suppose you have two resources where one can substitute for the other: maybe one gives you +50 health, and the other gives you +5 mana that you can spend on a healing spell for +50 health. The two items have similar uses, so if one is really expensive and the other is really cheap, you can just buy the cheap one. Even if the two aren’t perfect substitutes, players may be willing to accept an imperfect substitute if its price is sufficiently below the market value of the thing they actually want, and the difference between what people will pay for the good and what they’ll pay for the substitute tells you how efficient that substitute is (that is, how “perfectly” it substitutes for the original).

On the flip side, you can also have multiple goods where the demand for one increases demand for the other, because they work better if you buy them all as a set (economists call these complements; they’re sort of the opposite of substitutes). For example, in games where collecting a complete set of matching gear gives your character a stat bonus, or where you can turn in one of each resource for a bonus, a greater demand for one resource pulls up the demand for all of the others… and once a player has some of the resources in the set, their demand for the others will increase even more because they’re already part of the way there.

By creating resources that are meant to be perfect or imperfect substitutes, or several resources that naturally “go together” with each other, you can change the demand (and therefore the market price) of each of them.
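
As a concrete sketch of the substitute case, here’s some hypothetical Python where a mana potion substitutes imperfectly for a health potion. The item names and the SUBSTITUTE_EFFICIENCY number are made up, but that number plays the same role as the price gap described above: it measures how “perfect” the substitute is.

```python
# Hypothetical cross-demand sketch: a buyer wants healing, and a mana
# potion is an imperfect substitute for a health potion.

SUBSTITUTE_EFFICIENCY = 0.8  # invented: how "perfect" the substitute is, 0..1

def willingness_to_pay(healing_value, item):
    if item == "health_potion":
        return healing_value
    if item == "mana_potion":  # imperfect substitute
        return healing_value * SUBSTITUTE_EFFICIENCY
    raise ValueError(item)

def best_buy(healing_value, prices):
    """Pick whichever item gives the most surplus (value minus price).

    For simplicity, we assume the buyer always buys one of the two."""
    surpluses = {item: willingness_to_pay(healing_value, item) - price
                 for item, price in prices.items()}
    return max(surpluses, key=surpluses.get)

# If the substitute is cheap enough, buyers defect to it, which in a real
# market would drag the health potion's price back down.
print(best_buy(50, {"health_potion": 45, "mana_potion": 30}))  # mana_potion
print(best_buy(50, {"health_potion": 35, "mana_potion": 30}))  # health_potion
```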

Marginal pricing

As we discussed a long time ago with numeric systems, sometimes demand is a function of how much of a good you already have. If you have none of a particular resource, the first one might be a big deal for you, but if you already have a thousand of that resource then one more isn’t as meaningful, so demand may actually be on a decreasing curve based on how many of the thing you already own. Or, if larger quantities of a resource can be used more efficiently for bigger bonuses, then collecting one unit of a resource may actually increase your demand for more of it. The same is true on the supply side, where producing large amounts of a given resource might be more or less expensive per unit than producing small amounts. You can add these kinds of mechanics in order to influence the price; for example, if you give increasing returns for each additional good that a player possesses, you’ll tend to see the game quickly organize into players going for monopolies of individual goods, since once a player has the majority of a good in the game they’re going to want to buy up the rest. Conversely, if players get decreasing returns when they collect a lot of one good, then also giving a decreasing per-unit production cost might make a lot of sense if you want the price of that good to be a little more stable.
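
Here’s a quick sketch of what decreasing marginal value might look like in practice. The base value and decay rate are invented; the shape of the curve is the point.

```python
# Sketch of marginal value under decreasing returns (numbers invented).
# Each additional unit is worth less than the one before, so a player's
# willingness to pay depends on how many they already hold.

def marginal_value(units_owned, base_value=100, decay=0.7):
    """Value of the NEXT unit, given how many you already own.

    A decay above 1.0 would model increasing returns instead, which is
    the monopoly-seeking case described above."""
    return base_value * (decay ** units_owned)

for owned in (0, 1, 5, 10):
    print(f"own {owned:>2}: next unit worth ~ {marginal_value(owned):.1f}")
```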

Scarcity

I probably don’t need to tell you this, but if the total goods are limited, that increases demand. You see this exploited all the time in marketing, when a company wants you to believe that they only have limited quantities of something, so that you’ll buy now (even at a higher price) because you don’t want to miss your chance. So you can really change the feeling of a game just by changing whether a given resource is limited or infinite.

As an example, consider a first-person-view shooting video game where you have limited ammunition. First, imagine it is strictly limited: you get what you find, but that’s it. A game like that feels more like a survival-horror game, where the player uses their ammo cautiously, because they never know when they’ll find extra or when they’ll run out. Compare that to a game with enemies that respawn in each area, random item drops, and stores where you can sell the random drops and buy as much extra ammo as you need. In a game like that, a player is going to be a lot more willing to experiment with different weapons, because they know they’ll get all of their ammo back when they reach the next ammo shop, which makes the game feel more like a typical FPS. Now compare that with a game where you have completely unlimited ammo, so it’s not even a resource or an economy anymore, where you can expect the player to be shooting more or less constantly, like some of the more recent action-oriented FPSs. None of these methods is “right” or “wrong,” but they all give very different player experiences; my point is just that the more limited a good is, the more you increase demand for it (and decrease the desire to actually consume it now, because you might need it later).

If the resources of your game drive players towards a victory condition, making the resource limited is a great way to control game length. For example, in most RTS games, the board has a limited number of places where players can mine a limited amount of resources, which they then use to create units and structures on the map. Since the core resources required to produce units are themselves limited, eventually players will run out, and once they run out the players will be unable to produce any more units, giving the game a natural “time limit” of sorts. By adjusting the amounts of resources on the map, you can control when this happens; if the players drain the board dry of resources in the first 5 minutes, you’re going to have pretty short games… but if it takes an hour to deplete even the starting resources near your start location, then the players will probably come to a resolution through military force before resource depletion forces the issue, and the fact that they’re limited at all is just there to avoid an infinite stalemate, essentially to place an upper limit on game length.
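
The arithmetic here is simple enough to check on a napkin, or in a few lines of Python. All the numbers below are hypothetical, but this is the kind of back-of-the-envelope calculation you’d do when tuning map resources against a target game length.

```python
# Back-of-the-envelope game-length check (all numbers hypothetical):
# how long until the map runs dry at full harvesting capacity?

resources_per_node = 1500
nodes_on_map       = 8
players            = 4
workers_per_player = 6
harvest_per_worker = 1.0  # resources per second, per worker

total_resources = resources_per_node * nodes_on_map
drain_per_sec   = players * workers_per_player * harvest_per_worker

minutes = total_resources / drain_per_sec / 60
print(f"Map runs dry after ~{minutes:.1f} minutes")  # tune nodes to taste
```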

With multiplayer games in a closed economy, you also want to be very careful with strictly limited goods, because there is sometimes the possibility that a single player will collect all of a good, essentially preventing anyone else from using it, and you should decide as a designer if that should be possible, if it’s desirable, and if not what you can do to prevent it. For example, if resources do no good until a player actually uses them (and using them puts them back in the public supply), then this is probably not going to be a problem, because the player who gets a monopoly on the good has incentive to spend them, which in turn removes the monopoly.

Open and closed economies

In systems, we say a system is “open” if it can be influenced by things from outside the system itself, and it is “closed” if the system is completely self-contained. Economies are systems, and an open economy has different design considerations than a closed economy.

Most game economies are closed systems; you can generate or spend money within the game, but that’s it, and in fact some people get very uncomfortable if you try to change it to an open system: next time you play Monopoly, try offering one of your opponents a real-world cash dollar in exchange for 500 of their Monopoly dollars as a trade, and see what happens — at least one other player will probably become very upset!

Closed systems are a lot easier to manage from a design standpoint, because we have complete control as designers over the system, we know how the system works, and we can predict how changes in the system will affect the game. Open economies are a lot harder, because we don’t necessarily have control over the system anymore.

A simple example of an open economy in a game is in Poker if additional player buy-ins are allowed. If players can bring as much money as they want to the table, a sufficiently rich player could have an unfair advantage; if skill is equal, they could just keep buying more chips until the luck in the game turns their way. To solve this balance problem, usually additional buy-ins are restricted or disallowed in tournament play.

Another place where this can be a problem is CCGs, where a player who spends more money can buy more cards and build a larger collection. Ideally, for the game to be balanced, we would want larger collections to give players more options but not more power, which is why I think rarity shouldn’t be a factor in the cost curve of such a game, at least if you want to maximize your player base. If more money always wins, you set up an in-game economy with a minimum bar of real-world spending required to be competitive; and since supply and demand curves operate in the real world too, the higher that required buy-in is, the fewer people will be willing to pay (and thus the smaller your player base).

There are other games where you can buy in-game stuff with real-world cash; this is a typical pattern for free-to-play MMOs and Facebook games, and developers have to be careful with exactly what the player can and can’t buy; if the player can purchase an advantage over their opponents, especially in games that are competitive by nature, that can make the game very unbalanced very quickly. (It’s less of an issue in games like FarmVille where there isn’t really much competition anyway.)

Some designers intentionally unbalance their game in this way, assuming that if they create a financial incentive to gain a gameplay advantage, players will pay huge amounts. To be fair, some of them do, and if the game itself isn’t very compelling in its core mechanics, this might be the only thing you can fall back on to make money; but if you set out to do this from the start, I would call it lazy design. A better method is to create a game that’s actually worth playing for anyone, and then offer to trade money for time (so, maybe you get your next unlock after another couple of hours of gameplay, and the gameplay is fun enough that you can do that without feeling like the game is arbitrarily forcing you to grind… but if you want to skip ahead by paying a couple bucks, we’ll let you do that). In this way, money doesn’t give any automatic gameplay advantage; it just speeds up the progression that’s already there.

One final example of an open economy is in any game, most commonly MMOs, where players can trade or “gift” resources within the game, because in any of those cases you can be sure a secondary economy will emerge where players will exchange real-world money for virtual stuff. Just google “World of Warcraft Gold” and you’ll probably find a few hundred websites where you can either purchase Gold for real-world cash, or sell Gold to them and get paid in cash. There are a few options you can consider if you’re designing an online game like this with any kind of trade mechanic:

  • You could just say that trading items for cash is against your Terms of Service, and that any player found to have done so will have their account terminated. This is mostly a problem because it’s a huge support headache: you get all kinds of players complaining to you that their account was banned, and just sending them a form email with the TOS still takes time. In some cases, like Diablo where there isn’t really an in-game trading mechanism and instead the players just drop stuff on the ground and then go pick it up, it can also be really hard to track this. And if it is easy to track (because trades are centralized somewhere), but you really don’t want people to buy in-game goods for cash, you should ask yourself why the trading system you designed and built allows it in the first place.
  • You could say that an open economy is okay, but you don’t support it, so if someone takes your money and doesn’t give you the goods, it’s the player’s problem and not the developer’s. Unfortunately, it is still the developer’s problem, because you will receive all kinds of customer support emails from players claiming they were scammed, and whether you fix it or not you still have to deal with the email volume. If you don’t fix it, then you have to accept that you’re going to lose some customers and generate some ill will in the community. If you do fix it, then accept that players will think of you as a “safety net,” which actually makes them more likely to get scammed, since they’ll extend trust on the assumption that if the other person isn’t honest, support will fix it for them. Trying to enforce sanctions against scammers is an unwinnable game of whack-a-mole.
  • You can formalize trading within your game, including the ability to accept cash payments. The good news is that players have no excuses; my understanding is that when Sony Online did this for some of their games, the huge win for them was something like a 40% reduction in customer support costs, which can be significant for a large game. The bad news is that you will want to contact a lawyer on this, to make sure you don’t accidentally run afoul of any national banking laws, since you are now storing players’ money.

You’ll also want to consider whether players are allowed to sell their entire character, password and all. For Facebook games this is less of an issue because a Facebook account links to all the games and it’s not so easy for a player to give that away. For an MMO where each player has an individual account on your server that isn’t linked to anything else, this is something that will happen, so you need to decide how to deal with that. (On the bright side, selling a whole character doesn’t unbalance the game.)

In any case, you again want to make sure that whatever players can trade in game does not unbalance the game if a single player uses cash to buy lots of in-game stuff. One common pattern to avoid this is to place restrictions on items, for example maybe you can purchase a really cool suit of armor but you have to be at least Level 25 to wear it.

Inflation

Now, remember from before that the demand curve is based on each player’s maximum willingness to pay for some resource. Normally we’d like to think of the demand curve as this fixed thing, maybe fluctuating slightly if a different set of players or situations happen to be online, but over time it should balance out. But there are a few situations that can permanently shift the demand curve in one direction or another, and the most important for our purpose is when each player’s maximum willingness to pay increases.

Why would your maximum willingness to pay change? Mostly, because your purchasing power changes. If I doubled your income overnight and Starbucks raised the price of its coffee from $5 to $6, you would probably still pay the new price if you liked their coffee before, because now you can afford it.

How does this work in games? Consider a game with a positive-sum economy: that is, it is possible for me to generate wealth and goods without someone else losing them. The cash economy in the board game Monopoly is like this, as we’ve discussed before; so is the commodity economy in Catan, as is the gold economy in most MMOs. This means that over time, players get richer. With more total money in the economy (and especially, more total money per player on average), we see what is called inflation: the demand curve shifts to the right as more people are willing to pay higher prices, which then increases the market price of each good to compensate.

In Catan, this doesn’t affect the balance of the game; by the time you’re in the late game and willing to trade vast quantities of stuff for what you need, you’re at the point where you’re so close to winning that no one else is willing to trade with you anyway. In Monopoly the main problem, as I mentioned earlier, is that the economy is positive-sum but the object of the game is to bankrupt your opponents; here we see that one possible solution to this is to change the victory condition to “be the first player to get $2500” or something like that. In MMOs, inflation isn’t a problem for the existing players, because after all they are willing to pay more; however, it is a major problem for new players, who enter the game to find that they’re earning one gold piece for every five hours of play, and anything worth having in the game costs millions of gold, and they can never really catch up because even once they start earning more money, inflation will just continue. So if you’re running an MMO where players can enter and exit the game freely, inflation is a big long-term problem you need to think about. There are two ways to fix this: reduce the positive-sum nature of the economy, or add negative-sum elements to counteract the positive-sum ones.

Negative-sum elements are sometimes called “money sinks,” that is, some kind of mechanism that permanently removes money from the player economy. The trick is balancing the sinks against the sources so that on average they cancel out; a good way to know is to actually take metrics on the total sum of money in the game and the average money per person, and track those over time to see if they’re increasing or decreasing (there’s a small sketch of this after the list below). Money sinks take many forms:

  • Any money paid to NPC shopkeepers for anything, especially if that something is a consumable item that the player uses and then it’s gone for good.
  • Any money players have to pay as maintenance and upkeep; for example, having to pay gold to repair your weapon and armor periodically.
  • Losing some of your money (or items or stats or other things that cost money to replace) when you die in the game.
  • Offering limited quantities of high-status items, especially if those items are purely cosmetic in nature and not something that gives a gameplay advantage, which can remove large amounts of cash from the economy when a few players buy them.
  • While I don’t know of any games that do this, it works in real life: have an “adventurer’s tax” that all players have to pay as a periodic percentage of their wealth. This not only gives an incentive to spend, but it also penalizes the players who are most at fault for the inflation. Another alternative would be to actually redistribute the wealth, so instead of just removing money from the economy, you could transfer some money from the richest players and distribute it among the poorest; that on its own would be zero-sum and wouldn’t necessarily fix the inflation problem, but it would at least give the newer players a chance to catch up and increase their wealth over time.
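
Here’s the metrics sketch I mentioned above: a toy faucet-and-sink model (all rates invented) that tracks total and average gold over time. With the sink in place, the money supply levels off; set SINK_RATE to zero and it grows without bound, which is the inflation problem in miniature.

```python
# Toy faucet-and-sink economy: track total and average gold over time
# to see whether the economy inflates or stabilizes. All rates invented.
import random

PLAYERS     = 100
FAUCET_MEAN = 50    # avg gold per player per day from quests, loot, etc.
SINK_RATE   = 0.04  # fraction of each player's gold drained per day by
                    # repairs, consumables, taxes, NPC purchases...

gold = [1000.0] * PLAYERS
for day in range(1, 91):
    for i in range(PLAYERS):
        gold[i] += random.uniform(0, 2 * FAUCET_MEAN)  # faucet: money in
        gold[i] -= gold[i] * SINK_RATE                 # sink: money destroyed
    if day % 30 == 0:
        total = sum(gold)
        print(f"day {day:>2}: total gold {total:>11,.0f}, "
              f"average per player {total / PLAYERS:>7,.0f}")
```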

To reduce the positive-sum nature of the economy is a bit harder, because players are used to going out there, killing monsters and getting treasure drops. If you make the monsters limited (so they don’t respawn) then the world will become depopulated of monsters very quickly. If you give no rewards, players will wonder why they’re bothering to kill monsters at all. In theory you could do something like this:

  • Monsters drop treasure but not gold, and players can’t sell or trade the treasure that’s dropped; so it might make their current equipment a little better, but that’s about it.
  • Players receive gold from completing quests, but only the first time each quest, so the gold they have at any given point in the game is limited. Players can’t trade gold between themselves.
  • Players can use the gold to buy special items in shops, so essentially it is like the players have a choice of what special advantages to buy for their character.

One other, final solution here is the occasional server reset, when everyone loses everything and has to start over. This doesn’t solve the inflation problem – a new player coming in at the end of a cycle has no chance of catching up – but it does at least mean that if they wait for everything to reset they’ll have as good a chance as anyone else after the reset.

Trading

Some games use trading and bartering mechanics extensively within their economies. Trading can be a very interesting mechanic if players have a reason to trade; usually that reason is that you have multiple goods, and each good is more valuable to some players than others. In Settlers of Catan, sheep are nearly useless to you if you want to build cities, but they’re great for settlements. In Monopoly, a single color property isn’t that valuable, but it becomes much more powerful if you own the other matching ones. In World of Warcraft, a piece of gear that can’t be equipped by your character class isn’t very useful to you, no matter how big the stat bonuses are for someone else. By giving each player an assortment of things that are better for someone else than for them, you give players a reason to trade resources.

Trading mechanics usually serve as a negative-feedback loop, especially within a closed economy. Players are generally more willing to offer favorable trades to those who are behind, while they expect to get a better deal from someone who is ahead (or else they won’t trade at all).

There are a lot of ways to include trading in your game; it isn’t as simple as just saying “players can trade”… but this is a good thing, because it gives you a lot of design control over the player experience. Here are a few options to consider:

  • Can players make deals for future actions as part of the trade (“I’ll give you X now for Y now and Z later”)? If so, are future deals binding, or can players renege?
    • Disallowing future deals makes trades simpler; with future considerations, players can “buy on credit” so to speak, which tends to complicate trades. On the other hand, it also gives the players a lot more power to strike interesting deals.
    • If future deals are non-binding, players will tend to be a lot more cautious and paranoid about making them. Think about whether you want players to be inherently mistrustful and suspicious of each other, or whether you want to give players every incentive to find ways of cooperating.
  • Can players only trade certain resources but not others? For example, in Catan you can trade resource cards but not victory points or progress cards; in Monopoly you can trade anything except developed properties; in some other games you can trade anything and everything.
    • Resources that are tradable are of course a lot more fluid than ones that aren’t. Some resources may be so powerful (like Victory Points) that no one in their right mind would want to trade them, so simply making them untradeable stops the players from ruining the game by making bad trades.
  • Can players only trade at certain times? In Catan you can only trade with the active player before they build; in Bohnanza there is a trading phase as part of each player’s turn; in Monopoly you can trade with anyone at any time.
    • If players can trade at any time, consider if they can trade at “instant speed” in response to a game event, because sometimes the ability to react to game events can become unbalanced. For example, in Monopoly you could theoretically avoid Income Tax by trading all of your stuff to another player, then taking it all back after landing on the space, and you could offer the other player a lesser amount (say, 5%) in exchange for the service of providing a tax shelter. In short, be very clear about exactly when players can and can’t trade.
    • If trading events in the game are infrequent (say, you can only trade every few turns or something), expect trading phases to take longer, as players have had more time to amass tradable resources so they will probably have a lot of deals to make.
    • If this is a problem, consider adding a timer where players only have so much time to make deals within the trading phase.
  • Does the game require all trades to be even (e.g. one card for one card) or are uneven trades allowed (Catan can have uneven numbers of cards, but at least one per side; other games might allow a complete “gift” of a trade)?
    • Requiring even trades places restrictions on what players can do and will reduce the number of trades made, but it will also cause trading to move along faster because there’s less room for haggling, and there’s also less opportunity for one weak player to make a bad trade that hands the game to someone else.
    • I could even imagine a game where uneven trades are enforced: if you trade at all, someone must get the shaft.
  • Are trades limited in quantity, or unlimited? A specific problem here is the potential for the “kingmaker” problem, where one player realizes they can’t win, but the top two players are in a close game, and the losing player can choose to “gift” all of their stuff to one of the top two players to allow one of them to win. Sometimes social pressure prevents people from doing something like this, but you want to be very careful in tournament situations and other “official” games where the economic incentive of prizes might trump good sportsmanship (I actually played in a tournament game once where top prize was $20, second prize was $10, and I was in a position to decide who got what, so I auctioned off my pieces to the highest bidder.)
  • Are trades direct, or indirect? Usually a trade just happens, I give you X and you give me Y, but it’s also possible to have some kind of “trade tax” where maybe 10% of a gift or trade is removed and given to the bank, for example, to limit trades. This seems strange – why offer trading as a mechanic at all, if you’re then going to disincentivize it? But in some games trading may be so powerful (if two players form a trading coalition for their mutual benefit, allowing them both to pull ahead of all other players, for example) that you might need to apply some restrictions just to prevent trades from dominating the rest of the game.
  • Is there a way for any player to force a trade with another player against their will? Trades usually require both players to agree on the terms, but you can include mechanisms that allow one player to force a trade on another under certain conditions. For instance, in a set collection game, you might allow a player to force-trade a more-valuable single item for a less-valuable one from an opponent, once per game.

Auctions

Auction mechanics are a special case of trading, where one player auctions off their stuff to the other players, or where the “bank” creates an item out of thin air and it is auctioned to the highest-bidding player. Auctions often serve as a self-balancing mechanic, in that the players are ultimately deciding how much something is worth; if you don’t know how to price something, you can put it up for auction and let the players decide. (However, this is lazy design; auctions work best when the actual value is variable, different between players, and situational, so that figuring out how much an item is worth changes from game to game and the players are actually making interesting choices each time. With an effect that is always worth the same amount, the “auction” is meaningless once players figure out how much it’s worth; they’ll just bid what it’s worth and be done with it.)

Auctions are a very pure form of “willingness to pay” because each player has to decide what they’re actually willing to pay so that they can make an appropriate bid. A lot of times there are meta-considerations: not just “how much do I want this for myself” but also “how much do I not want one of my opponents to get it because it would give them too much power” or even “I don’t want this, but I want an opponent to pay more for it, so I’ll bid up the price and take the chance that I won’t get stuck with it at the end.”

An interesting point with auctions: you might assume that if the auction goes to the highest bidder, the item up for auction sells for the highest willingness to pay among all of the bidders – that’s certainly what the person auctioning the item wants. But in reality, the actual auction price usually lands somewhere between the highest and second-highest willingness to pay, and in fact it’s usually closer to the second-highest, although that depends on the auction type: sometimes you end up selling for much lower.
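
Here’s a tiny sketch of that claim for the open (ascending) auction described below, assuming every bidder drops out once the price passes their limit. That assumption is a simplification, since real bidders bluff and get carried away, but it shows why the winner usually pays just above the runner-up’s limit.

```python
# Sketch: in an open ascending auction, the item tends to sell for one
# increment above the SECOND-highest willingness to pay.
import random

def open_auction_price(willingness, increment=1):
    """Everyone keeps raising until the price passes their personal limit."""
    highest, second = sorted(willingness, reverse=True)[:2]
    # Bidding stops when the runner-up drops out, so the winner pays just
    # above the second-highest limit (capped by their own limit).
    return min(second + increment, highest)

random.seed(7)
wtp = sorted((random.randint(10, 100) for _ in range(5)), reverse=True)
print(f"willingness to pay, high to low: {wtp}")
print(f"expected sale price: {open_auction_price(wtp)}")
```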

Just as there are many kinds of trading, there are also many kinds of auctions. Here are a few examples:

  • Open auction. This is the type most people think of when they think of auctions, where any player can call a higher bid at any time, and when no other bids happen someone says “going once, going twice, sold.” If everyone refuses to bid beyond their own maximum willingness to pay, the person with the highest willingness will purchase the item for one unit more than the second-highest willingness, making this auction inefficient (in the sense that you’d ideally want the item to go for the highest price), but as we’ll see that is a problem with most auctions.
  • Fixed price auction. In turn order, each player is offered the option to purchase the item or decline. It goes around until someone accepts, or everyone declines. This gives an advantage to the first player, who gets the option to buy (or not) before anyone else – and if it’s offered at less than the first player’s willingness to pay, they get to keep the extra in their own pocket, so how efficient this auction is depends on how well the fixed price is chosen.
  • Circle auction. In turn order, each player can either make a bid (higher than the previous one) or pass. It goes around once, with the final player deciding whether to bid one unit higher than the current highest bid, or let the other player take it. This gives an advantage to the last player, since it is a fixed-price auction for them and they don’t have to worry about being outbid, so they may be able to offer less than their top willingness to pay.
  • Silent auction. Here, everyone secretly and simultaneously chooses their bid, all reveal at once, and highest bid wins. You need to include some mechanism of resolving ties, since sometimes two or more players will choose the same highest bid. This can often have some intransitive qualities to it, as players are not only trying to figure out their own maximum willingness to pay, but also other players’ willingness. If the item for auction is more valuable for you than the other players, you may bid lower than your maximum willingness to pay because you expect other players’ bids to be lower, so you expect to bid low and still win.
  • Dutch auction. These are rare in the States, as they require some kind of special equipment. You have an item that starts for bid at a high price, and there’s some kind of timer that counts down the price at a fixed rate (say, dropping by $1 per second, or something). The first player to accept at the current price wins. In theory this means that as soon as the price hits the top player’s maximum willingness to pay, they should accept, but there may be some interesting tension if they’re willing to wait (and possibly lose out on the item) in an attempt to get a better price. If players can “read” each others’ faces in real-time to try to figure out who is interested and who isn’t, there may be some bluffing involved here.

Even once you decide on an auction format, there are a number of ways to auction items:

  • The most common is that there’s a single item up for auction at a time; the top bidder receives the item, and no one else gets anything.
  • Sometimes an entire set of items is auctioned off at the same time in draft form: top bid simply gets first pick, then second-highest bidder, and so on. The lowest bidder gets the one thing no one else wanted… or sometimes they get nothing at all, if you want to give players some incentive to bid higher. In other words, even if a player doesn’t particularly want any given item, they may be willing to pay a small amount in order to avoid getting stuck with nothing. Conversely, if you want to give players an incentive to save their money by bidding zero, giving the last-place bidder a “free” item is a good way to do that – but of course if multiple players bid zero, you’ll need some way of breaking the tie.
  • If it’s important to have negative feedback on auction wins, so that a single player doesn’t win too many auctions in a row, one way to do that is to give everyone who didn’t win (or even just the bottom bidder) a bonus toward winning the next auction.
  • Some auctions are what are called negative auctions because they work in reverse: instead of something good happening to the highest bidder, something bad happens to the lowest bidder. In this case players are bidding for the right to not have something bad happen to them. This can be combined with other auctions: if the top bidder takes something from the bottom bidder, that gives players an incentive to bid high even if they don’t want anything. The auction game Fist of Dragonstones had a really interesting variant of this, where the top bidder takes something from the second highest bidder, meaning that if you bid for the auction at all you wanted to be sure you won and didn’t come in second place! On the other hand, if only one person bids for it, then everyone else is in second place, and the bidder can choose to take from any of their opponents, so sometimes it can be dangerous to not bid as well.

Even once you decide who gets what, there are several ways to define who pays their bid:

  • The most common for a single-item auction is that the top bidder pays their bid, and all other players spend nothing.
  • If the top two players pay their bid (but the top bidder gets the item and the second-highest bidder gets nothing), making low or medium bids suddenly becomes very dangerous, turning the auction into an intransitive mechanic where you either want to bid higher than anyone else, or low enough that you don’t lose anything. This is most common in silent auctions, where players are never sure of exactly what their opponents are bidding. If you do something like this with an open auction things can get out of hand very quickly, as each of the top two bidders is better off paying the marginal cost of outbidding their opponent than losing their current stake – for example, if you auction off a dollar bill in this way and (say) the top bid is 99 cents and the next highest is 98 cents, the second-highest bidder has an incentive to bid a dollar (which lets them break even rather than lose 98 cents)… which then gives incentive to the 99-cent bidder to paradoxically bid $1.01 (because in such a situation they’re only losing one cent rather than 99 cents), and if both players follow this logic they could be outbidding each other indefinitely!
  • The top bidder can win the auction but only pay an amount equal to the second-highest bid. Game theory tells us, through some math I won’t repeat here, that in a silent auction with these rules, the best strategy is to bid your maximum willingness to pay (there’s a quick simulation of this after the list below).
  • In some cases, particularly when every player gets something and highest bid just chooses first, you may want to have all players pay their bid. If only the top player gets anything, that makes it dangerous to bid if you’re not hoping to win – although in a series of such auctions, a player may choose to either go “all in” on one or two auctions to guarantee winning them, or else they may spread them out and try to take a lot of things for cheap when no one else bids.
  • If only the top and bottom bidders pay, players may again have incentive to bid higher than normal, because they’d be happy to win but even happier if they don’t have to lose anything. You may want to force players to bid a certain minimum, so there is always at least something at stake to lose (otherwise a player could bid zero and pay no penalty)… although if zero bids are possible, that makes low bids appear safer as there’s always the chance someone will protect you from losing your bid by bidding zero themselves.
  • If everyone pays their bid except the lowest bidder, that actually gives players an incentive to bid really high or really low.
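
Here’s the quick simulation I promised for the pay-the-second-highest-bid rule. The specific values are arbitrary, but under these rules an honest bid beats shading your bid in either direction.

```python
# Check the second-price ("Vickrey") claim by brute force: bidding your
# true value does at least as well as bidding above or below it.
import random

def surplus(my_bid, my_value, other_bids):
    """Sealed-bid auction where the winner pays the SECOND-highest bid."""
    if my_bid > max(other_bids):
        return my_value - max(other_bids)  # win: pay the runner-up's bid
    return 0                               # lose: pay nothing

random.seed(1)
MY_VALUE = 60
trials = [[random.randint(1, 100) for _ in range(4)] for _ in range(10_000)]

for label, bid in (("honest bid", 60), ("shade down", 45), ("shade up", 75)):
    avg = sum(surplus(bid, MY_VALUE, others) for others in trials) / len(trials)
    print(f"{label} ({bid}): average surplus {avg:.2f}")
```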

Even if you know who has to pay what bid, you have to decide what happens to the money from the auction:

  • Usually it’s just paid “to the bank” – that is, it’s removed from the economy, leading to deflation.
  • It could be paid to some kind of holding block that collects auction money, and is then redistributed to one or more players later in the game when some condition is met.
  • The winning bid may also be paid to one or more other players, making the auction partially or completely zero-sum, in a number of ways. Maybe the bid is divided and evenly split among all other players. The board game Lascaux has an interesting bid mechanic: each player pays a chip to stay in the auction, in turn order, and on their turn a player can choose instead to drop out of the auction and take all of the chips the players have collectively paid so far. The auction continues with the remaining players, thus it is up to the player if it’s a good enough time to drop out (and gain enough auction chips to win more auctions later) or if it’s worth staying in for one more go-round (hoping everyone else will stay in, thus increasing your take when you drop out), or even continuing to stay in with the hopes of winning the auction.

Lastly, no matter what kind of auction there is, you have to decide what happens if no one bids:

  • It could be that one of the players gets the item for free (or at minimal cost). If that player is known in advance to everyone, the default state is one of their opponents getting a bonus, which often gives the other players an incentive to open the bidding just to prevent that. If players don’t know who it is (say, a random player gets the item for free), then players may be more inclined not to bid, since they have just as good a chance as anyone else.
  • As an alternative, the auction could have additional incentives added, and then repeated. If one resource is being auctioned off and no one wants it, a second resource could be added, and then the set auctioned… and then if no one wants that, add a third resource, and so on until someone finally thinks it’s worth it.
  • Or, what usually happens is the item is thrown out, no one gets it, and the game continues from there as if the auction never happened.

Needless to say, there are a lot of considerations when setting up an in-game economy! Like most things, there are no right or wrong answers here, but hopefully I’ve at least given you a few different options to consider, and the implications of those.

Solving common problems in multiplayer

This next section didn’t seem to fit anywhere else in the course, so I’m mentioning it here; if it seems out of place, that’s why. In multiplayer free-for-all games where there can only be one winner, a few problems come up pretty frequently. Depending on the game, each of these can act as a balancing force or as an imbalance, but they usually aren’t much fun, so you want to be very careful of them.

Turtling

One problem, especially in war games or other games where players attack each other directly, is that getting in a fight with another player – even if you win – still weakens both of you relative to everyone else. The wise player reacts to this by doing their best to not get in any fights, instead building up their defenses to make themselves a less tempting target, and then swooping in to mop up the pieces once all the other players have fought each other into a weakened state. The problem here is that the system is essentially rewarding the players for not interacting with each other, and if the interaction is the fun part of the game then you can hopefully see why this is something that needs fixing.

The game balance problem is that attacking – you know, actually playing the game – is not the optimal strategy. The most direct solution is to reward or incentivize aggression. A simple example is the board game RISK, where attackers and defenders both lose armies, so normally you’d want to just not attack; the game goes to great lengths to discourage turtling by giving incentives to attack. Controlling more territories means you get more armies next turn if you hold onto them, the same goes for continent bonuses, and let’s not forget the cards you can turn in for armies – but you only get a card if you attack.

Another solution is to force the issue by making it essentially impossible to not attack. As an example, Plague and Pestilence and Family Business are both light card games where you draw 1 then play 1. A few cards are defensive in nature, but most cards hurt opponents, and you must play one each turn (choosing a target opponent, even), so before too long you’re going to be forced to attack someone else – it’s simply not possible to avoid making enemies.

Kill the leader and Sandbagging

One common problem in games where players can directly attack each other, especially when it’s very clear who is in the lead, is that everyone by default will gang up on the leader. On the one hand, this can serve as a useful negative feedback loop to your game, making sure no one gets too much ahead. On the other hand, players tend to overshoot (so the leader isn’t just kept in check, they’re totally destroyed), and it ends up feeling like a punishment to be doing well.

As a response to this problem, a new dynamic emerges, which I’ve seen called sandbagging. The idea is that if it’s dangerous to be the leader, then you want to be in second place. If a player is doing well enough that they’re in danger of taking the lead, they will intentionally play suboptimally in order to not make themselves a target. As with turtling, the problem here is that players aren’t really playing the game you designed, they’re working around it.

The good news is that a lot of things have to happen in combination for this to be a problem, and you can break the chain of events anywhere to fix it.

  • Players need a mechanism to join forces and “gang up” on a single player; if you make it difficult or impossible for players to form coalitions or to coordinate strategies, attacking the leader is impossible. In a foot race, players can’t really “attack” each other, so you don’t see any kill-the-leader strategies in marathons. In an FPS multiplayer deathmatch, players can attack each other, but the action is moving so fast that it’s hard for players to work together (or really, to do anything other than shoot at whoever’s nearby).
  • Or, even if players can coordinate, they need to be able to figure out who the leader is. If your game uses hidden scoring or if the end goal can be reached a lot of different ways so that it’s unclear who is closest, players won’t know who to go after. Lots of Eurogames have players keep their Victory Points secret for this reason.
  • Or, even if players can coordinate and they know who to attack, they don’t need to if the game already has built-in opportunities for players to catch up. Some Eurogames have just a few defined times in the game where players score points, with each successive scoring opportunity worth more than the last, so in the middle of a round it’s not always clear who’s in the lead… and players know that even the person who got the most points in the first scoring round has only a minor advantage at best going into the final scoring round.
  • Or, even if players can coordinate and they know who to attack, the game’s systems can make this an unfavorable strategy or it can offer other strategies. For example, in RISK it is certainly arguable that having everyone attack the leader is a good strategy in some ways… but on the other hand, the game also gives you an incentive to attack weaker players, because if you eliminate a player from the game you get their cards, which gives you a big army bonus.
  • Or, since kill-the-leader is a negative feedback loop, the “textbook solution” is to add a compensating positive feedback loop that helps the leader to defend against attacks. If you want a dynamic where the game starts equal but eventually turns into one-against-many, this might be the way to go.

If you choose to remove the negative feedback of kill-the-leader, one thing to be aware of is that if you were relying on this negative feedback to keep the game balanced, it might now be an unbalanced positive feedback loop that naturally helps the leader, so consider adding another form of negative feedback to compensate for the removal of this one.

Kingmaking

A related problem is when one player is too far behind to win, but they are in a position to decide which of two other people wins. Sometimes this happens directly – in a game with trading and negotiation, the player who’s behind might just make favorable trades to one of the leading players in order to hand them the game. Sometimes it happens indirectly, where the player who’s behind has to make one of two moves as part of the game, and it is clear to everyone that if they make one move then one player wins, and another move causes another player to win.

This is undesirable because it’s anticlimactic: the winner didn’t actually win because of superior skill, but instead because one of the losing players liked them better. Now, in a game with heavy diplomacy (like the board game Diplomacy) this might be tolerable; after all, the game is all about convincing other people to do what you want. But in most games, even the winner feels like the win wasn’t really deserved, so the game designer generally wants to avoid this situation.

As with kill-the-leader, there are a lot of things that have to happen for kingmaking to be a problem, and you can eliminate any of them:

  • The players have to know their standing. If no player knows who is winning, who is losing, and what actions will cause one player to win over another, then players have no incentive to help out a specific opponent.
  • The player in last place has to know that they can’t win, and that all they can do is help someone else to win. If every player believes they have a chance to win, there’s no reason to give away the game to someone else.
  • Or, you can reduce or eliminate ways for players to affect each other. If the person in last place has no mechanism to help anyone else, then kingmaking is impossible.

Player elimination

A lot of two-player games are all about eliminating your opponent’s forces, so it makes sense that multi-player games follow this pattern as well. The problem is that when one player is eliminated and everyone else is still playing, that losing player has to sit and wait for the game to end, and sitting around not playing the game is not very fun.

With games of very short length, this is not a problem. If the entire game lasts two minutes and you’re eliminated with 60 seconds left to go, who cares? Sit around and wait for the next game to start. Likewise, if player elimination doesn’t happen until late in the game, this is not usually a problem. If players in a two-hour game start dropping around the 1-hour-50-minute mark, relatively speaking it won’t feel like a long time to wait until the game ends and the next one can begin. It’s when players can be eliminated early and then have to sit around and wait forever that you run into problems.

There are a few mechanics that can deal with this:

  • You can change the nature of your player elimination, perhaps disincentivizing players from eliminating their opponents, so that the only time a player will actually do this is when they feel they’re strong enough to eliminate everyone and win the game. The board game Twilight Imperium makes it exceedingly dangerous to attack your opponents because a war can leave you exposed, thus players tend to not attack until they feel confident that they can come out ahead, which doesn’t necessarily happen until late game.
  • You can also change the victory condition, removing elimination entirely; if the goal is to earn 10 Victory Points, instead of eliminating your opponents, then players can be so busy collecting VP that they aren’t as concerned with eliminating the opposition. The card game Illuminati has a mechanism for players to be eliminated, but the victory condition is to collect enough cards (not to eliminate your opponents), so players are not eliminated all that often.
  • One interesting solution is to force the game to end when the first player is eliminated; thus, instead of victory going to the last player standing, it goes to the player in best standing (by some criteria) when the first player drops out. If players can help each other, this creates some tense alliances as one player nears elimination; the player in the lead wants that player eliminated, while everyone else actually wants to help that losing player stay in the game! The card game Hearts works this way, for example. The video game Gauntlet IV (for Sega Genesis) also did something like this in its multiplayer battle mode, where as soon as one player was eliminated a 60-second countdown timer started, and the round would end even if several players were still alive.
  • You can also give the eliminated players something to do in the game after they’re gone. Perhaps there are some NPCs in the game that are normally moved according to certain rules or algorithms, but you can give control of those to the eliminated players (my game group added this as a house rule in the board game Wiz-War, where an eliminated player would take control of all monsters on the board). Cosmic Encounter included rules for a seventh player beyond the six that the game normally supports, by adding “kibitzing” mechanics where the seventh player can wander around, look at people’s hands, and give them information… and they have a secret goal to try to get a specific other player to win, so while they are giving away information they also may be lying. In Mafia/Werewolf and other variants, eliminated players can watch the drama unfold; even though they can’t interact, the game is fun to observe, so most players don’t mind taking on the “spectator” role.

Excel

Every game designer really needs to learn Excel at some point. Some of you probably already use it regularly, but if you don’t, you should learn your way around it, so consider this a brief introduction to how Excel works and how to use it in game design, with a few tricks from my own experience thrown in. For those of you who are already Excel experts, I beg your patience, and hope I can show you at least one or two little features that you didn’t know before. Note: I’m assuming Excel 2003 for PC; the exact key combinations I list below may vary for you if you’re using a different version or platform.

Excel is a spreadsheet program, which means absolutely nothing to you if you aren’t a financial analyst, so an easier way of thinking about Excel is that it’s a program that lets you store data in a list or a grid. At its most basic, you can use it to keep things like a grocery list or to-do list in a column, and if you want to include a separate column to keep track of whether it’s done or not, then that’s a perfectly valid use (I’ve worked with plenty of spreadsheets that are nothing more than that, for example a list of art or sound assets in a video game and a list of their current status). Data in Excel is stored in a series of rows and columns, where each row has a number, each column has a letter, and a single entry is in a given row and column. Any single entry location is called a cell (as in “cell phone” or “terrorist cell”), and is referred to by its column letter followed by its row number (like “A1” or “B19”). You can navigate between cells with arrow keys or by clicking with the mouse.

Entering data into cells

In general, each cell can hold one of three things: a number, written text, or a computed formula. Numbers are pretty simple, just type in a number in the formula bar at the top and then hit Enter, or click the little green checkmark if you prefer. Text is also simple, just type the text you want in the same way. What if you want to include text that looks like a number or formula, but you want Excel to treat it as text? Start the entry with a single apostrophe (‘) and then you can type anything you want, and Excel will get the message that you want it to treat that cell as text.

For a formula, start the entry with an equal sign (=) and follow with whatever you want computed. Most of the time you just want simple arithmetic, which you can do with the +, -, * and / characters. For example, typing =1+2 and hitting Enter will display 3 in the cell. You can also reference other cells: =A1*2 will take the contents of cell A1, multiply by 2, and display the result in whatever cell holds the formula. And the really awesome part about this is that if you change the value in A1, any formulas that reference it will change automatically, which is the main thing Excel does that saves you so much time. In fact, even if you insert new rows that change the actual name of the cell you’re referencing, Excel will change your formulas to update what they’re referencing.
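To make that concrete, here’s a tiny game-flavored example (the cell layout and numbers here are just mine for illustration, not anything standard):

  A1: 30          damage per hit of some weapon
  B1: 1.5         attacks per second
  C1: =A1*B1      damage per second; displays 45

Change A1 to 40 and C1 immediately shows 60, with no other work on your part. An entire game balance spreadsheet is really just this idea repeated at a larger scale.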

Adding comments

Suppose you want to leave a note to yourself about something in one of the cells. One way to do this is just to put the note as text in a neighboring cell, although as you’ll see, getting a lot of text to display in one of those tiny cells isn’t trivial, and there are times when a visible note isn’t practical anyway. For those cases you can instead use the Comment feature to attach a comment to the cell, which shows up as a little red triangle in the corner of the cell. Mousing over the cell reveals the comment.

Moving data around

Cut, copy and paste work pretty much as you’d expect them to. You can even click and drag, or hold Shift while moving around with the arrow keys, to select a rectangular block of cells… or click on one of the row or column headings to select everything in that row or column… or click on the corner between the row and column headings (or hit Ctrl-A) to select everything. By holding Ctrl down and clicking on individual cells, you can select several cells that aren’t even next to each other.

Now, if you paste a cell containing a formula a whole bunch of times, a funny thing happens: you’ll notice that any cells that are referenced in the formula keep changing. For example, if you’ve got a formula in cell B1 that references A1, and you copy B1 and paste into D5, you’ll notice the new formula references C5 instead. That’s because by default, all of these cell references are relative in position to the original. So when you reference A1 in your formula in B1, Excel isn’t actually thinking “the cell named A1”… it’s thinking “the cell just to the left of me in the same row.” So when you copy and paste the formula somewhere else, it’ll start referencing the cell just to the left in the same row. As you might guess in this example, if you paste this into a cell in column A (where there’s nothing to the left because you’re already all the way on the left), you’ll see an error: #REF!, which means you’re referencing a cell that doesn’t exist.

If you want to force Excel to treat a reference as absolute, so that it references a specific cell no matter where you copy or paste to, there are two ways to do it. First is to use a dollar sign ($) before the column letter or row number or both in the formula, which tells Excel to treat either the row or column (or both) as an absolute position. For our earlier example, if you wanted every copy-and-paste to look at A1, you could use $A$1 instead.

Why do you need to type the dollar sign twice in this example? Because it means you can treat the row as a relative reference while the column is absolute, or vice versa. There are times when you might want to do this which I’m sure you will discover as you use Excel, if you haven’t already.
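Here’s one hypothetical case where mixed references earn their keep: a table of enemy hit points scaled by difficulty level. Assume this layout (my own invention for the example):

  B1: 1      C1: 2      D1: 3        difficulty multipliers, across row 1
  A2: 10     A3: 20     A4: 50       base hit points, down column A
  B2: =$A2*B$1                       base HP times multiplier

The $A locks the column (always read the base value from column A) while the $1 locks the row (always read the multiplier from row 1). Copy B2 into the whole block B2:D4 and every cell computes the right product; D4 becomes =$A4*D$1 and displays 150.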

There’s another way to reference a specific, named cell in an absolute way, which is mostly useful if you’re using several cells in a bunch of complicated formulas and it’s hard to keep straight in your head which cell is which value when you’re writing the formulas. You can give any cell a name; by default the name is just the cell’s column letter and row number, and it’s displayed in the Name Box at the top left of the Excel window. To change it, just click on that name and then type in whatever you want. Then you can reference that name anywhere else in the spreadsheet and it’ll be an absolute reference to that named cell.

Sorting

Sometimes you’ll want to sort the data in a worksheet. A common use of Excel is to keep track of a bunch of objects, one per row, with each attribute listed in a separate column. For example, maybe on a large game project you’ll have an Excel file that lists all of the enemies in a game, with the name in one column, hit points in another, damage in another, and so on. And maybe you want to sort by name just so you have an easy-to-lookup master list, or maybe you want to sort by hit points to see the largest or smallest values, or whatever. This is pretty easy to do. First, select all the cells you want to sort. Go to the Data menu, and choose Sort. Next, tell it which column to sort by, and whether to sort ascending or descending. If you’ve got two entries in that column that are the same, you can give it a second column as a tiebreaker, and a third column as a second tiebreaker if you want (otherwise it’ll just preserve the existing order among tied entries). There’s also an option for ignoring the header row, so if you have a header with column descriptions at the very top and you don’t want that sorted… well, you can just not select it when sorting, of course, but sometimes it’s easier to just select the whole spreadsheet and click the button to ignore the header row. If you accidentally screw up when sorting, don’t panic – just hit Undo.

Sometimes you realize you need to insert a few rows or columns somewhere, in between others. The nice thing about this is that Excel updates all absolute and relative references just the way you’d want it to, so you should never have to change a value or formula or anything just because you inserted a row. To insert, right-click on the row or column heading, and “insert row” or “insert column” is one of the menu choices. You can also insert them from the Insert menu.

If you need to remove a row or column it works similarly, right-click on the row or column heading and select “delete.” You might think you could just hit the Delete key on the keyboard too, but that works differently: it just clears the values of the cells but doesn’t actually shift everything else up or left.

Using your data

Sometimes you’ve got a formula you want copied and pasted into a lot of cells all at once. My checkbook, for example, has a formula on each row to compute my current balance by adding or subtracting the current transaction from the previous balance, and I want that computation on every line. All I had to do was write the formula once… but if I had to manually copy then paste into each individual cell, I’d cry. Luckily, there’s an easy way to do this: Fill. Just select the one cell you want to propagate, and a whole bunch of other cells below or to the right of it, then hit Ctrl+D to take the top value and propagate it down to all the others (Fill Down), or Ctrl+R to propagate the leftmost value to the right. If you want to fill down and right, you can just select your cell and a rectangular block of cells below and to the right of it, then hit Ctrl+D and Ctrl+R in any order. You can also Fill Up or Fill Left if you want, but those don’t have hotkeys; you’ll have to select those from the Edit menu under Fill. As with copying and pasting, Fill respects absolute and relative references to other cells in your formulas.
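As a concrete sketch of that checkbook (the layout here is just my own):

  B1: 100        starting balance
  A2: -40        transaction amount (negative for a withdrawal)
  B2: =B1+A2     running balance; displays 60

Select B2 down through as many rows as you need and hit Ctrl+D; each copy shifts its references down one row, so every balance builds on the one above it, exactly as the relative-reference rules described earlier.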

There’s a related command to Fill, which is useful in a situation like the following example: suppose you’re making a list of game objects and you want to assign each one a unique ID number, and that number is going to start at 1 and then count upwards from there, and let’s say you have 200 game objects. So in one column, you want to place the numbers 1 through 200, each number in its own cell. Entering each number manually is tedious and error-prone. You could use a formula: say, put the number 1 in the first cell, let’s say it’s cell A2, and then in A3 put the formula =A2+1 (which computes to 2), then Fill Down that formula to create the numbers all the way down to 200. And that will work at first, but whenever you start reordering or sorting rows, all of these cells referencing each other can get out of whack, and it might work or it might not but it’ll be a big mess regardless. Besides, you don’t really want a formula in those cells anyway; you want a number.

You could create the 200 numbers by formula on a scratch area somewhere, then copy, then Paste Special (under the Edit menu), and select Values, which just takes the computed values and pastes them in as numbers without copying the formulas. And then you just delete the formulas that you don’t need anymore. That would work, and Paste Special / Values is an awesome tool for a lot of things, but it’s overkill here.

Here’s a neat little trick: just create two or three cells and put the numbers 1, 2, 3 in them. Now, select the three cells, and you’ll notice there’s a little black square dot in the lower right corner of the selection. Click on that, and drag down a couple hundred rows. When you release the mouse button, Excel takes its best guess what you were doing, and fills it all in. For something simple like counting up by 1, Excel can figure that out, and it’ll do it for you. For something more complicated you probably won’t get the result you’re looking for, but you can at least have fun trying and seeing what Excel thinks you’re thinking.

Functions

Excel comes with a lot of built-in functions that you can use in your formulas. Functions are always written in capital letters, followed by an open-parenthesis, then any parameters the function might take in (this varies by function), then a close-parenthesis. If there are several parameters, they are separated by commas (,). You can embed functions inside other ones, so one of the parameters of a function might actually be the result of another function; Excel is perfectly okay with that.

Probably the single function I use more than any other is SUM(), which takes any number of parameters and adds them together. So if you wanted to sum all of the cells from A5 to A8, you could say =A5+A6+A7+A8, or you could say =SUM(A5,A6,A7,A8), or you could say =SUM(A5:A8). The last one is the most useful; use a colon between two cells to tell Excel that you want the range of all cells in between those. You can even do this with a rectangular block of cells by giving the top-left and bottom-right corners: =SUM(A5:C8) will add up all twelve cells in that 3×4 block.

The second most useful function for me is IF, which takes in three parameters. The first is a condition that evaluates to either a true or false value. The second parameter is evaluated and returned if the condition is true. The third parameter is evaluated and returned if the condition is false. The third parameter is optional; if you leave it out and the condition is false, the cell will just appear blank instead. For example, you could say: =IF(A1>0,1,5) which means that if A1 is greater than zero, this cell’s value is 1, otherwise it’s 5. One of the common things I use with IF is the function ISBLANK(), which takes a cell and returns true if the cell is blank, or false if it isn’t. You can use this, for example, if you’re using one column as a checklist and you want to set a cell in another column to a certain value when something hasn’t been checked. If you’re making a checklist and want to know how many items have (or haven’t) been checked off, by the way, there’s also the function COUNTBLANK(), which takes a range of cells as its one parameter and returns the number of cells that are blank.
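A couple of illustrative formulas, assuming a made-up asset list with names in column A and an “x” typed into column B when each one is done:

  C2: =IF(ISBLANK(B2),"TODO","done")     flags each unfinished asset
  E1: =COUNTBLANK(B2:B100)               counts how many assets remain unchecked

Fill C2 down alongside your list and you get an at-a-glance status column for free.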

For random mechanics, look back in the week where we talked about pseudorandomness to see my favorite three functions for that: RAND() which takes no parameters at all and returns a pseudorandom number from 0 to 1 (it might possibly be zero, but never one). Changing any cell or pressing F9 causes Excel to reroll all randoms. FLOOR() and CEILING() will take a number and round it down or up to the nearest whole number value, or you can use ROUND() which will round it normally.

FLOOR() and CEILING() both require a second parameter, the multiple to round to; for most cases you want this to be 1, since you want it rounding to the nearest whole number, but if you want it to round up or down to the nearest 5, or the nearest 0.1, or whatever, then use that as your second parameter instead. Just to be confusing, ROUND() also takes a second parameter, but it works a little differently. For ROUND(), if the second parameter is zero (which you normally want) then it will round to the nearest whole number. If the second parameter is 1, it rounds to the nearest tenth; if 2, the nearest hundredth; if 3, the nearest thousandth; and so on – in other words, the second parameter for ROUND() is the number of digits after the decimal point to keep.
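Putting those together, here’s a sketch of how I’d simulate a six-sided die (any cell will do):

  A1: =FLOOR(RAND()*6,1)+1

RAND()*6 gives a number from 0 up to (but not including) 6; FLOOR with a significance of 1 chops it down to a whole number from 0 to 5; adding 1 gives 1 to 6. Press F9 a few times to watch it reroll. For 2d6, add two of these expressions together rather than multiplying one of them by 2 (doubling a single die would only ever give even numbers).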

RANK() and VLOOKUP(), I already mentioned back in Week 6; they’re useful for when you need to take a list and shuffle it randomly.

Multiple worksheets

By default, a new Excel file has three worksheet tabs, shown at the bottom left. You can rename these to something more interesting than “Sheet1” by just double-clicking the tab, typing a name, and hitting Enter. You can also reorder them by clicking and dragging, if you want a different sheet to be on the left or in the middle. You can add new worksheets or delete them by right-clicking on the worksheet tab, or from the Insert menu. Ctrl+PgUp and Ctrl+PgDn provide a convenient way to switch between tabs without clicking down there all the time, if you find yourself going back and forth between two tabs a lot.

The reason to create multiple worksheets is mostly for organizational purposes; it’s easier sometimes if you’ve got a bunch of different but related systems to put each one in its own worksheet rather than having to scroll around all over the place to find what you’re looking for on a single worksheet.

You can actually reference cells in other worksheets in a formula, if you want. The easiest way to do it is to type in the formula until you get to the place where you’d type in the cell name, then use your mouse to click on the other worksheet, then click on the cell or cells you want to reference. One thing to point out here is that it’s easy to do this by accident: you’re entering something in a cell and don’t realize you haven’t finished, and then you click on another cell to see what’s there, and instead it starts adding that cell to your formula. If that happens, or you otherwise feel lost halfway through entering a formula, just hit the red X button to the left of the formula bar and it’ll undo any typing you did just now.
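For reference, the syntax Excel builds when you do this is the sheet name, an exclamation point, then the cell. So if you had a worksheet named Stats (my example name), a formula on another sheet could read:

  =Stats!B2*2

If the sheet name contains spaces, Excel wraps it in single quotes, like ='Enemy Stats'!B2.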

Graphing

One last thing that I find really useful is the ability to create a graph, which is great for visualizing how the numbers behind your game objects relate to each other. Select two or more rows or columns (each one becomes a separate curve), then select the Insert menu, then Chart. Select XY (Scatter) and then the subtype that shows curvy lines. From there, go through the wizard to select whatever options you want, then click Finish and you’ll have your chart.

One thing you’ll often want to do with graphs is to add a trendline; right-click on any single data point on the graph, and select Add Trendline. You’ll have to tell it whether the trendline should be linear, or exponential, or polynomial, or what. On the Options tab of the trendline wizard, you can also have it display the equation on the chart so you can actually see what the best-fit curve is, and also display the R-squared value (which is just a measure of how close the fitted curve is to the actual data; R-squared of 1 means the curve is a perfect fit, R-squared of 0 means it may as well be random… although in practice, even random data will have an R-squared value of more than zero, sometimes significantly more). If you’re trying to fit a curve, as happens a lot when analyzing metrics, you’ll probably want to add these right away.

Another thing you should know is that by default, the charts Excel makes are… umm… really ugly. Just about everything you can imagine to make the display better, you can do: adding vertical and not just horizontal lines on the graph, changing the background and foreground colors of everything, adding labels on the X and Y axes, changing the ranges of the axes and labeling them… it’s all there somewhere. Every element of the graph is clickable and selectable on its own, and generally if you want to change something, just right-click on it and select Format, or else double-click it. Just be aware that each individual element – the gridlines, the graphed lines, the legend, the background, the axes – is treated separately, so if you don’t see a display option it probably just means you have the wrong thing selected. Play around with the formatting options and you’ll see what I mean.

Making things look pretty

Lastly, there are a few things you can do to make your worksheets look a little bit nicer, even without graphs – even when it’s just a grid of cells. Aside from making things look more professional, it also makes it look more like you know what you’re doing 🙂

The most obvious thing you can do is mess with the color scheme. You can change the text color and background color of any cell; the buttons are on a toolbar in the upper right (at least they are on my machine; maybe they aren’t for you, if not just add the Formatting toolbar and it’s all on there). You can also make a cell Bolded or Italicized, left or right justified, all the other things you’re used to doing in Word. Personally, I find it useful to use background color to differentiate between cells that are just text headings (no color), cells where the user is supposed to change values around to see what effect they have on the rest of the game (yellow), and cells that are computed values or formulas that should not be changed (gray), and then Bolding anything really important.

You also have a huge range of possible ways to display numbers and text. If you select a single cell, a block of cells, an entire row or column, or even the whole worksheet, then right-click (or go to the Format menu) and select Format Cells, you’ll have a ton of options at your disposal. The very first tab lets you say if this is text or a number, and what kind. For example, you can display a number as currency (with or without a currency symbol like a dollar sign or something else), or a decimal (to any number of places).

On the Alignment tab are three important features:

  • Orientation lets you display the text at an angle, even sideways, which can make your column headings readable if you want the columns themselves to be narrow. (Speaking of which – you can adjust the widths of columns and heights of rows just by clicking and dragging between two rows or columns, or right-clicking and selecting Column Width or Row Height).
  • Word Wrap does exactly what you think it does, so that the text is actually readable. Excel will gleefully expand the row height to fit all of the text, so if the column is narrow and you’ve got a paragraph in there, you’ll probably end up with part of a word per line and the whole mess will be unreadable; adjust the column width before turning Word Wrap on.
  • Then there’s a curious little option called Merge Cells, which lets you convert Excel’s pure grid form into something else. To use it, select multiple cells, then Format Cells, then click the Merge Cells option and click OK. You’ll see that all the cells you selected are now a single giant uber-cell. I usually use this for cosmetic reasons, like if you’ve got a list of game objects and each column is some attribute, and you’ve got a ton of attributes but you want to group them together… say, you have some offensive attributes and some defensive ones, or whatever. You could create a second header row above the individual headers, merge the cells over each group, and have a single cell that says (for example) “defensive attributes”. Excel actually has a way to do this automatically in certain circumstances, called Pivot Tables, but that’s a pretty advanced thing that I’ll leave to you to learn on your own through google if you reach a point where you need it.

One thing you’ll find sometimes is that you have a set of computed cells, and while you need them to be around because you’re referencing them, you don’t actually need to look at the cells themselves – they’re all just intermediate values. One way to take care of this is to stick them in their own scratch worksheet, but an easier way is to stick them in their own row or column and then Hide the row or column. To do that, just right-click on the row or column, and there’s a Hide option. Select that and the row or column will disappear, but you’ll see a thick line between the previous and next columns, a little visual signal to you that something else is still in there that you just can’t see. To display it again, select the rows or columns on either side, right-click and select Unhide.

If you want to draw a square around certain blocks of data in order to group them together visually, another button on the Formatting toolbar lets you select a border. Just select a rectangle of cells, then click on that and select the border that looks like a square, and it’ll put edges around it. The only thing I’ll warn you is that when you copy and paste cells with borders, those are copied and pasted too under normal conditions, so don’t add borders until you’re done messing around (or if you have to, just remove the borders, move the cells around, then add them back in… or use Paste Special to only paste formulas and not formatting).

Another thing that I sometimes find useful, particularly when using a spreadsheet to balance game objects, is conditional formatting. First select one or more cells, rows or columns, then go to the Format menu and select Conditional Formatting. You first give it a condition which is either true or false. If it’s true, you can give it a format: a different font or text color, font effects like bold or italic, borders or background colors, that sort of thing. If the condition isn’t true, then the formatting isn’t changed. In the conditional formatting dialog, there’s an “Add” button at the bottom where you can add up to two other conditions, each with its own individual formatting. These are not cumulative: the conditions are evaluated in order, and the first one that’s satisfied determines the formatting. As an example of how I use this, if I’m making a game with a cost curve, I might have a single column that adds up the numeric benefits minus costs of each game object. Since I want the benefits and costs to be equivalent, this should be zero for a balanced object (according to my cost curve), positive if it’s overpowered and negative if it’s underpowered. In that column, I might use conditional formatting to turn the background of a cell bright green if benefits minus costs is greater than zero, or red if it’s less than zero, so I can immediately get a visual status of how many objects are still not balanced right.
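Here’s a small sketch of that cost-curve column (the layout and numbers are hypothetical):

  D2: 5          sum of the object’s benefits, per your cost curve
  E2: 5          sum of its costs
  F2: =D2-E2     balance; 0 means the object is on the curve

Select column F, open Conditional Formatting, and set Condition 1 to “Cell Value Is / greater than / 0” with a green background, then Add a second condition, “Cell Value Is / less than / 0,” with red. Any cell left uncolored is balanced.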

Lastly, in a lot of basic spreadsheets you just want to display a single thing, and you’ve got a header row along the top and a header column on the left, and the entire rest of the worksheet is data. Sometimes that data doesn’t fit on a single page, but as you scroll down you forget which column is which. Suppose you want the top row or two to stay put, always displayed no matter how far you scroll down, so you can always see the headings. To do that, select the first row below your header row, then go to the Window menu and select Freeze Panes. You’ll see a little line appear just about where you’d selected, and you’ll see it stay even if you scroll down. To undo this, for example if you selected the wrong row by accident, go to the Window menu and select Unfreeze Panes (and then try again). If you want instead to keep the left columns in place, select the column just to the right, then Freeze Panes again. If you want to keep the leftmost columns and topmost rows in place, select a single cell and Freeze Panes, and everything above or to the left of that cell is locked in place now.

About that whole “Fun” thing…

At this point we’ve covered just about every topic I can think of that relates to game balance, so I want to take some time to reflect on where balance fits into the larger field of game design. Admittedly, the ultimate goal of game design depends on the specific game, but in what I’d say is the majority of cases, the goal of the game designer is to create a fun experience for the players. How does balance fit in with this?

When I was a younger designer, I wanted to believe the two were synonymous. A fun game is a balanced game, and a balanced game is a fun game. I’m not too proud to say that I was very wrong about this. I encountered two games in particular that were fun in spite of being unbalanced, and these counterexamples changed my mind.

The first was a card game that I learned as Landlord (although it has many other names, some more vulgar than others); the best known is probably a variant called The Great Dalmuti. This is a deliberately unbalanced game. Each player sits in a different position, with the positions forming a definite progression from best to worst. Players in the best position give their worst cards to those in the worst position at the start of the round, and the worst-position players give their best cards to those in the best position, so the odds are strongly in favor of the people at the top. At the end of each hand, players reorder themselves based on how they did in the round, so the top player takes top seat next round. This is a natural positive feedback loop: the people at the top have so many advantages that they’re likely to stay there, while the people at the bottom have so many disadvantages that they’re likely to stay there as well. As I learned it, the game never ends, you just keep playing hand after hand until you’re tired of it. In college my friends and I would sometimes play this for hours at a time, so we were clearly having a good time in spite of the game being obviously unbalanced. What’s going on here?

I think there are two reasons here. One is that as soon as you learn the rules, it is immediately obvious to you that the game is not fair, and in fact that the unfairness is the whole point. It’s not that fairness and balance are always desirable; it’s that when players expect a fair and balanced game and then get one that isn’t, the game doesn’t meet their expectations. Since this game sets the expectation of unfairness up front, by choosing to play at all you have already decided that you are willing to explore an unbalanced system.

Another reason why this game doesn’t fail is that it has a strong roleplaying dynamic, which sounds strange because this isn’t an RPG… but at the same time, players in different seats do have different levels of power, so some aspect of roleplaying happens naturally in most groups. The players at the top are having fun because “it’s good to be king.” The players at the bottom are also having fun because there’s a thrill of fighting against the odds, striking a blow for the Little Guy in an unfair system, and every now and then one of the players at the bottom ends up doing really well and suddenly toppling the throne, and that’s exciting (or one of the guys on top crashes and falls to the bottom, offering schadenfreude for the rest of the players). For me it’s equally exciting to dig my way out from the bottom, slowly and patiently, over many hands, and eventually reaching the top (sort of like a metaphor for hard work and retirement, I guess). Since the game replicates a system that we recognize in everyday life where we see the “haves” and “have-nots,” being able to play in and explore this system from the magic circle of a game has a strong appeal.

The second game I played that convinced me that there’s more to life than balance, is the (grammatically-incorrectly titled) board game Betrayal at House on the Hill. This game is highly unbalanced. Each time you play you get a random scenario which has a different set of victory conditions, but most of them strongly favor some players over others, and most don’t scale very well with number of players (that is, most scenarios are much easier to win or lose if there are 3 players or 6 players, so the game is often decided as a function of how many players there are and which random scenario you get). The game has a strong random element that makes it likely one or more players will have a very strong advantage or disadvantage, and in most scenarios it’s even possible to have early player elimination. Not that it has to do with balance, but the first edition of this game also has a ton of printing errors, making it seem like it wasn’t playtested nearly enough. (In fact, I understand that it was playtested extensively, but the playtesters were having such a fun time playing that they didn’t bother to notice or report the errors they encountered.)

In spite of the imbalances, the randomness and the printing errors, the game itself is pretty fun if you play in the right group. The reason is that no matter what happens, in nearly every game some kind of crazy thing happens that’s fun to talk about after the fact. The game is very good at creating a story of the experience, and the stories are interesting. Partly this has to do with all of the flavor text in the game, on the cards and in the scenarios… but that just sets the haunted-house environment to put the players in the right frame of mind. Mostly it’s that because of the random nature of the game, somewhere along the line you’ll probably see something that feels highly unlikely, like one player finding a whole bunch of useful items all at once, or a player rolling uncharacteristically well or poorly on dice at a key point, or drawing just the right card you need at just the right time, or a player finding out the hard way what that new mysterious token on the board does. And so, players are willing to overlook the flaws because the core gameplay is about working together as a team to explore an unfamiliar and dangerous place, then having one of your kind betray the others and shift to a one-against-many situation, and winning or losing as a coordinated team. And as a general game structure, that turns out to be unique enough to be interesting.

Player expectation is another huge factor in Betrayal. I’ve seen some players who didn’t know anything about the game and were just told, “oh, this is a fun game,” and they couldn’t get over the fact that there were so many problems with it. When I introduce people to the game, I always say up front that the game is not remotely balanced, because it helps people to enjoy the experience more. And incidentally, I do think it would be a better game if it were more balanced, but my point is that it is possible for a game design to succeed without it.

So, at the end of the day, I think that what game balance does is that it makes your game fair. In games where players expect a fair contest, balance is very important; for example, one of the reasons a lot of players hate the “rubber-banding” negative feedback in racing games, where an AI-controlled car suddenly gets an impossible burst of speed when it’s too far behind you, is that it feels unfair because real-life racing doesn’t work that way. But in a game like The Great Dalmuti which is patently unfair, players expect it to be unbalanced so they accept it easily. This is also why completely unbalanced, overpowered cards in a Trading Card Game (especially if they’re rare) are seen as a bad thing, but in a single-player card-battle game using the same mechanics they can be a lot of fun: for a head-to-head tabletop card game players expect the game to provide a fair match, so they want the cards to be balanced; in the case of the single-player game, the core of the game is about character growth and progression, so getting more powerful cards as the game progresses is part of the expectation.

Just like everything in game design, it’s all about understanding the design goals, what it is you want the player to experience. But if you want them to experience a fair game, which is at least true in most games, then that is the function of balance. In fact, the only games I can think of where you don’t want balance are those where the core gameplay is specifically built around playing with the concept of fairness and unfairness.

If You’re Working on a Game Now…

Well, you’re probably already using Excel in that case, so there’s not much I can have you do to exercise those skills that you’re not already doing.

If your game has a free-for-all multiplayer structure, ask yourself if any of the problems mentioned in this post (turtling, kill-the-leader, sandbagging, kingmaking, early elimination) might be present, and then decide what (if anything) to do about them.

If your game has an economic system, analyze it. Can players trade? Are there auctions? What effect would it have on the game if you added or removed these? Are there alternatives to the way your economic system works now that you hadn’t considered?

Homework

Since it’s the end of the course, there are two ways to approach this. One is to say, no “homework” at all, because hey, the course is over! But that would be lazy design on my part.

Instead, let me set you a longer challenge that brings together everything we’ve talked about here. Make a game, and then apply all the lessons of game balance that you can. Spend a month on it, maybe more if it ends up being interesting, and then you’ll have something you can add to your game design portfolio. It’s up to you whether you want to do this, of course.

Some suggestions:

  • Design the base set for an original trading-card game. I usually tell students to stay away from projects like this, because TCGs have a huge amount of content, so let’s keep it limited here:
    • Design an expansion set to an existing TCG. First, use your knowledge of the existing game to derive a cost curve. Then, put that into Excel, and use it to create and balance a new set. Create one or two new mechanics that you need to figure out a cost for on the curve, and playtest on your own to figure out how much the new mechanic is actually worth. Make a small set, maybe 50 to 80 cards.
  • Or, if you’ve got a bit more time and want to do a little bit of systems design work as well as balance:
    • Make the game self-contained, and 100 cards or less. Consider games like Dominion or Roma or Ascension which behave like TCGs but require no collecting, and have players build their deck or hand during play instead.
    • As soon as you’re done with the core mechanics, make a cost curve for the game. Put the curve into an Excel spreadsheet and use it to create and balance the individual cards.
    • Playtest the game with friends and challenge them to “break” the game by finding exploits and optimal strategies. Adjust your cost curve (and cards) accordingly, and repeat the process.
  • Or, find a turn-based or real-time strategy game on computer that includes some kind of mod tools. Work on the balance:
    • First, play the game a bit and use your intuition to analyze the balance. Are certain units or strategies or objects too good or too weak? Look around for online message boards to see if other players feel the same, or if you were just using different strategies. Once you’ve identified one or more imbalances, analyze the game mathematically using every tool at your disposal, to figure out exactly what numbers need to change, and by how much. Mod the game, and playtest to see if the problem is fixed.
    • For a more intense project, use the mod tools to wipe out an entire part of the gameplay, and start over designing a new one from scratch. For example, maybe you can design a brand-new set of technology upgrades for Civilization, or a new set of units for Starcraft. Use the existing art if you want, but change the nature of the gameplay. Then, work on balancing your new system.
  • If you like RTS games but prefer something on tabletop, instead design a miniatures game. Most miniatures games are expensive (you have to buy and paint a lot of miniatures, after all) so challenge yourself to keep it cheap. Use piles of cardboard squares that you assemble and cut out on your own. With such cheap components, you could even add economic and production elements of the RTS genre if you’d like.
    • First, look at the mechanics of some existing miniatures games, which tend to be fairly complicated. Where can you simplify the combat mechanics, just to keep your workload manageable? Try to reduce the game down to a simple set of movement and attack mechanics with perhaps a small handful of special abilities. (For a smaller challenge you can, of course, just create an expansion set to an existing miniatures game that you already play.)
    • As with other projects, create a cost curve for all attributes and abilities of the playing pieces, and use Excel to create and balance a set of unit types. Keep this small, maybe 5 to 10 different units; you’ll find it difficult enough to balance them even with just that few. Consider adding some intransitive relationships between the units, to make sure that no single strategy is strictly better than another. If you end up really liking the game, you can make another set of units of the same size for a new “faction” and try to balance the second set with the first set.
    • Print out a set of cheap components, and playtest and iterate on your design.
  • Or, if you prefer tabletop RPGs, analyze the combat (or conflict resolution) system of your favorite game to find imbalances, and propose rules changes to fix it. For a longer project, design your own original combat system, either for an existing RPG as a replacement, or for an original RPG set in an original game world. As a challenge to yourself and to keep the scope of this under control, set a page limit: a maximum of ten pages of rules descriptions, and it should all fit on a one-page summary. Playtest the system with your regular tabletop RPG group if you have one (if you don’t have one, you might consider selecting a different project instead).

References

I found the following two blog posts useful to reference when writing about auctions and multiplayer mechanics, respectively:

http://jergames.blogspot.com/2006/10/learn-to-love-board-games-again100.html#auctions

and

http://pulsiphergamedesign.blogspot.com/2007/11/design-problems-to-watch-for-in-multi.html

In Closing…

I’d just like to say that putting this information together over this summer has been an amazing experience for me, and I hope you have enjoyed going on this journey with me.

You might be wondering if I’m going to do something like this again in the future. You can bet the answer to that will be yes, although of course I don’t know exactly what that will be. Expect to see an announcement here when I’m setting up the courses for Summer 2011, if you want to take part.

Enjoy,

– Ian Schreiber


Level 9: Intransitive Mechanics

September 1, 2010

Readings/Playings

See “additional resources” at the end of this blog post for further reading.

This Week

Welcome back! Today we’re going to learn about how to balance intransitive mechanics. As a reminder, “intransitive” is just a geeky way of saying “games like Rock-Paper-Scissors” – that is, games where there is no single dominant strategy, because everything can be beaten by something else.

We see intransitive mechanics in games all the time. In fighting games, a typical pattern is that normal attacks are defeated by blocks, blocks are defeated by throws, and throws are defeated by attacks. In real-time strategy games, a typical pattern is that you have fliers that can destroy infantry, infantry that works well against archers, and archers that are great at bringing down fliers. Turn-based strategy games often have some units that work well against others, an example pattern being that heavy tanks lose to anti-tank infantry, which loses to normal infantry, which in turn loses to heavy tanks. First-person shooters sometimes have an intransitive relationship between different weapons or vehicles, like rocket launchers being good against tanks (since they’re slow and easy to hit), which are good against light vehicles (which are destroyed by the tank’s fast rate of fire once they get in range), which in turn are good against rocket launchers (since they can dodge and weave around the slow incoming rockets). MMOs and tabletop RPGs often have some character classes that are particularly good at fighting against other classes, as well. So you can see that intransitive mechanics are in all kinds of places.

Some of these relationships might not be immediately obvious. For example, consider a game where one kind of unit has long-range attacks, which is defeated by a short-range attacker who can turn invisible; this in turn is defeated by a medium-range attacker with radar that reveals invisible units; and the medium-range attacker is of course weak against the long-range attacker.

Sometimes it’s purely mathematical; in Magic: the Gathering, a 1/3 creature will lose in combat to a 3/2 creature, which loses to a 2/1 First Strike creature, which in turn loses to the original 1/3 creature. Within the metagame of a CCG you often have three or four dominant decks, each one designed to beat one or more of the other ones. These kinds of things aren’t even necessarily designed with the intention of being intransitive, but that is what ends up happening.

Solutions to intransitive mechanics

Today we’re going to get our hands pretty dirty with some of the mathiest math we’ve done so far, borrowing when needed from the tools of algebra, linear algebra, and game theory. In the process we’ll learn how to solve intransitive mechanics, so that we can learn more about how these work within our game and what we can expect from player behavior at the expert level.

What does a “solution” look like here? It can’t be a cost curve, because each choice wins sometimes and loses sometimes. Instead it’s a ratio of how often you choose each available option, and how often you expect your opponent to choose each of their options. For example, building an army of 30% archers, 50% infantry, 20% fliers (or 3:5:2) might be a solution to an intransitive game featuring those units, under certain conditions.

As a game designer, you might want certain game objects to be used more or less frequently than others, and by changing the relative costs and availability of each object you can change the optimal mix of objects that players will use in play. By designing your game specifically to have one or more optimal strategies of your choosing, you will know ahead of time how the game is likely to develop. For example, you might want certain things to happen only rarely during normal play but be spectacular when they do, and if you understand how your costs affect relative frequencies, you can design a game to be like that intentionally. (Or, if it seems in playtesting that your players are using one thing a lot more than another, this kind of analysis may shed light on why.)

Who Cares?

It may be worth asking, if all intransitive mechanics are just glorified versions of Rock-Paper-Scissors, what’s the appeal? Few people play Rock-Paper-Scissors for fun, so why should they enjoy a game that just uses the same mechanics and dresses them differently?

For one thing, an intransitive game is at least more interesting than one with a single dominant strategy (“Rock-Rock-Rock”) because you will see more variety in play. For another, an intransitive mechanic embedded in a larger game may still allow players to change or modify their strategies in mid-game. Players may make certain choices in light of what they observe other players doing now (in real-time), particularly in action-based games where you must react to your opponent’s reaction to your reaction to their action in the space of a few milliseconds.

In games with bluffing mechanics, players may make choices based on what they’ve observed other players doing in the past, trying to use those observations to infer future moves – which is particularly interesting in games of incomplete information (like Poker). So, hopefully you can see that just because a game has an intransitive mechanic does not mean it’s as dull as Rock-Paper-Scissors.

Additionally, intransitive mechanics serve as a kind of “emergency brake” on runaway dominant strategies. Even if you don’t know exactly what the best strategy in your game is, if all strategies have an intransitive relationship, you can at least know that there will not be a single dominant strategy that invalidates all of the others, because it will be weak against at least one other counter-strategy. Even if the game itself is unbalanced, intransitive mechanics allow for a metagame correction – not an ideal thing to rely on exclusively (such a thing would be very lazy design), but better to have a safety net than not if you’re releasing a game where major game balance changes can’t be easily made after the fact.

So, if I’ve managed to convince you that intransitive mechanics are worth including for at least some kinds of games, get ready and let’s learn how to solve them!

Solving the basic RPS game

Let’s start by solving the basic game of Rock-Paper-Scissors to see how this works. Since each throw is theoretically as good as any other, we would expect the ratio to be 1:1:1, meaning you choose each throw equally often. And that is what we’ll find, but it’s important to understand how to get there so that we can solve more complex problems.

First, let’s look at the outcomes. Let’s call our opponent’s throws r, p and s, and our throws R, P and S (we get the capital letters because we’re awesome). Since winning and losing are equal and opposite (that is, one win + one loss balances out) and draws are right in the middle, let’s call a win +1 point, a loss -1 point, and a draw 0 points. The math here would actually work for any point values really, but these numbers make it easiest. We now construct a table of results:

        r     p     s
R       0    -1    +1
P      +1     0    -1
S      -1    +1     0

Of course, this is from our perspective – for example, if we throw (R)ock and opponent throws (s)cissors, we win, for a net +1 to our score. Our opponent’s table would be the reverse.

Let’s re-frame this a little bit, by calling r, p and s probabilities that the opponent will make each respective throw. For example, suppose you know ahead of time that your opponent is using a strategy of r=0.5, p=s=0.25 (that is, they throw 2 rock for every paper or scissors). What’s the best counter-strategy?

To answer that question, we can construct a set of three equations that tells you your payoffs for each throw:

  • Payoff for R = 0r + (-1)p + 1s = s-p
  • Payoff for P = 1r + 0p + (-1)s = r-s
  • Payoff for S = (-1)r + 1p + 0s = p-r

So based on the probabilities, you can calculate the payoffs. In the case of our rock-heavy opponent, the payoffs are R=0, P=0.25, S=-0.25. Since P has the best payoff of all three throws, assuming the opponent doesn’t vary their strategy at all, our best counter-strategy is to throw Paper every time, and we expect that we will gain 0.25 per throw – that is, out of every four throws, we’ll win one more game than we lose. In fact, we’ll find that if our opponent merely throws rock the tiniest, slightest bit more often than the others, the net payoff for P will be better than the others, and our best strategy is still to throw Paper 100% of the time, until our opponent modifies their strategy. This is significant; it tells us that an intransitive mechanic is very fragile, and that even a slight imbalance on the player’s part can lead to a completely dominant strategy on the part of the opponent.
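If you’d rather not do this arithmetic by hand, this is exactly the kind of thing Excel is good for. A minimal sketch, using the payoff equations above (the cell layout is my own):

  B1: 0.5       opponent’s probability of rock (r)
  B2: 0.25      probability of paper (p)
  B3: 0.25      probability of scissors (s)
  B5: =B3-B2    payoff for R (s-p); displays 0
  B6: =B1-B3    payoff for P (r-s); displays 0.25
  B7: =B2-B1    payoff for S (p-r); displays -0.25

Type in any probabilities that sum to 1, and the best counter-throw is just whichever of B5:B7 is largest.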

Of course, against a human opponent who notices we’re always throwing P, their counter-strategy would be to throw a greater proportion of s, which then forces us to throw some R, which then causes them to throw p, which makes us throw S, which makes them throw r, and around and around we go. If we’re both constantly adjusting our strategies to counter each other, do we ever reach any point where both of us are doing the best we can? Over time, do we tend towards a stable state of some kind?

Some Math Theorems

Before answering that question, there are a couple of things I’m going to ask you to trust me on; people smarter than me have actually proved these mathematically, but this isn’t a course in math proofs so I’m handwaving over that part of things. I hope you’ll forgive me for that.

First is that if the game mechanics are symmetric (that is, both players have exactly the same set of options and they work the same way), the solution will end up being the same for both players; the opponent’s probability of choosing Rock is the same as our probability.

Second is that each payoff must be the same as the other payoffs; that is, R = P = S. If any strategy is worth choosing at all, it provides the same payoff as every other valid strategy: if its payoff were any less than the others, it would no longer be worth choosing (you’d just take something else with a higher payoff), and if it were any higher than the others, you’d choose it exclusively and ignore the others. Thus, all potential moves that are worth taking have the same payoff.

Lastly, in symmetric zero-sum games specifically, the payoff for everything must be zero (because the payoffs are going to be the same for both players due to symmetry, and the only way for the payoffs to sum to zero and still be equal is if they’re both zero).

To summarize:

  • All payoffs that are worth taking at all, give an equal payoff to each other.
  • Symmetric zero-sum games have all payoffs equal to zero.
  • Symmetric games have the same solution for all players.

Finishing the RPS Solution

Let’s go back to our equations. Rock-Paper-Scissors is a symmetric zero-sum game, so:

  • R = P = S = 0.

Since the opponent must select exactly one throw, we also know the probabilities of their throw add up to 100%:

  • r + p + s = 1

From here we can solve the system of equations by substitution:

  • R = 0 = s-p, therefore p=s
  • P = 0 = r-s, therefore r=s
  • S = 0 = p-r, therefore p=r
  • r+p+s = r+r+r = 1, therefore r=1/3
  • Since r=p=s, p=1/3, s=1/3

So our solution is that the opponent should throw r, p and s each with probabilities of 1/3. This suggests that against a completely random opponent it doesn’t matter what we choose, our odds of winning are the same no matter what. Of course, the opponent knows this too, so if we choose an unbalanced strategy they can alter their throw ratio to beat us; our best strategy is also to choose each throw with 1/3 probability.

Note that in actual play, this does not mean that the best strategy is to actually play randomly (say, by rolling a die secretly before each throw)! As I’ve said before, when humans try to play randomly, they tend to not do a very good job of it, so in the real world the best strategy is still to play each throw about as often as any other – but which throw you choose at any given moment depends on your ability to detect and exploit patterns in your opponent’s play, while masking any apparent patterns in your own. So our solution of 1:1:1 does not say which throw you must choose at any given time (that is in fact where the skill of the game comes in), just that over time we expect the optimal strategy to be a 1:1:1 ratio (because any deviation from that hands your opponent a strategy that wins more often against you until you readjust your strategy back to 1:1:1).

Solving RPS with Unequal Scoring

The previous example is all fine and good for Rock-Paper-Scissors, but how can we apply this to something a little more interesting? As our next step, let’s change the scoring mechanism. For example, in fighting games there’s a common intransitive system that attacks beat throws, throws beat blocks, and blocks beat attacks, but each of these does a different amount of damage, so they tend to have different results in the sense that each choice puts a different amount at risk. How does Rock-Paper-Scissors change when we mess with the costs?

Here's an example. Suppose I make a new rule: every win using Rock counts double. You could just as easily frame it like this: in a fighting game, attacks do normal damage, and blocks do the same amount of damage as an attack (let's say that a successful block allows for a counterattack), but throws do twice as much damage as an attack or block. But let's just say "every win with Rock counts double" for simplicity here. How does that affect our probabilities?

Again we start with a payoff table:

        r     p     s
R       0    -1    +2
P      +1     0    -1
S      -2    +1     0

We then use this to construct our three payoff equations:

  • R = 2s-p
  • P = r-s
  • S = p-2r

Again, the game is zero-sum and symmetric, and both we and our opponent must choose exactly one throw, so we still have:

  • R = P = S = 0
  • r+p+s = 1

Again we solve:

  • R = 0 = 2s-p, therefore 2s = p
  • P = 0 = r-s, therefore r = s
  • S = 0 = p-2r, therefore 2r = p
  • r+p+s = r+2r+r = 1, therefore r=1/4
  • r=s, therefore s=1/4
  • 2r=p, therefore p=1/2

So here we get a surprising result: if we double the wins for Rock, the end result is that Paper gets chosen half of the time, while Rock and Scissors each get chosen a quarter of the time! This is an answer you’d be unlikely to come up with on your own without doing the math, but in retrospect it makes sense: since Scissors is such a risky play, players are less likely to choose it. If you know your opponent is not likely to play Scissors, Paper is more likely to either draw or win, so it is actually Paper (and not Rock) that is played more frequently.

So if you had a fighting game where a successful throw does twice as much damage as a successful attack or a successful block, and a block does as much damage as an attack, then you'd actually expect to see twice as many attack attempts as throws or blocks!
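
In fact, any of these symmetric zero-sum variants can be solved mechanically. Here's a little helper sketch in Python with numpy (my own construction, nothing standard): it stacks the "every payoff equals zero" equations on top of the "probabilities sum to 1" equation, then asks for the least-squares solution, which tolerates the redundant row for us:

import numpy as np

def solve_symmetric_game(payoff):
    """Optimal mix for a symmetric zero-sum game's payoff matrix."""
    payoff = np.asarray(payoff, dtype=float)
    n = payoff.shape[0]
    # payoff @ x = 0 (each throw's expected payoff is zero), plus
    # one more equation saying the probabilities sum to 1:
    a = np.vstack([payoff, np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    x, *_ = np.linalg.lstsq(a, b, rcond=None)
    return x

# Rock-Paper-Scissors where wins with Rock count double:
double_rock = [[ 0, -1, +2],
               [+1,  0, -1],
               [-2, +1,  0]]
print(solve_symmetric_game(double_rock))  # [0.25 0.5  0.25]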

Solving RPS with Incomplete Wins

Suppose we factor resource costs into this. Fighting games typically don’t have a “cost” associated with performing a move (other than time, perhaps), but RTS games usually have actual resource costs to produce units.

Let's take a simple RTS game where you have knights that beat archers, archers beat fliers, and fliers beat knights. Let's say further that if you send one type of unit against the same type, they kill each other mutually so there is no net gain or loss on either side, but that it's a little different with winners. Let's say that when knights attack archers, they win, but they still lose 20% of their health to the initial arrow volley before they close the distance. And let's say that against fliers, archers lose 40% of their health to counterattacks. But against knights, fliers take no damage at all, because the knights can't do anything other than stand there and take it (their swords don't work too well against enemies a hundred feet above them, dropping rocks down on them from above). Finally, let's say that knights cost 50 gold, archers cost 75, and fliers cost 100. Now how does this work?

We start with the payoff table:

        k                     a                     f
K       50-50 = 0             (-50*0.2)+75 = +65    -50
A       -75+(0.2*50) = -65    75-75 = 0             (-75*0.4)+100 = +70
F       +50                   -100+(75*0.4) = -70   100-100 = 0

To explain: if we both take the same unit, it ends up being zero. That's just common sense, but really what's going on is that we're both paying the same amount and both lose the unit. So we both actually have a net loss, but relative to each other it's still zero-sum (for example, with Knight vs Knight, we gain +50 Gold relative to the opponent by defeating their Knight, but also lose -50 Gold because our own Knight dies as well, and adding those results together we end up with a net gain of zero).

What about when our Knight meets an enemy Archer? We kill their Archer, which is worth a 75-gold advantage, but they also reduced our Knight’s HP by 20%, so you could say we lost 20% of our Knight cost of 50, which means we lost an equivalent of 10 gold in the process. So the actual outcome is we’re up by 65 gold.

When our Knight meets an enemy Flier, we lose the Knight so we’re down 50 gold. It didn’t hurt the opponent at all. Where does the Flier cost of 100 come in? In this case it doesn’t, really – the opponent still has a Flier after the exchange, so they still have 100 gold worth of Flier in play, they’ve lost nothing… at least, not yet!

So in the case of different costs or incomplete victories, the hard part is just altering your payoff table. From there, the process is the same:

  • K = 0k + 65a + (-50)f = 65a-50f
  • A = (-65)k + 0a + 70f = 70f-65k
  • F = 50k + (-70)a + 0f = 50k-70a
  • K = A = F = 0
  • k+a+f = 1

Solving, we find:

  • K = 0 = 65a-50f, therefore 65a = 50f
  • A = 0 = 70f-65k, therefore 70f = 65k, therefore f = (13/14)k
  • F = 0 = 50k-70a, therefore 50k = 70a, therefore a = (10/14)k
  • k+a+f = k + (10/14)k + (13/14)k = (37/14)k = 1, therefore k = 14/37
  • f = (13/14)k = (13/14)(14/37), therefore f = 13/37
  • a = (10/14)k = (10/14)(14/37), therefore a = 10/37

In this case you’d actually see a pretty even mix of units, with knights being a little more common and archers a little less. If you wanted fliers to be more rare you could play around with their costs, or allow knights to do a little bit of damage to them, or something.
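
Plugging this table into the solve_symmetric_game() sketch from earlier confirms the mix:

# Knights, archers, fliers -- reusing solve_symmetric_game() from above:
rts = [[  0, +65, -50],
       [-65,   0, +70],
       [+50, -70,   0]]
print(solve_symmetric_game(rts))  # [0.3784 0.2703 0.3514] = 14/37, 10/37, 13/37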

Solving RPS with Asymmetric Scoring

So far we’ve assumed a game that’s symmetric: we both have the exact same set of throws, and we both win or lose the same amount according to the same set of rules. But not all intransitive games are perfectly symmetric. For example, suppose I made a Rock-Paper-Scissors variant where each round, I flip up a new card that alters the win rewards. This round, my card says that my opponent gets two points for a win with Rock, but I don’t (I would just score normally). How does this change things?

It actually complicates the situation a great deal, because now both players must figure out the probabilities of their opponents’ throws, and those probabilities may not be the same anymore! Let’s say that Player A has the double-Rock-win bonus, and Player B does not. What’s the optimal strategy for both players? And how much of an advantage does this give to Player A, if any? Let’s find out by constructing two payoff tables.

Player A’s payoff table looks like this:

        rB    pB    sB
RA       0    -1    +2
PA      +1     0    -1
SA      -1    +1     0

Player B’s payoff table looks like this:

        rA    pA    sA
RB       0    -1    +1
PB      +1     0    -1
SB      -2    +1     0

Here we can assume that RA=PA=SA and RB=PB=SB, and also that rA+pA+sA = rB+pB+sB = 1. However, we cannot assume that RA=PA=SA=RB=PB=SB=0, because we don’t actually know that the payoffs for players A and B are equal; in fact, intuition tells us they probably aren’t! We now have this intimidating set of equations:

  • RA = 2sB – pB
  • PA = rB – sB
  • SA = pB – rB
  • RB = sA – pA
  • PB = rA – sA
  • SB = pA – 2rA
  • RA = PA = SA
  • RB = PB = SB
  • rA + pA + sA = 1
  • rB + pB + sB = 1

We could do this the hard way through substitution, but an easier way is to use matrices. Here’s how it works: we rewrite the payoff tables as matrices. Here’s the first one:

[ RA    0   -1   +2 ]
[ PA   +1    0   -1 ]
[ SA   -1   +1    0 ]

Here, the left column represents the left side of the first three equations above, the second column is rB, the third column is pB, and the fourth column is sB. Two changes for clarity: first, let's move the leftmost column to the right instead, which will make it easier to work with; and second, since RA=PA=SA, let's just replace them all with a single variable X, which represents the net payoff for Player A:

[  0   -1   +2   X ]
[ +1    0   -1   X ]
[ -1   +1    0   X ]

This is just a shorthand way of writing down these three equations, omitting the variable names but keeping them all lined up in the same order so that each column represents a different variable:

0rB - 1pB + 2sB = X

1rB + 0pB - 1sB = X

-1rB + 1pB + 0sB = X

Algebra tells us we can do three things to this matrix and still have a valid system of equations:

  • Multiply everything in an equation by a constant, and it's still true. (We can multiply any row of the matrix by any value, as long as we multiply all four entries in the row by the same amount.)
  • Add both sides of two equations together, and the result is still true. (We can add two rows together, entry by entry, and replace one of the existing rows with the result.)
  • Rearrange the rows, because all of the equations are still true no matter what order we put them in.

What we want to do here is put this matrix in what's called triangular form: everything under the diagonal is zeros, and the diagonal entries themselves (marked here with an asterisk) are non-zero:

[ *   ?   ?   ? ]
[ 0   *   ?   ? ]
[ 0   0   *   ? ]

So, first we reorder them by swapping the top and middle rows:

[ -1   +1    0   X ]
[  0   -1   +2   X ]
[ +1    0   -1   X ]

To eliminate the +1 in the bottom row, we add the top and bottom rows together and replace the bottom row with that:

  [ -1   +1    0   X   ]
+ [ +1    0   -1   X   ]
= [  0   +1   -1   2*X ]

Our matrix is now:

[ -1   +1    0   X   ]
[  0   -1   +2   X   ]
[  0   +1   -1   2*X ]

Now we want to eliminate the +1 on the bottom row, so we add the middle and bottom rows together and replace the bottom row with the result:

[ -1   +1    0   X   ]
[  0   -1   +2   X   ]
[  0    0   +1   3*X ]

Now we can write these in the standard equation forms and solve, going from the bottom up, using substitution:

  • +1(sB) = 3*X, therefore sB = 3*X
  • -1(pB) +2(sB) = X, therefore -1(pB)+2(3*X) = X, therefore pB = 5*X
  • -1(rB) + 1(pB) = X, therefore rB = 4*X

At this point we don’t really need to know what X is, but we do know that the ratio for Player B is 3 Scissors to 5 Paper to 4 Rock. Since sB+pB+rB = 1, this means:

rB = 4/12         pB = 5/12        sB = 3/12

We can use the same technique with the second set of equations to figure out the optimal ratio for Player A. Again, the payoff table is:

        rA    pA    sA
RB       0    -1    +1
PB      +1     0    -1
SB      -2    +1     0

This becomes the following matrix:

[  0   -1   +1   RB ]
[ +1    0   -1   PB ]
[ -2   +1    0   SB ]

Again we reorganize, and since RB=PB=SB, let’s call these all a new variable Y (we don’t use X to avoid confusion with the previous X; remember that the payoff for one player may be different from the other here). Let’s swap the bottom and top this time, along with replacing the payoffs by Y:

[ -2   +1    0   Y ]
[ +1    0   -1   Y ]
[  0   -1   +1   Y ]

To eliminate the +1 in the center row, we have to multiply the center row by 2 before adding it to the top row (or, multiply the top row by 1/2, but I find it easier to multiply by whole numbers than fractions).

  [ -2     +1     0     Y   ]
+ [ +1*2   0*2   -1*2   Y*2 ]
= [  0     +1    -2     Y*3 ]

Our matrix is now:

[ -2   +1    0   Y   ]
[  0   +1   -2   Y*3 ]
[  0   -1   +1   Y   ]

Adding the second and third rows to eliminate the -1 in the bottom row, we get:

[ -2   +1    0   Y   ]
[  0   +1   -2   Y*3 ]
[  0    0   -1   Y*4 ]

Again working backwards and substituting:

  • sA = -Y*4
  • pA – 2sA = Y*3, therefore pA = -Y*5
  • -2rA + pA = Y, therefore -2rA = 6Y, therefore rA = -Y*3

Now, it might seem kind of strange that we get a bunch of negative numbers here when we got positive ones before. This is a side effect of the fact that the average payoff for Player A is positive while Player B's is negative (as we'll verify shortly), but in either case it all factors out, because we just care about the relative ratio of Rock to Paper to Scissors. For Player A, this is 3 Rock to 4 Scissors to 5 Paper:

rA = 3/12         pA = 5/12        sA = 4/12

This is slightly different from Player B’s optimal mix:

rB = 4/12         pB = 5/12        sB = 3/12

Now, we can use this to figure out the actual advantage for Player A. We could do this through actually making a 12×12 chart and doing all 144 combinations and counting them up using probability, or we could do a Monte Carlo simulation, or we could just plug these values into our existing equations. For me that last one is the easiest, because we already have a couple of equations from earlier that directly relate these together:

sA = -Y*4, therefore Y = -1/12

rB = X*4, therefore X = +1/12

We know that RA=PA=SA and RB=PB=SB, so this means the payoff for Player A is +1/12 and for Player B it's -1/12. This makes a lot of sense and acts as a sanity check: since this is still a zero-sum game, we know that the payoff for A must be equal to the negative payoff for B. In a symmetric game both would have to be zero, but this is not symmetric. That said, the advantage is surprisingly small: only one extra win out of every 12 games if both players play optimally!
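
To check these numbers without all the row reduction, you can treat X as one more unknown, which makes the system square and directly solvable. Here's a sketch in Python/numpy (the helper name is my own):

import numpy as np

def solve_for_opponent_mix(payoff):
    """Given one player's payoff matrix, return the opponent's optimal
    mix and the expected payoff X for the player whose matrix it is."""
    payoff = np.asarray(payoff, dtype=float)
    n = payoff.shape[0]
    # Unknowns: the n opponent probabilities, then X.
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = payoff       # payoff @ probs ...
    a[:n, n] = -1.0          # ... minus X equals zero, on each row
    a[n, :n] = 1.0           # probabilities sum to 1
    b = np.zeros(n + 1)
    b[n] = 1.0
    solution = np.linalg.solve(a, b)
    return solution[:n], solution[n]

# Player A's payoffs (A's Rock wins count double):
payoff_a = np.array([[ 0, -1, +2],
                     [+1,  0, -1],
                     [-1, +1,  0]])
payoff_b = -payoff_a.T   # B's payoffs are the negated transpose of A's

b_mix, x = solve_for_opponent_mix(payoff_a)
a_mix, y = solve_for_opponent_mix(payoff_b)
print(b_mix, x)  # [0.3333 0.4167 0.25], X = +0.0833 (one twelfth)
print(a_mix, y)  # [0.25 0.4167 0.3333], Y = -0.0833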

Solving Extended RPS

So far all of the relationships we’ve analyzed have had only three choices. Can we use the same technique with more? Yes, it just means we do the same thing but more of it.

Let's analyze the game Rock-Paper-Scissors-Lizard-Spock. In this game, Rock beats Scissors and Lizard; Paper beats Rock and Spock; Scissors beats Paper and Lizard; Lizard beats Spock and Paper; and Spock beats Scissors and Rock. Our payoff table is below (with 'k' for Spock, since there's already an 's' for Scissors, and 'z' for Lizard, so it doesn't look like the number one):

        r     p     s     z     k
R       0    -1    +1    +1    -1
P      +1     0    -1    -1    +1
S      -1    +1     0    +1    -1
Z      -1    +1    -1     0    +1
K      +1    -1    +1    -1     0

We also know r+p+s+z+k=1, and R=P=S=Z=K=0. We could solve this by hand as well, but there's another way to do this using Excel which makes things slightly easier sometimes.

First, you would enter in the above matrix in a 5×5 grid of cells somewhere. You’d also need to add another 1×5 column of all 1s (or any non-zero number, really) to represent the variable X (the payoff) to the right of your 5×5 grid. Then, select a new 1×5 column that’s blank (just click and drag), and then enter this formula in the formula bar:

=MMULT(MINVERSE(A1:E5),F1:F5)

For the MINVERSE parameter, put the top left and lower right cells of your 5×5 grid (I use A1:E5 if the grid is in the extreme top left corner of your worksheet). For the final parameter (I use F1:F5 here), give the 1×5 column of all 1s. Finally, and this is important, press Ctrl+Shift+Enter when you’re done typing in the formula (not just Enter). This propagates the formula to all five cells that you’ve highlighted and treats them as a unified array, which is necessary.

One warning is that this method does not always work; in particular, if there are no solutions or infinite solutions, it will give you #NUM! as the result instead of an actual number. In fact, if you enter in the payoff table above, it will give you this error; by setting one of the entries to something very slightly different (say, changing one of the +1s to +0.999999), you will generate a unique solution that is only off by a tiny fraction, so round it to the nearest few decimal places for the “real” answer. Another warning is that anyone who actually knows a lot about math will wince when you do this, because it’s kind of cheating and you’re really not supposed to solve a matrix like that.

Excel gives us a solution of 0.2 for each of the five variables, meaning that it is equally likely that the opponent will choose any of the five throws. We can then verify that yes, in fact, R=P=S=Z=K=0, so it doesn't matter which throw we choose; any will do just as well as any other if the opponent plays randomly with equal chances of each throw.
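
If you'd rather avoid both the array-formula dance and the 0.999999 workaround, the least-squares approach from the solve_symmetric_game() sketch earlier handles this singular matrix without complaint:

# Rock-Paper-Scissors-Lizard-Spock, reusing solve_symmetric_game():
rpsls = [[ 0, -1, +1, +1, -1],
         [+1,  0, -1, -1, +1],
         [-1, +1,  0, +1, -1],
         [-1, +1, -1,  0, +1],
         [+1, -1, +1, -1,  0]]
print(solve_symmetric_game(rpsls))  # [0.2 0.2 0.2 0.2 0.2]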

Solving Extended RPS with Unequal Relationships

Not all intransitive mechanics are equally balanced. In some cases, even without weighted costs, some throws are just better than other throws. For example, let’s consider the unbalanced game of Rock-Paper-Scissors-Dynamite. The idea is that with this fourth throw, Dynamite beats Rock (by explosion), and Scissors beats Dynamite (by cutting the wick). People will argue which should win in a contest between Paper and Dynamite, but for our purposes let’s say Dynamite beats Paper. In theory this makes Dynamite and Scissors seem like really good choices, because they both beat two of the three other throws. It also makes Rock and Paper seem like poor choices, because they both lose to two of the other three throws. What does the actual math say?

Our payoff table looks like this:

        r     p     s     d
R       0    -1    +1    -1
P      +1     0    -1    -1
S      -1    +1     0    +1
D      +1    +1    -1     0

Before we go any further, we run into a problem: if you look closely, you’ll see that Dynamite is better than or equal to Paper in every situation. That is, for every entry in the P row, it is either equal or less than the corresponding entry in the D row (and likewise, every entry in the p column is worse or equal to the d column). Both Paper and Dynamite lose to Scissors, both beat Rock, but against each other Dynamite wins. In other words, there is no logical reason to ever take Paper because whenever you’d think about it, you would take Dynamite instead! In game theory terms, we say that Paper is dominated by Dynamite. If we tried to solve this matrix mathematically like we did earlier, we would end up with some very strange answers and we’d quickly find it was unsolvable (or that the answers made no sense, like a probability for r, p, s or d that was less than zero or greater than one). The reason it wouldn’t work is that at some point we would make the assumption that R=P=S=D, but in this case that isn’t true – the payoff for Paper must be less than the payoff for Dynamite, so it is an invalid assumption. To fix this, before proceeding, we must systematically eliminate all choices that are dominated. In other words, remove Paper as a choice.

The new payoff table becomes:

        r     s     d
R       0    +1    -1
S      -1     0    +1
D      +1    -1     0

We check again to see if, after the first set of eliminations, any other strategies are now dominated (sometimes a row or column isn’t strictly dominated by another, until you cross out some other dominated choices, so you do have to perform this procedure repeatedly until you eliminate everything). Again, to check for dominated strategies, you must compare every pair of rows to see if one dominates another, and then every pair of columns in the same way. Yes, this means a lot of comparisons if you give each player ten or twelve choices!

In this case eliminating Paper was all that was necessary, and in fact we’re back to the same exact payoff table as with the original Rock-Paper-Scissors, but with Paper being “renamed” to Dynamite. And now you know, mathematically, why it never made sense to add Dynamite as a fourth throw.
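
Those pairwise comparisons are mechanical enough to automate. Here's a rough sketch of the repeated-elimination procedure in Python (weak domination, i.e. "better than or equal everywhere, strictly better somewhere"; the function and names are my own invention):

import numpy as np

def eliminate_dominated(payoff, rows, cols):
    """Repeatedly delete dominated rows and columns. Rows hold our
    payoffs (bigger is better). In a zero-sum game a column is better
    for the opponent when its entries are smaller, so a dominated
    column is everywhere greater than or equal to some other column."""
    m = np.asarray(payoff, dtype=float)
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for i in range(len(rows)):            # dominated rows
            for j in range(len(rows)):
                if i != j and np.all(m[j] >= m[i]) and np.any(m[j] > m[i]):
                    m = np.delete(m, i, axis=0)
                    del rows[i]
                    changed = True
                    break
            if changed:
                break
        if changed:
            continue
        for i in range(len(cols)):            # dominated columns
            for j in range(len(cols)):
                if i != j and np.all(m[:, j] <= m[:, i]) and np.any(m[:, j] < m[:, i]):
                    m = np.delete(m, i, axis=1)
                    del cols[i]
                    changed = True
                    break
            if changed:
                break
    return m, rows, cols

rpsd = [[ 0, -1, +1, -1],
        [+1,  0, -1, -1],
        [-1, +1,  0, +1],
        [+1, +1, -1,  0]]
m, rows, cols = eliminate_dominated(rpsd, list("RPSD"), list("rpsd"))
print(rows, cols)  # ['R', 'S', 'D'] ['r', 's', 'd'] -- Paper is gone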

Another Unequal Relationship

What if instead we created a new throw that wasn't weakly dominated, but that worked a little differently than normal? For example, something that was equivalent to Scissors except it worked in reverse order, beating Rock but losing to Paper? Let's say… Construction Vehicle (C), which bulldozes (wins against) Rock, is given a citation by (loses against) Paper, and draws with Scissors because neither of the two can really interact much. Now our payoff table looks like this:

        r     p     s     c
R       0    -1    +1    -1
P      +1     0    -1    +1
S      -1    +1     0     0
C      +1    -1     0     0

Here, no single throw is strictly better than any other, so we start solving. We know r+p+s+c=1, and the payoffs R=P=S=C=0. Our matrix becomes:

[  0   -1   +1   -1   0 ]
[ +1    0   -1   +1   0 ]
[ -1   +1    0    0   0 ]
[ +1   -1    0    0   0 ]

Rearranging the rows to get non-zeros along the diagonal, we get this by reversing the order from top to bottom:

[ +1   -1    0    0   0 ]
[ -1   +1    0    0   0 ]
[ +1    0   -1   +1   0 ]
[  0   -1   +1   -1   0 ]

Zeroing the first column by adding the first two rows, and subtracting the third from the first, we get:

[ +1   -1    0    0   0 ]
[  0    0    0    0   0 ]
[  0   -1   +1   -1   0 ]
[  0   -1   +1   -1   0 ]

Curious! The second row is all zeros (which gives us absolutely no useful information, as it’s just telling us that zero equals zero), and the bottom two rows are exactly the same as one another (which means the last row is redundant and again tells us nothing extra). We are left with only two rows of useful information. In other words, we have two equations (three if you count r+p+s+c=1) and four unknowns.

What this means is that there is actually more than one valid solution here, potentially an infinite number of solutions. We figure out the solutions by hand:

  • r-p=0, therefore r=p
  • -p+s-c=0, therefore c=s-p

Substituting into r+p+s+c=1, we get:

  • p+p+s+(s-p)=1, therefore p+2s=1, therefore p=1-2s (and therefore, r=1-2s).

Substituting back into c=s-p, we get c=s-1+2s, therefore c=3s-1.

We have thus managed to put all three other variables in terms of s:

  • p=1-2s
  • r=1-2s
  • c=3s-1

So it would seem at first that there are in fact an infinite number of solutions: choose any value for s, then that will give you the corresponding values for p, r, and c. But we can narrow down the ranges even further.

How? By remembering that all of these variables are probabilities, meaning they must all be in the range of 0 (if they never happen) to 1 (if they always happen). Probabilities can never be less than zero or greater than 1. This lets us limit the range of s. For one thing, we know it must be between 0 and 1.

From the equation c=3s-1, we know that s must be at least 1/3 (otherwise c would be negative) and s can be at most 2/3 (otherwise c would be greater than 100%). Looking instead at p and r, we know s can range from 0 up to 1/2. Combining the two ranges, s must be between 1/3 and 1/2. This is interesting: it shows us that no matter what, Scissors is an indispensable part of all ideal strategies, being used somewhere between a third and half of the time.

At the lower boundary condition (s=1/3), we find that p=1/3, r=1/3, c=0, which is a valid strategy. At the upper boundary (s=1/2), we find p=0, r=0, c=1/2. And we could also opt for any strategy in between, say s=2/5, p=1/5, r=1/5, c=1/5.
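
You can verify that any mix in that range really does neutralize every throw: multiply the payoff matrix by the mix, and every row (every pure throw the opponent could pick) should come out to zero expected payoff. A quick sketch checking the middle example:

import numpy as np

# Rock, Paper, Scissors, Construction Vehicle:
rpsc = np.array([[ 0, -1, +1, -1],
                 [+1,  0, -1, +1],
                 [-1, +1,  0,  0],
                 [+1, -1,  0,  0]], dtype=float)

# One of the valid mixes: r=1/5, p=1/5, s=2/5, c=1/5.
mix = np.array([0.2, 0.2, 0.4, 0.2])
print(rpsc @ mix)  # [0. 0. 0. 0.] -- every throw's expected payoff is zero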

Are any of these strategies “better” than the others, such that a single one would win more than the others? That unfortunately requires a bit more game theory than I wanted to get into today, but I can tell you the answer is “it depends” based on certain assumptions about how rational your opponents are, whether the players are capable of making occasional mistakes when implementing their strategy, and how much the players know about how their opponents play, among other things. For our purposes, we can say that any of these is as good as any other, although I’m sure professional game theorists could philosophically argue the case for certain values over others.

Also, for our purposes, we could say that Construction Vehicle is probably not a good addition to the core game of Rock-Paper-Scissors, as it allows one winning strategy where the throw of C can be completely ignored, and another winning strategy where both P and R are ignored, making us wonder why we’re wasting development resources on implementing two or three throws that may never even see play once the players are sufficiently skilled!

Solving the Game of Malkav

So far we’ve systematically done away with each of our basic assumptions: that a game has a symmetric payoff, that it’s zero-sum, that there are exactly three choices. There’s one other thing that we haven’t covered in the two-player case, and that’s what happens if the players have a different selection of choices – not just an asymmetric payoff, but an asymmetric game. If we rely on there being exactly as many throws for one player as the other, what happens when one player has, say, six different throws when their opponent has only five? It would seem such a problem would be unsolvable for a unique solution (there are six unknowns and only five equations, right?) but in fact it turns out we can use a more powerful technique to solve such a game uniquely, in some cases.

Let us consider a card called “Game of Malkav” from an obscure CCG that most of you have probably never heard of. It works like this: all players secretly and simultaneously choose a number. The player who played this card chooses between 1 and 6, while all other players choose between 1 and 5. Each player gains as much life as the number they choose… unless another player chose a number exactly one less, in which case they lose that much life instead. So for example if you choose 5, you gain 5 life, unless any other player chose 4. If anyone else chose 4, you lose 5 life… and they gain 4, unless someone else also chose 3, and so on. This can get pretty complicated with more players, so let’s simply consider the two-player case. Let’s also make the simplifying assumption that the game is zero-sum, and that you gaining 1 life is equivalent in value to your opponent losing 1 life (I realize this is not necessarily valid, and this will vary based on relative life totals, but at least it’s a starting point for understanding what this card is actually worth).

We might wonder, what is the expected payoff of playing this card, overall? Does the additional option of playing 6, when your opponent can only play up to 5, actually give you an edge? What is the best strategy, and what is the expected end result? In short, is the card worth playing… and if so, when you play it, how do you decide what to choose?

As usual, we start with a payoff table. Let’s call the choices P1-P6 (for the Player who played the card), and O1-O5 (for the Opponent):

        O1    O2    O3    O4    O5
P1       0    +3    -2    -3    -4
P2      -3     0    +5    -2    -3
P3      +2    -5     0    +7    -2
P4      +3    +2    -7     0    +9
P5      +4    +3    +2    -9     0
P6      +5    +4    +3    +2   -11

We could try to solve this, and there do not appear to be any dominated picks for either player, but we will quickly find that the numbers get very hairy very fast… and also that it ends up being unsolvable for reasons that you will find if you try. Basically, with 6 equations and 5 unknowns, there is redundancy… except in this case, no rows cancel, and instead you end up with at least two equations that contradict each other. So there must actually be some dominated strategies here… it’s just that they aren’t immediately obvious, because there are a set of rows or columns that are collectively dominated by another set, which is much harder to just find by looking. How do we find them?

We start by finding the best move for each player, if they knew what the opponent was doing ahead of time. For example, if the opponent knows we will throw P1, their best move is O5 (giving them a net +4 and us a net -4). But then we continue by reacting to their reaction: if the player knows the opponent will choose O5, their best move is P4. But against P4, the best move is O3. Against O3, the best move is P2. Against P2, there are two equally good moves: O1 and O5, so we consider both options:

  • Against O5, the best response is P4, as before (and we continue around in the intransitive sequence O5->P4->O3->P2->O5 indefinitely).
  • Against O1, the best response is P6. Against P6, the best response is O5, which again brings us into the intransitive sequence O5->P4->O3->P2->O1->P6->O5.

What if we start at a different place, say by initially throwing P3? Then the opponent’s best counter is O2, our best answer to that is P6, which then leads us into the O5->P4->O3->P2->O1->P6->O5 loop. If we start with P5, best response is O4, which gets the response P3, which we just covered. What if we start with O1, O2, O3, O4, O5, P2, P4 or P6? All of them are already accounted for in earlier sequences, so there’s nothing more to analyze.

Thus, we see that no matter what we start with, eventually after repeated play only a small subset of moves actually end up being part of the intransitive nature of this game because they form two intransitive loops (O5/P4/O3/P2, and O5/P4/O3/P2/O1/P6). Looking at these sequences, the only choices ever used by either player are O1, O3, O5 and P2, P4, P6. Any other choice ends up being strictly inferior: for example, at any point where it is advantageous to play P6 (that is, you are expecting a positive payoff), there is no reason you would prefer P5 instead (even if you expect your opponent to play O5, your best response is not P5, but rather P4).
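
This best-response chase is also easy to automate. Here's a rough sketch (note that numpy's argmin/argmax break ties by taking the first index, so with tied payoffs you may need to explore both branches, as we did by hand above):

import numpy as np

# Rows P1..P6 (the player of the card), columns O1..O5 (the opponent):
malkav = np.array([
    [ 0, +3, -2, -3,  -4],
    [-3,  0, +5, -2,  -3],
    [+2, -5,  0, +7,  -2],
    [+3, +2, -7,  0,  +9],
    [+4, +3, +2, -9,   0],
    [+5, +4, +3, +2, -11]], dtype=float)

def recurrent_choices(payoff, burn_in=20):
    """Chase best responses from every starting row. After a burn-in
    we are going around an intransitive loop, so record what we visit."""
    rows, cols = set(), set()
    for start in range(payoff.shape[0]):
        row = start
        for step in range(burn_in * 2):
            col = int(np.argmin(payoff[row]))     # opponent's best reply
            row = int(np.argmax(payoff[:, col]))  # our best reply to that
            if step >= burn_in:
                rows.add(row)
                cols.add(col)
    return sorted(rows), sorted(cols)

live_p, live_o = recurrent_choices(malkav)
print(["P%d" % (i + 1) for i in live_p])  # ['P2', 'P4', 'P6']
print(["O%d" % (i + 1) for i in live_o])  # ['O1', 'O3', 'O5']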

By using this technique to find intransitive loops, you can often reduce a larger number of choices to a smaller set of viable ones… or at worst, you can prove that all of the larger set are in fact viable. Occasionally you will find a game (Prisoner's Dilemma being a famous example, if you've heard of that) where there are one or more cells in the table that neither player can improve on by changing only their own choice, so that after repeated play we expect all players to be drawn to those cells; game theorists call these Nash equilibria, after John Nash, the mathematician who first wrote about them. Not that you need to care.

So in this case, we can reduce the table to the set of meaningful values:

        O1    O3    O5
P2      -3    +5    -3
P4      +3    -7    +9
P6      +5    +3   -11

From there we solve, being aware that this is not symmetric. Therefore, we know that O1=O3=O5 and P2=P4=P6, but we do not know if they are all equal to zero or if one is the negative of the other. (Presumably, P2 is positive and O1 is negative, since we would expect the person playing this card to have an advantage, but we’ll see.)

We construct a matrix, using X to stand for the Payoff for P2, P4 and P6:

[ -3   +5    -3   X ]
[ +3   -7    +9   X ]
[ +5   +3   -11   X ]

This can be reduced to triangular form and then solved, the same as earlier problems. Feel free to try it yourself! I give the answer below.

Now, solving that matrix gets you the probabilities O1, O3 and O5, but in order to learn the probabilities of choosing P2, P4 and P6 you have to flip the matrix across the diagonal so that the Os are all on the left and the Ps are on the top (this is called a transpose). In this case we’d also need to make all the numbers negative, since such a matrix is from Player O’s perspective and therefore has the opposite payoffs:

[ +3   -3    -5   Y ]
[ -5   +7    -3   Y ]
[ +3   -9   +11   Y ]

This, too, can be solved normally. If you’re curious, the final answers are roughly:

P2:P4:P6 = 49% : 37% : 14%

O1:O3:O5 = 35% : 41% : 24%

Expected payoff to Player P (shown as “X” above): 0.31, and the payoff for Player O (shown as “Y”) is the negative of X: -0.31.

In other words, in the two-player case of this game, when both players play optimally, the player who initiated this card gets ahead by an average of less than one-third of a life point – so while we confirm that playing the card and having the extra option of choosing 6 is in fact an advantage, it turns out to be a pretty small one. On the other hand, the possibility of sudden large swings may make it worthwhile in actual play (or maybe not), depending on the deck you’re playing. And of course, the game gets much more complicated in multi-player situations that we haven’t considered here.
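
If you'd like to check my arithmetic without grinding through the triangular forms yourself, the reduced game drops straight into the solve_for_opponent_mix() sketch from the asymmetric RPS example:

import numpy as np

# Reduced Game of Malkav: rows P2/P4/P6, columns O1/O3/O5.
# Reuses solve_for_opponent_mix() from the earlier sketch.
reduced = np.array([[-3, +5,  -3],
                    [+3, -7,  +9],
                    [+5, +3, -11]], dtype=float)

o_mix, x = solve_for_opponent_mix(reduced)     # opponent's mix, payoff to P
p_mix, y = solve_for_opponent_mix(-reduced.T)  # card player's mix, payoff to O
print(o_mix, x)  # [0.3448 0.4138 0.2414], X = +0.3103 (= 9/29)
print(p_mix, y)  # [0.4943 0.3678 0.1379], Y = -0.3103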

Solving Three-Player RPS

So far we’ve covered just about every possible case for a two-player game, and you can combine the different methods as needed for just about any application, for any kind of two-player game. Can we extend this kind of analysis to multiple players? After all, a lot of these games involve more than just a single head-to-head, they may involve teams or free-for-all environments.

Teams are straightforward, if there’s only two teams: just treat each team as a single “player” for analysis purposes. Free-for-all is a little harder because you have to manage multiple opponents, and as we’ll see the complexity tends to explode with each successive player. Three-player games are obnoxious but still quite possible to solve; four-player games are probably the upper limit of what I’d ever attempt by hand using any of the methods I’ve mentioned today. If you have a six-player free-for-all intransitive game where each player has a different set of options and a massive payoff matrix that gives payoffs to each player for each combination… well, let’s just say it can be done, probably requiring the aid of a computer and a professional game theorist, but at this point you wouldn’t want to. One thing that game theorists have learned is that the more complex the game, the longer it tends to take human players in a lab to converge on the optimal strategies… which means for a highly complex game, playtesting will give you a better idea of how the game actually plays “in the field” than doing the math to prove optimal solutions, because the players probably won’t find the optimal solutions anyway. Thus, for a complicated system like that, you’re better off playtesting… or more likely, you’re better off simplifying your mechanics!

Let's take a simple multi-player case: three-player Rock-Paper-Scissors. We define the rules like this: if all players make the same throw or if all players each choose different throws, we call it a draw. If two players make the same throw and the third player chooses a different one ("odd man out"), then whoever throws the winning throw gets a point from each loser. So if two players throw Rock and the third throws Scissors, each of the Rock players gets +1 point and the unfortunate Scissors player loses 2 points. Or if it's reversed, with one player throwing Rock while two throw Scissors, the one Rock player gets +2 points while the other two players lose 1 point each. (The idea behind these numbers is to keep the game zero-sum, for simplicity, but you could use this method to solve for any other scoring mechanism.)

Of course, we know because of symmetry that the answer to this is 1:1:1, just like the two-player version. So let’s throw in the same wrinkle as before: wins with Rock count double (which also means, since this is zero-sum, that losses with Scissors count double). In the two-player case we found the solution of Rock=Scissors=1/4, Paper=1/2. Does this change at all in the three-player case, since there are now two opponents which make it even more dangerous to throw Scissors (and possibly even more profitable to throw Rock)?

The trick we need to use here to make this solvable is to look at the problem from a single player’s perspective, and treat all the opponents collectively as a single opponent. In this case, we end up with a payoff table that looks like this:

        rr    rp    rs    pp    ps    ss
R        0    -1    +2    -2     0    +4
P       +2    +1     0     0    -1    -2
S       -4     0    -2    +2    +1     0

You might say: wait a minute, there are three equations here and six unknowns (an 'r', a 'p' and an 's' for each of the two opponents), which means this isn't uniquely solvable. But the good news is that this game is symmetric, so we actually can solve it: each opponent's probabilities are the same, and they get taken together and multiplied (recall that we multiply probabilities when we need two independent things to happen at the same time). One thing to be careful of: there are actually nine possibilities for the opponents, not six, but some of them are duplicated. The actual table is like this:

        rr    rp    pr    rs    sr    pp    ps    sp    ss
R        0    -1    -1    +2    +2    -2     0     0    +4
P       +2    +1    +1     0     0     0    -1    -1    -2
S       -4     0     0    -2    -2    +2    +1    +1     0

All this means is that when using the original matrix and writing it out in longhand form, we have to remember to multiply rp, rs and ps by 2 each, since there are two ways to get each of them (rp and pr, for example). Note that I haven’t mentioned which of the two opponents is which; as I said earlier, it doesn’t matter because this game is symmetric, so the probability of any player throwing Rock or Scissors is the same as that of the other players.

This payoff table doesn’t present so well in matrix form since we’re dealing with two variables rather than one. One way to do this would be to actually split this into three mini-matrices, one for each of the first opponent’s choices, and then comparing each of those to the second opponent’s choice… then solving each matrix individually, and combining the three solutions into one at the end. That’s a lot of work, so let’s try to solve it algebraically instead, writing it out in longhand form and seeing if we can isolate anything by combining like terms:

  • Payoff for R = -2rp+4rs-2pp+4ss = 0
  • Payoff for P = 2rr+2rp-2sp-2ss = 0
  • Payoff for S = -4rr-4rs+2pp+2sp = 0
  • r+s+p=1 (as usual)

The "=0" at the end is because we know this game is symmetric and zero-sum.

Where do you start with something like this? A useful starting place is usually to use r+s+p=1 to eliminate one of the variables by putting it in terms of the other, then substituting into the three Payoff equations above. Eliminating Rock (r=1-s-p) and substituting, after multiplying everything out and combining terms, we get:

  • -2ps-2p+4s = 0
  • -2p-4s+2 = 0
  • -2pp-2ps+8p+4s-4 = 0

We could isolate either p or s in the first or last equations by using the Quadratic Formula (you know, "minus b, plus or minus the square root of b-squared minus 4ac, all divided by 2a"). This would yield two possible solutions, although in most cases you'll find you can eliminate one as it strays outside the bounds of 0 to 1 (which r, p and s must all lie within, as they are all probabilities).

However, the middle equation above makes our lives much easier, as we can solve for p or s in terms of the other:

  • p = 1-2s

Substituting that into the other two equations gives us the same equation both times (one is just the other multiplied by -1), which lets us know we're probably on the right track, since the equations don't contradict each other:

  • 4ss+6s-2 = 0 (or, dividing through by 2: 2ss+3s-1 = 0)

Here we do have to use the dreaded Quadratic Formula. Solving 2ss+3s-1 = 0, we find s=(-3+/-sqrt(17))/4… that is, s is about 0.281 or about -1.781. Are both of these valid solutions? To find out, we evaluate p=1-2s and r=1-p-s.

The negative root is invalid right away (a probability can never be below zero). For s of about 0.281, we find p of about 0.438 and r of about 0.281, which is valid, leaving us with only a single valid solution: r:p:s = 0.281 : 0.438 : 0.281 (exactly, r = s = (sqrt(17)-3)/4, and p = 1-2s).
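
As a sanity check, we can plug that mix back into the original three payoff equations; all of them should come out to zero. A quick sketch:

import math

s = (math.sqrt(17) - 3) / 4   # about 0.2808
r = s
p = 1 - 2 * s                 # about 0.4384

# The three payoff equations from above -- all should equal zero:
R = -2*r*p + 4*r*s - 2*p*p + 4*s*s
P = 2*r*r + 2*r*p - 2*p*s - 2*s*s
S = -4*r*r - 4*r*s + 2*p*p + 2*p*s
print(r, p, s)  # 0.2808, 0.4384, 0.2808
print(R, P, S)  # all zero, up to floating-point rounding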

It turns out that having multiple players does have an effect on the "rock wins count double" problem, but it might not be the result we expected; with three players the mix is actually closer to 1:1:1 than the 1/4 : 1/2 : 1/4 split we found with two players! Perhaps it's because the chance of a three-way draw (one player choosing Rock, one Paper and one Scissors) makes Scissors less risky than it would be in a two-player game: even if one opponent chooses Rock, the other might choose Paper and turn your double-loss into a draw.

Summary

This week we looked at how to evaluate intransitive mechanics using math. It's probably the most complicated thing we've done, as it brings together the cost curves of transitive mechanics, probability, and statistics, which is why I'm doing it at the very end of the course, only after covering those! To solve these, you go through this process:

  • Make a payoff table.
  • Eliminate all dominated choices from both players (by comparing all combinations of rows and columns and seeing if any pair contains one row or column that is strictly better or equal to another). Keep doing that until all remaining choices are viable.
  • Find all intransitive “loops” through finding the best opposing response to each player’s initial choice.
  • Calculate the payoffs of each choice for one of the players, setting the payoffs equal to the same variable X. In a zero-sum game, X for one player will be the negative of the X for the other player. In a symmetric game, X is zero, so just set all the payoffs to zero instead.
  • Add one more equation, that the probabilities of all choices sum to 1.
  • Using algebraic substitution, triangular-form matrices, Excel, or any other means you have at your disposal, solve for as many variables as you can. If you manage to learn the value of X, it tells you the expected gain (or loss) for that player. Summing all players’ X values tells you if the game is zero-sum (X1+X2+…=0), positive-sum (>0) or negative-sum (<0), and by how much overall.
  • If you can find a unique value for each choice that is between 0 and 1, those are the optimal probabilities with which you should choose each throw. For asymmetric games, you’ll need to do this individually for each player. This is your solution.
  • For games with more than two players each making a simultaneous choice, choose one player’s payoffs as your point of reference, and treat all other players as a single combined opponent. The math gets much harder for each player you add over two. After all, with two players all equations are strictly linear; with three players you have to solve quadratic equations, with four players there are cubic equations, with five players you see quartic equations, and so on.

I should also point out that the field of game theory is huge, and it covers a wide variety of other games we haven’t covered here. In particular, it’s also possible to analyze games where players choose sequentially rather than simultaneously, and also games where players are able to negotiate ahead of time, making pleas or threats, coordinating their movements or so on (as might be found in positive-sum games where two players can trade or otherwise cooperate to get ahead of their opponents). These are beyond the scope of this course, but if you’re interested, I’ll give a couple of references at the end.

If You’re Working on a Game Now…

Think about your game and whether it features any intransitive mechanics. If not, ask yourself if there are any opportunities or reasons to take some transitive mechanics and convert them to intransitive (for example, if you’re working on an RPG, maybe instead of just having a sequence of weapons where each is strictly better than the previous, perhaps there’s an opportunity at one point in the game to offer the player a choice of several weapons that are all equally good overall, but each is better than another in different situations).

If you do have any intransitive mechanics in your game, find the one that is the most prominent, and analyze it as we did today. Of the choices you offer the player, are any of them dominant or dominated choices? What is the expected ratio of how frequently the player should choose each of the available options, assuming optimal play? Is it what you expected? Is it what you want?

Homework

For practice, feel free to start by doing the math by hand to confirm all of the problems I’ve solved here today, to get the hang of it. When you’re comfortable, here’s a game derived from a mini-game I once saw in one of the Suikoden series of RPGs (I forget which one). The actual game there used 13 cards, but for simplicity I’m going to use a 5-card deck for this problem. Here are the rules:

  • Players: 2
  • Setup: Each player takes five cards, numbered 1 through 5. A third stack of cards numbered 1 through 5 is shuffled and placed face-down as a draw pile.
  • Progression of play: At the beginning of each round, a card from the draw pile is flipped face up; the round is worth a number of points equal to the face value of that card. Both players then choose one of their own cards, and play simultaneously. Whoever played the higher card gets the points for that round; in case of a tie, no one gets the points. Both players set aside the cards they chose to play in that round; those cards may not be used again.
  • Resolution: The game ends after all five rounds have been played, or after one player reaches 8 points. Whoever has the most points wins.

It is easy to see that there is no single dominant strategy. If the opponent plays completely randomly (20% chance of playing each card), you come out far ahead by simply playing the number in your hand that matches the points each round is worth (so play your 3 if a 3 is flipped, play your 4 on a 4, etc.). You can demonstrate this in Excel by shuffling the opponent’s hand so that they are playing randomly, and comparing that strategy to the “point matching” strategy I’ve described here, and you’ll quickly find that point matching wins the vast majority of the time. (You could also compute the odds exhaustively for this, as there are only 120 ways to rearrange 5 cards, if you wanted.)
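
If you'd rather run that demonstration in Python than Excel, here's one hypothetical way to set it up (the function names are mine, and the simulation ignores the 8-point early stop, since ending early only shortens a game and never changes who wins it):

import random

VALUES = [1, 2, 3, 4, 5]

def match_vs_random():
    """One full game: we play 'point matching' (the card equal to each
    round's value); the opponent plays a shuffled hand. Returns our
    captured points minus theirs."""
    flip_order = random.sample(VALUES, 5)   # order the point cards come up
    their_hand = random.sample(VALUES, 5)   # random opponent
    diff = 0
    for value, their_card in zip(flip_order, their_hand):
        our_card = value                    # point matching
        if our_card > their_card:
            diff += value
        elif their_card > our_card:
            diff -= value
    return diff

trials = 10000
wins = sum(match_vs_random() > 0 for _ in range(trials))
print(wins / trials)  # the win rate: point matching comes out well ahead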

Does that mean that “matching points” is the dominant strategy? Certainly not. If I know my opponent is playing this strategy, I can trounce them by playing one higher than matching on all cards, and playing my 1 card on the 5-point round. I’ll lose 5 points, but I’ll capture the other 10 points for the win. Does the “one higher” strategy dominate? No, playing “two higher” will beat “one higher”… and “three higher” will beat “two higher”, “four higher” will beat “three higher”, and “matching points” beats “four higher” – an intransitive relationship. Essentially, the goal of this game is to guess what your opponent will play, and then play one higher than that (or if you think your opponent is playing their 5, play your 1 on that).

Since each strategy is just as good as any other if choosing between those five, you might think that means you can do no worse than choosing one of those strategies at random… except that as we saw, if you play randomly, “matching points” beats you! So it is probably true that the optimal strategy is not 1:1:1:1:1, but rather some other ratio. Figure out what it is.

If you’re not sure where to start, think of it this way: for any given play there are only five strategies: matching, one higher, two higher, three higher, or four higher. Figure out the payoff table for following each strategy across all five cards. You may shift strategies from round to round, like rock-paper-scissors, but with no other information on the first round you only have five choices, and each of those choices may help or hurt you depending on what your opponent does. Therefore, for the first play at least, you would start with this payoff table (after all, for the first round there are only five strategies you can follow, since you only have five cards each):

        matching   match+1   match+2   match+3   match+4
M             0        -5        +3        +9       +13
M+1          +5         0        -7        -1        +3
M+2          -3        +7         0        -9        -5
M+3          -9        +1        +9         0       -11
M+4         -13        -3        +5       +11         0

References

Here are a pair of references that I found helpful when putting together today’s presentation.

“Game Architecture and Design” (Rollings & Morris), Chapters 3 and 5. This is where I first heard of the idea of using systems of equations to solve intransitive games. I’ve tried to take things a little farther today than the authors did in this book, but of course that means the book is a bit simpler and probably more accessible than what I’ve done here. And there is, you know, the whole rest of the book dealing with all sorts of other topics. I’m still in the middle of reading it so I can’t give it a definite stamp of personal approval at this time, but neither can I say anything bad about it, so take a look and decide for yourself.

“Game Theory: A Critical Text” (Heap & Varoufakis). I found this to be a useful and fairly accessible introduction to Game Theory. My one warning would be that in the interest of brevity, the authors tend to define acronyms and then use them liberally in the remainder of the text. This makes it difficult to skip ahead, as you're likely to skip over a few key definitions, and then run into sentences which have more unrecognizable acronyms than actual words!