designates my notes. / designates important. / designates very important.
The two major components that games have (and a game need not have both) are the systemic and the agential: the explicit rules of the game, and the human element that can be roughly thought of as the metagame.
Games can be broken down into smaller and smaller units, from campaigns to matches to atoms, and then these smallest bits can be examined to understand the strengths and weaknesses of the macro game.
Many of the older games we know of today have changed a lot since their inception and have ostensibly “evolved the bad out” to arrive at what we see as classics today (chess, checkers, bridge, etc).
Computer games adhere to a great deal of the same principles applied to board games, but they also break a lot of those principles. For a few simple examples: finding people to play with is trivial compared to in-person gaming, computer games allow you to play alone against a simulated opponent that makes moves functionally instantaneously, and the adjudication of the rules is done by the computer. Many things are still the same, albeit physical vs. digital. For example, there is still graphic design, and there are “user interfaces” in both worlds (although we rarely would call the physical components a UI).
Many games can be broken down into a few simple categories, like brawls or races. Many multiplayer games are essentially 2-player games that are “glued together.” Further, many games are reskinned versions of one or several archetypical games like chip-taking. Political games are essentially the equivalent of chip-taking games, but unless you are trying to look under the hood, you might never make this connection. This is important because it lets you understand the game you are trying to design better by looking at it abstractly, without all the trappings that look/feel good but aren’t actually part of the game.
Sometimes the problems your game might face are very common and oftentimes strictly unsolvable. You can probably never get rid of kingmaking in multiplayer games, but you can design in such a way that incentivizes people to play toward a particular spirit of the game.
This leads well into the next part of the book where rules and understanding the games are discussed. In simple terms there are first-order rules that are needed to play and there are second-order (and beyond) rules, that are needed to do well. A great deal of enjoyment can be had beyond the simple first-order understanding. Many people play many games specifically to learn more about a game and improve their performance. Think about a rating system like the Elo in chess. Understanding how the pieces move in chess (first-order) is quite simple. Understanding which moves to make is so complex that there are libraries of books written on the topic. When learning these second-order rules you will often have some kind of gut feeling of what move you should make. These are heuristics and there is much joy found in climbing the heuristic tree for many people.
Interestingly, as you refine your heuristics, games can sometimes reach “tipping points” where they become “solved,” as with tic-tac-toe or nim, or they can almost transform into very different games. Take for example games with hidden catch-up mechanisms. The naive player may think they are very far ahead or behind: their heuristics are working on a simple pattern of how many “points” behind they are. After playing the game more and integrating knowledge of the catch-up mechanisms, you now have a revised heuristic that tells you that even if you are behind/ahead by a large number of points, the game is actually still very close. You will likely make different choices under your naive vs. nuanced understanding of the game.
The final part of the book (appendix notwithstanding) goes into more detail on some of these patterns that show up again and again in games. The simplest example is the rock-paper-scissors metagame. It shows up in so many places. In Magic: The Gathering you have aggro beats control beats combo beats aggro. In RTS games you have melee-ranged-flying units. The aforementioned catch-up mechanism and its opposite, the snowball effect, can make games feel more or less competitive or balanced depending on your understanding. The cost/effect pattern is also seen time and time again. Balancing this is difficult at best, and in many games there are after-the-fact balancing changes implemented. In a game like MTG this is quite disruptive because you need to ban/restrict cards or errata their text. In computer games this can be handled much more simply (though still with disruption) by patching the game.
However, computer games are different from other game genres in one very important way: they are almost all deliberately designed. There are card games and boardgames that have been designed by specific people, but always against a backdrop of well-known “classic” games that have evolved over time.
One can think of the distinction between designed games and evolved games as parallel to that between modern novels, with specific authors, and ancient oral poetry or folktales. Perhaps an even better parallel is the distinction between modern architecture and traditional architecture: as Christopher Alexander discusses in Notes on the Synthesis of Form,4 even very intelligent people can have trouble designing complex systems as good as ones that have evolved gradually over time.
Our feeling is that these classic games and sports, which have evolved through an unselfconscious process, are an especially good source of examples for the modern-day deliberate game designer. Many problems that crop up repeatedly in deliberately designed games have been “evolved out” of classic games. Indeed, for many characteristics one can go to modern games for examples of problems, and classic games for examples of solutions — not because ancient people were geniuses, but because the classic games that survive today have undergone a long process of evolution and of weeding out.
The first is orthogame,10 which we define as a game for two or more players, with rules that result in a ranking or weighting of the players, and done for entertainment. Explicit winners or losers, scores, or time to completion all count as rankings or weightings — the point is there is something explicit to tell you how well you’ve performed.
We’ll say a characteristic is systemic if it depends mainly on the game as a system (e.g., on the rules) and agential if it depends primarily on the player base. (We derive the latter term from the use one sees occasionally of the term agent to mean a player in a game; one can think of agential as a more euphonic version of “player-ential.”)
Characteristic: Length of Playtime
Note that the amount of time a game takes to play is not just a property of the game itself, but of the community that plays it — that is, length of play is agential.
Atom The smallest complete unit of play, in the sense that the players feel they’ve “really played” some of the game (e.g., two possessions in football, or one level in Donkey Kong)
Game What is conventionally thought of as the length of the game — a “standard” full round of play (most typically starting from a standard beginning state and ending with the determination of a winner)
Session A single continuous period of play (e.g., an evening of play)
Campaign A series of games or sessions that are all linked in some way (the weekly poker game at Randy’s place, a match, or an ongoing paper role-playing game)
Match A series of individual games commonly agreed on as the correct amount of play in order to arrive at a satisfactory determination of the victor. For many games this is merely “best two out of three” or similar grouping.
Positional heuristics - These are heuristics that evaluate the state of the game — that is, tell you who’s winning. Examples include seeing how many people are ahead of you (and by what distance) in a race, or counting the point values of the pieces on each side in chess.
Directional heuristics - These are heuristics that tell you what strategy22 you should follow. Examples include rules like “run as fast as you can once you see the finish line” or “try to control the center squares.”
When players can target other players in an arbitrary way that differentially affects their game states, we refer to this as politics. The higher the degree of interaction (ability to affect each other’s game state) and the higher the ability to target specific players, the more political the game is.
Calling a game political, calling it a chip-taking game, or calling it a voting game are all broadly similar. A game with few restrictions on the amount or targeting of trading falls into this category as well. Political is the most general term; chip-taking emphasizes the ability of players to damage the positions of other players by targeting; voting emphasizes the fact that players are choosing a winner according to their tastes rather than that the game process is choosing a winner based on some combination of that winner’s skill and whatever luck may be inherent in the game.
One danger to watch out for when designing or modifying a game is the impulse to fix things (problems in the other characteristics) by adding rules. Sometimes adding rules is unavoidable, but in general it’s better to look for other fixes. Repeated little extra bits of rules, each added to fix a different problem, can add up to a quite messy game.
Counterintuitively, making a rule against some behavior and giving it an in-game penalty can in some sense legalize it. A delay of game call in football or kicking in basketball are activities against the rules, and yet commonly used in play. The rules in actuality are not “don’t do x” but rather “when you do x, y happens." This distinction is important and has implications when designing rule systems. For example, the introduction of a rule that forces a move after a certain amount of time has passed could be intended to speed up a game. Instead, however, the rule might lead to players using all of that allowed clock time, thus actually increasing the average time it takes to play a game. The normal agential pressure that would be exerted in speeding up casual play can be short-circuited by a poorly thought out rule intended to have the same effect.
Snowballing and catch-up
At any moment in a game, we can write down each player’s chance to win. Typically those chances will start out more or less equal (for a fair game), change somewhat over the course of time, and then gradually shift toward 1 (for the winner) and 0 (for everyone else). 10 If we write the various chances for each player in a row, say for a four-person game that lasts ten turns, we might see something like (0.25, 0.25, 0.25, 0.25) at the start of turn 1, (0.3, 0.2, 0.15, 0.35) on turn 4, and (0.9, 0.03, 0.03, 0.04) on turn 9. We’ll call this list of numbers a state vector. Note that the sum of the numbers is always 1.
If there’s no chance 11 involved at all (i.e., the game is completely determined), then the vector will look like (0, 0, 0, 1, 0) — all 0s for the players who have no chance and 1 for the player who is certain to win.
In a two-player game, if I am 70 percent likely to win at a certain point (perhaps it’s a simple race game and I am eight squares ahead), and then later I am only 60 percent likely to win (perhaps you’ve rolled well and I’m only five squares ahead now12), then you have caught up. If instead later I am 95 percent likely to win, then that’s a snowball situation relative to the earlier game state.
What we’re really looking at is the spread of the state vector: as it spreads out, the game is snowballing toward its conclusion. If the player who is behind catches up, the vector will be less spread out. The standard way of defining spread is by the variance:13 the expected sum-of-squares deviation from the average. The average is just the sum of the values divided by n, so for a state vector that’s 1/n. Thus for a state vector
$p_1, \ldots, p_n$, the variance is

$$\frac{(p_1 - 1/n)^2 + \cdots + (p_n - 1/n)^2}{n}$$
This number represents how far the state $(p_1 ,… ,p_n)$ is from the “most caught up state” $(1/n, …, 1/n)$. Naturally, the state $(1/n, …, 1/n)$ has the smallest possible variance, namely 0. The largest possible variance 14 belongs to vectors like $(0, 1, …, 0)$ — the most extreme snowball states.
So we’ll define a catch-up event as one that decreases the variance of the state vector, and a snowball event as one that increases the variance.
The square root of the variance is called the standard deviation, and is another common measure of spread. The variance is more convenient for our purposes — it’s easier to compute with — but it conveys essentially the same information.
14 The largest possible variance is $(n - 1)/n^2$, not that it matters for this discussion.
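The variance-based definitions above can be sketched directly in code. This is a minimal illustration (the function names are my own), using the four-player state vectors from the text:

```python
# Sketch of the variance-based definition of catch-up and snowball
# events. A "state vector" lists each player's current chance of
# winning; its entries sum to 1.

def variance(state):
    """Variance of a state vector: mean squared deviation from 1/n."""
    n = len(state)
    mean = 1 / n  # probabilities sum to 1, so the average is 1/n
    return sum((p - mean) ** 2 for p in state) / n

def classify(before, after):
    """Label a transition as a catch-up or snowball event."""
    dv = variance(after) - variance(before)
    if dv < 0:
        return "catch-up"
    if dv > 0:
        return "snowball"
    return "neutral"

# The four-player example from the text:
start = (0.25, 0.25, 0.25, 0.25)   # turn 1: variance 0
mid   = (0.3, 0.2, 0.15, 0.35)     # turn 4
late  = (0.9, 0.03, 0.03, 0.04)    # turn 9

print(classify(start, mid))    # "snowball"
print(classify(mid, late))     # "snowball"
# The maximum variance, (n - 1)/n^2, is reached at states like (0, 1, 0, 0):
print(variance((0, 1, 0, 0)))  # 0.1875, i.e. 3/16 for n = 4
```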
When a catch-up feature is put in, what is happening is, as we have stated, partially actual variance control and partially catch-up relative to a particular heuristic - for example, the lead in Mario Kart. The feature’s effect is one of muddying the heuristics, but as long as those heuristics don’t change, for all practical purposes the effect is real — the player who thinks he is far behind thinks he is catching up. The danger lies in players developing new heuristics, perhaps seeing that there is no catch-up but instead only a nonintuitive ranking of the leaders, and placing themselves back into the state the designer was attempting to avoid - namely, player dissatisfaction with their ability to come back from behind.
Catch-up features allow a nice first-order heuristic (score/position without the catch-up feature considered) and a more advanced second-order 16 heuristic. Since climbing the heuristic tree is a big part of the enjoyment of games, that’s no small thing.
One common attempt to solve the problem of catch-up in very long games is to use dynamic difficulty adjustment. This basically amounts to catching up the player invisibly whenever she falls behind, and catching up the AI if the player moves ahead. The problem is that it is rather like your spouse cheating on you: arguably fine if you know nothing about it, but liable to make you feel bad if you do find out about it, which eventually you will (at least in the case of games, given the Internet). Players who are trying to play well want to feel that if they do play well, they will be rewarded. This feeling is hard to come by if the game tries to ensure equal outcomes regardless of player skill.
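As a rough illustration of what dynamic difficulty adjustment does under the hood, here is a hypothetical sketch; the function name, scoring scheme, and step size are all invented for illustration, not taken from any real engine:

```python
# A minimal, hypothetical sketch of dynamic difficulty adjustment
# (DDA): the game nudges a hidden difficulty knob toward whichever
# side is behind. All names and constants here are illustrative.

def adjust_difficulty(player_score, ai_score, difficulty, step=0.1):
    """Return a new AI difficulty in [0, 1], nudged toward the loser."""
    if player_score < ai_score:      # player behind: ease off the AI
        difficulty -= step
    elif player_score > ai_score:    # player ahead: toughen the AI
        difficulty += step
    return min(1.0, max(0.0, difficulty))

d = 0.5
d = adjust_difficulty(player_score=10, ai_score=25, difficulty=d)
print(d)  # the game quietly "catches up" the losing player
```

The invisibility is the point, and the problem: nothing in the player's view changes, which is exactly why discovering the knob feels like cheating.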
However, such a mechanic — say the Green Shell in Mario Kart (which, being unaimed, typically is used against players who are close) — may not give large-scale catch-up. Instead, it may cause clumping: groups of players who are close together keep shooting each other, forming clumps, but one clump can’t affect another far-off clump (although occasionally a player will break away from one clump and push ahead or fall behind until pulled into the orbit of another clump). In this sense Mario Kart is almost exactly like a large bicycle race, with the Green Shell playing the same role as drafting: something that pulls together nearby vehicles but does not affect faraway ones. They are a catch-up feature within a given clump, but less so when viewed from the point of view of the race as a whole. 17
complexity/decision trees
The game arc is how the bushiness of the tree fluctuates throughout the course of the game. Commonly the tree will start out sparse, get bushier, and then become sparse again.
Luck can make it harder to climb the heuristics tree:
Heuristics involving probability tend to be more complicated.
It’s harder to learn any of the heuristics (even those not directly involving probability) — if you make a move and lose, was it a bad move, or did you just get unlucky?
Game | Amount of luck | Amount of skill
---|---|---
Poker | High | High
Chess | Low | High
Tic-tac-toe | Low | Low
Slots | High | Low
downtime
One particularly good example of this is Monopoly: the best thing that can happen to me - namely someone landing on my property and paying me lots of money - happens on my opponents’ turns. So I am excited to watch the turns of other players. 17
“Pure” busywork Completely mechanical operations that must be performed according to some deterministic algorithm. No choices of any kind are involved. Examples include shuffling and dealing, setting up the board, making change, or looking up results in a table.
Incomprehensible busywork Actions that involve gameplay choices that are completely opaque to the players and are made essentially randomly. Logically these are quite different from pure busywork, but the effect on the player is much the same. These are actions that must be performed for the game to continue, but that don’t involve any meaningful choices and cannot be done better or worse, 21 and hence seem like work unrelated to the play of the game.
Very low reward/effort ratio activities Not strictly speaking busywork, these sorts of activities might seem close to busywork to some players.
Scale of Intensity for Conceits
Conceits in a game can range from none at all, or a light conceit, all the way to full-blown simulation.
Sometimes an IP is deliberately designed to fit together with a game. This is fairly common for (nonlicensed) computer games, but less so for paper games. One notable group of exceptions includes a number of Japanese trading card games: Pokémon, Yu-gi-oh, and Duelmasters, for example. 14 These games are also notable in that there is a game inside the IP itself, with the game the player plays being a mirror of the game the characters in the IP play.
Done right, the presentation of the IP in various ways - books, comics, TV, various toys, and perhaps multiple games - can become powerfully reinforcing. Oddly enough, the dynamic here is not that different from the dynamic of sports, where a person who likes a sport might play, watch, and follow the “back-story” (personal lives of players, personalities of coaches, and so on), with all of these activities potentially supporting one another.
Here if Player 1 plays rock, and Player 2 plays paper, the payoff of -1, 1 means Player 1 loses $1 (or a victory point, or whatever you imagine the players are playing for) and Player 2 wins $1.
As an aside, rock-paper-scissors is zero-sum (what one player gains, the other loses) and symmetric (each player has the same options as the other).
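Both properties can be checked mechanically. A small sketch, with the payoff table written out cell by cell as in the text:

```python
# The rock-paper-scissors payoff matrix, with each cell written as
# (Player 1's payoff, Player 2's payoff). A sketch to check the two
# properties mentioned in the text: zero-sum and symmetric.

MOVES = ("rock", "paper", "scissors")

PAYOFF = {
    ("rock", "rock"): (0, 0),
    ("rock", "paper"): (-1, 1),      # paper covers rock
    ("rock", "scissors"): (1, -1),   # rock crushes scissors
    ("paper", "rock"): (1, -1),
    ("paper", "paper"): (0, 0),
    ("paper", "scissors"): (-1, 1),  # scissors cut paper
    ("scissors", "rock"): (-1, 1),
    ("scissors", "paper"): (1, -1),
    ("scissors", "scissors"): (0, 0),
}

# Zero-sum: in every cell, the two payoffs cancel out.
zero_sum = all(a + b == 0 for a, b in PAYOFF.values())

# Symmetric: swapping the players' moves swaps their payoffs.
symmetric = all(
    PAYOFF[(m1, m2)] == tuple(reversed(PAYOFF[(m2, m1)]))
    for m1 in MOVES for m2 in MOVES
)

print(zero_sum, symmetric)  # True True
```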
A married couple wishes to spend an evening together. Spouse 1 prefers opera; Spouse 2 prefers football. But each cares mainly about spending the evening together. If for some reason they had to decide what to do without communicating, what should happen?
Note there are two Nash equilibria, Opera/Opera and Football/Football: given that the couple has landed on either one, neither spouse at that point wishes to move off. But which equilibrium? Spouse 1 hopes for the first; Spouse 2 would prefer the second. In the absence of some sort of prior coordination (i.e., if they really are deciding independently and simultaneously as we’ve been assuming) it’s hard to see how they can decide. If they each go for their own favorite, they could wind up separate and unhappy; if one decides to pick the other’s favorite, there’s no way to know if the other has done the same, leading also to an evening spent separately. This game is a very simple model for a problem in coordination.
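The two pure Nash equilibria can be verified by brute force. A sketch, using illustrative payoff numbers that are an assumption here (each spouse most wants to be together, with a small bonus at their own preferred event):

```python
# Brute-force check of the pure Nash equilibria in Battle of the Sexes.
# payoffs[(choice1, choice2)] = (Spouse 1's payoff, Spouse 2's payoff);
# the specific numbers are illustrative assumptions.

PAYOFFS = {
    ("opera", "opera"): (2, 1),
    ("opera", "football"): (0, 0),
    ("football", "opera"): (0, 0),
    ("football", "football"): (1, 2),
}
CHOICES = ("opera", "football")

def is_nash(c1, c2):
    """Neither player can gain by unilaterally switching choices."""
    p1, p2 = PAYOFFS[(c1, c2)]
    no_better_1 = all(PAYOFFS[(alt, c2)][0] <= p1 for alt in CHOICES)
    no_better_2 = all(PAYOFFS[(c1, alt)][1] <= p2 for alt in CHOICES)
    return no_better_1 and no_better_2

equilibria = [(c1, c2) for c1 in CHOICES for c2 in CHOICES
              if is_nash(c1, c2)]
print(equilibria)  # [('opera', 'opera'), ('football', 'football')]
```

Note that the check confirms both equilibria but says nothing about which one the couple should coordinate on, which is exactly the problem the game illustrates.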
Some birds (or perhaps nations) have a choice between two strategies: be very aggressive in confrontations over some resource like food, or give way. Two hawks will have a terrible destructive fight that leaves them both much worse off. Two doves will share food and each gain a little. A hawk that meets a dove will frighten it off and gain the whole meal for itself. (This game can also be thought of as the teenage driver’s game of “chicken” with Hawk as Drive Straight and Dove as Swerve; it sometimes goes by that name in the literature.)
Hawks and Doves is quite similar to Battle of the Sexes with its two Nash equilibria. The focus here, though, is on the desire to be the Hawk: each bird would like to be the Hawk if only it could be sure the other bird would be the Dove. But both birds being the Hawk is a disaster, whereas both birds being the Dove is fine.
It’s hard to make much sense of this until you start thinking of populations. In a population of Doves, a single Hawk will do very well (and presumably will start reproducing more and more, leading to more Hawks). In a population of Hawks (constantly fighting and hurting each other) a single Dove will do very well. With some work, one can compute what the end percentage of Hawks and Doves should be (i.e., the percentage at which a new Dove or a new Hawk entering the population has no special advantage over its opposite number). The Hawks and Doves game is a foundational example for the application of game theory to evolutionary biology.
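The end percentage can be computed with the standard textbook Hawk-Dove payoffs, which are an assumption here (the text gives no numbers): a resource worth V, a Hawk-Hawk fight costing C, with C > V.

```python
# Sketch of the equilibrium Hawk fraction in the Hawk-Dove game, using
# the standard textbook payoffs (an assumption, not from the text):
#   Hawk vs Hawk: (V - C) / 2    Hawk vs Dove: V
#   Dove vs Hawk: 0              Dove vs Dove: V / 2

def hawk_payoff(p, V, C):
    """Expected payoff to a Hawk when a fraction p of the flock are Hawks."""
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p, V, C):
    """Expected payoff to a Dove against the same population."""
    return p * 0 + (1 - p) * V / 2

# At the stable mix, Hawks and Doves do equally well; solving
# hawk_payoff(p) == dove_payoff(p) for p gives p = V / C.
V, C = 2, 8
p_star = V / C
print(p_star)  # 0.25: a quarter of the population plays Hawk
print(hawk_payoff(p_star, V, C) == dove_payoff(p_star, V, C))
```

At that mixture a new Hawk or a new Dove entering the population has no advantage over the other, which is the "end percentage" condition described above.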
References
Arranged roughly in increasing order of difficulty.
Game Theory: A Very Short Introduction, by Ken Binmore. The “Very Short Introduction” series is quite good in general, and this intro to game theory is particularly nice. Lots of good examples, very readable.
Game Theory and Economic Modelling, by David M. Kreps. Another fairly accessible text. Don’t let the Economic Modelling in the title scare you; it’s not a heavily mathematical text, and it’s fairly user-friendly. It’s more a general text than an economics-specific one. A little more academically oriented than Binmore, but not by that much.
Luck, Logic, and White Lies: The Mathematics of Games, Part III, by Jörg Bewersdorff. An excellent survey that applies three different kinds of math to games: basic probability theory, combinatorial game theory, and Von Neumann game theory.
Evolution and the Theory of Games, by John Maynard Smith. Game theory is also used in biology to explain how the behavior of animals can evolve (the Hawk-Dove game is an example). This is the classic book by one of the founders of the field. Not too much math — very much focused on evolutionary biology.
The New Palgrave Game Theory, edited by John Eatwell, Murray Milgate, and Peter Newman. A collection of essays about game theory by and for economists (the New Palgrave is a series of books for economists on various econ-related subjects). The essays vary widely in accessibility - some friendly survey articles, some pretty hardcore stuff. The fact that the articles are all independent makes it useful for finding short bits on specific topics that interest you. A good place to get a survey of how game theory is used in economics.
Game Theory for Applied Economists, by Robert Gibbons. A good intermediate-level survey of game theory. A nice place to find careful and precise statements of definitions and theorems (if you’re the type to find such things helpful) but definitely requires some ability to read math.
Game Theory, by Drew Fudenberg and Jean Tirole. The standard reference in the field. A serious graduate-level text.
Theory of Games and Economic Behavior, by John von Neumann and Oskar Morgenstern. The book that started it all back in 1944, and still in print (in fact, a commemorative edition was published in 2004). Not an easy read, in part due to its density and in part due to the fact that notation has changed since the 1940s, but still a classic.
Dating back only to the 1970s, combinatorial game theory is more recent than Von Neumann game theory. Combinatorial game theory was invented primarily by John Conway (also known for the 0-player game of Life). It’s still an active, albeit small, field of mathematical research. No real-world applications of the theory have been found (other than applications to games themselves), so not many people outside of mathematics know much about it. It’s approachable enough, though, that recreational mathematicians (who are often the sort of people who like games anyway) have some interest in it.
The big idea of the theory is that game positions or states can be analyzed in a way that lets you think of a game state as a kind of generalized number. The Conway game theory is a type of math for these generalized numbers (sometimes called “surreal numbers,” a term invented by Donald Knuth, who also wrote a book by that title on the theory). If you can decompose the board position into pieces, evaluate the (surreal) numerical value of the pieces, and know how to do (surreal) addition, you can win games that would utterly baffle people who don’t know the theory. 6
Winning Ways for Your Mathematical Plays (4 vols.), by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. Introduces the theory using lots and lots of example games. If you are wondering how the theory might apply to some particular game you like, this is the place to look.
The Dots & Boxes Game: Sophisticated Child’s Play, by Elwyn Berlekamp. In-depth application of the theory to the classic childhood game of dots & boxes. Read this book and be an unstoppable dots & boxes force!
Surreal Numbers, by Donald E. Knuth. An odd but fun little book about two people stranded on a desert island who decide to pass the time by constructing the surreal number system. Good if you are a Knuth fan, or like the Socratic style of presentation.
Luck, Logic, and White Lies: The Mathematics of Games, Part II, by Jörg Bewersdorff. Part II of Bewersdorff’s three-part survey of the math of games (mentioned in the previous appendix) is on combinatorial game theory; it also covers some other topics like computer chess.
Lessons in Play: An Introduction to Combinatorial Game Theory, by Michael H. Albert, Richard J. Nowakowski, and David Wolfe. A textbook (complete with exercises and answers in the back) at an undergraduate level. Good for those who found the casual approach of Winning Ways more confusing than friendly.
On Numbers and Games, by John H. Conway. A more formal treatment of the subject. Good for those who want precise definitions, statements and proofs of theorems, and a generally mathematical approach. More abstract and more advanced than Lessons in Play.
Mathematical Go: Chilling Gets the Last Point, by Elwyn Berlekamp and David Wolfe. A fairly involved application of the theory to a very complicated game. Good if you want to see the theory applied in a big way to a hard problem.
Games of No Chance, More Games of No Chance, and Games of No Chance 3, edited by Richard J. Nowakowski. Separate collections of papers on combinatorial game theory. Good if you want to see a broad scope of applications of the theory and if you want to get an idea of the state of (relatively) current research. Most areas of math are so dense and complex that it’s basically impossible for a layperson to get to current research, but combinatorial game theory is young enough that it’s almost approachable for a smart and hard-working amateur.