A toss of a perfect coin is pure randomness. And by perfect, I mean an unflawed coin, ideal conditions, a perfect 50/50. Similarly, a game that involves only rolling one perfect dice is totally random: each of the six sides has a 1/6 chance of coming up.
Photo by Riho Kroll on Unsplash.
Games with multiple dice are still random, but within predictable patterns: as the Central Limit Theorem would predict, a pair of 6-sided dice is most likely to add up to 7, simply because more combinations of the two dice make 7 than any other total.
Each dice is random, but some rolls are much more likely than others.
So the chances of every possible value coming up are not equal — but random within a set of probabilities.
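You can see those probabilities just by enumerating all 36 equally likely combinations of two dice. Here's a quick Python sketch:

```python
from collections import Counter

# Tally every equally likely outcome of rolling two six-sided dice.
totals = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for total in range(2, 13):
    ways = totals[total]
    # 7 comes up 6 ways out of 36, about 17% of the time; 2 and 12 only once each.
    print(f"{total:2d}: {ways}/36 ({ways / 36:.1%})")
```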
When I say “random” I mean “eventually random, accounting for all the probabilities across all the possible things you can control, such as who goes first or how many dice there are.” If everyone’s betting on the results of a two-dice roll, and everyone knows the relative probabilities and accounts for them, it’s “random” within that context.
The most boring game in the world
Imagine a game with two identical tracks of squares stretching off into the distance.
(I should note that I mean competitive, zero-sum games here, in which players can only win, lose, or draw, rather than the gamified play of an Animal Crossing or Minecraft. They didn't have those 4,500 years ago, which will be relevant in a bit.)
You and your opponent take turns rolling a perfect dice and moving that many spaces ahead. Once a certain number of spaces have been covered, whoever is in the lead wins.
The most boring game in the world.
This is purely random; or rather, it is predictably probable. It depends (a very tiny bit) on the number of tiles in the lane, the number of sides on the dice, the number of dice, and who goes first.
How likely you are to land on each square on your first roll of two 6-sided dice.
At this point, I bumped up against my lack of coding skills and matrix algebra, so I asked Paco Nathan and Eric Packman (my Managing Bandwidth co-author and Coradiant co-founder) for advice.
That is a relatively simple formula, which can be calculated. Eric explained that one way to do it is a Markov Chain: You make a 2-dimensional grid of board positions and probabilities of landing on a given square next turn; then you do a two-dimensional set of multiplications, called the dot product, across the results; you repeat for each successive turn; and you add everything up. Paco observed that this is essentially what’s going on with reinforcement learning, an AI technique that learns the most “rewarding” course of action based on computing probabilities across many iterations and many different inputs.
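Here's a minimal sketch of that Markov-chain approach in Python. This is my reconstruction, not Paco's or Eric's code, and it assumes a 28-space track and two six-sided dice to match the charts in this post:

```python
import numpy as np

SIDES = 6        # assumption: two six-sided dice, as in the charts
N_SQUARES = 28   # assumption: a 28-space track, matching the chart below

# Probability of each possible roll total (2..12) with two dice.
roll_probs = np.zeros(2 * SIDES + 1)
for a in range(1, SIDES + 1):
    for b in range(1, SIDES + 1):
        roll_probs[a + b] += 1 / SIDES**2

# Transition matrix: from square i you move to square i + roll,
# clamped to the final square once you pass the end of the track.
T = np.zeros((N_SQUARES + 1, N_SQUARES + 1))
for i in range(N_SQUARES + 1):
    for roll, p in enumerate(roll_probs):
        if p > 0:
            T[i, min(i + roll, N_SQUARES)] += p

# Start on square 0, then apply the transition matrix once per turn
# (the "dot product" step). Because you only ever move forward, summing
# the per-turn position distributions gives the chance of ever landing
# on each square.
state = np.zeros(N_SQUARES + 1)
state[0] = 1.0
landing = np.zeros(N_SQUARES + 1)
for _ in range(N_SQUARES):        # far more turns than any game can last
    state = state @ T
    landing += state
    state[N_SQUARES] = 0          # stop tracking finished games

for square in range(1, N_SQUARES):
    print(f"square {square:2d}: {landing[square]:.4f}")
```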
But there’s another way to solve this, one that even my coding skills can handle. I didn’t have a chance to put them to the test: Paco put some Python code on Github in 17 minutes; and Eric wrote 49 lines of PHP code, had it play this very boring game a billion times, and counted how often the “player” landed on each square, in 19 minutes. I would have futzed around for hours.
Here’s the resulting probability of landing on each square.
How likely you are to land on each square in a 2-dice, 28-space game, playing a billion times.
In the graph below, which plots these probabilities on a line, you can see the spike around 7, because the first roll of two dice still has that central-limit peak; then there's another cluster around 17; then things gradually even out. Over a very long time, each square gets a roughly similar chance of being landed on (about 1 in 7, since the average roll of two dice is 7), as the initial spike spreads out across all the squares.
The probabilities of landing on each square.
So we have a boring, predictable game. To win even slightly more than average you have to be a jerk who offers to play but insists on going first, just to eke out an infinitesimally small advantage.
You could shark a game like this, but it would be awfully dull and not very profitable. Over time, it's just a random walk of winning and losing streaks, which ultimately comes down to who has more money in the bank. The house always wins.
Randomness is the opposite of agency
This is why random games, which don't afford any agency to the players, aren't very popular. If we don't get to make choices, we wonder why we bother playing. We could instead flip a coin.
We try to hide the pure randomness with complexity. We add chutes, and ladders, and roll again spaces, and more. Each of these hides the underlying randomness just a bit, giving us an illusion of narrative and a chance to catch up. But it’s not giving us agency; we are still at the mercy of the dice. We can crunch the numbers in advance. The end is preordained based on what will be rolled.
Snakes and ladders just delay the inevitable randomness. There’s no real choice.
Games with a known stalemate
By contrast, some games aren’t random. You can know every game state and how to respond. In these games, the fun actually comes from trying to take advantage of the opponent’s lack of knowledge.
Tic-tac-toe, for example, is entirely knowable. There are only a few basic kinds of move (corner, edge, middle), and each square can hold only three possible values (an empty space, an X, or an O.) It's easy to avoid losing by following a couple of simple rules. Once you know the right countermoves, you're enlightened. You'll always be able to at least stalemate the game. You'll never lose.
This also means that two enlightened opponents will always draw. Someone who knows the trick will either win (opportunistically, against an unenlightened opponent who makes a mistake) or draw (against an enlightened one.) A novice, facing an enlightened player, can only lose or draw.
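That claim is small enough to check by brute force. Here's a minimal Python minimax sketch, just an illustration rather than any particular published solver, which searches every possible game and reports the value of the empty board under perfect play:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position for X with perfect play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    results = [value(board[:i] + player + board[i + 1:], nxt)
               for i, cell in enumerate(board) if cell == ' ']
    return max(results) if player == 'X' else min(results)

print(value(' ' * 9, 'X'))   # prints 0: with perfect play on both sides, it's always a draw
```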
Tic-tac-toe is actually a game of knowing whether your opponent knows the trick.
The only truly unknowable thing about tic-tac-toe is whether your opponent is enlightened. It's a standoff, but we can also judge how likely they are to be: a four-year-old is unlikely to know the secret, so they're an easy mark.
That means you can enjoy tic-tac-toe on two levels:
Unenlightened: Competitive placement of X's and O's, when you don't know the trick;
Enlightened: Discovering whether your opponent also knows how to always tie a game, when you know the trick.
You could be a tic-tac-toe shark, if nobody were enlightened. But it is easy to be enlightened at tic-tac-toe, and to call out the shark, and to reveal the scam. As cons go, tic-tac-toe is pretty low on the list. Which is why there aren’t competitive tic-tac-toe leagues: Once the stakes get high enough, you can spot enlightenment a mile away.
Unknowable games
So there are purely random games (like dice rolling.) And there are knowable games (like tic-tac-toe) which, if you understand them, can always deliver a predictable outcome (such as a stalemate; or, worse, a design flaw in which a particular player will always win or lose by going first.) It's either random walks or tic-tac-toe: a game of chance or a game of stalemates.
On the other hand, a perfectly balanced game gives each player the same initial chance of winning. It gives them agency. A true game of skill. A fair contest. A Noble Game. But is there such a thing? Deep down, there’s nothing random about chess or Go (other than deciding who goes first.) They’re just ridiculously, unthinkably complex.
Imagine for a moment that both chess players had instant, perfect knowledge of every single possible next move and all of its consequences. Forget, for this moment, that there are more possible combinations in chess than there exists time to compute them. If two such players existed, one of two things would happen:
Every game between two perfect players would end in a tie, like tic tac toe does; or
A certain player (the one who went first, or the one who went last) would always lose (in other words, the other player has a series of moves that, once known, can never be countered.)
Claude Shannon, by Konrad Jacobs
To be so perfect a chess player, you would have to process an impossibly large amount of information. Like, really large. Claude Shannon, one of the founders of information theory, tried to figure out how many possible games of chess there were (his estimate is referred to as the Shannon Number.)
Dutch computer scientist Victor Allis, who has devoted much of his career to making AIs that can play games like Connect Four, estimates the game-tree complexity of chess (every possible sequence of moves) to be at least 10^123, "based on an average branching factor of 35 and an average game length of 80." By comparison, the number of atoms in the observable universe, to which this figure is often compared, is roughly estimated to be 10^80.
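The arithmetic behind that figure is easy to sanity-check: a branching factor of 35 over 80 moves means roughly 35 to the 80th power possible sequences.

```python
import math

# 35^80 = 10^(80 * log10(35)); the exponent tells you the order of magnitude.
print(80 * math.log10(35))   # ~123.5, i.e. 35^80 is on the order of 10^123
```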
It sure is hard to be good enough.
So there are random games, and there are knowable games. It’s just that knowing some of them (chess) takes more than the universe to achieve, so we treat them as unknowable, strategic games for the purpose of gameplay. In the absence of knowable surefire ways to stalemate, we resort to heuristics, intuition, and the mastery of patterns.
Continuing our thought experiment: If you’re good enough, knowable games are all stalemates (like tic-tac-toe, but with vastly more possible variations.) Checkers, Chess, Go, Mancala, and many more games fall into this category.
Combining randomness and complexity
Many games — backgammon, Bridge, and so on — include both randomness, and unthinkably complex variations. They are more about how you respond to random, transient advantages or disadvantages—good or bad dice rolls; a terrible hand—than about testing your relative enlightenment against that of another.
Let’s call this class of game a True Game, in contrast to a Random Game (no agency, like flipping a coin) or a Stalemate Game (knowable outcomes, like tic-tac-toe.) A True Game requires both randomness and unknowable outcomes. And it exhibits emergent behaviour, as players realize the deeper, underlying, unknowable Stalemate Game hiding within the Random Game.
The Royal Game of Ur
I watched a British Museum piece on the Royal Game of Ur recently. The game was invented around 2500 BC, and for whatever reason modern civilization lost it; historians had to decipher the rules from cuneiform tablets to be able to play it again. But back then, this thing was as common as poker or checkers is today.
The video features a total novice (Tom Scott) and a master (Irving Finkel.) I find it absolutely entrancing.
(Go watch it. It’s amazing.)
Finkel, the teacher, began by stating a few simple rules: How you roll; how you move; how you win; what certain squares do. He explained little else (though there’s plenty of nerdy trash-talk.)
At first blush, this seemed to be a variant on our roll-and-race example. A Random Game. One that was, ultimately, like Snakes and Ladders: Complicated, but still a roll of the dice. The kind of game where you don’t strategize against others, but instead spend your time wishing out loud for the dice roll that will most benefit you as you cast your dice upon the table. “Come on, daddy needs a four.”
But there is a second kind of game hidden inside the Royal Game of Ur. A nonrandom, knowable one, like chess. Backgammon, and the Royal Game of Ur, are True Games, because they have enough randomness that a novice can win, but enough strategy that an expert can recover from a disadvantageous run of rolls.
I never understood the appeal of Backgammon. Some folks do, apparently. Photo by Josh Pepper on Unsplash
Anyway, back to the video.
Soon, the novice (Scott) was talking about the Royal Game of Ur like it was a strategic game. It wasn’t a random race at all. His language changed. Unprompted, without ever having heard the teacher use the expression, the student spoke of “your defense,” instead of “advance three paces.” He used terms like “then I’m vulnerable.”
He was playing a different game from the one the teacher had taught: one he had inferred, and for which, unbidden, he had devised metaphors that made perfect sense to the teacher.
The teacher didn’t tell the student about this second, hidden, deeper game. The gameplay was balanced so evenly on the fence between random and strategic that you could watch the transition from random to knowable-but-complex happen. The deeper, second-level gameplay emerged before your eyes.
As this happened, the student became far more engaged. It was fascinating to see him realize, gradually, “nope, I can safely not worry about who goes first, or being sharked by someone who knows the underlying pattern and can trick me. I have a fair chance here. This is a meritocracy.”
It’s wonderful.
Make of your life true games
I’m not really sure why I felt compelled to write this, other than that I watched a cool video and went down a mental rabbit-hole of stats and game theory. But on reflection, there’s a deep lesson about fairness and play here, which probably matters to everything from negotiation to profiteering to politics to social safety nets to late-stage capitalism to communal resource sharing:
In the early stages of playing a game, we are making a judgement about whether something is:
A fun battle of wits.
A rigged game preying upon those who don’t know the trick.
Or a complete waste of time for which you may as well flip a coin.
Randomness is exhausting, and robs us of agency, denying us well-deserved merit. Rigged games remind me too much of corruption, and the predatory externalities of late-stage capitalist trickery. No, I want simple starting conditions that bloom into infinite possibilities. For me, the best life—and the best society—is made up of True Games.
John Conway, the mathematician who devised the cellular automaton known as the Game of Life, died this month of COVID-19. https://www.theregister.co.uk/2020/04/14/john_conway_obit/
RIP, Dr. Conway.