Gamers often discuss the concept of depth - and along with it strategy, tactics, and complexity - with a great deal of fervor. I am guilty as charged. For a while now, I have been ruminating on the nature of depth and how I have come to understand it. I’m driven by an interest in putting my finger on what qualities in games tend to relate to depth; how our understanding of depth applies differentially to the concepts of strategy and tactics; and how the ideas of uncertainty and complexity play into the greater discourse on depth.
Earlier in my investigation of depth, I asked a few questions in this thread, and received many insightful replies. I will be quoting and referencing a number of the responses in the course of this blog post. But, I believe I’ve arrived at a point where I can put forth some basic hypotheses that summarize my thinking on the matter. And as with all my blog posts, this is merely the start of a new thread of inquiry - I’m interested to hear your reactions and how you may have been, or continue to be, searching for depth yourself.
Broadly speaking, my understanding of “depth” is this:
Depth is a function of the layering of heuristic understanding necessary for effective play.
I’m currently reading “Characteristics of Games” and one of the three principal characteristics is heuristics (the others being playtime and number of players!). Merriam-Webster defines heuristics as “involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods.”
The relevance for games is that heuristics reflect the levels of learning and expertise (i.e. skill) gained in a particular game as a consequence of playing it. One could also read strategy articles and gain greater heuristic understanding without having to find things out in-game; but that is another topic for debate.
Regardless, the heuristics themselves involve the process of interpreting the board state, evaluating legal moves, choices, and decisions, performing those moves and then gauging the resulting impact and feedback on whether the choice pushes you towards victory (or not).
Suffice it to say, games with more layers of heuristics - that is, more that needs to be interpreted and evaluated - will tend to be games with greater depth. Consider two games with no randomness (other than who goes first): Tic-Tac-Toe and Chess. The heuristics of Tic-Tac-Toe are thin, and most adults can play optimally, leading to a cat’s game (a draw), with very little effort. Chess on the other hand has a very thick pile of heuristic layers that yields great differentiation in player skill levels. Even more importantly, the heuristic layers at the forefront of an advanced Chess player’s mind are likely not even perceived or acknowledged by a player new to the game. They haven’t “uncovered” those heuristic layers yet and are only playing the game at the most basic and superficial level.
At this point in my thinking, I had many questions. What constitutes a heuristic layer? Are strategic games inherently deeper than tactical games (or the other way around)? What role does luck play in defining or bounding depth (and heuristics)? What is complexity and how does it tie into heuristics? How might we measure depth empirically and/or form hypotheses about depth in a given game?
The Continuum of Strategy + Tactics
Inevitably, discussions of depth end up touching on the perennial debate about strategy and tactics, and where the line between the two falls (if there is such a line). With regard to depth, I think many people jump to the conclusion that games with a strategic emphasis are de facto deeper than those with a tactical focus; but I am not so sure. For reasons we will explore later, I think games anywhere on the spectrum from strategic to tactical have the potential to be deep games.
Rarevos provided some interesting thoughts regarding the differences between strategy and tactics. He notes that some people distinguish strategy and tactics on the basis of timing: strategy is long-term and tactics are short-term. However, he proposes another outlook, which is that strategic decisions are those that require more thought and set a direction for the future, while tactics are about execution and on-the-fly adjustments. Yet another way of approaching it is to think of strategy in terms of “opportunity creation” and tactics in terms of “reactionary moves.”
I summarized my own thoughts on the tactics/strategy debate as follows:

Tactics:
- Tend to focus more on immediate and short-term opportunities;
- Tend to emphasize the execution of actions to support broader goals and objectives;
- Tend to manifest as actual changes to the board state.

Strategy:
- Tends to focus on long-term or game-winning opportunities and the pathways to get there;
- Tends to concern itself with uncertainty and the planning of actions;
- Tends to be decoupled from the board state; the strategic disposition of players “is in their head.”
What I found interesting is whether we can safely say that ALL actions that we take on the board are tactical in the sense that they pertain to physically altering the board state. The reasoning behind the move might be purely tactical in nature or might be driven by a higher level strategic objective, but the move itself is tactical in execution. If this is the case, what is the nature of a strategic decision if it isn’t coupled with board-state changes; and specifically, what kinds of games or situations provide more strategic level thinking?
A New Concept: Strategic Efficacy
I spent some time contemplating many of the games I was most familiar with in an attempt to understand what part of a game’s heuristics embodied the “strategy”. I slowly came to an idea that I’m now calling “strategic efficacy”. Efficacy is defined by Merriam-Webster as “The power to produce an effect.” So strategic efficacy relates to decisions in the game at a broad, long-term level that have an effect or impact on the direction of the game.
In thinking about various games, I developed the following hypothesis:
Strategic efficacy in a game is proportional to the amount of future control you can exert over domains of the game.
That’s a bit of a mouthful. By “future control” I mean the ability or capacity to influence the future board state based on current decisions. By “domain” I mean any particular aspect of the game that plays into your long-term path to victory. The more critical the domains in the game are to victory, and the greater control you can exert over them (whether through exclusive ownership, conflict, or something in between), the greater the potential for strategic efficacy. Said plainly: the more your choices now matter in the long term for determining your victory, the more strategic efficacy can exist in the game.
Let’s consider some examples. A game that I think has a modestly high strategic efficacy is Tigris + Euphrates. Players make an investment by locating their leaders in a particular kingdom and playing tiles to both score points and increase their strength in that region. Players gain increasing degrees of control over these domains (kingdoms) and can use that power in the future to increase their control even further and extend into other domains (i.e. conquering an adjacent kingdom). There is a feedback effect at work on the strategic level. Tigris + Euphrates does have random tile draws, which play into the tactics and timing of what you can plausibly expand/attack, and might suggest a shift in strategy, but that is separate from the domain (kingdom) control you have already established on the board. How you use the tiles is informed by the strategy.
Compare this to QWIRKLE, which incidentally also has a hand of tiles and a shared board space, yet is a game I feel has low strategic efficacy. Why is this? In QWIRKLE, you have very little control over the evolution of the board state. Any other player can add onto any line, and once tiles are played they are not owned, controlled, or under the influence of an individual player. Certainly there is strategy in terms of long-term hand management and playing the odds/risks of someone else finishing the line (more on this later); but once you play tiles they cease to be yours and provide little guarantee of future value. At best in a 2-player game, an open line gives you a 50-50 chance of closing it in the future by drawing the right tiles. There is little control offered by this situation because there is little control over the “domains” of open positions on the board.
Boiled down, a game with relatively more strategic efficacy provides more potential for your actions and choices to influence future domains and the board state. Think of strategic decisions as investment choices: implementing the decision requires resources, with the expectation that it will pay off in the future.
Overall, this idea is not meant to imply that short-term tactical choices are not deep. Quite the opposite. Tactical choices in a long, high-strategic-efficacy game are critical to actually realizing that potential efficacy. In games (like QWIRKLE) that have less long-term strategic efficacy, the tactical choices ARE the primary decision points in the game; and there can be a great deal of depth within the tactical realm.
Look-Ahead and Strategic Horizons
The matter of timing is still important for understanding depth. I think about timing in two ways: look-ahead and strategic horizon. Gamers often talk about “look-ahead” in games. I tend to view look-ahead as an action planning activity. If I am planning my moves in Chess (which I’m not all that good at, mind you) and I have a goal of gaining board control somewhere, look-ahead is the process where I might play out the next few moves, and my opponent’s moves, in my head to help visualize future board states. The look-ahead activity is principally focused on envisioning a sequence of tactical decisions.
“Strategic Horizon,” by contrast, is about long-term and intermediate-term goal setting and is inexorably intertwined with the scoring mechanics of the game. Many games have terminal-length strategic horizons along with many mid-term horizons. For example, in Chess the terminal strategic goal is capturing the opponent’s king. In Carcassonne (Hunters & Gatherers is the flavor I’m familiar with) the choice of which hunting grounds to permanently place a hunter in is a terminal-length decision, as you don’t score the points until the end of the game and can’t undo your placement. Placing a gatherer in a large forest is a more mid-term objective: you’ll likely close the forest and score points, but it may not be for a few turns, and you face opportunity costs for the decision (i.e. you can’t place that gatherer somewhere else to score points until the forest is closed).
So the strategic horizons in a game are essentially how far out you are setting goals over the course of the game; while look-ahead is planning and visualizing the tactical actions needed to get you there.
A game that captures this difference really well for me is Taluva, especially in a two-player game. The winning objective in a two-player game is generally met when a player builds all of two of the three types of buildings. There are four strategic horizons that exist: (1) the objective of building all huts; (2) the objective of building all temples; (3) the objective of building all towers; (4) the objective of preventing your opponent from doing 1-3. These are all mid-term-length objectives in my mind, and one can work incrementally towards them over a few turns. The overarching strategic goal helps dictate which of these sub-objectives you are in the best position to pursue at a particular moment. There is a fair amount of look-ahead involved in reaching these objectives, as there are often multiple opportunities that need to be examined and considered alongside what your opponent’s likely reactions will be. But the point is that strategic horizons and look-ahead are different creatures.
Summarizing the Strategy-Tactics Continuum
In terms of relating the above concepts to depth, I feel that these activities (strategic efficacy, look-ahead, and strategic horizons) each establish layers of heuristics that contribute to a deeper overall gaming experience. Games that do more of these things (e.g. lots of high-efficacy domains, both look-ahead and horizons) will tend to be deeper games.
Layers of Uncertainty
Another key issue in the consideration of depth is “uncertainty”, which can take a number of forms. Many people tend to feel that randomness acts as a brake or limiter on depth. I think it really has to be examined on a game-by-game basis. Poker is certainly a “random” game in the sense of the cards - but understanding the odds and intersecting that knowledge with the personalities and dispositions of the other players at the table is a real skill. At a high level of competitive play, poker is quite a deep game, as players balance odds and risk-reward decisions against the flow of money coming in and out of their pile and the layers of bluffing and reading going on.
So what role does uncertainty play in affecting depth? I often look at uncertainty in three ways; (1) randomness in outcome; (2) randomness in opportunity; and (3) player-driven uncertainty. My hypothesis is that each of these variations of randomness has the “potential” to add a layer of depth on to the game depending on its execution.
Randomness in outcome is what we most often associate with something like rolling to attack in Risk; where players commit to a decision and part of resolving that decision involves an uncontrollable element of randomness, e.g. rolling dice or drawing battle cards. One often comes across people cursing their luck in such situations, perhaps they failed an attack they thought they would win. The thing is, randomness in outcome is one manifestation of the “risk-reward” layer of depth. If your long-term strategy hangs on a die roll you can’t afford to lose; is that a problem with the dice or with your strategy? Perhaps you would have been better off pursuing a different strategy that didn’t hang your total success on a single die roll.
I raise this point because I think the heuristics around risk-reward evaluation are at the core of many games and can add a distinct layer of depth. Consider the game Illuminati (one of my favorites). The game features a high degree of attacking, the success of which hinges on rolling equal to or under your strength advantage on 2d6. The attacker and defender can each spend money to increase their strength; and an 11 or 12 always fails. So, as the attacker you can create pretty good odds (33 out of 36) of your attack succeeding by having a 10+ strength advantage - but there is always a chance of failure. How much risk do you take in maximizing your odds? If you spend all your money to get the best odds, what is the price of failure at that point? Can you afford to fail, having spent all your money on a giant attack and having none left to defend with? These are really tough and interesting choices driven by the randomness of outcome and the risk-reward factors at play. This kind of randomness works best when players have ways to mitigate or avoid it, or when there is enough randomness occurring that the results tend to even out, statistically speaking.
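To make those odds concrete, here is a quick sketch of the 2d6 math behind that 33-out-of-36 figure (the function name is my own illustration; the 11-or-12-always-fails rule is as described above):

```python
from itertools import product

def attack_success(strength_advantage):
    """Count the 2d6 rolls that win an Illuminati-style attack:
    a total at or under your strength advantage succeeds, but a
    natural 11 or 12 always fails regardless of advantage."""
    rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
    wins = sum(1 for r in rolls if r <= strength_advantage and r <= 10)
    return wins, len(rolls)

print(attack_success(10))  # (33, 36): only the rolls of 11 and 12 fail
```

Note that spending to push your advantage past 10 buys nothing: the three dice combinations totaling 11 or 12 fail no matter what, which is exactly where the risk-reward tension comes from.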
Randomness in opportunity pertains to random elements creating different opportunities up front, after which you then make a decision to act in some capacity. Poker is much this way: you are dealt your initial hand of cards and THEN have to decide how to play the hand, how much to bet, etc. Or Roll Through the Ages, where you roll your dice and then apply the results in different ways depending on your goal. Or Eclipse, where you draw a random tile as you explore new sectors of space. Each of these random upfront events helps shape the range of potentially good strategies or actions you might pursue.
At first, it may not seem like this contributes to depth (only to variety), but it can in one important way. Understanding the range of random opportunities that may be presented to you over the course of a game creates its own risk-reward layer coupled with opportunity costs. There is often a tension between taking maximal advantage of the immediate opportunity at the expense of remaining balanced and flexible enough to take advantage of future opportunities.
This is where the heart of the depth in QWIRKLE exists (a very tactical game, remember). You draw tiles randomly from a bag and hold a hand of 6 tiles. Often you want to try to make complete lines of 6 tiles (a QWIRKLE) because you score many bonus points for doing so. You may be able to get 4 or 5 tiles lined up and get a modest amount of points from that, but it leaves the line open for your opponent to finish, stealing the bonus. The risk-reward decision is about understanding your future opportunities relative to the potential tile draws you could expect and anticipating the odds of drawing them. The risk in playing your tiles for points now is that your opponent might draw the finishing tile and get more points. The risk in waiting is that you aren’t scoring as much now, and it isn’t certain that you’ll draw the tile you need later; but potentially you can get more points by waiting. QWIRKLE is a great example of a very tactical game with randomness creating a modest amount of depth through risk-reward evaluation.
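As a rough sketch of that evaluation (QWIRKLE’s bag holds three copies of each tile; the mid-game bag size and copies-remaining numbers below are illustrative guesses of mine, not from any rulebook):

```python
from math import comb

def p_draw_needed(copies_left, tiles_in_bag, draws):
    """Chance of drawing at least one copy of the tile you need
    when pulling `draws` tiles from the bag (hypergeometric)."""
    miss_all = comb(tiles_in_bag - copies_left, draws) / comb(tiles_in_bag, draws)
    return 1 - miss_all

# Suppose 2 of the 3 copies of the finishing tile remain in a
# 40-tile bag, and you will refill 3 slots of your hand.
print(round(p_draw_needed(2, 40, 3), 3))  # ~0.146
```

A 15% chance per refill looks weak, but over several turns of waiting it compounds, which is exactly the judgment call the post describes: score modestly now, or hold the line open and play the odds.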
Player-driven uncertainty is a third source of “randomness” in the sense that you can’t be sure exactly what your opponents are going to do. Whether by doing something unexpected, making a poor play, making an exceptional move, or having a mood swing, we can rarely count with absolute certainty on our opponents doing what we’d expect them to. A game with no randomness of outcome or opportunity automatically draws all of its uncertainty from the other player. A zero-luck 2-player abstract (Chess or Go) falls into this group; if it weren’t for the uncertainty stemming from the other player’s actions, you would hardly have a game at all. In large multiplayer conflict games, the player-driven uncertainty can be quite high as politics and negotiation enter the metagame. I tend to think of Race for the Galaxy as a sort of deductive game first and foremost, where I try to anticipate my opponents’ moves so as to leech off their play and not advantage them with my own - but there is a lot of uncertainty wrapped into the second-guessing that goes on. Having to navigate through the minds of other players can, obviously, be a very great source of depth.
As with the prior major topic (strategy + tactics), the three varieties of uncertainty can each contribute a pancake to the heuristic stack, driving up (or down?) the depth. Whether by forcing readjustments, requiring deeper risk-reward evaluations, or demanding that you deduce your opposition’s intentions, each of these can make for a deeper game.
The Deception of Complexity
When we think about the “weight” of a game, we are often talking about the aggregate of depth and complexity. The two are often related, but certainly we can have very deep but relatively simple games (Go, Checkers) as well as complex and intricate games that are not terribly deep (I won’t name names!).
I can’t help feeling that many games focus on creating complexity rather than creating depth. In such cases, the web of complexity the game projects can be a guise masking its lack of depth. However, such games may be successful when viewed not so much as a game but more as a puzzle - where understanding and navigating the complexity of the system “IS” the intended focus of the game, rather than having players navigate uncertainty and the strategic/tactical heuristics in light of an ever-changing board state.
I don’t mean to imply that such games aren’t games, but rather that the principal challenge offered in the game is exploring the mechanical systems and getting them to work, rather than having to deal with your opponents in a more direct fashion. You see desires for these two perspectives routinely here on BGG in people’s tolerances for direct interaction/conflict in their games - and it helps characterize the historic divide between euro and ameritrash games.
Moving on, however: my hypothesis regarding complexity is that complexity is primarily a function of the game’s mechanics and structure (i.e. “how” you do things in the game); and, more importantly, that higher complexity can make the heuristic layers “broader” in extent but does not automatically translate into a deeper game.
The Power of Goal Trees
Goal trees are a concept I first ran across elsewhere (Objective Driven Gameplay), and have discussed in this blog at various points. What I find fascinating and compelling about them is that they provide a map (or flowchart) depicting the complexity of a game. The goal tree considers first “how you win” the game, and then walks backwards through all the mechanical constructs AND the associated decision points.
The example below is a goal tree I made for Race for the Galaxy a while ago. Starting at the bottom, you obviously want to score VP’s, which comes from the VP value of the cards in your tableau plus VP tokens gained from Produce-Consume cycles. You can follow the goal tree upwards and see the major actions and decision points that you need to work through to get your VP’s.
The second goal tree (below) is for Taluva, a more abstract game. The principal action is drawing and placing one tile (Carcassonne-style) and then building one of three types of buildings. The complexity of the game is quite low, as there are only a few intermediate conditions attached to these rules, e.g. you can only have 1 temple and/or 1 tower per settlement, and you need 3+ huts to make a temple and 3+ tile height to make a tower. Yet, I find Taluva to be a relatively deep game due to the richness of its heuristic layers (discussed earlier).
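One way to experiment with this idea is to encode a goal tree as plain data. The sketch below is my own shorthand for the Taluva tree just described, not an official notation:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A node in a goal tree: a win condition or one of the
    mechanical prerequisites feeding into it."""
    name: str
    subgoals: list = field(default_factory=list)

    def size(self) -> int:
        # Total nodes in the tree: a crude proxy for the
        # "mechanical surface" of the game.
        return 1 + sum(g.size() for g in self.subgoals)

taluva = Goal("Place all of two of the three building types", [
    Goal("Build huts", [Goal("Expand an existing settlement")]),
    Goal("Build a temple", [Goal("Settlement of 3+ huts, no temple")]),
    Goal("Build a tower", [Goal("Spot 3+ tiles high, no tower")]),
])
print(taluva.size())  # 7 nodes in this small tree
```

Comparing node counts (or nesting depth) across such trees is one crude way to put numbers on the complexity contrast being drawn here.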
On the basis of complexity, it’s pretty clear that Race for the Galaxy is the more complex game of the two. Yet can we say which is really deeper from looking at the goal chain? I don’t think we can. But what we CAN probably say is that the larger (more complex) goal tree in Race for the Galaxy creates a more expansive mechanical surface over which the game is played and over which heuristic layers can be draped.
So what is the relationship between the goal tree (complexity) and depth? Here’s the crux of the hypothesis: Layers of depth manifest as a function of the “factors” that have a bearing at each decision point.
Whuff. What does that mean?
Let’s consider the Race for the Galaxy goal chain. The primary decision point in the game is which role/action card you will pick for the round. First, we consider the size of the decision space at this point, which is 7 for the 7 different role cards you could play. That decision space is compounded by the need to consider what each of your opponents’ likely selections will be and whether you can leech off their play and/or how your own play can minimize the benefit to your opponents. This is a layer of player-driven uncertainty heuristics piled onto the primary decision space. Second, you need to consider the relative costs, risks, and opportunity costs associated with the actions. If you take action X, what are you giving up? If you Explore to get more cards, what is the likelihood of getting a beneficial card if you are hunting for one? Do you have goods sitting at risk if you don’t play Trade? What cards are you going to discard as payment? What’s the opportunity cost of those? Essentially, there are many layers that need to be considered at the primary decision point.
Compare this against Chess. The goal tree is fairly straightforward (kill the King!), but the physical board space, the number of pieces on each side, and the number of potential moves create a huge decision space. Compound this with what your opponent may (or may not) do in response to your actions, and the decision factors start piling up very quickly.
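The pile-up can be put in rough numbers. Chess positions are commonly cited as averaging around 35 legal moves, so a back-of-the-envelope look-ahead count (ignoring pruning and transpositions) grows exponentially:

```python
def lookahead_lines(branching_factor, plies):
    """Number of distinct move sequences at a given look-ahead
    depth, assuming a constant branching factor."""
    return branching_factor ** plies

# ~35 legal moves per position: 4 plies (two full turns each)
# is already over a million candidate lines.
print(lookahead_lines(35, 4))  # 1500625
```

Human players obviously prune almost all of this with heuristics, which is precisely the point: the heuristic layers are what make the decision space navigable at all.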
Variety is the Spice of Gaming?
Another aspect of complexity manifests itself through variety (or breadth). Variety can be increased in a number of ways: varying start conditions, varying events, varying player powers/abilities, different sequences of events, etc. The key is that variety increases the overall complexity of the game by creating permutations of the goal tree. The fundamental game structure remains the same, but the specifics are all different.
Dominion is a game with great variety (breadth) as a consequence of all the possible setup conditions. Yet we are unlikely to conclude that any particular set of 10 kingdom cards is specifically deeper than another set; it is just different and requires re-formulating a goal tree in your mind based on the present cards.
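The numbers behind that variety are easy to check: the base set has 25 kingdom card types, and each game uses 10 of them.

```python
from math import comb

# 10-card subsets of the 25 base-set kingdom cards; expansions
# push this count up astronomically.
kingdom_setups = comb(25, 10)
print(kingdom_setups)  # 3268760 distinct setups
```

Over three million setups, yet each is a permutation of the same goal tree - breadth, in the terms used here, rather than depth.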
Race for the Galaxy has a modestly complex goal tree to begin with, but the overall complexity of the game is amplified by the sheer volume of card variations in the deck (most of which are unique cards). Taluva has relatively little variety in the scheme of things, as the basic progression in each game is reasonably similar; and despite growing a unique board configuration each game, the basic topological structure of the game is fairly consistent from game to game. It’s more like Chess in that regard, where the variety in the interplay is a consequence of different player choices leading to novel board states rather than the game handing you a different set of rules each time you play.
The Weight of a Lake
In my, sadly final, interaction with the late Tim Seitz, he made a great analogy. Tim said, “I see strategic space like the volume of water in a lake. Some lakes are very deep; some lakes are very shallow. But a shallow lake can still have as much water in it as a deep lake. If that makes any sense.”
Yes, I think that does make sense. I see the volume of water in the lake as a manifestation of weight, incorporating both depth and breadth. Depth is a function of the meaningful heuristic layers (uncertainty, strategy, tactics, etc.), and more layers make a deeper lake. Breadth is a function of complexity (goal trees, variety/breadth), and more varied, complex games have a bigger surface. The end result is that vastly different-seeming games might have a comparable weight when engaged with and played over many, many sessions. A player heavily exploring the various Kingdom combinations in Dominion is engaged with the game in much the same way as a Chess player might be in their consideration of look-ahead and board-state analysis.
I think this concept also ties into the idea of the decision space and how open versus constrained that decision space is. I feel that games with very open decision spaces (think in terms of a continuous and near infinite range of potential moves) tend to increase depth, while games with a narrow decision space and a reliance on pre-baked “paths to victory” tend to decrease depth and make for a shallower game. Yet, with sufficient variety in setup or in-game events, the latter type of “paths to victory” game can have great breadth and the overall experience can still be quite rewarding. I do tend to think of high breadth / low depth games being a tad more puzzle-like, but this is a matter of personal preference.
How Deep is the Rabbit Hole?
The final topic of this blog post is a consideration of how we might begin to evaluate depth in a consistent, potentially objective, manner across many games. This is a difficult challenge, to be sure.
When I have asked this question before, the most common response is that we can evaluate skill differences and win ratios as a way of determining how many potential “levels of experience” might exist in a game - and use that knowledge as a basis for drawing conclusions about depth. Yet, I find this approach problematic for a few critical reasons.
First, there are very few games played competitively in a structured format with a big enough sample size to be of use in characterizing other games. We can compare Elo ratings in Chess and talk about depth in Chess, but there aren’t Elo ratings for Twilight Imperium, last time I checked. Second, these rating systems may not adequately take into account randomness factors in games. A master chess player might beat a novice 99.9999% of the time, but what about a champion poker player versus a rookie? Over the course of many deals, surely the champion will win, but his win/loss ratio on a deal-by-deal basis might be only 70/30. Or, for a family euro game, the “better” player might only win 55% of the games. It becomes hard to develop an Elo rating system for a game with randomness and uncertainties that obscure skill differences.
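For reference, the Elo model is just an expected-score curve; here is a sketch of the standard formula (the 400-point scale is the conventional chess parameter):

```python
def elo_expected_score(rating_a, rating_b):
    """Expected score (win probability, counting a draw as half
    a win) for player A under the standard Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# An 800-point gap predicts ~99% for the stronger player - the
# kind of separation a low-randomness game like Chess supports.
print(round(elo_expected_score(2400, 1600), 2))  # 0.99
```

This is exactly the trouble described above: in a high-randomness game, an observed 55% or 70% win rate maps to only a small rating gap, so it takes far more recorded games before the ratings separate players meaningfully.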
I touched on this earlier in the goal tree topic, but first let’s go all the way back to the beginning and the first hypothesis: Depth is a function of the layering of heuristic understanding necessary for effective play.
I have made the case for complexity (variety, breadth, goal trees) defining the surface area of a game - or, said another way, the size of our pancakes. Depth, as a function of the pertinent heuristic layers, describes how tall our stack of pancakes is. The connection between the two dimensions, the maple syrup if you will, is the interplay between key decision points (defined in the goal tree) and the number of factors (defined by the heuristic layers) that have a meaningful bearing on each decision. More factors create more depth.
As for the butter on top? That’s all chrome.
This has gone on long enough. At this point I feel this approach is worth examining in more detail, and worth developing some specific methods for identifying the number of heuristic factors at each decision point and characterizing/quantifying the size of each of those factors. I’ll continue to develop my own thinking aimed in this direction - but what about you? Do any of these hypotheses resonate with your own experience? How do you perceive heuristics and depth?