Thematic Solitaires for the Spare Time Challenged

A blog about solitaire games and how to design them. I'm your host, Morten, co-designer of solo modes for Scythe, Gaia Project, Wingspan, Glen More II, and others.

How to make AI scoring feel satisfying – part 1

Morten Monrad Pedersen
This 2-part blogpost series is a (hopefully) improved version of a post from 2019 merged with parts of a 2018 post.

I try to make my Automas (artificial opponents) feel like playing against a human player, but as mentioned in a previous blogpost there are exceptions to this. One of them relates to scoring and simulating the skill level of human opponents.

For reasons of clarity, I’ll use the generic term victory points (VP), but almost everything applies equally well to games that don’t have scoring.

Human players vary in skill level in all games, which means that in games that reward skill to a significant degree, a skilled player will crush a novice. This can lead to an unsatisfying experience for both players. Still, though, it can feel “right” because the result can be ascribed to player skill and so it’ll feel fair.

Don’t replicate that in your solo AIs

It might seem to us that since we want AIs to replicate the feel of playing against humans (well, that’s what I want) we need to have as large a range in AI scoring as there is in games with human players. That idea can lead to bad player experiences, though.

The reason is that randomness, not skill, will be a major factor in determining a cardboard AI’s score. So, if an AI has a scoring range as large as the full range of human players, then randomness will be the main factor in determining who wins, because the specific player playing against the AI will have a much narrower range of scores.

Let’s take an example: In game X, I tend to score 31 to 70 VP, but human players as a whole score in the range of 1 to 100 VP, and so the game’s AI also scores in that range. For simplicity let’s say that each of the Automa’s possible scores is equally likely and that my influence on the AI’s score is negligible.

There’s then a 30% chance that the AI will lose no matter where in my interval I score and a 30% chance that I’ll lose no matter what. In total that means that in only 40% of the plays does my performance have any impact on the outcome of the game, and in 60% it doesn’t matter what I do.

That creates a frustrating play experience and I’d probably start to consider it a beat-your-own-high-score game with the AI being an obstacle, not an opponent to compete against.
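
If you want to sanity-check those percentages, here’s a minimal Python sketch of the hypothetical example above (a uniform AI score from 1 to 100 against my 31 to 70 range; the numbers are just the ones from the example, not from any real game):

```python
import random

MY_LOW, MY_HIGH = 31, 70   # the range I tend to score in (hypothetical)
AI_LOW, AI_HIGH = 1, 100   # the AI scores uniformly across the full human range

trials = 100_000
ai_too_low = ai_too_high = contested = 0

for _ in range(trials):
    ai_score = random.randint(AI_LOW, AI_HIGH)
    if ai_score < MY_LOW:       # I win no matter how well or badly I play
        ai_too_low += 1
    elif ai_score > MY_HIGH:    # I lose no matter how well I play
        ai_too_high += 1
    else:                       # only here does my performance decide the game
        contested += 1

print(f"AI loses regardless of my play: {ai_too_low / trials:.0%}")   # ~30%
print(f"AI wins regardless of my play:  {ai_too_high / trials:.0%}")  # ~30%
print(f"My performance decides it:      {contested / trials:.0%}")    # ~40%
```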


The score variation of a human player with a specific skill level on the left compared to the score variation of an AI that includes the scoring range of human players of all skill levels on the right. The latter completely drowns out the former.


Don’t replicate the scoring range of a specific human skill level

Even if we don’t simulate the full range of all humans in one go and instead base each difficulty level on players of a specific skill level, we can still get into trouble.

It might seem fair and realistic to have the AI score within such a range, but unless the impact of the human player on the AI’s score is strong, the winner of the game will to a large extent be decided by the random score, not the human’s play. To most players it’s more psychologically acceptable for the game’s outcome to be heavily influenced by the skill of a human opponent than by a mainly random AI score.

The fix: Low variance difficulty levels

To overcome these issues, we can create difficulty levels that vary less than a human. Luckily, as AI designers we can control the scoring of the AI by controlling its behavior in a way that a designer of human-only competitive games can’t.

Thus, we can create difficulty levels each of which scores in a narrow interval. This means that the player can get a play experience that’s tense much more often than the norm in competitive human-only games.

I think that most players prefer games that are close, but those who prefer to win or lose most of the time can also get their desired play experience via a tight difficulty level system.

Just as important, we can make sure that the human player’s skill is the major factor in deciding the winner (not accounting for the randomness of the game itself).

If you do make low variance scoring then it’s very important that you create multiple difficulty levels, since otherwise the AI will only be an interesting opponent for players of a specific skill level.
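
As a rough illustration (all numbers made up, not from any published Automa), narrow per-level score bands could look like this, with the player picking the level whose band overlaps their own typical scores:

```python
import random

# Made-up difficulty levels, each scoring within a narrow VP band.
DIFFICULTY_BANDS = {
    "Easy":      (28, 34),
    "Normal":    (40, 46),
    "Hard":      (52, 58),
    "Brutal":    (64, 70),
    "Nightmare": (76, 82),
}

def automa_score(level: str) -> int:
    """The AI's score only varies within its level's narrow band, so a player
    who picks the band matching their own skill decides most games themselves."""
    low, high = DIFFICULTY_BANDS[level]
    return random.randint(low, high)

print(automa_score("Normal"))  # e.g. 43
```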


The score variation of a human player with a specific skill level on the left compared to the score variation of 5 difficulty levels of an AI. In this situation the human player’s performance is more important than the random variation of the AI’s score and the player can choose how hard it’ll be for them to win.


An example where this, in my opinion, goes wrong is Imperial Settlers, where the single difficulty level (IIRC) of the AI is so low that I lost to it one (maybe zero) time(s) out of close to 30 plays. Because of this I treat the Imperial Settlers AI as an obstacle, not an opponent, and the game as a beat-your-own-high-score game.

I can forgive it in this case, though, because I think that the game itself is fantastic.


The Imperial Settlers AI army. Image credit: Mirosław Gucwa.


Scoring during setup or at the end of the game

A very simple way to create scoring with low variance is to assign the AI its final number of VP during setup. This is what we did in Viticulture.

I’m not fond of this approach, though, because it makes the AI feel less like an actual opponent, and there’s a decreased feel of competition while playing because there’s no neck-and-neck scoring race and sometimes the player will know mid-game whether they’ll win or lose.

The next step is to assign the AI a number of VP during setup and let the AI score additional VP during the game. The fixed part makes it easy to change difficulty and the variation of the in-game VP is easier to keep in check. This is what we did for Red Rising.

The reason that we chose to do it anyway in the two games was simplicity. The Viticulture Automa is extremely streamlined and we didn’t want to add high complexity just for scoring.

Red Rising had the added challenge that we had no dynamic parameters to tweak for handling difficulty levels without either changing the game or adding an amount of complexity that was unacceptable for what we felt needed to be a very smooth Automa.
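
As a minimal sketch of the “fixed part plus in-game part” pattern (the base values and round count below are made up, not the published Viticulture or Red Rising numbers):

```python
import random

# Made-up base scores per difficulty level; difficulty only changes the fixed part.
BASE_VP = {"Easy": 10, "Normal": 16, "Hard": 22}

def automa_final_score(difficulty: str, rounds: int = 7) -> int:
    """A fixed VP total assigned at setup plus a small, tightly bounded
    in-game gain each round, keeping the overall variance low."""
    in_game = sum(random.randint(1, 2) for _ in range(rounds))
    return BASE_VP[difficulty] + in_game

print(automa_final_score("Normal"))  # somewhere between 23 and 30
```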

Instead of determining the score during setup, it can be done at the end of the game. For fixed scores there’s no difference between the two, but with random scores there is. This keeps the tension till the end, but it can feel like the winner of the game is determined by one random event at the end of the game, which can be deflating.

For Between Two Castles we had a base score for the Automa and in-game scoring, but we additionally gave the Automa a number of starting tiles for its castle, which not only scored a semi-random number of VP at the end but also gave the Automa a different personality for each play because the starting tiles affect its in-game choices.

One should be careful with giving AIs boosts during setup, though, because it can end up moving the play experience away from what the game was designed for.


The Between Two Castles difficulty levels.


Organic scoring: Coupled

As mentioned, the Red Rising and Between Two Castles Automas both have scores that have a fixed part and a variable part generated by how the game plays out. We can call the latter part organic scoring.

The most obvious way to let an AI score VP organically is to have it score in the exact same way as a human player, i.e. the scoring is directly coupled to the in-game effects of the actions taken. The problem with this is that because an AI typically behaves randomly, the scores can vary wildly, which is exactly what we want to avoid.

In Charterstone, for example, players have turns that score no VP because they gather resources and then use them to score VP in other turns.


VP progress of 2 human players in a game of Charterstone.


The Charterstone Automas (there can be more than one in the game) each draw a card every turn that determines what action they take. If the scoring were modelled after that of a human player, we’d have cards that gave VP and cards that gave 0 VP. This would mean that in a game with 2 Automas one might get most of the VP cards while the other gets most of the 0 VP cards. The result would be wildly varying scores, which is frustrating for the player.


VP progress of 2 human players and 2 Automas with unbalanced card draws.


The same issue can occur with a single Automa if only some of the Automa cards come out each game.

Organic scoring: Decoupled

The easiest fix for this organic scoring issue is to decouple the AI’s scoring from what’s going on in the game and let it score a small number of VP for each card, independently of the action specified by the card. This is what we did for Charterstone and it made the Automas score more smoothly than a human player, it limited their scores to narrow intervals, and it gave us a way to create difficulty levels.


Charterstone card snippets showing VP gains (top icon).



VP progress of 2 human players and 1 Automa with smoothened scoring in Charterstone.


This creates a balanced and tense game, but the downside is that the Automas feel less like a human player.
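
To see the difference in numbers, here’s a small simulation sketch comparing the two approaches (the deck composition, VP values, and turn count are assumptions for illustration, not the real Charterstone values). With coupled scoring only some of the drawn cards give VP, so the total swings from game to game; with decoupled scoring every drawn card gives the same small amount, so the total is essentially fixed for a given number of turns:

```python
import random
import statistics

GAMES = 10_000
TURNS = 20                           # assumed number of Automa card draws per game

# Coupled scoring: mimic a human, so some cards give VP and most give 0
# (assumed deck: 12 cards worth 3 VP, 28 cards worth 0 VP).
COUPLED_DECK = [3] * 12 + [0] * 28

def coupled_score() -> int:
    """Only part of the deck comes out each game, so VP depends on the draw."""
    return sum(random.sample(COUPLED_DECK, TURNS))

def decoupled_score() -> int:
    """Every drawn card scores 1 VP regardless of the action it specifies."""
    return TURNS * 1

coupled = [coupled_score() for _ in range(GAMES)]
decoupled = [decoupled_score() for _ in range(GAMES)]

print("coupled:  ", round(statistics.mean(coupled), 1), "±", round(statistics.stdev(coupled), 1))
print("decoupled:", round(statistics.mean(decoupled), 1), "±", round(statistics.stdev(decoupled), 1))
```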

Games with low scores

Had the scores in Charterstone been very low, this approach wouldn’t have worked, because the number of VP-scoring cards would have to be low compared to the number of 0 VP cards and we’d be back to having too much variation in scoring.

We faced this issue in Euphoria, which ends at 10 VP. We chose to have some cards that scored 1 VP and more that didn’t score. We removed the swinginess of this with a small Automa deck that is cycled through multiple times, with almost all cards activated in each cycle. This means that the number of VP cards drawn is fairly consistent.
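
A quick sketch of why cycling a small deck helps (deck sizes and VP counts are assumptions, not the actual Euphoria numbers): independent draws from a big pool swing a lot, while full cycles of a small deck always contain the same cards, so only the partial final cycle adds randomness:

```python
import random
import statistics

GAMES = 10_000
DRAWS = 30                          # assumed number of Automa card activations per game

def big_deck_vp() -> int:
    """Independent draws where roughly 1 in 5 cards scores 1 VP."""
    return sum(1 for _ in range(DRAWS) if random.random() < 0.2)

def small_cycled_deck_vp() -> int:
    """A 10-card deck with 2 scoring cards, reshuffled and cycled through."""
    deck = [1, 1] + [0] * 8
    vp = drawn = 0
    while drawn < DRAWS:
        random.shuffle(deck)
        for card in deck:
            if drawn == DRAWS:
                break
            vp += card
            drawn += 1
    return vp

for name, fn in [("big deck", big_deck_vp), ("small cycled deck", small_cycled_deck_vp)]:
    scores = [fn() for _ in range(GAMES)]
    print(name, round(statistics.mean(scores), 2), "±", round(statistics.stdev(scores), 2))
```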

In Scythe the end of the game is triggered by an even lower number: just six achievements (stars). The game has much higher interaction than Euphoria and it needs higher variety in actions, so a small deck didn’t work, and with just 6 stars we faced high variance.

To achieve a consistent star pace, we gave the Automa a fraction of a star on most cards. The game itself doesn’t have a way to track this, so we added a “star tracker” card with a track on which the Automa advances one space for each fractional star.

With this system we got as tight a control of the Automa’s star pace as we wanted.
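
A minimal sketch of such a fractional-star tracker (the spacing between filled stars is an assumption here, not the actual Scythe layout):

```python
class StarTracker:
    """Advances one space per fractional star and awards a full star each
    time the token reaches a 'filled star' space on the track."""

    def __init__(self, spaces_per_star: int = 4):   # assumed spacing
        self.spaces_per_star = spaces_per_star
        self.position = 0
        self.stars = 0

    def advance(self) -> None:
        """Called whenever a drawn Automa card shows a fractional star."""
        self.position += 1
        if self.position % self.spaces_per_star == 0:
            self.stars += 1

tracker = StarTracker()
for _ in range(10):       # ten fractional-star advances
    tracker.advance()
print(tracker.stars)      # 2 full stars with the assumed spacing
```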


A Scythe Automa star tracker card. A token starts at the upper left space and advances in reading order every time the Automa earns a fraction of a star. It scores a full star every time it reaches a filled star.


Taking a break

You’re now allowed to take a break, so that your brain is ready for the second and final part of this blogpost series when it’s ready tomorrow.



The second post is live and can be found here: How to make AI scoring feel satisfying – part 2