Bez Shahriari, United Kingdom
Yesterday, I wrote, "Playtesting is a game (or part of a game) being played (by real or simulated humans) in order to inform possible changes."
On Twitter, there were some great comments, and in a conversation with Rob and me, Bastiaan Reinink wrote, "I care about creating good games."
That is, of course, the ultimate objective. Whether you call the parts of the process 'playtesting' or 'simulation' is almost irrelevant.
The purpose of defining words is clarity of language. For Kitty Cataclysm, I made great use of spreadsheets: tracking the card abilities to work out whether hand size would naturally increase, decrease or remain the same, and working out the average meownetary value of a card, so as to know the average value of stealing/drawing a single card and balance those values better, which would in turn change the numbers again.
For want of a better word, I usually call that 'spreadsheeting' and have heard a couple of other designers also refer to this practice the same way.
It's good to have different terminology. If we had 10 different words to refer to 10 different types of playtesting, maybe I'd be considering all these possible methods rather than just focusing on the few that come naturally or are easier for me.
Let's call this a first attempt at cataloguing some possible processes.
When spreadsheeting, you already know your objective and are tweaking numbers to make a certain thing mathematically true. E.g. I might have decided that I want players' hands to grow rather than shrink, but (broadly speaking) the number of cards in all players' hands should remain constant. I play with the card counts until the average (mean) number of cards that a card lets you draw is marginally above 1.
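That hand-growth check can be done in a few lines of code as well as in a spreadsheet. Here is a minimal sketch; the card names, copy counts and draw values are hypothetical, not from Kitty Cataclysm.

```python
# Hypothetical deck: (card name, number of copies, cards drawn when played).
# A draw value of 1 means the card replaces itself in hand.
deck = [
    ("Nap",       10, 1),
    ("Pounce",     6, 2),
    ("Hairball",   8, 0),
    ("Curiosity",  4, 3),
]

total_copies = sum(copies for _, copies, _ in deck)
mean_draw = sum(copies * draw for _, copies, draw in deck) / total_copies

print(f"Mean cards drawn per card played: {mean_draw:.3f}")
# Hands slowly grow when this sits marginally above 1.
print("Hands grow" if mean_draw > 1 else "Hands shrink or stay flat")
```

Tweaking the copy counts and re-running is exactly the "play with the numbers until the mean is marginally above 1" loop described above.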
Quick mental modelling is thinking about how people will react to a small part of the game. Maybe you consider a topic of conversation, a thing to be drawn, or a joke. How will people react to this part of the game? How hard will that object be to draw? If you have hundreds of pieces of content, this is essential. You can't thoroughly playtest all that with other humans. You can ask their opinion, but then you're just relying on someone else to do their own mental modelling - and even modelling our own behaviour can be quite inaccurate. It's actually easier to predict general behaviour than your own behaviour and - either way - this is a skill to be developed and not one you should trust playtesters to perform.
Even the 'how much would you pay' question can be wrongly answered.
Full mental modelling would be a case of thinking through how the game is played and how players will react, what they may do... There is a little of this going on when learning rules for any game - as I read through, I try to develop a mental model of the possible options and how they connect and what things are done in what order... and then I stop thinking. If I were to carry on thinking, I might think about player motivations over the entire game, what parts might slow down, what sort of reactions may be elicited... It's certainly easier with a simpler game but is a skill to be developed in any case.
Augmented mental modelling is what I used for a few mid-weight Euros I started in 2016/2017. I won't actually solo-test (I won't move all the pieces around, and will focus on certain parts) but I'll have a few pieces out, just to keep track of a few numbers (e.g. different resources for each player) that I'd struggle to keep in my head. Personally, I found that this took away the more difficult memory work and allowed me to focus on the sort of decisions that people would be making, before I had even created a board or the final cards.
Solo testing is a phrase used either for testing a solo variant/solo game, or for testing a multiplayer game/mode solo. The general idea is that solo-testing a multiplayer game might reveal the biggest of problems. You could have a random player, and you should be able to both win AND lose against it. You might realise that your rules don't cover situations that you failed to imagine. If there is negotiation, deception, strong real-time aspects or physical interaction with other humans, you may not actually be able to even go through the motions in any meaningful way.
Computer simulations are not something I have any experience with, but are certainly an interesting way to check the balance of different strategies. I'm sure articles could be written (by someone else) about these alone.
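As a small illustration of what such a simulation might look like (this is my own hedged sketch, not something from the article's designs), here is a Monte Carlo comparison of stopping thresholds in a generic push-your-luck dice turn, where rolling a 1 busts the turn:

```python
import random

def play_turn(stop_at, rng):
    """One push-your-luck turn: roll a d6, bust on a 1, bank at stop_at."""
    total = 0
    while total < stop_at:
        roll = rng.randint(1, 6)
        if roll == 1:
            return 0  # bust: this turn scores nothing
        total += roll
    return total

def mean_score(stop_at, trials=100_000, seed=0):
    """Average turn score over many simulated turns (fixed seed for repeatability)."""
    rng = random.Random(seed)
    return sum(play_turn(stop_at, rng) for _ in range(trials)) / trials

for threshold in (10, 20, 30):
    print(f"stop at {threshold:>2}: mean turn score = {mean_score(threshold):.2f}")
```

Running this for a range of thresholds shows which strategy scores best on average, which is the kind of balance question a simulation can answer far faster than human playtesting can.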
Initial human playtesting needs a better name. Maybe you play. Maybe you don't. Maybe you facilitate discussion afterwards. Maybe you stop the game half-way through and already know what you need to change. This allows you to properly observe how real humans behave. You can watch reactions, without the restrictions of your mental models.
Focus testing requires that the group is within your target audience. At this point, you're really observing the emotional reactions to the game. It's not really about the mathematical virtue of any strategic possibilities, but the emotions that these possibilities elicit. Maybe even a weak counterspell is too aggravating in your game. Maybe you are looking for reactions to the graphical layout and style. Maybe you are fine-tuning the balance of complexity/simplicity with regards to unique cards vs duplicated cards. Maybe you want to see how often a particular effect makes people smile or laugh.
Blind testing requires that you do not teach the rules. This is crucial to ensure that your rulebook is comprehensible. I think that doing this while your rules are still in flux wastes the time of everyone involved. Maybe a few numbers are still undecided (e.g. whether coming 2nd in an area should give 5 or 7 points) but the overall flow of gameplay should be firmly set and have already been thoroughly tested for it to be worth testing the rules document itself.
Even if you never publish, you should do this a couple of times, just to ensure that a publisher can actually understand your rules.
Remote blind testing is the above, but when you're not even around. This can be harder to gather data from, as you're not able to ask follow-up questions and might not always be able to get video (to learn the full story of what was done and how people reacted). You need to properly understand what questions to ask in your forms.
If gathering data, this could be the best way to learn statistical information such as playtime, how often a given mechanism is used, how often a given scoring incentive actually makes any difference, etc.
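Summarising that kind of form data is straightforward once it's collected. A tiny sketch, using made-up responses recorded as (player count, playtime in minutes):

```python
from statistics import mean, median

# Hypothetical form responses: (player count, playtime in minutes).
responses = [(2, 35), (4, 60), (3, 48), (4, 70), (2, 30), (3, 45)]

minutes = [m for _, m in responses]
print(f"playtime: mean {mean(minutes):.1f} min, median {median(minutes)} min")

# Break playtime down by player count, since it usually scales with it.
by_count = {}
for players, m in responses:
    by_count.setdefault(players, []).append(m)
for players in sorted(by_count):
    print(f"{players} players: mean {mean(by_count[players]):.1f} min")
```

The same pattern works for any per-play statistic you ask for in the form, such as how often a mechanism was used or a scoring incentive mattered.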
Generally you might get volunteers creating their own PnP version. Sometimes, you might send a prototype to someone doing this. Sometimes, because of the relatively late stage at which it is done, it can be a marketing tool - much like an open beta of a good videogame can net some new fans or even proselytisers.
Remote-electronic testing would involve using Tabletopia/Tabletop Simulator/etc to playtest your game with other humans. This might be blind-testing, or you might be on-hand to teach. It might be remote-electronic focus-testing, or you may be testing with other designers.
This makes me think that maybe rather than different categories, a better way to talk about testing might be to have a sequence of adjectives that are each in one of X states. There is a lot of possible crossover.
I think that is everything that could possibly be considered playtesting. Let me know if you do a type of playtesting that doesn't quite fit into one of these categories.
I am a full-time designer/artist/self-publisher and I am available for freelance work. I go to cons as a trader and help run the all-day Friday playtest sessions in London. I left my last 'real' job in 2014. I was getting benefits for a few years. I'm currently writing sporadically, but getting back into the habit of daily posts. If you have any questions/topics you'd like me to address, send me a geekmail and I'll probably address the topic within a week.