Subject: Directed Strategy Playtests

ɹǝpun uʍop ʞǝǝƃ
Australia
Adelaide
SA

I had an idea a while back for finding strategy paths and weak points in my games. The plan is to get a bunch of index cards and write a single strategy on each one, e.g. "Corner the Phlebotinum market". Each player is dealt one of these cards at random and should try to win the game using the strategy shown.
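If you wanted to keep the deal organised (and leave yourself a record of who was directed to do what), a quick script along these lines would work. This is just a sketch in Python, and the strategy and player names are placeholders rather than anything from a real game:

import random

# Placeholder pool of "strategy cards"; swap in whatever fits your game.
STRATEGY_CARDS = [
    "Corner the Phlebotinum market",
    "Never build more than two ships",
    "Rush the endgame trigger",
    "Ignore the market entirely and farm points",
]

def deal_strategies(players, cards, seed=None):
    """Deal one hidden strategy card to each player at random."""
    rng = random.Random(seed)
    return dict(zip(players, rng.sample(cards, k=len(players))))

# One playtest session: print who was directed to do what, then note the
# result (won/lost, felt forced, suggested alternatives) against each line.
session = deal_strategies(["Alice", "Bob", "Carol"], STRATEGY_CARDS)
for player, card in session.items():
    print(f"{player}: {card}")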

Potential issues that I can see:

0) You can't cover all the bases with regard to strategies; you'd need to offer a feedback mechanism for players to suggest different options.
1) The strategy a player is dealt might doom them to lose. If this were you, would you feel disgruntled at the railroading?
2) Does suggesting a strategy lead your players along your own mindset too much? You'd definitely need to do split testing, with some groups going in completely blind.

Can anyone see different problems than these? Does it sound like an interesting concept?
Jaime Lawrence
Australia
Sydney
New South Wales
0) That's ok, you have to start somewhere.
1) Big deal, I'm playing a game to help you playtest.
2) You could also playtest without this method??

I think you need to add

3) Jaime wants to corner the Phlebotinum market anyway and ignores your directive to 'maintain the polarity of the neutron flow'.
Brook Gentlestream
United States
Long Beach
California

I've done this during Alpha Testing. "Okay, let's see what happens if you never play Stealth ships."
Matt Riddle
United States
Oxford
Michigan
We usually do our own breaking. Ben and I will play two-player and pursue different strategies; once we're ready, one of those strategies is "break it".
Fraser
Australia
Melbourne
0) Correct. You need to pick up alternate/additional suggestions for future playtests.
1) Depends on how long the game is. If it's a reasonable length, or you call it early once it's blindingly obvious that "Corner the Phlebotinum market" is the worst strategy since trumping your partner's ace, then it should be OK. That said, I have been prepared to throw in the towel halfway through a game of Puerto Rico that I went on to actually win (much to my surprise).
2) It could; it depends on how open your mind is when creating the "Master strategies". After a few games it would probably be best to get some of the more experienced (and possibly offbeat) players to come up with suggestions of their own to break the game.

This is how I like to test software: as well as running the legitimate test cases, try to break it too. See if it is idiot-proof, etc.
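To give a throwaway software example of what I mean (everything below is invented for illustration, not from a real program): as well as exercising the legal moves, deliberately feed it illegal ones and see whether it copes.

class Purse:
    """Toy model of a player's money supply with a legal-spend rule."""

    def __init__(self, coins):
        self.coins = coins

    def spend(self, amount):
        # The legitimate path: you can only spend what you actually hold.
        if amount < 0 or amount > self.coins:
            raise ValueError(f"illegal spend of {amount} (holding {self.coins})")
        self.coins -= amount

purse = Purse(5)
purse.spend(3)  # a legitimate test case

# The "break it" cases: overdrafts and negative spends should be rejected,
# not silently accepted (the software equivalent of an unnoticed rules hole).
for bad_amount in (-1, 99):
    try:
        purse.spend(bad_amount)
        print(f"BUG: spend({bad_amount}) was allowed")
    except ValueError:
        print(f"spend({bad_amount}) correctly rejected")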
Russ Williams
Poland
Wrocław
Dolny Śląsk
Yeah, I often see that software testing methods can carry over usefully to game testing in various ways (not only for strategy playtesting but for testing whether all situations are covered by the rules, whether the rules are consistent, whether the rules correctly describe the physical components, whether all data in charts and tables is correct, etc etc).

Basic stuff like trying minimum and maximum cases can and should be done more often in playtesting. I seriously recommend that game designers and playtesters should learn some basic software testing techniques and ideas.
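To make the minimum/maximum idea concrete, here is a rough sketch (in Python, with an invented scoring rule rather than one from any real game) of a boundary check: push the smallest and largest legal inputs through a rule and confirm nothing silly falls out.

def sale_payout(goods_sold, market_price):
    """Invented payoff rule: full price for the first 5 goods, half price after."""
    base = min(goods_sold, 5) * market_price
    overflow = max(goods_sold - 5, 0) * (market_price // 2)
    return base + overflow

def check_boundaries():
    # Minimum legal inputs: selling nothing should never pay out.
    assert sale_payout(0, 1) == 0
    # Maximum legal inputs (pretend the game caps goods at 12 and price at 6):
    # the payout must not exceed the 51 coins assumed to be in the bank.
    assert sale_payout(12, 6) <= 51
    # Just past the cap: if the rules ever allowed a 13th good, the bank
    # would break. Exactly the sort of case worth probing on purpose.
    assert sale_payout(13, 6) > 51

check_boundaries()
print("boundary checks passed")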

A classic example is Ogre and the "Fuzzy Wuzzy fallacy": in the original Ogre, a defender who took all GEVs could usually win, because no one had tried it in playtesting.

A more recent example is the extreme deck-thinning Halifax Hammer strategy from A Few Acres of Snow which wasn't caught during playtesting.
Derry Salewski
United States
Augusta
Maine
When you're talking about playtesting with regulars, it seems like a really good plan.

If you are talking about blind playtesters, though, it might not be what you want out of them?

Sam Mercer
United Kingdom
Southampton
Hampshire
Do remember though Mr Lunch that if you have already created a card that says "Try and win the game like this" - to me that says that you have already "coded for" the possibility that this might break the game. Players have a very good way of finding exactly the one killer "unforeseen" strategy, and as such, it will more likely than not NOT be on those lovely cards of yours!
Jake Staines
United Kingdom
Grantham
Lincolnshire
Cogentesque wrote:
Do remember though Mr Lunch that if you have already created a card that says "Try and win the game like this" - to me that says that you have already "coded for" the possibility that this might break the game.


I see what you're saying, but I'd suggest it's probably most useful for those times when you've realised an element is under- or overpowered, tried to fix it, and need to test whether the fix is any good.

If the same group of players played a version of the game where submarines were hideously overpowered and used near-exclusively, then you tweaked subs to be much less useful, it's reasonably likely that the players would then abandon subs entirely now they're no longer the most awesome thing ever, just by contrast! So if you want to know whether your adjustment to subs was actually balanced, or whether they're still overpowered, or whether you've gone too far and they're now useless, it helps to nudge one or two of the players to please try the subs...
ɹǝpun uʍop ʞǝǝƃ
Australia
Adelaide
SA
Thanks for all the feedback, folks! I should have mentioned that these strategy cards would be a bit of a hidden agenda for the players rather than open information. We're also talking about a late beta stage rather than alpha in the development cycle -- it's tweaking time.

Hida Mann wrote:
I think you need to add
3) Jaime wants to corner the Phlebotinum market anyway and ignores your directive to 'maintain the polarity of the neutron flow'.

As long as the playtesters can tell me what they actually did as a strategy and whether they thought it worked for them, that's fine.

Karlsen wrote:
0) Correct. You need to pick up alternate/additional suggestions for future playtests.

Hmm... the cards themselves could double as a feedback sheet. That would make them easy to reference.

Karlsen wrote:
2) It could; it depends on how open your mind is when creating the "Master strategies". After a few games it would probably be best to get some of the more experienced (and possibly offbeat) players to come up with suggestions of their own to break the game.

"So what happens if we don't feed our cavemen?"

Karlsen wrote:
This is how I like to test software: as well as running the legitimate test cases, try to break it too. See if it is idiot-proof, etc.

Ooh, I think you just volunteered to have a prototype copy sent over to you

russ wrote:
Yeah, I often see that software testing methods can carry over usefully to game testing in various ways (not only for strategy playtesting but for testing whether all situations are covered by the rules, whether the rules are consistent, whether the rules correctly describe the physical components, whether all data in charts and tables is correct, etc etc).

This leads me to a bit of an off-topic thought: how much real-world technical documentation style do you think you can impose on a set of rules before they become too dry? Is structure enough to satisfy the need (e.g. defining terms before you use them)?

russ wrote:
Basic stuff like trying minimum and maximum cases can and should be done more often in playtesting. I seriously recommend that game designers and playtesters should learn some basic software testing techniques and ideas.

Sounds like a good topic for a blog post for anyone so inclined. Do you know any web links on the subject?

scifiantihero wrote:
If you are talking about blind playtesters, though, it might not be what you want out of them?

You're probably quite right here. The thing I was trying to get around is groupthink. You'll get to know the mindset of your regular players.

Cogentesque wrote:
Do remember though Mr Lunch that if you have already created a card that says "Try and win the game like this" - to me that says that you have already "coded for" the possibility that this might break the game.

Yes, sorry, I didn't make it at all clear in my original post that I was thinking in terms of testing strategies that have already been identified and had tweaks applied.

I'd like to think that this approach might get players pondering different strategies that they could then feed back into the testing process.
Matt Loomis
United States
Illinois
It's an interesting concept, and it evolves naturally if you're playtesting with a group of experienced playtesters who know how and when to test a game with a specific test case in mind. I don't know if I would try it in a forced manner, mainly because the players' perception of what should be done is a very important piece of information as well. You would also miss the feedback on what happens when two people try to corner the X market, or try the same strategy; maybe that occurrence makes a generally weaker strategy more dominant.

I personally feel that, as the designer, you should be the best player your game has to offer before the game is released. If you're not able to see the full extent of the interactions between the different mechanisms of your game, how can you be sure that those are the correct mechanisms? The other side of that argument is that things of that nature tend to matter only if you want to approach the game on a competitive and analytical level, which the majority of people who play your game will never do.

I guess you should know what you're looking for with each playtest session, and if you're looking for a specific strategy to be played, then you should have someone play it.
Alex Weldon
Canada
I'd let players try whatever strategies seem intuitive to them at first, because that's what "real" players will do. Make sure the game is fun and not completely broken at first.

Later, if you see that some possibly breaking strategies are going untested, then you could start requesting that people try them... but I think that if a strategy doesn't occur to a player on its own, they're less likely to be able to execute it well.
Jeff Warrender
United States
Averill Park
New York
This is an interesting idea, and I have thought about force-dealing players a particular strategy or setup to see how it fares. The difficulty with this concept is that the effectiveness of a given strategy is going to be a complex interaction of the competence of the player attempting to execute it, the random factors of the game (if any), and the interaction of that strategy with other strategies in play. (For example, "corner the Gold market" might seem stronger in a game where another player is playing the "dump Gold and buy Wheat" strategy). Another consideration is that in some games, it's not necessarily in the player's advantage to stay rigidly locked in to a single strategy for the entire game; adaptability is sometimes required to do well.

These situations could lead to false negatives. It would therefore seem risky to interpret the results of a single session or handful of sessions played in this way. Obviously, if something is truly broken it only takes one test to see that (if the session happens to produce that result, which for the aforementioned reasons won't always happen even if there is a break), but balance is something that can probably only be detected as the accumulation of many sessions played with a broad variety of player approaches.
Arthur O'Dwyer
United States
San Francisco
California
xopods wrote:
I'd let players try whatever strategies seem intuitive to them at first, because that's what "real" players will do. Make sure the game is fun and not completely broken at first.


Definitely don't spring the strategy cards on brand-new players in their first (or even second or third) game. I'm just thinking how I would feel if someone asked me to playtest Agricola for the first time "but try to win without planting any grain", or some other nutty idea. Your players will likely give you some unjustly negative feedback on the first play anyway ("I don't see the strategy here. Too complicated/random. This sucks."); you don't want their overall experience to be made even worse by forcing them into a losing strategy too.

In my experience, your best playtesters will start trying nutty things on their own after the first few games, even without the cards. But the cards are a neat idea.