Rob Wrigley
United States New York

Allow me to present to you a simple game of chance:
For a nominal wager, you play one round.
For each round you may choose to do one of the following:
(1) Roll a single d6. If you roll a '1' or better, you win $10.
(2) Roll six d6s. For each roll of a '6', you win $10.
I have several questions regarding this game:
(a) I think option 2 is better than option 1. In fact, it is vastly superior. The chance of a payout up to six times greater eclipses the guaranteed payout of the first option. Is this assumption correct?
(b) For option 2, what is the probability of rolling at least one success? My own calculations put it at 65%. I would show my math, but for my deep-seated fear of public ridicule.
(c) What is the math and terminology I lack to express the outcomes, the probabilities, and/or the relationship between the two options?
Any help will be welcome. Even just pointing me to the Wikipedia entry I should be reading would be greatly appreciated.

Dan Blum
United States Wilmington Massachusetts

a. No, they are exactly the same in terms of expected outcome, i.e. the mean amount you would win per round if you played many rounds.
b. The simplest way to calculate that is to calculate the probability of rolling NO successes and subtract it from 1. So this is 1 - (5/6)^6 ~= 66.5%.
c. The expected outcome of option 1 is obviously $10. To determine the expected outcome of option 2 you can calculate the number of ways you can get each number of successes and then multiply that by the payout, add those sums together and then divide by the total number of possible rolls.
There is only 1 way to get 6 successes, so the payout for that is just $60.
The number of ways to get 5 successes is 6 (number of ways to choose which die misses) * 5 (number of values that die can show) = 30, and 30 * $50 = $1500 in payout.
In general the number of ways to get N successes is C(6, 6-N) * 5^(6-N) (see https://en.wikipedia.org/wiki/Combination for what C() means). Summing everything up, the expected payout for all 46656 possible rolls is $466560, which means the expected payout per roll is $10, the same as option 1.
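A quick brute-force check of that sum (my own Python sketch, not part of the post), enumerating all 6^6 ordered rolls:

```python
from itertools import product

# Enumerate all 6^6 = 46656 ordered rolls of six d6s.
rolls = list(product(range(1, 7), repeat=6))

# Option 2 pays $10 for each '6' in a roll.
total_payout = sum(10 * roll.count(6) for roll in rolls)

print(len(rolls))                 # 46656
print(total_payout)               # 466560
print(total_payout / len(rolls))  # 10.0, matching option 1
```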

Bobby Ramsey
United States Grove City OH
Hush

robwrigley wrote: Allow me to present to you a simple game of chance:
Let's look at the expected return of each round.
Quote: (1) Roll a single d6. If you roll a '1' or better, you win $10.
This sounds like a 'free $10' button.
Quote: (2) Roll six d6s. For each roll of a '6', you win $10.
You expect one 'success' (a six face to appear) on a roll. That is, the expected return is $10.
Quote:
I have several questions regarding this game:
(a) I think option 2 is better than option 1. In fact, it is vastly superior. The chance of a payout up to six times greater eclipses the guaranteed payout of the first option. Is this assumption correct?
It depends upon what you mean by 'better'. Yes, there is a chance of a larger payout, but there is also a much greater chance of no payout at all. It definitely doesn't 'eclipse' the guaranteed payout. In fact, the expected return is exactly the same.
Quote: (b) For option 2, what is the probability of rolling at least one success? My own calculations put it at 65%. I would show my math, but for my deep-seated fear of public ridicule.
There are 6^6 total possible (ordered) dice rolls. Of those, only 5^6 have no successes. That is, there is a 5^6/6^6 = (5/6)^6 probability of no successes, close to 33.49%. That means there is a 1 - (5/6)^6 probability of at least one six appearing, about 66.51%.
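The same count done exactly, as a small Python sketch of my own using exact fractions:

```python
from fractions import Fraction

no_success = Fraction(5, 6) ** 6   # all six dice miss: 15625/46656
at_least_one = 1 - no_success      # complement: 31031/46656

print(float(no_success))    # ≈ 0.3349
print(float(at_least_one))  # ≈ 0.6651
```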


(a) It depends what you mean by "better".
There are cases where you could say that one lottery is strictly better than another by lining up the payouts and showing that every outcome is equal or better. For instance:
(3) Roll a d6. If you roll 1, 2, or 3, gain $20. Otherwise, gain $10.
(4) Roll a d6. Gain (result) * $10.
If you roll a 1, option 3 pays more than option 4. But we could still say option 4 is strictly better because you can create a mapping of outcomes between options (3) and (4) where the outcome in option (4) is always equal or better than the matching outcome in option (3).
Between your options (1) and (2), neither is strictly better in that sense. Option (2) could give a bigger payout than option (1), but it could also give a smaller one. Therefore, either one could be better depending on the player's preferences, such as their appetite for risk and the marginal utility of money to them.
The typical starting point for this sort of analysis is to look at the average payout (also called the "mean" or the "expected value"). This is a measure of what it will do "in the long run" if you played the same lottery over and over again. Higher averages are better.
In this case, the average payouts are equal (they're both $10). For option (1), that's obvious, because the payout is always the same. For option (2), you can calculate it easily by considering each die one at a time. Each die has a 1/6 chance of paying $10, so the average payout of a single die is (1/6) * $10. The average of a sum is the same as the sum of the averages, so to get the average of all 6 dice you can just multiply by 6, giving 6 * (1/6) * $10 = $10.
The next thing you'd typically look at is the variance. Option (1) has no variance (the outcome is always the same), whereas option (2) has relatively high variance (higher probability of more extreme outcomes). Neither higher variance nor lower variance is inherently better; it depends on your circumstances and your preferences.
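To put a number on that (a sketch of my own, not from the post): each die in option (2) pays $10 with probability 1/6, so its payout variance is 10^2 * (1/6)(5/6), and because the dice are independent the six per-die variances simply add.

```python
p = 1 / 6                            # chance a single die pays
per_die_var = 10**2 * p * (1 - p)    # variance of one die's payout, ≈ 13.89
option2_var = 6 * per_die_var        # independent dice: variances add, ≈ 83.33
option2_sd = option2_var ** 0.5      # standard deviation, ≈ $9.13

print(option2_var, option2_sd)
```

Option (1) has zero variance, so this is the entire statistical difference between the two lotteries.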
Psychologically, humans tend to prefer low-variance gains (people will take a small guaranteed gain over a small chance of a big gain) but high-variance losses (people will risk a big loss for a chance at avoiding a small loss). Note that whether something is a "gain" or a "loss" is a perception, not an objective mathematical reality. The exact same scenario can provoke different reactions depending on how it is described.
(b) 66.51% http://anydice.com/program/9752
You're usually better off answering these questions with a computer program than doing the calculations by hand. But if you wanted to do it by hand, you could notice that each die has a 5/6 chance of giving no payout, so the chance that all 6 of them will give no payout is (5/6)^6, so the chance that at least one of them will pay is 1 - (5/6)^6.
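In the same spirit as the AnyDice program, a quick Monte Carlo estimate in Python (my own sketch; the exact answer is 1 - (5/6)^6 ≈ 66.51%):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
trials = 100_000

# Count trials in which at least one of six d6s shows a 6.
hits = sum(
    any(random.randint(1, 6) == 6 for _ in range(6))
    for _ in range(trials)
)

print(hits / trials)  # lands near 0.665
```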

Choubi Gogs
France
My avatar is from the children's game Monster Mash

robwrigley wrote: Allow me to present to you a simple game of chance:
For a nominal wager, you play one round.
For each round you may choose to do one of the following:
(1) Roll a single d6. If you roll a '1' or better, you win $10.
(2) Roll six d6s. For each roll of a '6', you win $10.
I have several questions regarding this game:
(a) I think option 2 is better than option 1. In fact, it is vastly superior. The chance of a payout up to six times greater eclipses the guaranteed payout of the first option. Is this assumption correct?
(b) For option 2, what is the probability of rolling at least one success? My own calculations put it at 65%. I would show my math, but for my deep-seated fear of public ridicule.
(c) What is the math and terminology I lack to express the outcomes, the probabilities, and/or the relationship between the two options?
Any help will be welcome. Even just pointing me to the Wikipedia entry I should be reading would be greatly appreciated.
I'll start with b) For option 2, the probability of having at least one six is: 1 minus the probability of having no sixes
The probability of having no 6s is the probability of one die not being a six (5/6) raised to the power 6: (5/6)^6. Thus, the probability of having at least one six is:
1 - (5/6)^6 ≈ 67%.
And now c) You need to be able to compute the probability of each type of payoff:
probability that exactly one 6 comes out: 6 * (5/6)^5 * (1/6)
probability that exactly two 6s come out: (6*5/2) * (5/6)^4 * (1/6)^2
In order to compute these, you need to know about combinations (is that the English word?). The first parts of these probabilities (6 and 6*5/2) are the combinatorics I'm referring to. They correspond to the number of combinations of one die among six and of two dice among six, i.e. the number of different sets of dice that could show the value 6.
It helps if you imagine that each die has a different color. In the first case (exactly one six out of six dice), you have six possibilities: it could be the "red" die, or the "blue" die...
In the second case (2 out of 6), you can have "red" and "blue", "red" and "yellow", "red" and "green"...
The number of ways to have exactly k sixes among n dice (here n = 6) is given by the following formula: n!/(k!(n-k)!)
The second part, (5/6)^5 * (1/6), is the probability of one specific outcome. For instance, the probability that the "red" die shows a six and all the others show less than six is (1/6), for the red die on 6, times (5/6)^5, for the remaining five dice on not-6.
We multiply this value by the combinatorics because you can have exactly one six when either the "red" die, or the "blue" die, or the "yellow" die is a six. You therefore multiply the probability of a given sequence of exactly 1 six and 5 not-sixes by the number of such sequences.
Once you have those, you need to know about expected profits. Your expected profit is defined as the sum of probabilities of each possibility times the profit in this possibility.
Here:
P(1 six)*$10 + P(2 sixes)*$20 + P(3)*$30 + ... + P(6)*$60
Back to a) If this value is greater than 10 (the expected profit of scenario 1), then option 2 is better than option 1.
This ignores the possibility that you might be risk averse (see the concept of risk aversion).
Risk aversion is when you prefer a safer option if both have the same (or close) expected profit: I'd rather get 10 dollars with certainty than get 20 dollars with 50% chance.
Your level of risk aversion can be measured (just one method of measuring among many) by determining the certain profit below which you'd rather take the risky option.
I.e.: someone who would prefer the risky option over a certain 8 dollars is less risk averse than someone who would only prefer the risky option over a certain 7 dollars.
I'm too lazy to carry out all the computations myself, but hopefully you now have enough information to do it yourself.
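For completeness, those computations can be carried out in a few lines of Python (my sketch, using the binomial formula described above):

```python
from math import comb  # comb(n, k) = n!/(k!(n-k)!)

# P(exactly k sixes out of 6 dice) = C(6, k) * (1/6)^k * (5/6)^(6-k)
probs = [comb(6, k) * (1/6)**k * (5/6)**(6 - k) for k in range(7)]

# Expected profit of option 2: sum over k of P(k sixes) * $10k
expected = sum(p * 10 * k for k, p in enumerate(probs))

print(round(expected, 10))     # 10.0, exactly the expected profit of option 1
print(round(1 - probs[0], 4))  # 0.6651, probability of at least one six
```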

Rob Wrigley
United States New York

Thanks for all the responses/lessons folks. I think I understand the math a lot better now.
I've been doing this kind of computation by hand, without really knowing the formal mathematics behind it. And I've been doing a lot of them exactly backward: calculating each die and adding the results, instead of working from the chance of no successes at all.
I think where I was hitting a mental road block was with the 33% chance of failure. It doesn't seem there should be enough failures to offset the potentially large payoffs. This is, I guess, where intuition runs smack into cold, hard math.


