Poker/BJ deck question.


09-15-2005, 08:51 AM
Consider a standard 52-card deck of playing cards.

Question:

According to the laws of probability, the chance of drawing an ace from a randomized deck is 4/52, or about 7.692%. That means in the long run (10 million draws?) we'll end up with about 4 aces per 52 draws on average.

Now, what if instead of drawing for an ace we pick a random card denomination and then draw a card to see if it matches that denomination? The probability that a randomly chosen card type/denomination (any number, jack, queen, king, ace, etc.) matches the card you draw at random is also 4/52... right?

Is there a difference? In the long run a particular denomination, say ace, is going to pop up 4 out of 52 times on average. Does that also mean that if you guess a different card denomination every draw, you're also going to get that denomination 4 out of 52 times on average in the long run?
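For what it's worth, here's a minimal Python sketch that simulates both strategies (the rank labels, seed, and trial count are arbitrary choices):

[ CODE ]
import random

RANKS = "23456789TJQKA"
DECK = list(RANKS) * 4   # 52 cards; only the rank matters here

rng = random.Random(0)
N = 1_000_000

# Strategy 1: always guess "ace", then draw one card from a fresh deck.
ace_hits = sum(rng.choice(DECK) == "A" for _ in range(N))

# Strategy 2: guess a uniformly random rank, then draw one card.
rand_hits = sum(rng.choice(RANKS) == rng.choice(DECK) for _ in range(N))

print(f"always guess ace:  {ace_hits / N:.5f}")
print(f"random rank guess: {rand_hits / N:.5f}")
print(f"theory (4/52):     {4 / 52:.5f}")
[/ CODE ]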

LetYouDown
09-15-2005, 09:14 AM
[ QUOTE ]
Is there a difference?

[/ QUOTE ]
No.

[ QUOTE ]
In the long run a particular denomination, say ace, is going to pop up 4 out of 52 times on average. Does that also mean that if you guess a different card denomination every draw, you're also going to get that denomination 4 out of 52 times on average in the long run?

[/ QUOTE ]
Yes.

09-17-2005, 03:51 AM
Ah, thanks a lot.

AaronBrown
09-17-2005, 01:03 PM
LetYouDown gave you an excellent terse answer; I will be a little more wordy.

Although this statement is true, it turns out to be surprisingly hard to construct a theory of probability that supports it. This was not accomplished until the 1930s.

Also, this assumes you reshuffle the deck every time, and that your guess about the card is independent of the shuffle. As a counterexample, if you keep dealing from one deck and always guess that the next card will match the rank of the card just dealt, you will be right only 1 time in 17 (only 3 of the remaining 51 cards share that rank) instead of 1 time in 13.
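A minimal Python sketch, if you want to see that 1-in-17 figure fall out of a simulation (the seed and trial count are arbitrary):

[ CODE ]
import random

RANKS = list(range(13))
rng = random.Random(1)

hits = guesses = 0
for _ in range(100_000):
    deck = RANKS * 4            # 52 cards, ranks only
    rng.shuffle(deck)
    # Guess that each card matches the rank of the card just dealt.
    for prev, nxt in zip(deck, deck[1:]):
        hits += (prev == nxt)
        guesses += 1

print(f"dependent-guess hit rate: {hits / guesses:.5f}")
print(f"1/17 = {1 / 17:.5f}   vs   1/13 = {1 / 13:.5f}")
[/ CODE ]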

09-18-2005, 11:34 AM
[ QUOTE ]
LetYouDown gave you an excellent terse answer; I will be a little more wordy.

Although this statement is true, it turns out to be surprisingly hard to construct a theory of probability that supports it. This was not accomplished until the 1930s.

Also, this assumes you reshuffle the deck every time, and that your guess about the card is independent of the shuffle. As a counterexample, if you keep dealing from one deck and always guess that the next card will match the rank of the card just dealt, you will be right only 1 time in 17 (only 3 of the remaining 51 cards share that rank) instead of 1 time in 13.

[/ QUOTE ]

Ah, thanks! You're always knowledgeable about such things. Any websites to recommend on probability theory, from its infancy to now? I'm very interested in probability theory, not really because of gambling but because it seems to govern so many aspects of life.

LetYouDown
09-18-2005, 12:14 PM
[ QUOTE ]
LetYouDown gave you an excellent terse answer; I will be a little more wordy.

[/ QUOTE ]
LOL, first post in Probability that actually made me laugh. I didn't really base my statements on any theory; it just seemed intuitively obvious. Is there a reason they had difficulty proving this?

AaronBrown
09-18-2005, 01:47 PM
Thanks for the kind words. The world expert on the history of statistics is Stephen Stigler (http://www.hup.harvard.edu/catalog/STIHIS.html) (son of the great economist George Stigler). But if you're interested in the philosophy of the field, the best book is The Foundations of Statistics by Leonard Savage (http://www-groups.dcs.st-and.ac.uk/~history/Mathematicians/Savage.html). Another great book is Nassim Taleb's (http://www.fooledbyrandomness.com/) Fooled by Randomness.

This good page of links (http://www.york.ac.uk/depts/maths/histstat/welcome.htm) is maintained by Peter Lee of the University of York.

AaronBrown
09-18-2005, 02:17 PM
I need to make a partial retraction: this particular example is not hard. Since the probability of drawing any rank from a well-shuffled deck is equal, it is obvious that it doesn't matter how you choose the rank to match.

The hard part is proving the theorem in general. Given two independent random variables X and Y and some event E (that is, some set of values of X and Y, such as the set where X > Y, or the set where X + Y < 1), prove that the probability of E equals the expected value, over all possible values of X, of Pr{E|X}, the probability of event E given the value of X.
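To make that concrete with one simple case (my illustration, not part of the general proof): let X and Y be independent and uniform on (0,1), and let E be the event X > Y. Then Pr{E|X=x} = x, so the expected value of Pr{E|X} is E[X] = 1/2, which matches Pr{E} computed directly. A minimal Python check (seed and sample size are arbitrary):

[ CODE ]
import random

rng = random.Random(7)
N = 1_000_000

# Direct estimate of Pr{X > Y}.
direct = sum(rng.random() > rng.random() for _ in range(N)) / N

# Estimate of E_X[Pr{E|X}] = E[X], since Pr{X > Y | X = x} = x.
via_conditioning = sum(rng.random() for _ in range(N)) / N

print(f"Pr{{X > Y}} directly:  {direct:.4f}")
print(f"E_X[Pr{{E|X}}] = E[X]: {via_conditioning:.4f}")
[/ CODE ]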

For example, suppose I choose p from a uniform distribution over (0,1), then flip a coin that has probability p of coming up heads until I get a tail, and count the number of heads that come up before the first tail. The chance of getting n heads before the first tail, given p, is (1-p)*p^n. If I integrate this over p, I get 1/[(n+1)*(n+2)]. I want to say this is the probability of getting n heads before the first tail. But the resulting distribution of n has infinite expected value, despite the fact that n has finite expected value for any legal value of p. That bothered some people until we got a good measure theory.
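Both halves of that are easy to check numerically; a minimal Python sketch (seed and sample sizes are arbitrary):

[ CODE ]
import random
from collections import Counter

def heads_before_tail(rng):
    p = rng.random()            # p ~ Uniform(0, 1)
    n = 0
    while rng.random() < p:     # flip the p-coin; heads w.p. p
        n += 1
    return n

# Note: runtime is heavy-tailed, since E[n] is infinite.
rng = random.Random(42)
N = 500_000
counts = Counter(heads_before_tail(rng) for _ in range(N))
for n in range(5):
    print(f"n={n}: simulated {counts[n] / N:.4f}, "
          f"exact 1/((n+1)(n+2)) = {1 / ((n + 1) * (n + 2)):.4f}")

# The mean diverges: partial sums of n/((n+1)(n+2)) grow like log(M).
for M in (10, 1_000, 100_000):
    partial = sum(n / ((n + 1) * (n + 2)) for n in range(M))
    print(f"partial sum of the mean up to {M}: {partial:.3f}")
[/ CODE ]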