AaronBrown
09-18-2005, 03:21 PM
Re: Starting Hands - 6max vs Folded Full Ring

There is a reason this can be true, but there is also a fallacy that makes it seem true. I'm not sure whether you are basing your thinking on the correct or incorrect logic.

Players who fold are less likely to have high cards, especially Aces, than players who call or raise. So if the first 8 hands fold in a 10-handed game, and you're the small blind, you have to figure the chance the big blind holds an Ace is higher than if you were playing heads up (also Aces are more likely on the board).

For an extreme example, suppose people play only hands with Aces and pairs. In heads up play, the chance that the other player has an Ace before he bets is 15%. If he bets there's a 73% chance he has an Ace. But if 8 players have folded, you know there are 16 non-Aces removed from the deck. That raises his probability of having an Ace to 21% before he bets. If he does bet, his chance of having an Ace is 80%.
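As a sanity check, here is a short Python sketch (mine, not from the post) that reproduces these numbers under the toy rule. The 15% and 21% figures are exact; the 80% figure requires an extra assumption, since we only know the folders held no Aces, so the non-Ace pair probability is averaged over which 16 non-Aces were removed.

```python
from math import comb

# Toy model from the post: players play only hands containing an Ace
# or a pair. "Before he bets" is the unconditional 2-card probability;
# "if he bets" conditions on the hand being an Ace or a pair.

def p_ace(aces, others):
    """P(at least one Ace in 2 cards drawn from aces + others cards)."""
    return 1 - comb(others, 2) / comb(aces + others, 2)

# Heads up, full 52-card deck: 4 Aces, 48 non-Aces.
print(round(p_ace(4, 48), 3))                           # 0.149 (the 15%)

# Non-Ace pair, full deck: 12 ranks, choose 2 of that rank's 4 cards.
pair = 12 * comb(4, 2) / comb(52, 2)
print(round(p_ace(4, 48) / (p_ace(4, 48) + pair), 3))   # 0.733 (the 73%)

# Eight folders known to hold no Aces: 16 non-Aces removed, 36 cards left.
print(round(p_ace(4, 32), 3))                           # 0.213 (the 21%)

# Non-Ace pair after removal: the folders' ranks are unknown, so average
# over removals. A given pair of same-rank cards both survive the
# removal of 16 of the 48 non-Aces with probability (32/48)*(31/47).
pair2 = 12 * comb(4, 2) * (32/48) * (31/47) / comb(36, 2)
print(round(p_ace(4, 32) / (p_ace(4, 32) + pair2), 3))  # 0.809 (the 80%)
```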

However, while this is true, you can see that it's not very useful. In the first place, actual play is more complex than this example, so you can only guesstimate some small increased probability of an Ace. It's clearly not a big effect. And you only care about it if he bets, in which case it may not make that much difference to you whether he's coming in on an Ace or a pair. Also, a smart player is aware of this effect, and slightly more likely to bet a non-Ace hand in this situation. That could cancel out, or reverse, the effect.

The fallacy logic does not depend on specific cards. It says people can come in on strong hands or weaker ones. They always bet the strong ones, and sometimes bet weaker ones. Since it's likely that someone among the first nine players has been dealt a strong hand, if only one person comes in it seems likely that person has a strong hand.

This breaks down when you put numbers on it. Say people always bet the strongest 10% of starting hands, and come in to 30% of pots overall (so 20% of the time they bet a weaker hand they might also have folded). It doesn't matter what numbers you pick, or whether you specify different probabilities for different players based on position or inclination. Call the first number a and the second number b.

In heads up play, if the other player bets, the chance of him having a strong hand is a/b. At a larger table, if exactly 1 of the first N players bets, the probability is the same a/b. You can compute this by seeing that the probability that exactly one of N players comes in on a strong hand is:

N*a*(1-b)^(N-1)

You can select the single player in N ways; she must have been dealt a strong hand (probability a), and each of the other N-1 players must have folded (probability 1-b each). The probability that exactly one player came in on a weaker hand is:

N*(b-a)*(1-b)^(N-1)

You can select the single player in N ways; he must have been dealt a weaker hand and decided to play it (probability b-a), and the other N-1 players must have folded.

We can eliminate the common factor N*(1-b)^(N-1) from both formulae, because we observed the event that exactly 1 of the first N players stayed in, and it can happen in only these two ways. The probability that it happened due to a strong hand is therefore a/(a + (b-a)) = a/b, the same as in heads up play.
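The derivation above can also be checked by simulation. The sketch below (my own rendering of the post's assumptions) deals each player a uniform random number, treats values below a as strong hands and values below b as hands that come in, conditions on exactly one entrant, and measures how often that entrant was strong. The answer comes out near a/b regardless of N.

```python
import random

# Model of the post's setup: each of N players independently gets a
# "strong" hand with probability a (always bet), a weaker-but-playable
# hand with probability b - a (also bet), and folds otherwise.

def p_strong_given_one_bet(n_players, a, b, trials=200_000, seed=7):
    """Estimate P(strong hand | exactly one of n_players bets)."""
    rng = random.Random(seed)
    one_bet = strong_bet = 0
    for _ in range(trials):
        hands = [rng.random() for _ in range(n_players)]
        bettors = [h for h in hands if h < b]   # h < b means "came in"
        if len(bettors) == 1:
            one_bet += 1
            if bettors[0] < a:                  # h < a means "strong"
                strong_bet += 1
    return strong_bet / one_bet

a, b = 0.10, 0.30
for n in (1, 2, 9):
    print(n, round(p_strong_given_one_bet(n, a, b), 2))
# Each line comes out roughly a/b = 1/3, independent of n.
```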

Of course, this is a highly simplified mathematical example. In real play the assumptions will not hold, and there will be differences in hand strengths. All I've shown is there's no fundamental reason to assume one caller out of N has a stronger hand than one caller out of one. Depending on your assumptions, it could go either way.