Two Plus Two Older Archives  

Two Plus Two Older Archives > Tournament Poker > Multi-table Tournaments
  #81  
Old 09-10-2004, 05:44 PM
SossMan SossMan is offline
Senior Member
 
Join Date: Apr 2003
Location: Bay Area, CA
Posts: 559
Default Re: Don't You Guys Understand This Simple Fact?

[ QUOTE ]
Bonds has had the count reach 3-0 against him 143 times this season. Of those 143 times he has walked 137 times, gotten a hit 5 times, and been retired once. So his OBP in plate appearances where the count reached 3-0 is 142/143 = .993.

[/ QUOTE ]

I'd say that's darn near 1.000.
Reply With Quote
  #82  
Old 09-10-2004, 08:08 PM
Eldog605 Eldog605 is offline
Junior Member
 
Join Date: Sep 2004
Posts: 14
Default Related question

Today I had the chance to triple up. I had 10s, and I knew one opponent could have anything (he was a maniac), and I figured the other had two overcards. We all called all in (I had the short stack). The crazy man flipped over A-7 offsuit, and the other guy flipped A-Q of diamonds. So, first of all, I'd like to know what my chances of winning this hand were going into the flop.

Second, if we assume I am a good player (this is wholly debatable) and I have a 60% shot at doubling up before going broke, what would be the correct winning percentage to seek in order to try to triple up? 55%? 75%? Basically, I'm asking whether my move made sense, considering that I had a 60% chance to double anyway. Would a player who doubles up 80% of the time before going broke make the same move?
Thanks in advance
Reply With Quote
  #83  
Old 09-10-2004, 08:47 PM
aces961 aces961 is offline
Member
 
Join Date: Aug 2003
Location: Urbana, IL
Posts: 69
Default Re: Related question

[ QUOTE ]
Today I had the chance to triple up. I had 10s, and I knew one opponent could have anything (he was a maniac), and I figured the other had two overcards. We all called all in (I had the short stack). The crazy man flipped over A-7 offsuit, and the other guy flipped A-Q of diamonds. So, first of all, I'd like to know what my chances of winning this hand were going into the flop.

Second, if we assume I am a good player (this is wholly debatable) and I have a 60% shot at doubling up before going broke, what would be the correct winning percentage to seek in order to try to triple up? 55%? 75%? Basically, I'm asking whether my move made sense, considering that I had a 60% chance to double anyway. Would a player who doubles up 80% of the time before going broke make the same move?
Thanks in advance

[/ QUOTE ]


You win the hand about 53 percent of the time (it could vary by a slight amount depending on the exact suits of your tens and the A-7).
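
As a rough sanity check, here is a small Monte Carlo sketch of that three-way all-in in Python. It assumes the third-party treys hand evaluator, and the exact suits of the tens and the A-7 are guesses where the post doesn't give them:

# Rough Monte Carlo check of the TT vs AQs vs A7o all-in.
# Assumes the third-party "treys" evaluator (pip install treys); some suits are guesses.
import random
from treys import Card, Evaluator

evaluator = Evaluator()
hero = [Card.new('Th'), Card.new('Tc')]      # pocket tens
aq   = [Card.new('Ad'), Card.new('Qd')]      # A-Q of diamonds
a7   = [Card.new('As'), Card.new('7c')]      # A-7 offsuit (suits assumed)

dead = set(hero + aq + a7)
stub = [Card.new(r + s) for r in '23456789TJQKA' for s in 'shdc'
        if Card.new(r + s) not in dead]

trials, hero_wins = 20000, 0.0
for _ in range(trials):
    board = random.sample(stub, 5)
    scores = [evaluator.evaluate(board, h) for h in (hero, aq, a7)]
    best = min(scores)                        # in treys, a lower score is a stronger hand
    winners = [i for i, s in enumerate(scores) if s == best]
    if 0 in winners:
        hero_wins += 1.0 / len(winners)       # count split pots fractionally
print("TT equity: %.3f" % (hero_wins / trials))   # compare with the ~53 percent figure above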

When we say that a good player has a 60 percent chance to double up before going broke, this is a generalization over his whole game. All of these situations you can bring up are therefore already part of what gives him that 60 percent chance. You shouldn't change how you play just to try to get to that 60 percent; you should only change the plays you make because they aren't the correct plays.

Going all in here with the short stack is a very good play, and a somewhat obvious one against the maniac.

Now, if a good player has a 60 percent chance of doubling up, he is going to have about a .6^1.5 chance of tripling up (tripling is roughly like doubling one and a half times), which comes out to about 46.5 percent.
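
A minimal arithmetic sketch of that last step, assuming the usual model in which multiplying your stack by 2^k succeeds with probability p^k (the exponent 1.5 is the post's rounding for a triple-up; the exact exponent log2(3) is shown alongside for comparison):

# Chance of tripling up for a player who doubles up 60% of the time,
# under the model P(multiply stack by 2**k) = p**k.
import math

p = 0.60
print(round(p ** 1.5, 3))           # 0.465, the figure quoted above
print(round(p ** math.log2(3), 3))  # 0.445, using the exact exponent log2(3)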
Reply With Quote
  #84  
Old 09-11-2004, 01:29 AM
sdplayerb sdplayerb is offline
Senior Member
 
Join Date: Dec 2002
Location: San Diego, CA
Posts: 380
Default Re: Don't You Guys Understand This Simple Fact?

sorry, but you are way overestimating the intelligence of people.
Reply With Quote
  #85  
Old 09-11-2004, 06:56 AM
Charon Charon is offline
Junior Member
 
Join Date: Jun 2004
Posts: 22
Default Answer

This is simply Bayes Theorem.

Let x be the event that a player doubles up and
y the event that the player wins the tournament. Then:

p(y|x) = p(x|y)*p(y)/p(x)

Since p(x) > 1/2, it follows that:

p(y|x) < 2*p(x|y)*p(y), and since p(x|y) = 1 (if he wins the tournament he must have doubled up at some point, otherwise he would have busted out), it follows that:

p(y|x) < 2*p(y)

So..guess I saved Sklansky some work.
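
A quick numeric check of that bound (the 0.02 and 0.60 below are made-up illustrative probabilities, not figures from the thread):

# Bayes check: p(y|x) = p(x|y) * p(y) / p(x), and the bound p(y|x) < 2 * p(y)
# whenever p(x) > 1/2 and p(x|y) = 1. The example probabilities are invented.
p_y = 0.02          # chance of winning the tournament
p_x = 0.60          # chance of doubling up before going broke
p_x_given_y = 1.0   # anyone who wins must have doubled up along the way

p_y_given_x = p_x_given_y * p_y / p_x
print(p_y_given_x)             # 0.0333...
print(p_y_given_x < 2 * p_y)   # True, as the bound predicts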
Reply With Quote
  #86  
Old 09-11-2004, 09:27 AM
Pensive Gerbil Pensive Gerbil is offline
Member
 
Join Date: Apr 2003
Posts: 76
Default Wow, look what you did to my thread! Let me elaborate...

Thank you for contributing...sorry I missed the fun! I have not yet read any of the posts that follow yours. I thought it was common knowledge among well-read tournament players that if everyone is equally skilled, then the chance of taking first place is generally proportional to the percentage of the total chips that are in your stack.

As you point out, this simple principle does not hold for players whose skills are above average or below average. To elaborate by addressing your question: the probability that an above-average player with 20K will double to 40K before going broke is greater than 50% (except perhaps if he is so short-stacked that he is in danger of being blinded off). Therefore, his chance of winning with 40K must be less than twice what it would be with 20K.

To further illustrate, suppose our above-average hero has a 55% chance of doubling up. If the total chips in play were 80K, then 40K would give our hero a 55% chance of victory. If our hero controlled 20K in chips, his chance of doubling to 40K would be 55%, and his chance of doubling again to 80K would be 55%. Therefore, 20K would give our hero a .55 x .55 = 30.25% chance of victory. As you can see, 55% is less than twice as much as 30.25%. [Due to rising blinds, our hero's chance of doubling up may decline over time. As long as it does not fall below 50%, however, this principle remains valid.]

It is easy to see that in the case of a below-average player, 40K will provide more than twice the chance to win the tournament as 20K in chips.
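
The same reasoning as a short sketch, assuming the simple repeated-doubling model in which the chance of winning from a given stack is the doubling probability raised to the number of doublings still needed (log2 of total chips over stack):

# Win probability under the repeated-doubling model: p ** log2(total / stack).
# 80K chips in play; compare a 40K stack against a 20K stack.
import math

def win_prob(p_double, stack, total=80_000):
    return p_double ** math.log2(total / stack)

for p in (0.55, 0.50, 0.45):   # above-average, average, below-average player
    w40, w20 = win_prob(p, 40_000), win_prob(p, 20_000)
    print(p, round(w40, 4), round(w20, 4), round(w40 / w20, 3))
# p = 0.55: 0.55 vs 0.3025, a ratio under 2, so doubling less than doubles his chances.
# p = 0.45: 0.45 vs 0.2025, a ratio over 2, matching the below-average case above.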

I'm afraid I won't have time to read or respond to more posts until possibly Sunday.

Regards,

PG
Reply With Quote
  #87  
Old 09-11-2004, 09:52 AM
Pensive Gerbil Pensive Gerbil is offline
Member
 
Join Date: Apr 2003
Posts: 76
Default Re: Negreanu's tournament theory regarding big pots.

Now that I (and presumably others) have elaborated, I would much appreciate any further comments you may have on the original topic of this thread. I expected you to agree that above-average players should be generally less inclined to chase big pots with marginal odds than below-average players. Do you also agree that this practice would be better (or less bad) for good players who are desperately short-stacked (since they will likely have less opportunity to benefit from their skill edge)?

What do you think of the argument that some above-average players are justified in chasing big pots with slightly inadequate odds because their skill edge with a large stack is greater than their skill edge with a lesser stack? I suspect that in reality, this potential skill edge differential rarely justifies chasing pots with inadequate odds, due to the offsetting value that superior players realize by minimizing the variance of their tournament play.

Regards,

PG
Reply With Quote
  #88  
Old 09-11-2004, 12:40 PM
David Sklansky David Sklansky is offline
Senior Member
 
Join Date: Aug 2002
Posts: 241
Default Re: Negreanu's tournament theory regarding big pots.

Agreed. Daniel is wrong (for himself). He would never realize it from experience, however, since it is no big deal either way. Notice, though, that the concept is correct as regards that rare bird who plays well with a big stack but is less than even money to double up with a small stack.
Reply With Quote
  #89  
Old 09-11-2004, 04:22 PM
JNash JNash is offline
Junior Member
 
Join Date: Feb 2004
Location: Chicago, IL
Posts: 22
Default Re: S-Curve Hypothesis

If my hypothesis is correct (and I will need to post it in the theory forum to get more comments), I believe that for a big stack there are two opposing forces at work:

1) Part 1 of the hypothesis is that the relationship between chip count and the fair value (FV) of those chips (I'll call this the "payoff function") is concave for big stacks and convex for small stacks, with the payoff function having its inflection point (switching from convex to concave) at the average chip count.

Incidentally, saying that the payoff function is concave is the same as saying that each chip you win is worth less than each chip you lose. (A commonly accepted statement). Saying that the payoff function is convex is the same as saying that each chip you win is worth MORE than each chip you lose. I claim that this is the case for short stacks. (This latter statement is, I believe, more controversial.)

If this is true, then a 50/50 situation (with a chip-EV of zero for both sides) will have a negative EV as measured in fair value terms for the big stack, and a positive one for the small stack. The implication is that short stacks should seek out chip-EV coinflips, while big stacks should avoid them. Or, to put it differently, short stacks can be correct from a FV-EV perspective to play hands that have negative chip-EVs. (Just how negative it can be and still be justified depends on how convex the payoff function is). [This is also the part of this post that relates to the original subject of the Negreanu thread--i.e. why and when it might make sense to go for big pots when you're seemingly getting the worst of it.]
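
To make the sign of that effect concrete, here is a tiny sketch using made-up stand-ins for the two regions of the payoff function (square root for the concave big-stack region, square for the convex short-stack region); neither is claimed to be the actual curve:

# Illustrative only: with a concave payoff a zero-chip-EV coinflip is negative-EV
# in fair-value terms, while with a convex payoff it is positive-EV.
import math

def fv_ev_of_coinflip(payoff, stack, bet):
    # Chip-EV of the flip is zero; the fair-value EV generally is not.
    return 0.5 * payoff(stack + bet) + 0.5 * payoff(stack - bet) - payoff(stack)

concave = math.sqrt                    # stand-in for the big-stack part of the S-curve
convex = lambda chips: chips * chips   # stand-in for the short-stack part

print(fv_ev_of_coinflip(concave, 10_000, 2_000))  # negative: big stack should avoid flips
print(fv_ev_of_coinflip(convex, 10_000, 2_000))   # positive: short stack should seek them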

2) The second part of the hypothesis is that big stacks have an a priori (i.e. before the cards are dealt) higher probability of winning the next hand than a small stack. That's because they get more respect, and so their bluffs and blind-steals have a higher probability of being successful.

If this is true, then big stacks can and should play more aggressively (open-raise pre-flop, bluff-raise or semi-bluff raise, etc.) and are actually correct (in the sense of having positive chip-EV) even though a short stack in the same situation might have a negative chip-EV.

These two forces point in opposite directions: avoiding 50/50 bets sounds like a "conservative" thing to do, while playing more hands (and raising them aggressively) sounds like a more "aggressive/loose" way to play.

While these two forces seem at odds, they can actually co-exist. If the big stack has a higher probability of being successful with bluffs and aggressive plays, then the big stack can enter more pots (i.e. situations that might have been negative chip-EV for a short stack become positive EV for the big stack.) As a result, the big stack is correct to play looser and more aggressively.

Now in situations where the big stack's clout has no value (e.g. when calling an all-in bet from a short stack), the only thing that matters is the objective odds based on the cards. In this case, the concavity of the payoff function for the big stack (and the convexity of the function for the short stack) says that a 50/50 situation (with zero chip-EV for both sides) is bad for the big stack and good for the short stack.

So while the two forces go in opposite directions, they do not contradict each other. Hope that helps...
Reply With Quote
  #90  
Old 09-11-2004, 04:49 PM
JNash JNash is offline
Junior Member
 
Join Date: Feb 2004
Location: Chicago, IL
Posts: 22
Default Re: S-Curve Hypothesis

[ QUOTE ]


Of course it is going to be possible for a player to have a chance of doubling up of under .5 with the small stack and over .5 with the large stack, and in this case obviously what you say will be correct. I'm just saying that this player might think there is nothing he can do about this fact, but as long as the blinds are still reasonable in comparison to his stack and we aren't in a bubble situation, with work on his game he should be able to get his probability of doubling up with a smaller stack above .5.

[/ QUOTE ]

That is the essence of the S-curve hypothesis. Even if all players have equal skill, their chip count influences their probability of winning. You state it in terms of the probability of doubling up, but I believe the point is even more general: a priori, before the cards are dealt, the short stack has a negative chip-EV and the big stack a positive chip-EV. Incidentally, "by symmetry" (since this is, after all, a zero-sum game), if there were only two players, the amount of positive chip-EV for the big stack would be exactly equal to the amount of negative chip-EV of the small stack.

The practical question is when in a tourney this S-curve effect actually becomes pronounced, versus when the payoff function is simply linear. I believe that when the blinds are low compared to the average chip count, the curve is pretty much linear, and your ring-game EV calculations are correct. But as the blinds increase relative to the average stack size, I believe the curve becomes more S-shaped, certainly around bubble time, but even beyond that.

Further, I believe that my hypothesis holds true no matter how steep the payout structure, i.e. it would include winner-take-all situations. Mind you, the exact shape of the payoff function will certainly depend on the particulars of the payout structure, but the general convex-concave feature should not be affected by this.

I even believe (gasp--dare I say it?) that this applies to heads-up situations in no-limit winner-take-all settings, as long as the blinds continue to increase regularly. [I say this knowing that this sounds heretical and contrary to all accepted gospel, but I'll just need to go to the theory forum and take my licking...]
Reply With Quote