Two Plus Two Older Archives  

#11
09-02-2003, 11:53 AM
Copernicus
Senior Member
Join Date: Jun 2003
Posts: 1,018
Re: Tournament Theory Question Part 2

Rob, your first equation reflects getting all the way to 20K. Your second equation only reflects the counts after this one hand. A better approach to the 2nd equation, imo (if there is no more betting this hand), is

20000*p*P(20,15) + 20000*(1-p)*P(20,5) = 12000, where p is the probability of winning the hand and P(20,x) is the probability of getting to 20 from x.

If you estimate P(20,15) as .8 (you have cut your chances of NOT getting to 20 in half) and P(20,5) as .3 (you have cut your chances of getting to 20 in half), then the break-even p is .60, which intuitively feels like the right answer given the two "halving" assumptions... if everything is proportional, the original .6 shouldn't change.

However, my guess is that if you look at the probability distributions that give a more skilled player a 60% rate of getting to 20, the chance of getting to 20K is better than 80% once you've made it to 15 and better than 30% after a drop to 5. If you just change P(20,15) to .875 and P(20,5) to .375, then the break-even p drops all the way to 45%.

At P(20,15) = .875 and P(20,5)=.3, p=.52.
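
For concreteness, a minimal Python sketch of this break-even calculation (the helper break_even_p is just for illustration; the inputs are the reach-20K estimates discussed above):

[ CODE ]
def break_even_p(p20_from_15, p20_from_5, p20_from_10=0.6):
    # Solve 20000*p*P(20,15) + 20000*(1-p)*P(20,5) = 20000*P(20,10)
    # for p, the win probability needed in this hand.
    return (p20_from_10 - p20_from_5) / (p20_from_15 - p20_from_5)

print(round(break_even_p(0.8, 0.3), 4))      # 0.6, the proportional "halving" case
print(round(break_even_p(0.875, 0.375), 4))  # 0.45
print(round(break_even_p(0.875, 0.3), 4))    # 0.5217
[/ CODE ]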

I'm going with A.

#12
09-02-2003, 11:58 AM
Copernicus
Senior Member
Join Date: Jun 2003
Posts: 1,018
Re: Tournament Theory Question Part 2

You're right, I misread the problem. In light of your clarification, I think my response to Robk is a reasonable approach to the problem. I would guess that most reasonable scenarios of getting to 20K from 15K or 5K would drive the needed win probability down to the 50% range, so I would go with A as well.

#13
09-02-2003, 01:53 PM
Bozeman
Senior Member
Join Date: Sep 2002
Location: On the road again
Posts: 1,213
Re: Tournament Theory Question Part 2

"Expected chip count if you fold = 20000*.6 = 12000

Expected chip count if you call = 15000*p + 5000*(1-p) = 10000p + 5000 = 12000
=>10000p = 7000 => p = .7"

If you include a correction for your better than average play in the first case, you need to also include it in the second.

If 10K is worth 12K to you, then 5K should be worth 6K and 15K should be worth 18K (assuming you are not amazingly good with either a small stack or a big stack), so I think the second equation should be 18000p + 6000(1-p) = 12000 => p = 50%. The reason is that there is now no element of "you lose, you're out".

There might also be a slight extra value for the big stack because it makes you harder to bust. If 15K is worth 19K to you, then you can call as a slight dog (46.1%).

Thanks for clarifying my thoughts,
Craig

PS: I think this is the proper correction in the style of Copernicus, and it falls between his extremes. Also, everything is contingent on being far from the money; otherwise there is a smaller value for the larger stacks.

#14
09-02-2003, 02:49 PM
Piers
Senior Member
Join Date: Sep 2002
Posts: 246
C

[ QUOTE ]
you think you are 60% to double your initial stack without playing big pots

[/ QUOTE ]

I interpret this as implying that you do not want to take any major risks that are worse than 60-40 in your favor.

Which, unless I am missing something, makes the answer fairly obvious.

#15
09-02-2003, 03:46 PM
M.B.E.
Senior Member
Join Date: Sep 2002
Location: Vancouver, B.C.
Posts: 1,552
Re: Tournament Theory Question Part 2

Assume you start with 10K in chips. If you call this hand and lose, you'll have 5K; then the probability of getting to 20K will be 0.6 x 0.6 = 0.36 (since you have to double through twice). I don't think that's exact, but it should be a good approximation. If you call and win, you'll have 15K; then your chance of reaching 20K might be something like 0.8. So we solve this equation:

0.6 = 0.8p + 0.36(1-p)
0.24 = 0.44p
p = 0.545

So I'll say B.

#16
09-02-2003, 07:06 PM
Avivs
Junior Member
Join Date: Jul 2003
Location: New York
Posts: 9
Re: Tournament Theory Question Part 2

Dear M.B.E.,
Maybe in a bubble your math is correct, but it should be clear that if you lose this hand, your chances of getting back to 10K are not the same as your chances of doubling up from 10K to 20K, because of the different strategies you might have to use with your (now) short stack.
The same reasoning also applies to your chances of getting to 20K if you win this hand...

#17
09-02-2003, 10:30 PM
elindauer
Senior Member
Join Date: Jun 2003
Posts: 292
An answer, and a question...

See my response to your post titled "Cute Applicable Math Question", in which I derive that the probability of player 1 doubling up when facing this situation over and over is:

p^2/(1-2p(1-p))

Setting this equal to .6, we find that p=.55.
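
A quick numerical check of that figure (a minimal sketch; the function name is just for illustration):

[ CODE ]
import math

def p_double_up(p):
    # Probability of doubling up when repeatedly taking this gamble,
    # per the formula above: p^2 / (1 - 2p(1-p)) = p^2 / (p^2 + (1-p)^2).
    return p * p / (p * p + (1 - p) ** 2)

# Setting p^2 / (p^2 + (1-p)^2) = 0.6 gives (p/(1-p))^2 = 0.6/0.4,
# so p = sqrt(1.5) / (1 + sqrt(1.5)).
p = math.sqrt(1.5) / (1 + math.sqrt(1.5))
print(round(p, 4), round(p_double_up(p), 4))   # 0.5505 0.6
[/ CODE ]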


This is a fantastic question, since when you are calling all-in, you need to be a 60% favorite (in most tournament payout schemes) for the call to actually be positive EV in real dollars. I believe Mr. Sklansky is driving at a fundamental question in tournament poker... how do you apply this fact to situations where you are not calling all-in?

We've found part of the answer... if you're calling for half your stack, you'll need to be a 55% favorite just to break even in real payout dollars. I suspect that as the percentage of your stack gets smaller, the amount you need to be favored by will drop ever closer to 50%.


Now an open question... you almost never face an even-money call in real tournament poker; you're always getting odds on your money. How does this factor into the equation?

For example, you are faced with calling for half your stack on the river, and the pot is laying you 2:1 odds. What win rate do you need to maintain a double-up probability of 60%? We know you'd need a 55% win rate if it were an even-money bet... do you just need the same 10% expected return on your money (translating to an 11/30 win rate if the pot lays you 2:1), or is the answer more complex?
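
As a quick check of where 11/30 comes from (assuming "same 10% expected return" means the same EV per chip risked; whether that is the right criterion is exactly the open question):

[ CODE ]
# Even money at a 55% win rate: EV per chip risked = 0.55 - 0.45 = 0.10.
# At 2:1 pot odds with win rate w: EV per chip risked = 2*w - (1 - w) = 3*w - 1.
# Requiring the same 0.10 edge gives w = 1.10 / 3 = 11/30.
w = (1 + 0.10) / 3
print(round(w, 4), round(11 / 30, 4))   # 0.3667 0.3667
[/ CODE ]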

#18
09-05-2003, 01:51 AM
sj_poker
Junior Member
Join Date: Sep 2003
Location: South SF Bay
Posts: 18
Re: Tournament Theory Question Part 2

Perhaps I'm missing something in the wording of the question: "you think you are 60% to double your initial stack without playing big pots". What happens the other 40% of the time? Do you lose it all? In other words, if we're doing these calculations based on the eventual expected value of your session after a given play, then we would need to know each possible outcome and the probability of it occurring.

Also, if you do take the chance on the half-all-in hand, are we assuming that with the resulting chips you possess after the hand (either half or 1.5 times what you started with), you again have a 60% chance of doubling up? If so, how would this be the case? If you were playing for just a fixed amount of time, then this would sort of make sense, but the betting amounts (and what you consider a "big pot") wouldn't change, so it wouldn't seem that this is the case.

But since this seems to be referring to an earlier setup that's been posted, I'm probably just missing something. If so, please let me know. It sounds like a very interesting question, but I just can't get around the setup.

Also, as a disclaimer, I'm not trying to be nitpicky or anything (that's one of the reasons I stopped reading RGP, because you can't post anything w/o everyone trying to criticize it). I'm very interested in these sorts of questions. Maybe I should just wait for the answer and then I'll see why I can't get this one.

[also, I know nothing about tournament play].

#19
09-05-2003, 04:53 PM
emanon
Junior Member
Join Date: Aug 2003
Posts: 8
Towards a solution...

This post is more of a step toward David's question from the coin-flip simplification.

1. Introducing path dependence.
In the coin flip question, the path taken to the result is ignored.

In a poker tournament this is an invalid assumption, as for equal players your P(victory) = % of chips held.

Even for unequally skilled players a change in relative chips will affect the outcome.

Q1. How does path dependence affect probabilities?
Let's look at the extreme case in the $2 freezeout. If you win the first flip, your probability of victory increases. As P(victory | at step 2) goes toward 1, the likelihood of victory in the overall problem moves toward the single-flip case.

This effect explains why tournament pros prefer lots of small pots, and an amateur can minimize the skill difference using "The System" from the Tournament poker book.

Q2. If I am better, can we quantify in any way how the probabilities will change?
Here is my stab at a solution for a 2 player tournament:
PlyrA is 1.5x better than PlyrB.
Assume this skill advantage has a constant effect.
Then weight PlyrA's chips at 1.5x PlyrB's.
The new P(A wins a step) = 1.5*A_chips / (1.5*A_chips + B_chips)

I don't see why this wouldn't extend to multiple players.

2. P(victory in freezeout) given PlyrA loses the heads up.
I solved this by writing a program to simulate the outcomes.
On 10,000 paths, it came up with PlyrA winning ~20% of the time.
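
The Excel macro itself isn't shown here, so below is a minimal Python sketch of this kind of simulation using the Q2 weighting. The starting stacks, step size, and the conditioning event (taken here as PlyrA losing the first step) are assumptions, so the output is illustrative and need not match the ~20% figure above.

[ CODE ]
import random

def p_a_wins_step(a_chips, b_chips, skill=1.5):
    # Q2 weighting: count PlyrA's chips at 1.5x PlyrB's.
    return skill * a_chips / (skill * a_chips + b_chips)

def simulate(start=100, step=10, trials=10_000, skill=1.5):
    # Fraction of heads-up freezeouts PlyrA wins, conditioned on
    # PlyrA having lost the first step. All parameters are placeholders.
    total = 2 * start
    wins = 0
    for _ in range(trials):
        a = start - step                      # PlyrA just lost step 1
        while 0 < a < total:
            if random.random() < p_a_wins_step(a, total - a, skill):
                a += step
            else:
                a -= step
        wins += (a >= total)
    return wins / trials

print(simulate())
[/ CODE ]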

3. Determine a value for the 2 separate paths.
There are 2 possible paths:
I win the hand for 1/2 my stake.
I lose the hand for 1/2 my stake.
The question here is: how have the probabilities changed now that your stack size has changed?
An assumption could be made, or you could do an analysis similar to that above.

P(I win this hand)*EV(win) + P(I lose this hand)*EV(lose) = current EV

4. Other thoughts/issues...
I haven't had time to finish developing the above, just my thoughts to this point...
Looking at it, I think where it will eventually end up is something similar to a Sharpe ratio in finance. The Sharpe ratio helps determine portfolio performance on a risk-adjusted (i.e., SD-adjusted) basis.

The smaller and more numerous your bets, the higher your real-money EV (assuming you have the best of it).
The larger your bets, the greater the SD, and the less valuable your skill edge.

All of this leads to mathematical affirmation of "The System" being a successful strategy to pursue if you are up against much better players.

The key will be determining the optimal EV strategy relative to the SD.

Also, my simulation code is in an Excel macro, and I can post the code if people are interested.