
View Full Version : The ICM is not all it's cracked up to be


zephyr
10-24-2004, 03:28 AM
There has been a recent resurgence in the volume of posts that use the ICM for analysis. Most of these posts have been well thought out, with the hand analyses verging on brilliance.

However, I just wanted to mention a note of caution. Although the ICM is a clever variable transformation from CEV->$EV, it is by no means a perfect mathematical representation of this aspect of the game. A proven mathematical transformation from CEV->$EV is still very far away.

Also, I think results obtained from doing an analysis as illustrated in dethgrind's recent post should only be taken as a very rough estimate. I think the thought process is good, with the critical step being the range of hands that you put your opponent on. Personally, I'd always trust my gut over the numbers I crunch, though.

I'm not trying to criticize the ICM as a method of analysis; I'm just pointing out that it is only a rough estimate.

Only my opinion,

Zephyr

Solitare
10-24-2004, 09:35 AM
My vote for the biggest weakness in the ICM is that it doesn't take into account the size of the blinds. For instance, if you have 500 on the bubble I would think your EV is larger if the BB is at 50 than if the BB is at 200.

I think this is particularly relevant to the SnGs we play given the blind structure.

RacersEdge
10-24-2004, 02:04 PM
What exactly does ICM say?

stupidsucker
10-24-2004, 05:33 PM
Although I agree that the ICM can certainly use some adjustments and it's not perfect... but...

It's still the best weapon we have to calculate $EV as opposed to chip EV. You can't come on here and say it's bad and not give reasons why or try to make suggestions on how it can be made better. Got any better ideas?

It is lacking in a few areas.

#1(as stated by a previous poster) The blinds make a difference. They make a fairly huge difference.

#2 Skill level has been mentioned before, but this is near impossible to factor in. My ROI is about 30% and my ITM about 40%, so I know that my share of the prize pool is higher than 10% when I start, but how can you weigh this?

#3 Stealing fear. I don't think it takes into account certain plays, like a big stack stealing when you are on the bubble and a short stack is there against some mid stacks.

All things aside, I see the ICM as one of the best tools we have. With time will come improvements. A lot of work went into making it (I assume). I know one of the people involved with creating the ICM, and I trust his math skills much more than I do my own.

zephyr
10-24-2004, 08:16 PM
[ QUOTE ]
You can't come on here and say it's bad and not give reasons why or try to make suggestions on how it can be made better. Got any better ideas?

[/ QUOTE ]

I'm entitled to come on here and say whatever I feel like. But alas, I wasn't posting on the qualifications of those who proposed the ICM, nor do I want to get into a debate over whose math skills are best. I agree that it is a powerful tool for transforming CEV into $EV, but the point of my post was only to remind people that it is nevertheless a very crude and oversimplified technique.

In addition to the points mentioned by you I think one of its biggest pitfalls is its assumption of linearity.

Say it's heads up with each opponent holding 5000 chips. The ICM says that with equal skill, each opponent has a 50% chance of getting first and a 50% chance of getting second. I agree with this. Next, if opponent 1 now holds 7500 chips to the other's 2500, the ICM would give opponent 1 a 75% chance of finishing 1st and a 25% chance of finishing 2nd. I do not necessarily agree with this.

I have a problem with: chance of finishing 1st = Chips/Total Chips

Why do we choose a linear relationship?
What other options do we have?

In the simple case of two players heads up I think that

chance of finishing 1st = f(stack size, blind size, position, skill level, opponents tendencies, play of previous hands, etc.)

Where some of these quantities are not quantifiable.

I'm not terribly familiar with the derivation of the ICM, but I am certain that a great many assumptions are made. I'm curious to hear over what range people believe it's valid. Say, for example:

4 players: 1-2000, 2-3000, 3-1500, 4-3500,

ICM gives Opponent 4 $EV=0.31

How accurate is this? Within what confidence range? 95% of the time will you be between 0.29 and 0.33 or between 0.25 and 0.37?

Has anyone conducted experimental trials on it? Say recording the chip count when it gets down to 3, calculating your $EV from the ICM and then comparing this prediction with the actual outcomes?

Again, I agree with you that it is currently the best means of going from chips to $. It just tickled me the wrong way when people began posting matter-of-fact solutions to situations that by nature do not have precise solutions. I'm no longer familiar with the progress of game theory as it relates to games of incomplete information, but I don't think it's advanced to the point where such problems can be solved analytically in a general sense.

Only my opinion,

Zephyr

AleoMagus
10-25-2004, 01:35 AM
[ QUOTE ]
I'm no longer familiar with the progress of game theory as it relates to games of incomplete information, but don't think that it’s advanced to the point where such problems can be solved analytically in a general sense.


[/ QUOTE ]

Why, when it can be done for so many other poker problems?

The point is, we are not talking about a game of incomplete information once we specify all the incomplete variables.

If, for example, I assume in a situation my opponent will call with EXACTLY 17.9% of hands in a bubble scenario, the artificiality of ICM rests primarily with this assumption, not with the theory itself (though it may well be flawed also).

I guess that I just think you are criticising the wrong thing. I can see how criticisms might be levelled against the kind of reasoning I gave in the Blind stealing theorem discussion. For anyone unfamiliar with what I'm talking about, I gave a fairly precise set of push-stealing hands assuming roughly equal stacks between SB/BB and being folded to in the SB.

Well, I most definitely might have been wrong about those hands as a general guideline, because they were in no way aimed at solving a general kind of problem.

I assume EXACTLY what my opponent will call with in that spot. How is it a problem of incomplete info if I make this assumption?

If we look at a more mundane, pot equity calculation in a ring game holding a flush draw vs a made hand, we also make exact calculations and nobody ever complains about those calculations in this game of 'incomplete info'. Why? Because we make precise assumptions about all of the incomplete info, thereby filling in the blanks. If we want, we can even put a confidence level on a range of possibilities in order to fill in the missing data, just as we do when using ICM.

What am I saying exactly? Your 'gut feelings' about a situation do not invalidate ICM. They merely augment the assumptions that you put into the equations.

Any thoughts?

Regards
Brad S

zephyr
10-25-2004, 02:13 AM
Perhaps you are not entirely familiar with game theory (not that I am an expert), but the reason for using it in poker, for example, is that it eliminates the need to make judgments about what the opponents hold.

The examples that you stated in your post are not actually examples of game theory, but are rather examples of basic mathematics.

Calculating pot equity, or using the ICM are not examples of game theory. While these give mathematical approximations to parts of the game, game theory offers a solution to the game as a whole. I know the boys over at the U of A in Edmonton are doing a fair amount of research on these topics.

In theory a computer could be programmed to play poker perfectly. When I say perfectly, I don't mean that the computer would necessarily win the most $/hands, but that the computer would be impossible to beat. Of course the computational power needed for this is very very far away, and may never be realized.

Although many of us want to believe that poker is infinitely complex, it is really only a finite game. The hypothetical game theory computer would not adjust based on its opponents, would not model them, and would not use psychological ploys. It would play strictly according to game theory and would be impossible to beat.

On the topic of incomplete information: poker is a game of incomplete information by nature. Whether we put our opponent on a hand or range of hands it does not change the fact that the game is one of incomplete information.

Back to the ICM. I don't know why everyone believes that the ICM is theoretically correct. I'm certain that its developers would never claim that. It's an approximation, and the main point of my original post was to remind people of that fact when using it for hand analysis. How good of an approximation? I don't know, but think it's a very relevant question.

As always only my opinion,

Zephyr

zephyr
10-25-2004, 02:16 AM
By the way,

Thank you for your guide. It got me started on the right track when I began playing online a year ago. I'm in your debt.

Zephyr

dethgrind
10-25-2004, 02:36 AM
[ QUOTE ]
I have a problem with: chance of finishing 1st = Chips/Total Chips


[/ QUOTE ]

In fact, this is the only assumption that is constant in every tournament finish probability model I've seen, all of which are collected in this highly mathematical post (http://archiveserver.twoplustwo.com/showflat.php?Cat=&Board=&Number=369811&page=&view= &sb=5&o=&fpart=) by Bozeman from a while ago.

Also, I recommend reading the chapter "Freezeout Calculations" in TPFAP. I hope it's okay that I quote the first sentence:
[ QUOTE ]
It is a common conception that your chances of winning a tournament against equally skilled players are equivalent to the fraction of the total tournament chips that you hold

[/ QUOTE ]

Sklansky then goes on to explain why this must be the case.




[ QUOTE ]
Although the ICM is a clever variable transformation from CEV->$EV, it is by no means a perfect mathematical representation of this aspect of the game. A proven mathematical transformation from CEV->$EV is still very far away.


[/ QUOTE ]
[ QUOTE ]
It just tickled me the wrong way when people began posting matter-of-fact solutions to situations that by nature do not have precise solutions.

[/ QUOTE ]

I agree wholeheartedly. Putting your opponent on a range of hands is often quite an approximation. Adding the additional approximation of the ICM could very well produce the wrong conclusion for a given situation. I think it's valuable to put something like this in your post: "if I'm correct that my opponent will have this range of hands, the ICM says this move is better by this %." I think it's dangerous to say, "the ICM says this, therefore it is the right play."

I'm glad someone has expressed some skepticism about the ICM. I'm kind of curious what Mason Malmuth and other big shots think about the kind of analysis that has been going on here.

Bigwig
10-25-2004, 03:35 AM
Could I get a link to all this ICM business? Is it software? Help me out.

AleoMagus
10-25-2004, 04:00 AM
[ QUOTE ]
I think it's valuable to put something like this in your post: "if I'm correct that my opponent will have this range of hands, the ICM says this move is better by this %."

[/ QUOTE ]

These kinds of caveats are completely built into any analysis that I have seen on here lately. If this is unclear to people, then they do not understand the analysis in the first place.

[ QUOTE ]
I think it's dangerous to say, "the ICM says this, therefore it is the right play."

[/ QUOTE ]

Again, the real danger is not that people are making claims like this, the real danger is interpreting the claims that are being made in this way.

In all cases where decisions are being guided by the ICM, the ONLY thing that the ICM is giving us is a CEV to $EV conversion. Nothing more.

Once we have that, all the calculations we use are the same 'range of possibility' type calculations we always use in situations that don't require the ICM at all (ring games). These calculations do not always turn out to give sound guidelines for play, but the fault lies in our assumptions about these ranges of possibility, not the calculations themselves.

This is to say that when we are wrong with this sort of analysis, our mistakes lie in our assumptions about things like (for example) what hands a reasonable opponent will call a 10xBB all-in raise with in the BB when on the bubble and covered slightly.

I don't want to sound like too much of a religious believer in mathematical models here, because really, I'm not. I simply think that people are confusing the ICM with a lot of the calculations we are performing with its results.

As far as worrying about the approximations involved in these calculations, I don't know what to say. That's what this game is about. Forming an informed and educated guess about what an opponent holds and then making calculated decisions based on those conclusions.

Sometimes we can come up with a very small range of possibilities with a high degree of certainty and often we cannot. We do what we can though, and when our gut tells us that the result of such analysis is incorrect, what it is most likely telling us is that one or more of our assumptions are wrong. That or we are deluding ourselves.

Regards
Brad S

AleoMagus
10-25-2004, 04:23 AM
I should not have cut and pasted that quote at the beginning of my response to you, because it seems to have confused things. I was not intending to defend 'game theory' as a whole (though I do think some of what you have said about it is wrong).

I was merely trying to defend the ICM and the calculations we have been doing with its results.

[ QUOTE ]
Again, I agree with you that it is currently the best means of going from chips to $. It just tickled me the wrong way when people began posting matter-of-fact solutions to situations that by nature do not have precise solutions.


[/ QUOTE ]

It is perhaps this quote, which immediately preceded the one I used, that I should have quoted instead. You claim we cannot make matter-of-fact conclusions about these problems because the situations do not have precise solutions by nature.

The thing is, if we precisely define the 'incomplete information' in these problems, we can make matter-of-fact conclusions.

Does this mean the conclusions are matter-of-fact correct? ...well, not exactly.

It means they are correct so long as our assumptions are correct, nothing more.

[ QUOTE ]
On the topic of incomplete information: poker is a game of incomplete information by nature. Whether we put our opponent on a hand or range of hands it does not change the fact that the game is one of incomplete information.

[/ QUOTE ]

OK... but you seem to say this as a justification for your claims about there not being precise solutions to these kinds of problems.

I'll try to give a more concrete example of why I think this is wrong.

Imagine a game where I pick a number between one and ten and you wager on what number it is. This is the 'incomplete info'.

You know nothing about my number picking tendencies so you assume randomness. This is the range of possibility assumption (numbers 1-10)

I give you 20-1 odds on your guess if you want to make the wager. Should you 'call' or not?

Matter-of-factly, I'll say that you should call. Precise answer, incomplete info or not
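For concreteness, here is the arithmetic behind that wager as a quick sketch (assuming "20-1" means you risk 1 unit to win 20 units, and that a uniform guess over 1-10 is right exactly 1 time in 10):

```python
# EV of guessing a uniformly random number from 1-10 at 20-to-1 odds.
# "Risk 1 to win 20" is my reading of the odds quoted in the post.
p_win = 1 / 10
net_odds = 20  # units won per unit risked

ev = p_win * net_odds - (1 - p_win) * 1
print(ev)  # positive, so calling the wager is clearly correct
```

The positive expectation holds no matter which number you guess, which is exactly why the answer is precise despite the information being incomplete.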

Regards
Brad S

zephyr
10-25-2004, 03:05 PM
[ QUOTE ]
Imagine a game where I pick a number between one and ten and you wager on what number it is. This is the 'incomplete info'.

You know nothing about my number picking tendencies so you assume randomness. This is the range of possibility assumption (numbers 1-10)

I give you 20-1 odds on your guess if you want to make the wager. Should you 'call' or not?

Matter-of-factly, I'll say that you should call. Precise answer, incomplete info or not

[/ QUOTE ]

What number do you guess? What assumptions do you need to make? This is where game theory comes into play to give you an unbeatable strategy in this situation: randomly choose a number between 1 and 10 (say by rolling a ten-sided die), thus giving your opposition no way of outwitting you. Do you need to make any assumptions about your opponent? NO! You would play the game if your opponent gave you greater than 10:1 odds and not otherwise. This is a very simple game of incomplete information with a trivial solution. Similar situations arise in poker. The situations that people have been analysing with the ICM are not such situations.

[ QUOTE ]
The thing is, if we precisely define the 'incomplete information' in these problems, we can make matter-of-fact conclusions.

Does this mean the conclusions are matter-of-fact correct? ...well, not exactly.

It means they are correct so long as our assumptions are correct, nothing more.

[/ QUOTE ]

My original post was designed to get people to question some of the assumptions that are inherent to the ICM. Many of the assumptions made when the ICM was developed are NOT correct, and thus it is impossible to make matter-of-factly correct decisions when using it.

Take the following situation: 4 players, 1-2500, 2-2500, 3-2500, 4-2500. Suppose you are opponent 4 in the big blind (400), 1 and 2 fold, and opponent 3 pushes. Opponent 3 shows you KQs, and you hold AJ. Should you call?

First let's consider it as a cash game.

If you fold you are left with 2100.

If you call:

56.19% chance of winning -> $5000
0.46% of a tie -> $2500
43.35% chance of losing -> 0

So you expect a stack of 0.5619*5000+0.0046*2500 = $2821.

So you call, knowing that your decision is matter-of-factly correct. Hence this problem has a trivial solution.

Now let's look at the same situation in an SNG.

If you fold: 1-2500, 2-2500, 3-2900, 4-2100 for a $EV using the ICM of 0.223.

If you call:

Win: 56.19% -> $EV of 0.3833
Tie: 0.46% -> $EV of 0.25
Lose: 43.35% -> $EV of 0

So this gives us 0.5619*0.3833+0.0046*0.25 = 0.216.

So by this analysis we fold, giving ourselves a $EV of 0.223 instead of the 0.216 we would have had by calling. But can we be certain that the fold is the correct move? Is it matter-of-factly correct? The answer is no, because the ICM has built-in assumptions that may or may not be true.
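For anyone who wants to check these figures, here is a minimal sketch of the usual ICM recursion (P(1st) = chips/total, then the same rule applied recursively over the remaining places). The 50/30/20 payout structure is my assumption, but it reproduces the 0.223, 0.3833, and 0.216 numbers above:

```python
def icm_equities(stacks, payouts):
    """Fraction of the prize pool each stack expects under the ICM.

    Assumes P(1st) = chips/total, then recursively distributes the
    remaining paid places among the other stacks.
    """
    n = len(stacks)
    equities = [0.0] * n

    def walk(remaining, prob, place):
        total = sum(stacks[i] for i in remaining)
        for i in remaining:
            p = prob * stacks[i] / total
            equities[i] += p * payouts[place]
            if place + 1 < len(payouts):
                walk([j for j in remaining if j != i], p, place + 1)

    walk(list(range(n)), 1.0, 0)
    return equities

payouts = [0.5, 0.3, 0.2]  # assumed standard SNG structure

# Fold: stacks become 2500/2500/2900/2100; hero is the 2100 stack.
fold_ev = icm_equities([2500, 2500, 2900, 2100], payouts)[3]

# Call and win: three players left at 2500/2500/5000; hero holds 5000.
win_ev = icm_equities([2500, 2500, 5000], payouts)[2]

# Weight by the showdown probabilities quoted above (tie pays 0.25).
call_ev = 0.5619 * win_ev + 0.0046 * 0.25

print(round(fold_ev, 3), round(win_ev, 4))
```

The conversion itself is purely mechanical once the hand range and payouts are fixed; everything questionable lives in those inputs and in the model's built-in assumptions.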

Only my opinion,

Zephyr

zephyr
10-25-2004, 03:27 PM
[ QUOTE ]
Also, I recommend reading the chapter "Freezeout Calculations" in TPFAP. I hope it's okay that I quote the first sentence:
[ QUOTE ]
It is a common conception that your chances of winning a tournament against equally skilled players are equivalent to the fraction of the total tournament chips that you hold

[/ QUOTE ] Sklansky then goes on to explain why this must be the case.

[/ QUOTE ]

Sklansky's argument in TPFAP is exceptionally weak. I agree that with equal stacks and equal skill, each player has an equal chance of winning. His method for analysing unequal stacks is crude and unconvincing, however.

Don't get me wrong, I think there is a very good chance that chance of finishing 1st = % of total chips, but I have yet to find a substantial proof of it.

Only my opinion,

Zephyr.

eastbay
10-25-2004, 06:32 PM
This is all fine and good, but I think the point remains: do you have a better idea?

Otherwise, your post seems to be useless quibbling.

eastbay

dethgrind
10-25-2004, 07:19 PM
I feel kind of silly arguing this particular point any further, but here goes anyway:

Sklansky and Malmuth (actually Mark Weitzman) offer various plain-English explanations in TPFAP and GTAOT for why the chance of finishing 1st = % of total chips. Their target audience is poker players with a basic understanding of some math concepts, not theoretical mathematicians. If you really want a thoroughly rigorous mathematical proof, you could do a quick Google search for the gambler's ruin problem. Here (http://mathworld.wolfram.com/GamblersRuin.html) is a good explanation of the problem. Here (http://stat-www.berkeley.edu/users/pitman/s205f02/lecture18.pdf) is a proof.

You might argue that poker doesn't really fit that model, or that multiple opponents change the results. Fine. Run some tournament simulations. I can give you some code if you like. You won't be surprised by the results. That won't give you proof, but at least you'll discover that you won't be able to find a significant counter-example.
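To illustrate the kind of simulation I mean, here is a toy sketch (not a poker simulation, just the idealized model): two equally skilled players flip fair coins for a fixed bet until one is broke. Under those crude assumptions, the player holding 25% of the chips wins very close to 25% of the time, as the gambler's ruin result predicts:

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

def headsup_freezeout(stack, total, bet, trials):
    """Fair coin-flip freezeout: observed win rate of a player who
    starts with `stack` out of `total` chips, betting `bet` per hand."""
    wins = 0
    for _ in range(trials):
        s = stack
        while 0 < s < total:
            s += bet if random.random() < 0.5 else -bet
        wins += s == total
    return wins / trials

# 2500 of 10000 chips: the proportionality claim says P(win) = 0.25.
rate = headsup_freezeout(2500, 10000, 500, 20000)
print(rate)
```

Real poker is obviously not a fair fixed-bet coin flip, so this only shows that proportionality holds for the random-walk model; whether poker deviates from that model is exactly the open question.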

You really ought to have at least one decent piece of evidence if you want to question this very basic result that has been accepted by so many very smart people.

zephyr
10-25-2004, 08:47 PM
Thank you for the information in your post. It is well received.

I've been curious if anyone has done an analysis on how accurate the ICM is using tournament data. Do you know of any such investigations?

Zephyr

Bigwig
10-25-2004, 09:36 PM
[ QUOTE ]
Could I get a link to all this ICM business? Is it software? Help me out.

[/ QUOTE ]

Could someone please answer this question?

lastchance
10-25-2004, 09:38 PM
ICM is the Independent Chip Model. It describes (well, to our best guess) how chip EV, i.e. how many chips we have, translates into $EV.

This is the ICM calculator (http://www.bol.ucla.edu/~sharnett/ICM/ICM.html).

stupidsucker
10-25-2004, 10:33 PM
Let me do my best to wrap this up.

The ICM is by no means an exact or perfect calculation telling you the exact $EV implied by your chip EV. It is, however, a great tool in the right hands.

Everyone who has a clue about poker knows that a lot of X factors go into converting chip EV to $EV. The ICM cannot calculate the X factors, and I don't know if anyone will ever come up with a model that does. The X factors are something you have to have a brain and figure out yourself. The ICM is just a guide to tell you some bare-bones facts/approximations.

And yes, you are free to come here and belittle anything you want, but if you can't come up with anything constructive to add or any "better ideas", then what points do you have?

Bottom line...

The ICM is a great tool if you have the knowledge to use it properly. Some of that may take imagination. If you require perfection, then poker probably isn't the game for you. Math may be the biggest part of poker, but it's not the only part, and it never will be.

dethgrind
10-25-2004, 11:48 PM
Sorry for being a jerk about it.

[ QUOTE ]
I've been curious if anyone has done an analysis on how accurate the ICM is using tournament data. Do you know of any such investigations?


[/ QUOTE ]

No, I haven't seen anything like this, though I'd be very interested as well.

zephyr
10-26-2004, 01:42 AM
This thread got a little out of control from my perspective. I think I may have delved a touch too deep in some of my criticisms/comments; however, I just want to reiterate the point of my original post:

[ QUOTE ]
I'm not trying to criticize the ICM as a method of analysis; I'm just pointing out that it is only a rough estimate.

[/ QUOTE ]

I agree that the ICM is the best thing we have right now. Since I've done a fair amount of criticizing, I think I need to add something to this area. With regard to ideas for improvement, I think the biggest gains would come from doing an experimental investigation of CEV -> $EV and then comparing it to the various analytical means we have of going from chips to $. I'm beginning that experiment now, but as I'm especially busy at this time of year, I'll welcome any data that people would like to send me.

In my field of study, aerospace engineering, no real-world fluid flow can be analysed purely analytically. The problem arises in the non-linear Navier-Stokes equations, which to this day have no general solution. Thus, most aerodynamic problems are solved using experimental correlations, with some simpler ones now being solved using computational fluid dynamics. My point is that when problems cannot be solved purely mathematically, it's often a very good idea to attack them from an experimental point of view.

In terms of my analysis I'm thinking of this format:

The instant that the fifth place finisher goes out (down to 4):

Record stack sizes,
Record blind sizes,
Record current position,

Then record what place you come in.
I also think that the player's approximate ROI should be considered.

Of course a much more in-depth analysis could be done, but I think this will serve fine to begin with. The analysis could probably be done immediately, as I'm sure there is no shortage of tournament histories floating around.
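To sketch what I have in mind (the record layout, the lumping of similar stack distributions, and the chip-proportion benchmark are just illustrative assumptions, not a finished methodology):

```python
from collections import defaultdict

# One record per tournament, taken the moment it reaches 4 players.
# Fields are illustrative: stacks in seat order, big blind, which seat
# is the hero, and the place the hero eventually finished.
records = [
    {"stacks": [1250, 575, 3060, 3115], "bb": 200, "hero": 0, "finish": 3},
    {"stacks": [2500, 2500, 2900, 2100], "bb": 400, "hero": 3, "finish": 2},
    # ... thousands more, parsed from tournament histories
]

def bucket(stacks):
    """Lump similar scenarios together: sort the stacks, normalize by
    the total, and round each fraction to the nearest 10%."""
    total = sum(stacks)
    return tuple(sorted(round(10 * s / total) / 10 for s in stacks))

# Tally predicted (chip-proportion) vs observed first-place finishes.
predicted = defaultdict(float)
observed = defaultdict(int)
for r in records:
    key = bucket(r["stacks"])
    predicted[key] += r["stacks"][r["hero"]] / sum(r["stacks"])
    observed[key] += r["finish"] == 1

for key in sorted(predicted):
    print(key, round(predicted[key], 2), observed[key])
```

With enough histories, the per-bucket gap between predicted and observed frequencies is the kind of accuracy estimate I'm after.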

Any thoughts?

Only my opinion,

Zephyr

dethgrind
10-26-2004, 02:19 AM
I really like this idea.

It'll be necessary to define a way to lump together similar stack-size scenarios. Otherwise the amount of data you'd need to collect would be huge. I mean, how many times have you run across this exact situation: blinds 100/200, you're on the button, stacks are 1250/575/3060/3115?

This will require some good scripting skills as well.

Good luck, and let me know how I can help

-Sean

JNash
11-06-2004, 04:51 PM
Hi Zephyr & Dethgrind

I just came across the ICM concept for the first time and started to research old posts. I came across the debate between the two of you on whether in heads up play the probability of winning equals the fraction of total chips a player has.

I happen to agree with Zephyr that the S&M proof is unconvincing. In fact, the entire theorem may be wrong--although I can't disprove it myself either. Here goes...

One of the central assumptions is that the two players are equally skilled--an assumption I want to keep. However, if you follow the proof, you'll see that it uses the argument that a player with 1/4 the total chips needs to double up twice to win (so far so good). I agree that WITH EQUAL STACKS, the probability of two equal-skilled players winning must be 50/50. However, it is not clear to me that a player with 1/4 the total chips has a 50/50 chance of doubling up to where he has 1/2 the total chips--even if the two players are equally "skilled". For this to be true, it would require that the probability of doubling up is independent of the relative stack sizes of the two players.

Now we can simply assume that this is so, in which case the S&M proof goes through without any problem. However, my personal intuition and experience is that the big stack enjoys an advantage simply because he has more chips (even if he is playing against his identical, equally smart twin). My heuristic argument for why this is so is that the big stack can (playing optimally) bluff and steal the blinds more than the small stack. This leads to what I once posted under the title "S-Curve Hypothesis"--that the probability of winning in heads-up play as a function of fraction of chips looks like a logistic-type function which goes through 50%/50% (i.e. the probabilities must be equal when the stack sizes are equal and you have equal skill). The curve must be a reflection around this mid-point since the game is zero-sum.

Mine is not a theorem--it's a conjecture which I can't prove. I do know, however, that disproving it requires more work than the S&M argument. The random walk references that dethgrind gave also don't apply, since they assume the probabilities in the Brownian motion are constant. If you assume that the probabilities change depending on where you are along your random walk, the proof no longer works.

One possible way of tackling this problem would be to prove that the optimal strategy in heads up play is independent of stack size. If that is the case, then my intuition is wrong and the "conventional wisdom" on the freezout calculation is indeed correct.

I'd very much appreciate your thoughts on this...or references to materials that may shed more light on this.

P.S.: Can you give me the definitive reference to the theory and assumptions that underlie the ICM? Thanks!

pzhon
11-06-2004, 06:06 PM
The independent chip model is much more than the assumption about what happens heads-up. It is more than an assumption about the probability of winning a multiplayer freezeout. The ICM is an assumption about how frequently each player will finish in each position, and while it is not perfect, it is a good start.

It doesn't look like you are objecting to the ICM's more subtle projections for finishing 3rd, for example. It looks like you are objecting to far more fundamental issues.

I have argued repeatedly that it is a theoretical advantage to have a small stack in a multiplayer game. You get more information and you get to use the big stacks' fear of each other. Small stacks should gain chips on average. So, the assumption that the probability of winning in a multiplayer NL game is proportionate to stack size is wrong. However, I don't think it is so wrong as to make the ICM unusable.

As for the probability of winning heads-up, the only real question should be the effect of position. If you are playing a game like stud where position is determined by a card, then there is a strategy for each table limit which at least breaks even against any opposition. Following this strategy (paying attention to the size of the smaller stack, not who has it) means you will not lose chips on average. Since at the end of the game, you have all of the chips or none, not losing chips on average means you will win with probability at least as great as the fraction of the chips you hold. This doesn't work precisely for Hold'em because of position, but unless the blinds are huge, it shouldn't make a big difference.

It looks like you are arguing that having more chips heads-up means you can bluff people and steal more. For heads-up play, this may be true in practice (betting 10k chips may look impressive even if your opponent only has to call 2k), but it is wrong in theory. Perhaps you have noticed the way people deviate from theoretically correct play and you exploit it, but that doesn't mean a big stack should have a disproportionate advantage against a good opponent.

JNash
11-06-2004, 06:56 PM
Thanks pzhon

To clarify my point, I wasn't commenting on the ICM. I just wanted to express my view that the conventional freezeout calculation for heads-up play (which, as dethgrind said, is embraced by many very smart people) may in fact not be exactly correct. The proof depends on the assumption that the probability of doubling up is independent of the relative stack sizes--which to me is far from obvious.

In your reply, you express the belief that small stack size is actually an advantage--I am sure you meant relative to the value you would get by simple chip proportionality, not as an absolute statement. If your statement is true, then you'd get a different shaped S-curve. If f(c) is the probability of winning as a function of fraction c of total chips, your assumption about short stack advantage would imply that the function is concave from 0 to 0.5, and convex from 0.5 to 1. My theory of large stack advantage says it's convex from 0 to 0.5 and concave from 0.5 to 1. The traditional S&M theory says that it's precisely linear.

The linear approximation used by S&M is undoubtedly the best first-order approximation, and I do not have any better formula to propose myself, but I just wanted to challenge the statement that "it has been proven" that the proportionality theorem is correct. I am unable to prove mathematically that my "large stack advantage" conjecture is correct, and your argument for small stack advantage is also not rigorous and based only on qualitative reasoning. [Perhaps there are elements of truth in both of our views which approximately cancel each other out, so that the linear approximation is actually very close to correct.] Please note that I am not concerned with whether actual players behave one way or the other, I want to know what the game-theoretically correct answer is.

pzhon
11-06-2004, 08:02 PM
[ QUOTE ]

In your reply, you express the belief that small stack size is actually an advantage

[/ QUOTE ]
The context of that statement was multiplayer play, not heads-up.

In that post I also outlined a proof that chip value is linear heads-up for games without a predetermined position.

[ QUOTE ]
I am unable to prove mathematically that my "large stack advantage" conjecture is correct, and your argument for small stack advantage is also not rigorous and based only on qualitative reasoning.

[/ QUOTE ]
My arguments for heads-up play are rigorous. What is nonrigorous about my arguments for the small stacks' advantage in multiplayer play is ruling out that the implicit collusion against the small stacks somehow exceeds the small stacks' advantages. It is clear that the small stacks benefit when a big stack folds after calling a small stack's push. See this post from June (http://archiveserver.twoplustwo.com/showthreaded.php?Cat=&Number=762326&page=&view=&sb=5&o=&vc=1) for a numerical example of how a small stack may benefit from proper actions of big stacks.

Do you honestly believe the large stack should play differently heads-up than the small stack? If so, can you give an example?

zephyr
11-06-2004, 08:13 PM
I thought that this thread had long since perished, but apparently not. It is interesting to see this point being discussed further, though.

As pzhon mentioned, I think the key is that the blinds and position aren't considered when concluding that %chips = chance of finishing first.

In poker the blinds are part of the game itself, and thus cannot be ignored when finding solutions to that game. Likewise, we cannot ignore, say, the river and still come up with an exact solution that applies to a game that has a river. With no blinds, there is no game.

Hence, any solution we come up with by assuming that the blinds and position are negligible does not actually apply to the game. Of course, the effect of the blinds and position may be very small, in which case the solution we find may closely approximate the game. On the other hand, the effects of the blinds may be large, making our solution a very bad approximation.

Intuitively, I believe that the actual relationship between %chance of finishing first and stack size is some type of a curve that passes through 0.5, 0.5. I'm unsure of the shape of the curve, however. It may be linear, it may not.

Only my opinion,

Zephyr

dethgrind
11-06-2004, 09:09 PM
[ QUOTE ]
The random walk references that dethgrind gave also don't apply, since they assume the probabilities in the Brownian motion are constant. If you assume that the probabilities change depending on where you are along your random walk, the proof would no longer work.


[/ QUOTE ]

You're right. If the probabilities are stack size dependent, everything goes to hell. Even Bozeman and Thomas Ferguson's fancy pants diffusion stuff (http://archiveserver.twoplustwo.com/showflat.php?Cat=&Board=&Number=519924&page=&view=&sb=5&o=&fpart=) won't work. (That seems to be the post where Bozeman started calling it the ICM).

I first read about the method in GTaOT, "Settling Up in Tournaments: Part III".

If you want to read some more about this topic, I recommend checking out the RGP archives on google. Look for stuff by Tom Weideman, Barbara Yoon, and William Chen.

JNash
11-06-2004, 10:03 PM
[ QUOTE ]
If you are playing a game like stud where position is determined by a card, then there is a strategy for each table limit which at least breaks even against any opposition. Following this strategy (paying attention to the size of the smaller stack, not who has it) means you will not lose chips on average. Since at the end of the game, you have all of the chips or none, not losing chips on average means you will win with probability at least as great as the fraction of the chips you hold.

[/ QUOTE ]

If what you say here is true, then the optimal NL heads-up strategy is independent of the relative stack sizes, an important part of the proof.

Now, I would be very interested in the proof of the above statement--and in knowing what the optimal heads-up strategy is. Any references for this?

I have two reasons for believing that the probability of coming in first depends not just on the fraction of chips:
1) It seems to me that the strategic options of the small stack become severely limited when his stack is small relative to the blinds. I would think that he would have to play a wider range of hands (i.e., play looser) than the big stack since he is so close to being blinded out. So the optimal strategy would seem to depend on stack size relative to the size of the blinds. (This argument is independent of position, since you'd get to be both the big and small blind about the same number of times over the remaining hands. Unless you are literally going to be blinded out in the next 1-2 hands, in which case position would matter too.)

2) As you said, in holdem the blinds do enter the picture, and I believe cannot be ignored. If you are heads-up with 20% of the chip total (opp has 80%), and the blinds are 1%, I think the small stack has a better shot than when the blinds are 10%. I think this can probably be proven with an element-of-ruin argument. This would suggest that even if the optimal strategy is independent of stack size, the probability of coming in first depends not just on the fraction of chips but also on the size of the blinds.

Please understand, I don't have a proof, just a hunch, and I am posting to learn more...
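[Editor's note: point 2 can be sanity-checked with a toy model. This is an editorial construction, not anything argued in the thread: treat every hand as an even-money flip of the blind amount. Under that model, gambler's ruin makes the win probability exactly the chip fraction regardless of how large the blinds are, so any genuine blind-size effect has to come from strategic asymmetry, which a pure coin-flip model throws away.]

```python
import random

def p_first(stack, total, blind, trials=6000, seed=7):
    """Toy heads-up model: each hand, one blind's worth of chips moves
    at even money until someone has all the chips. Returns the observed
    frequency with which the starting stack finishes first. The blind
    must divide the stack sizes so the walk terminates exactly."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        s = stack
        while 0 < s < total:
            s += blind if rng.random() < 0.5 else -blind
        wins += (s == total)
    return wins / trials

# 20% of the chips, modest blinds vs. huge blinds: both land near 0.20,
# because a fair random walk's ruin probability ignores the step size.
small_blinds = p_first(200, 1000, 50)
big_blinds = p_first(200, 1000, 200)
```

Of course, this only shows that blind size alone does not move the needle when play is strategy-free; JNash's question is precisely whether optimal strategies depend on the blinds, which the toy model cannot address.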

pzhon
11-07-2004, 12:10 PM
[ QUOTE ]
[ QUOTE ]
If you are playing a game like stud where position is determined by a card, then there is a strategy for each table limit which at least breaks even against any opposition. Following this strategy (paying attention to the size of the smaller stack, not who has it) means you will not lose chips on average. Since at the end of the game, you have all of the chips or none, not losing chips on average means you will win with probability at least as great as the fraction of the chips you hold.

[/ QUOTE ]

Now, I would be very interested in the proof of the above statement--and in knowing what the optimal heads-up strategy is. Any references for this?

[/ QUOTE ]
It's just von Neumann's minimax theorem from 1928. That theorem says there is an optimal strategy (in a 2-player zero-sum game with finitely many pure strategies). Symmetry says the value is 0. This does not say what the strategy is. If you want to allow fractional bets, von Neumann's theorem extends to compact games.

[ QUOTE ]
It seems to me that the strategic options of the small stack become severely limited when his stack is small relative to the blinds.

[/ QUOTE ]
If your intuition leads you to think the large stack should accumulate chips heads-up, you should revise or disregard your intuition.

PrayingMantis
11-07-2004, 12:31 PM
[ QUOTE ]
It seems to me that the strategic options of the small stack become severely limited when his stack is small relative to the blinds.

[/ QUOTE ]

(Edit to say I'm replying to JNash statement, not your post, pzhon.)

This is a rather simplistic way to look at it; it often cuts both ways, and in many situations small stacks will actually have some tactical advantages against bigger stacks.

As for HU situations, the idea that the small stack is generally forced into a tighter game (which you mention a few times) while the big stack bluffs and steals more is wrong, for a few reasons. In particular, if both players understand EV, the small stack will not play to survive and therefore will not be easy to bluff; quite the contrary. And what you suggest can work both ways, again: some players will play tighter as the big stack HU because they don't want to double up the small stack. In those cases the small stack will play a wider range of hands, bluff and steal more, and force the *big stack* into folding.

rachelwxm
11-08-2004, 02:25 PM
zephyr,
Thank you for providing some challenges to the ICM. Just as Newton's laws were taken for granted in physics for a few hundred years, we need to reexamine the ICM from time to time. I think we all agree that it is probably the best tool out there for converting chip EV to $EV. After reading some of the posts, here are my thoughts (I am not a defender of the ICM):

1. The ICM assumes equal skill. Most of your criticisms seem to be about that assumption. Of course, every player believes he is better than the field. If you took the ICM as a guide to your ROI (-9%), nobody would ever play this game! So why do we still use the ICM? Because it is an objective way to measure a fair game.

Take another example you mentioned: two players heads-up with equal stacks, where the ICM gives each the same EV. Say one has an ROI of 35% and the other 20%. How does that extra information help you solve the problem? What if the 35% player is an excellent early- to mid-stage player but a poor HU player? The only FAIR method that fits your need is to let those two play 100,000 HU SNGs and look at the average profit. Can you?

2. I don't think blind size is a limitation of the ICM. At most it increases the standard deviation, not the average.

tubbyspencer
11-18-2004, 12:38 AM
[ QUOTE ]

Also, I recommend reading the chapter "Freezeout Calculations" in TPFAP. I hope it's okay that I quote the first sentence:
[ QUOTE ]
It is a common conception that your chances of winning a tournament against equally skilled players are equivalent to the fraction of the total tournament chips that you hold

[/ QUOTE ]

Sklansky then goes on to explain why this must be the case.


[/ QUOTE ]


I went back and read it, and you're right. It explains why your chances of WINNING are exactly the number of chips you have divided by the total in play.

But the last paragraph of the chapter states

[ QUOTE ]
Unfortunately, there is no equally simple technique to calculate the chances of coming in second, third, etc. based solely on the chip count. There are, however, some good ways of estimating these probabilities.

[/ QUOTE ]

Isn't this where the ICM breaks down?

That, along with not factoring in the size of the blinds (which has already been mentioned)?
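[Editor's note: the "good ways of estimating these probabilities" Sklansky alludes to is exactly what the ICM supplies. The standard Malmuth-Harville recursion picks each successive finisher in proportion to chips among the players not yet placed. A minimal sketch follows, with payouts expressed as fractions of the prize pool; it brute-forces all finish orders, so it is only practical for small fields. The stack and payout numbers are illustrative only.]

```python
from itertools import permutations

def icm_equity(stacks, payouts):
    """Malmuth-Harville ICM. The probability of a finish order is the
    product of each player's chip share among those not yet placed.
    Equity is the payout-weighted sum over all finish orders."""
    n = len(stacks)
    ev = [0.0] * n
    for order in permutations(range(n)):
        prob, remaining = 1.0, float(sum(stacks))
        for player in order:
            prob *= stacks[player] / remaining
            remaining -= stacks[player]
        for place, player in enumerate(order[:len(payouts)]):
            ev[player] += prob * payouts[place]
    return ev

# Heads-up, winner-take-all: equity is exactly the chip fraction,
# matching the freezeout result discussed above.
# icm_equity([600, 400], [1.0]) -> [0.6, 0.4]

# A four-handed bubble with a 50/30/20 payout structure:
bubble = icm_equity([4000, 3000, 2000, 1000], [0.5, 0.3, 0.2])
```

Note that nothing in this recursion sees the blinds, position, or skill, which is precisely the limitation raised throughout this thread.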