Two Plus Two Older Archives  

  #1   10-18-2005, 03:29 AM
AtticusFinch
Theory again: Let's take a couple of steps back

Ok, I think I tried to do too much at once last time, so I'm going to take a couple of steps back and look at one small piece of the problem.

I'd like to try to model the rate at which a person of a given skill can be expected to grow his stack over time. I'll make a few assumptions. If you disagree with any of these, I'd love to hear your thoughts.

My assumptions are:

1) The maximum amount of chips you can make at any time is proportional to your present stack size.
2) Your maximum cEV per hand is a function of your relative skill level
3) Your maximum cEV is limited by the sizes of your opponents' stacks
4) (Closely related to 3) Your maximum cEV is limited by the total number of chips in play

I don't claim these are the only factors, but I think they are significant ones. (Blind size is the notable missing one. I'm still trying to figure out how to factor that in as a parameter, so stay tuned.)

These parameters jibe very well with the parameters used in the Verhulst equation, which I mentioned in the prior thread: http://en.wikipedia.org/wiki/Logistic_curve

The Verhulst equation is used in biostatistics to model, for example, reproduction rates of bacteria in environments with limited resources. Its assumptions are very similar to those above:

[ QUOTE ]

* the rate of reproduction is proportional to the existing population, all else being equal
* the rate of reproduction is proportional to the amount of available resources, all else being equal. Thus the second term models the competition for available resources, which tends to limit the population growth.


[/ QUOTE ]

Call your stack a population, reproduction rate your cEV per hand, and all available resources the total number of chips in play.

Since this model requires relating your stack's (or population's) growth rate to its present size, a differential equation is used:

dS/dh = rS(1 - S/T)

Where S is your stack size, r is your average cEV per hand (measured as a percentage of your stack, not in chips) based on your skill level when you hold an average stack size, and T is the total number of chips in play. (Note that this formula is a lot more elegant than my last one.)

The result is an adjusted estimated cEV that factors in your stack size relative to the field.
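For concreteness, here's a rough sketch of stepping that equation forward one hand at a time. All the numbers (total chips, starting stack, r) are made up purely to show the shape of the curve; they aren't estimates of anything:

[ CODE ]
# Rough sketch of the logistic stack-growth model: dS/dh = r*S*(1 - S/T).
# T, S, and r are made-up illustration values.

T = 10000.0   # total chips in play
S = 1000.0    # your current stack
r = 0.02      # cEV per hand as a fraction of your stack, near an average stack

for hand in range(1, 301):
    adjusted_cev = r * S * (1.0 - S / T)  # cEV in chips, damped by stack size vs. the field
    S += adjusted_cev                     # step forward one hand (Euler step)
    if hand % 50 == 0:
        print(f"hand {hand:3d}: stack {S:7.0f}, adjusted cEV {adjusted_cev:5.1f}")
[/ CODE ]

The adjusted cEV is largest in the middle of the range and shrinks toward zero as S approaches either 0 or T, which lines up with the intuitions below.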

If you look at the function's behavior, it models things that many of us know intuitively:

1) Your skill level makes less of a difference when your stack is so small that you have to push/fold, as you only get to make one decision per hand.

2) When your stack is deep, you can make maximum use of your skills, as you'll be able to play "real" poker on all streets, BUT

3) #2 only applies if your opponent's stack is also deep enough. If your only opponent is in push/fold mode, then so are you (essentially), no matter how deep your stack is.

The good news about this one is it takes a lot less data to test, as I'm only trying to measure cEV per hand, not win rates per tournament.

To that end, if anyone has a bunch of tourney history I can use to run some tests, I'd be grateful. I haven't played nearly enough MTTs myself to be able to make any reasonable conclusions.

  #2   10-18-2005, 04:10 AM
housenuts
Re: Theory again: Let's take a couple of steps back

your other post talks about gigabet's "bands" concept. is there a link to that thread?

and i'm guessing this post is the "finch formula"?

  #3   10-18-2005, 06:57 AM
SumZero
Re: Theory again: Let's take a couple of steps back

My primary issue with the model you describe is the limitation you mention: Largely that it doesn't take into account the size of the blinds. I think a greatly simplified per-hand model to think about would be to consider that on each hand one of the following takes place:

1. You lose the BB
2. You lose the SB
4. You win the BB+SB

5. You win BB+SB+6BB
6. You lose 3BB

7. You win BB+SB+20BB
8. You lose 10BB

9. You win BB+SB+60BB
10. You lose 30BB

...

11. You win min(opponent stack, your stack)
12. You lose min(opponent stack, your stack)

Now obviously this is only some of the possible outcomes (and may even be too many), but I think you could assign probabilities to each of the above, and you could vary the probabilities to reflect differences in skill (and you could use conditional probabilities, e.g. if 5 happens to you, then 6 happens to one opponent, 1 to another, and 2 to another). You could then simulate the random walk that occurs and see what growth happens.
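Something like the sketch below, say. The outcome list and probabilities are pure placeholders (skill would show up by shifting them), and the opponent's stack is held fixed just to keep it short:

[ CODE ]
import random

BB = 100  # big blind in chips; arbitrary illustration value

# Placeholder per-hand outcomes as (result in big blinds, probability).
# These numbers are made up; a more skilled player would get a better mix.
OUTCOMES = [
    (-1.0, 0.30),   # lose the BB
    (-0.5, 0.30),   # lose the SB
    (+1.5, 0.20),   # win BB+SB
    (+7.5, 0.07),   # win BB+SB+6BB
    (-3.0, 0.07),   # lose 3BB
    (+21.5, 0.03),  # win BB+SB+20BB
    (-10.0, 0.03),  # lose 10BB
]

def play_hand(stack, opp_stack):
    """Draw one outcome and cap the result by the shorter stack."""
    x, cumulative = random.random(), 0.0
    for result_bb, prob in OUTCOMES:
        cumulative += prob
        if x < cumulative:
            result = result_bb * BB
            return max(-stack, min(result, opp_stack))
    return 0.0

stack, opp_stack = 5000.0, 5000.0
for _ in range(200):
    stack += play_hand(stack, opp_stack)
    if stack <= 0:
        break
print("stack after the walk:", round(stack))
[/ CODE ]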

My intuitive sense is that in no limit tournaments growth is much more influenced by the blinds than it is by relative stack sizes. Especially because I think many tournaments have a fast structure that moves from nearly everyone having a deep stack to nearly everyone having a shortish stack pretty quickly, and the effect of abnormally large stack sizes in both of those situations is minimal.

  #4   10-18-2005, 07:04 AM
Exitonly
Re: Theory again: Let's take a couple of steps back

[ QUOTE ]
Largely that it doesn't take into account the size of the blinds.

[/ QUOTE ]

The current level of blinds shouldn't have any impact on someone's % to win the tournament. (Assuming everyone is equal; skill differences are being neglected for now.)

  #5   10-18-2005, 07:16 AM
SumZero
Re: Theory again: Let's take a couple of steps back

[ QUOTE ]
[ QUOTE ]
Largely that it doesn't take into account the size of the blinds.

[/ QUOTE ]

The current level of blinds shouldn't have any impact on someone's % to win the tournament. (Assuming everyone is equal; skill differences are being neglected for now.)

[/ QUOTE ]

yeah but

[ QUOTE ]
I'm only trying to measure cEV per hand, not win rates per tournament.

[/ QUOTE ]

and it seems clear to me that blind size has a large role to play in cEV/hand.

Also, I disagree with the blinds not having any impact on someone's % to win the tournament. First of all, there is the skill-difference issue: obviously (to me), the smaller the blinds are relative to the stacks, the more skill differences should be able to come to the fore.

But even with identical skill levels, if the blinds are bigger than either player's stack (to take the example to an extreme), then I don't think the win percentages for a player holding 51% of the chips versus 59% of the chips will differ as much as they would if the blinds were 1/10 of 1% of the chips in play.

  #6   10-18-2005, 07:37 AM
Exitonly
Re: Theory again: Let's take a couple of steps back

Doh, I was basing this on his last theory.


I dunno where this one is going w/ cEV per hand...

So Atticus, where is this one going? I'm confused.

  #7   10-18-2005, 01:49 PM
AtticusFinch
Re: Theory again: Let's take a couple of steps back

[ QUOTE ]

My intuitive sense is that in no limit tournaments growth is much more influenced by the blinds than it is by relative stack sizes. Especially because I think many tournaments have a fast structure that moves from nearly everyone having a deep stack to nearly everyone having a shortish stack pretty quickly, and the effect of abnormally large stack sizes in both of those situations is minimal.

[/ QUOTE ]

I'm with you, and I'm looking at ways to adjust the parameters to take this into account. Consider, though, that stack sizes are strongly influenced by blind size. So even if the blind size doesn't appear explicitly, it could be lurking behind the scenes in a calculation based on stack size.

  #8   10-18-2005, 01:52 PM
AtticusFinch
Re: Theory again: Let's take a couple of steps back

[ QUOTE ]
Doh, I was basing this on his last theory.


I dunno where this one is going w/ cEV per hand...

So Atticus, where is this one going? I'm confused.

[/ QUOTE ]

I'm still heading in the same direction, but I'm taking it in smaller steps now. At present, I'm just trying to estimate cEV per hand based on skill and situational factors.

  #9   10-18-2005, 01:53 PM
Guest
Re: Theory again: Let's take a couple of steps back

The more I think about this, the more I think that the random walk solution is as good as it's gonna get. We really need a dataminer to test that.

  #10   10-18-2005, 02:01 PM
AtticusFinch
Re: Theory again: Let's take a couple of steps back

[ QUOTE ]

and it seems clear to me that blind size has a large role to play in cEV/hand.


[/ QUOTE ]

Indeed. The larger the blinds, the more luck is involved. It's just a question of how to factor it in.

One possible way would be simply to include this adjustment in the normal expected growth rate (the r parameter). That's sort of like punting on the issue, of course, but it does keep the model simpler, which is probably a good thing at the outset.

Another option is to factor it into the differential equation somehow, perhaps as some sort of adjustment to the total number of chips.

I suspect either method could be made to work with the right weightings, but I think I'm going to start with the former, as it's pretty easy to do. When I'm estimating r from a dataset, I'll include the blind size as a factor. So I'll take all hands where you had an average stack at blind level b, calculate the average result, and set r to that value.
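Roughly like this, sketched over a hypothetical hand-history layout (the field names here are made up, not taken from any real tracker):

[ CODE ]
from collections import defaultdict

def estimate_r_by_blind_level(hands, tolerance=0.2):
    """Average per-hand result, as a fraction of stack, over hands where the
    player held roughly an average stack, grouped by blind level.
    Each hand is assumed to be a dict with made-up keys:
    'stack', 'avg_stack', 'blind_level', and 'result' (chips won or lost)."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for h in hands:
        # only hands where the hero's stack was within tolerance of average
        if abs(h["stack"] - h["avg_stack"]) <= tolerance * h["avg_stack"]:
            totals[h["blind_level"]] += h["result"] / h["stack"]
            counts[h["blind_level"]] += 1
    return {level: totals[level] / counts[level] for level in totals}
[/ CODE ]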

I'd eventually like to move to the latter method, and add a parameter for blind size into the single equation. Hopefully my testing of the first method can help with this.