Two Plus Two Older Archives  

  #1  
Old 08-30-2005, 12:00 AM
elitegimp
Junior Member
 
Join Date: Apr 2004
Location: boulder, CO
Posts: 14
Poker standard deviation, with a twist

So for those of us who are madly in love with PokerTracker, we are used to thinking about our standard deviation in terms of BB/100. As far as I know, PokerTracker groups your hands in 100s (i.e. hands 1-100, 101-200, 201-300, ...), then does the typical SD calculation of summing up the squares of the differences from the mean and dividing by n or (n-1) or whatever. Is there a way to calculate the standard deviation _per session_ rather than _per 100 hands_? Would such a statistic even make sense?
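
For concreteness, here is a minimal Python sketch of the fixed-block calculation described above (my reading of it, not PokerTracker's actual code), using a hypothetical list of per-100-hand results:

[ code ]
from statistics import mean, stdev

# Hypothetical results, in big bets, for four 100-hand blocks.
bb_per_100_blocks = [2.0, -1.5, 3.0, 0.5]

win_rate = mean(bb_per_100_blocks)     # BB/100
sd_per_100 = stdev(bb_per_100_blocks)  # sample SD, i.e. divides by (n - 1)

print(f"win rate: {win_rate:.2f} BB/100, SD: {sd_per_100:.2f} BB/100")
[/ code ]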

I'm thinking in terms of some sort of weighting, but don't really know how to do it. Here's an example, though, of 400 hands played over 3 sessions:

Session 1 (160 hands):
100 hands, +2 BB
60 hands, -1 BB

Session 2 (180 hands):
40 hands, +3 BB
100 hands, +2 BB
40 hands, +5 BB

Session 3 (60 hands):
60 hands, -3 BB

So my understanding (and this may be wrong) is that PokerTracker groups the hands in this manner:

100 hands: +2 BB
100 hands: +2 BB
100 hands: +2 BB
100 hands: +2 BB

and obviously the SD is 0 BB/100 hands (even if I don't know if it is 0*(1/3) or 0*(1/4) :)).

But if we look at the data as
160 hands: +1 BB
180 hands: +10 BB
60 hands: -3 BB

and use a mean of 0.02 BB/hand, is there a way to get a reasonable standard deviation? I'm thinking along the following steps:

1) Make an Expected Value chart based on 0.02 BB/hand, like so:
160 hands: +3.2 BB
180 hands: +3.6 BB
60 hands: +1.2 BB

2) For each session, find the square of the deviation from the expected value
160 hands: (2.2 BB)^2 [EV of 3.2 BB, actual of 1 BB]
180 hands: (6.4 BB)^2 [EV of 3.6 BB, actual of 10 BB]
60 hands: (4.2 BB)^2 [EV of 1.2 BB, actual of -3 BB]

3) So obviously there is some variation from session to session, but how do you weight it? And how do you interpret the results?

It seems clear to me that the SD should be between 2.2 BB/session and 6.4 BB/session, but beyond that I'm pretty lost.
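
For concreteness, a minimal Python sketch of steps 1 and 2 above on the example data (the weighting in step 3 is left open, as in the post):

[ code ]
# Steps 1 and 2 on the three example sessions: (hands, result in BB).
sessions = [(160, 1.0), (180, 10.0), (60, -3.0)]
mean_per_hand = 0.02  # 8 BB over 400 hands

for hands, actual in sessions:
    ev = mean_per_hand * hands   # step 1: expected result for the session
    sq_dev = (actual - ev) ** 2  # step 2: squared deviation from the EV
    print(f"{hands} hands: EV {ev:+.1f} BB, actual {actual:+.1f} BB, "
          f"squared deviation {sq_dev:.2f} BB^2")
[/ code ]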

Did anybody make it this far with even a vague notion of what I'm trying to do? Anyone have any thoughts?
  #2  
Old 08-30-2005, 03:33 AM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Re: Poker standard deviation, with a twist

[ QUOTE ]
Is there a way to calculate the standard deviation _per session_ rather than _per 100 hands_? Would such a statistic even make sense?
[/ QUOTE ]

This essay by Mason shows you how to compute your SD for variable-length sessions. It is from Gambling Theory and Other Topics. If you want, I can email you a spreadsheet that does this. The following is a derivation of that method, which also yields a form that closely resembles the one for fixed-length sessions.


This is the derivation of the maximum likelihood estimator of the variance for sessions of variable length. It is exactly the same as the textbook derivation for sessions of equal length, except that the variance is multiplied by the session length Ti, and the standard deviation by sqrt(Ti). Here it is (sorry about the ASCII):

Let X be a vector of session results, and Ti the duration of the ith session. Each session result Xi is a random variable, normally distributed with mean Ui = u*Ti and unknown variance Ti*sigma^2, where u and sigma^2 are the mean and variance for one unit of time or one block of hands (e.g. 100 hands). The probability density of a given observation Xi given sigma is:

f(Xi | sigma) = 1/[ sqrt(2*pi*Ti)*sigma ]*exp[ -(Xi - Ui)^2/(2*Ti*sigma^2) ]

This is simply the normal density with the standard deviation replaced by sqrt(Ti)*sigma and the variance replaced by Ti*sigma^2. The conditional probability of a vector of N observations X given sigma, called the likelihood function, is obtained by multiplying N of these together, which causes a sum to appear in the exponential and a product of 1/sqrt(Ti) factors out front.

f(X | sigma) = (2*pi*sigma^2)^(-N/2) * prod[i=1 to N] 1/sqrt(Ti) * exp[ -1/(2*sigma^2) * sum[i=1 to N] (Xi - Ui)^2/Ti ]

To find the value of sigma^2 which maximizes the likelihood function, it is convenient to take the log of the likelihood function and maximize that. The logs of products become sums.

-(N/2)*log(2*pi) - (N/2)*log(sigma^2) - (1/2)*sum[i=1 to N] log(Ti) - 1/(2*sigma^2)*sum[i=1 to N] (Xi - Ui)^2/Ti

Taking the derivative of this with respect to sigma^2 and setting = 0:

-(N/2)*(1/sigma^2) + 1/(2*sigma^4)* sum[i = 1 to N] (Xi - Ui)^2/Ti = 0

sigma^2 = (1/N)*sum[i=1 to N] (Xi - Ui)^2/Ti

Note the similarity of this result to the standard definition of variance for sessions of equal duration. The only differences are that each term inside the sum is divided by the session duration Ti, and the constant mean u has been replaced with Ui which depends on the duration of each session. If the sessions are of equal length, Ti becomes a constant T which can be removed from the sum, and the sum would be divided by NT which is the total number of hours in N sessions.
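
As a quick numeric check (my own sketch in Python, not Mason's spreadsheet), here is this estimator applied to the three sessions from the original post, with Ti in hands; the per-100-hand SD is sqrt(100) times the per-hand figure:

[ code ]
from math import sqrt

# (hands, result in BB) for the three sessions in the original post.
sessions = [(160, 1.0), (180, 10.0), (60, -3.0)]

total_hands = sum(t for t, _ in sessions)
total_bb = sum(x for _, x in sessions)
u = total_bb / total_hands                   # win rate in BB per hand (0.02)

n = len(sessions)
var_per_hand = sum((x - u * t) ** 2 / t for t, x in sessions) / n
sd_per_100 = sqrt(var_per_hand) * sqrt(100)  # scale from 1 hand to 100 hands

print(f"u = {u:.4f} BB/hand, SD = {sd_per_100:.2f} BB/100")  # ~4.29 BB/100
[/ code ]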

To put this in the form found in Mason’s essay, expand the square, and break this into 3 sums:

sigma^2 = (1/N)*sum[i=1 to N] Xi^2/Ti - (2/N)*sum[i=1 to N] Xi*Ui/Ti + (1/N)*sum[i=1 to N] Ui^2/Ti

Since Ui = u*Ti,

sigma^2 = (1/N)*sum[i=1 to N] Xi^2/Ti - (2u/N)*sum[i=1 to N] Xi + (u^2/N)*sum[i=1 to N] Ti

Now sum[i=1 to N] Xi is the sum of the session results, which (with u taken as total results divided by total time) is the same as the hourly rate u times the total hours, or u*sum[i=1 to N] Ti. So the second term is -(2u^2/N)*sum[i=1 to N] Ti, which can be combined with the final term to give Mason's form:

sigma^2 = (1/N)*sum[i=1 to N] Xi^2/Ti - (u^2/N)*sum[i=1 to N] Ti.
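
On the same three example sessions, this rearranged form gives the same number as the direct sum of squared deviations, which makes a handy sanity check (again just a sketch, not Mason's spreadsheet):

[ code ]
# Direct form vs. Mason's rearranged form on the three example sessions.
sessions = [(160, 1.0), (180, 10.0), (60, -3.0)]  # (hands, result in BB)
n = len(sessions)
u = sum(x for _, x in sessions) / sum(t for t, _ in sessions)

var_direct = sum((x - u * t) ** 2 / t for t, x in sessions) / n
var_mason = (sum(x ** 2 / t for t, x in sessions) / n
             - u ** 2 * sum(t for t, _ in sessions) / n)

print(var_direct, var_mason)  # both ~0.1839 BB^2 per hand
[/ code ]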
  #3  
Old 09-04-2005, 02:39 AM
elitegimp
Junior Member
 
Join Date: Apr 2004
Location: boulder, CO
Posts: 14
Follow-up question

First of all, thanks for the link to the article, Bruce. I got a PM from Lexander, and that reminded me that I had ignored this thread after reading Bruce's response. I had hoped to get more input, so here's some more info on what I'm thinking.

1) This thread is going to shift from a poker stat thread to a baseball stat thread. Sorry, I just tossed out the BB/session as an analogy because... well, I dunno why.

2) One of my friends asked me if there was a statistical way to track a hitter's consistency, on a scale from "a very consistent hitter" to "a very streaky hitter." Initially, I thought that a consistent hitter would be one who doesn't stray far from his season-long batting average in any given game... hence the relation between Hits/Game and BB/session.

3) I figured a good comparison to see if this statistic means anything would be between Albert Pujols (in my opinion, he is very consistent) and Brian Roberts (who started the season extremely hot and has cooled off considerably since then).

Okay, so I did this analysis as laid out in the article linked by Bruce and found that the variance from game to game was very similar between the two hitters... meaning (to me, at least) that this was a bad method. It's probably why we (as poker players) measure SD in BB/100 and not BB/session :)

I think it would be interesting to group the ABs into 10s or 20s and calculate the standard deviation in H/10 ABs or H/20 ABs, but that would mean an incredible amount of work for me to cull the data out of game logs (I'll probably end up doing this to satisfy my inner curiosity, but I'm not looking forward to it).
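
If it helps, here is a minimal Python sketch of the H per 10 AB idea with made-up game logs (the per-game hit/AB numbers are hypothetical, and within-game ordering is ignored since a box score doesn't give it):

[ code ]
from statistics import mean, stdev

# Hypothetical (hits, at-bats) per game.
game_logs = [(1, 4), (2, 5), (0, 3), (3, 4), (1, 5), (2, 4), (0, 4), (2, 5)]

# Flatten into individual at-bats: 1 = hit, 0 = out.  Hits are placed
# first within each game, which is fine for a rough sketch.
at_bats = []
for hits, ab in game_logs:
    at_bats += [1] * hits + [0] * (ab - hits)

# Group into blocks of 10 ABs and count hits per block.
blocks = [sum(at_bats[i:i + 10]) for i in range(0, len(at_bats) - 9, 10)]

print(f"mean: {mean(blocks):.2f} H/10 AB, SD: {stdev(blocks):.2f} H/10 AB")
[/ code ]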

The last thing I did, and I think it's really neat, is that I used the data from each game to plot the number of hits each hitter had on the season as a function of his total at-bats.

Example: Player A plays in three games and goes 0-2, 1-4, 4-4, so I plot the points (2,0), (6,1), and (10,5).

I then fit the least-squares line to the data (I don't know the verb form of "linear regression"), and plotted the "actual hits - predicted hits" at each data point. Interestingly, Pujols's error data looked like random noise, varying between -4 and +4 hits, while Roberts's error data had a noticeable "parabolic" curve. It was positive early in the season, when he was running hot, then negative in the middle of the season, and positive again towards the end. He also had a much bigger range than Pujols, varying by as much as +/-8 hits.
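
Here is a minimal sketch of that fit in Python, using the three points from the example above plus a few made-up games appended so the line and residuals are non-trivial:

[ code ]
import numpy as np

# Cumulative at-bats and hits; the first three points are from the
# example above, the rest are hypothetical.
cum_ab = np.array([2, 6, 10, 14, 19, 23])
cum_hits = np.array([0, 1, 5, 6, 8, 9])

slope, intercept = np.polyfit(cum_ab, cum_hits, 1)    # least-squares line
residuals = cum_hits - (slope * cum_ab + intercept)   # actual - predicted hits

r = np.corrcoef(cum_ab, cum_hits)[0, 1]
print(f"slope (~batting average): {slope:.3f}, r = {r:.3f}")
print("residuals:", np.round(residuals, 2))
[/ code ]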

So I think that is very interesting, but I don't really know how to interpret it. I also don't know how to turn that information into a single number. Both models had very high correlation coefficients (0.999 for Pujols, 0.995 for Roberts, I believe), and I don't like the idea of looking at a stat on a scale of 0.99 to 1 or something.

So anyway, I guess my next step is to calculate the standard deviation in H/10 AB. Does anyone have any thoughts? Anything to point out that I may have missed? And does anyone have a good interpretation of the regression data? I'll post some plots tomorrow afternoon if I get a chance.
  #4  
Old 09-04-2005, 05:26 PM
Lexander
Member
 
Join Date: Sep 2003
Posts: 47
Re: Follow-up question

You got an R^2 of .99 (or was it just R?) on real baseball data using linear regression? That tends to be a bit on the high side.

After thinking about what you are doing, are you plotting the number of games played versus the total number of hits for a player?

If so, then you have a bit of a problem. Linear regression assumes independent errors, but your errors are not independent, since the next value depends on the previous value, so your model assumptions aren't correct (if a player is well above his average, the next day he is likely to remain well above average). I'm not sure exactly how wrong your model is without looking at it in more detail, but that would concern me. You really have more of a time-series model.
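
One rough way to see this in the numbers (a sketch with made-up game logs, not the actual Pujols/Roberts data): fit the cumulative data as above and look at the lag-1 autocorrelation of the residuals; a value well away from zero is a sign the independent-errors assumption is violated.

[ code ]
import numpy as np

# Hypothetical per-game at-bats and hits, accumulated over the season.
cum_ab = np.cumsum([4, 5, 3, 4, 5, 4, 4, 5, 3, 4])
cum_hits = np.cumsum([1, 2, 0, 3, 1, 2, 0, 2, 1, 1])

slope, intercept = np.polyfit(cum_ab, cum_hits, 1)
resid = cum_hits - (slope * cum_ab + intercept)

# Correlation between each residual and the next one.
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"lag-1 autocorrelation of residuals: {lag1:.2f}")
[/ code ]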