Two Plus Two Older Archives  

  #1  
Old 03-03-2005, 04:37 PM
jubeirm
Junior Member
 
Join Date: Mar 2005
Posts: 12
Default Computing weighted standard deviation for $/hr

I've been keeping records of my play for a while and computing a "simple" (see below) standard deviation (StdDev). I've also been downloading my playing history, and I realize that from it I can probably get much better information about my volatility. What is the difference between the "simple", weighted, and theoretical-limit StdDev that I discuss below? Which is "best" in a theoretical sense, which is best for our purposes as poker players, and why?

First, let's assume I have the following 5 sessions in my logs. Notice that most sessions are between 1-2 hours, with 2 sessions at the extremes (30 min and 5 hours).

Session 1: +30$ for 1:00 hr = +30.00 $/hr
Session 2: -23$ for 1:30 hr = -15.33 $/hr
Session 3: +12$ for 5:00 hr = + 2.40 $/hr
Session 4: +15$ for 2:15 hr = + 6.67 $/hr
Session 5: -17$ for 0:30 hr = -34.00 $/hr

Now what I am calling a "simple" StdDev is the usual one: the square root of the sum of squared deviations from the average, divided by n-1. For the sessions above, StdDev = 24.08 $/hr.

A weighted StdDev would give more weight to a longer session (e.g. Session 3) and less weight to a short session (e.g. Session 5). For the sessions above the average is -2.05 $/hr, and the weighted StdDev is

Var = Sum{i=1 to 5} [w_i * (x_i - xavg)^2] / (n-1) =
[1*(30 + 2.05)^2 + 1.5*(-15.33 + 2.05)^2 + 5*(2.40 + 2.05)^2
+ 2.25*(6.67 + 2.05)^2 + 0.5*(-34.00 + 2.05)^2] / 4 = 518.12

StdDev = sqrt(Var) = 22.76
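A short Python sketch of both computations, using the session data from this post (the code itself is illustrative, not part of any tracking software):

```python
# Sessions from the post: dollars won and session length in hours.
results = [30.0, -23.0, 12.0, 15.0, -17.0]
hours   = [1.0, 1.5, 5.0, 2.25, 0.5]

rates = [r / h for r, h in zip(results, hours)]   # $/hr per session
n = len(rates)
mean = sum(rates) / n

# "Simple" sample variance: every session counts equally.
simple_var = sum((x - mean) ** 2 for x in rates) / (n - 1)

# Weighted: each squared deviation is scaled by the session length in hours.
weighted_var = sum(h * (x - mean) ** 2 for x, h in zip(rates, hours)) / (n - 1)

print(round(simple_var ** 0.5, 2))    # 24.08
print(round(weighted_var ** 0.5, 2))  # 22.76
```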

Lastly, one could take this as far as physically possible by considering each win/loss and the time between them. For example, if I pull down a big pot ($60) after 5 minutes, fold for 20 minutes, and then lose a large pot ($47) I will have two observations and weights:

w1 = 5/60, ob1 = 60/w1 = 720
w2 = 20/60, ob2 = -47/w2 = -141

Then I can compute a weighted StdDev over all my observations (i.e. wins and losses) for all sessions. The trouble I'm having is how to interpret the results. I'm not sure what the difference between the three is, or how to decide which result I should prefer. Any ideas?

Note: In all the above I used the formula for Sample StdDev. Also, all the above can just as easily be applied to BB/hr, $/100, BB/100, etc.
  #2  
Old 03-03-2005, 10:08 PM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Default Re: Computing weighted standard deviation for $/hr

[ QUOTE ]
Var = Sum{i=1 to 5} [w_i * (x_i - xavg)^2] / (n-1)

[/ QUOTE ]

This is the correct way to compute the unbiased estimate of the variance, meaning that the expected value of this estimator is the true variance for normally distributed data. If you divide by n instead of n-1, you get the maximum-likelihood estimate of the variance, meaning the estimate which maximizes the likelihood of the observed outcomes, again assuming normally distributed data. That would be equivalent to the formula that Mason used to have posted here in the essay section, but it looks like that section has been taken down. It also appears in his book Gambling Theory and Other Topics, with a derivation in the appendix.

Taking the square root of the maximum-likelihood variance estimate gives the maximum-likelihood estimate of the standard deviation; however, the square root of the unbiased variance estimate is not an unbiased estimate of the standard deviation. Dividing by n+1 instead gives the estimate which minimizes the mean squared error. These differences are unimportant in practice, since all three are about equal once you have a large enough number of sessions.
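The three divisors can be compared numerically. In this sketch the session data is simulated from a normal distribution (the sample size and true SD are invented for illustration):

```python
import random

random.seed(1)
data = [random.gauss(0, 20) for _ in range(200)]  # 200 sessions, true SD = 20

n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations

unbiased = ss / (n - 1)   # expected value equals the true variance
mle      = ss / n         # maximizes the likelihood under normality
min_mse  = ss / (n + 1)   # minimizes the mean squared error

# For n this large the three estimates are nearly identical.
print(unbiased, mle, min_mse)
```

The relative gaps between the estimates are fixed ratios, n/(n-1) and (n+1)/(n-1), so they shrink as the number of sessions grows, which is the point about these differences being unimportant in practice.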


[ QUOTE ]
Lastly, one could take this as far as physically possible by considering each win/loss and the time between them. For example, if I pull down a big pot ($60) after 5 minutes, fold for 20 minutes, and then lose a large pot ($47) I will have two observations and weights:

w1 = 5/60, ob1 = 60/w1 = 720
w2 = 20/60, ob2 = -47/w2 = -141

Then I can compute a weighted StdDev over all my observations (i.e. wins and losses) for all sessions. The trouble I'm having is how to interpret the results. I'm not sure what the difference between the three is, or how to decide which result I should prefer. Any ideas?

[/ QUOTE ]

You don't want to use very short time periods because your results will not be normally distributed. The estimators assume normally distributed data. You really want to go the other way and use sessions that are several hours long, so that the results more closely follow a normal distribution.
  #3  
Old 03-04-2005, 12:18 AM
blank frank
Member
 
Join Date: Nov 2004
Posts: 52
Default Re: Computing weighted standard deviation for $/hr

[ QUOTE ]

[ QUOTE ]

Then I can compute a weighted StdDev over all my observations (i.e. wins and losses) for all sessions. The trouble I'm having is how to interpret the results. I'm not sure what the difference between the three is, or how to decide which result I should prefer. Any ideas?

[/ QUOTE ]

You don't want to use very short time periods because your results will not be normally distributed. The estimators assume normally distributed data. You really want to go the other way and use sessions that are several hours long, so that the results more closely follow a normal distribution.

[/ QUOTE ]

Well, if you have all of the individual game results (I play at home, so I'm not sure how this poker-tracking software works), wouldn't you be better off throwing time out the window? Just throw all of your observations into one set and calculate the standard deviation from that.

Or is the statistic of interest measured over the time period, and not over the number of games? I'd still consider throwing them all into one pot and calculating from there.
  #4  
Old 03-04-2005, 12:55 AM
uuDevil
Senior Member
 
Join Date: Jul 2003
Location: Remembering P. Tillman
Posts: 246
Default Re: Computing weighted standard deviation for $/hr

[ QUOTE ]
That would be equivalent to the formula that Mason used to have posted here in the essay section, but it looks like that section has been taken down.

[/ QUOTE ]

The link is missing, but the page is still there:

Mason's SD Essay
  #5  
Old 03-04-2005, 01:16 AM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Default Re: Computing weighted standard deviation for $/hr

[ QUOTE ]
[ QUOTE ]

[ QUOTE ]

Then I can compute a weighted StdDev over all my observations (i.e. wins and losses) for all sessions. The trouble I'm having is how to interpret the results. I'm not sure what the difference between the three is, or how to decide which result I should prefer. Any ideas?

[/ QUOTE ]

You don't want to use very short time periods because your results will not be normally distributed. The estimators assume normally distributed data. You really want to go the other way and use sessions that are several hours long, so that the results more closely follow a normal distribution.

[/ QUOTE ]

Well, if you have all of the individual game results (I play at home, so I'm not sure how this poker-tracking software works), wouldn't you be better off throwing time out the window? Just throw all of your observations into one set and calculate the standard deviation from that.

Or is the statistic of interest measured over the time period, and not over the number of games? I'd still consider throwing them all into one pot and calculating from there.

[/ QUOTE ]

I'm not sure what you mean by "throw them all into one pot". What is your formula for the variance? When we compute the variance, we must sum over N data points, and the question is how we obtain those N data points. I hope you are not suggesting that we use the result of each individual hand as a data point. Remember, when we compute the variance, we are assuming that our observations are distributed by a normal distribution, and we are attempting to estimate the variance of that distribution. Your results per hand, or even per hour, will not satisfy the assumption of normality.

If your observations correspond to different time periods, or different numbers of hands, then you must take these different time periods or numbers of hands into account properly when you compute your variance. The formula given above shows how to do this. If you throw away this information, then you are essentially assuming that all observations correspond to equal time periods, or an equal number of hands, and your result will be incorrect. Even if we have data for every hand, we need to group these results into, say, 100-hand samples to compute a variance in units of bb^2/100 hands. If we want units of bb^2/hr, then it is better to have samples corresponding to several hours, so that the central limit theorem has time to kick in and better validate the assumption of normality.
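A sketch of the grouping step described here, in Python. The per-hand results below are simulated from an invented discrete distribution, just to show the mechanics:

```python
import random

random.seed(2)
# Simulated per-hand results in big bets (the distribution is made up).
hands = [random.choice([-1.0, -1.0, -0.5, 0.0, 0.0, 0.0, 0.0, 3.0])
         for _ in range(10_000)]

# Group per-hand results into 100-hand blocks and sum each block,
# so each data point is a result in bb per 100 hands.
block = 100
totals = [sum(hands[i:i + block]) for i in range(0, len(hands), block)]

n = len(totals)
mean = sum(totals) / n
var_per_100 = sum((t - mean) ** 2 for t in totals) / (n - 1)  # bb^2 / 100 hands
sd_per_100 = var_per_100 ** 0.5                               # bb / 100 hands
print(sd_per_100)
```

Each block total is a sum of 100 per-hand results, so by the central limit theorem the block totals are much closer to normal than the individual hands are.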
  #6  
Old 03-04-2005, 01:19 AM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Default Re: Computing weighted standard deviation for $/hr

[ QUOTE ]
[ QUOTE ]
That would be equivalent to the formula that Mason used to have posted here in the essay section, but it looks like that section has been taken down.

[/ QUOTE ]

The link is missing, but the page is still there:

Mason's SD Essay

[/ QUOTE ]

You're not supposed to peek at the man behind the curtain. [img]/images/graemlins/smile.gif[/img]
  #7  
Old 03-04-2005, 03:02 AM
jason1990
Senior Member
 
Join Date: Sep 2004
Posts: 205
Default Re: Computing weighted standard deviation for $/hr

I'm not sure what you mean here, but I see nothing wrong with using the result of each hand as a data point, as long as you have enough hands. You are not assuming normality when you use the empirical standard deviation to estimate the true standard deviation. As long as the law of large numbers applies, the empirical SD will converge to the true SD. And if the central limit theorem applies, the error (between the estimated variance and the true variance) will be approximately normal.
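A small simulation of this point in Python. The per-hand distribution below is invented and deliberately non-normal (it takes only three values), yet the empirical SD still converges to the true SD:

```python
import random

random.seed(3)
# Invented per-hand distribution: three outcomes with fixed probabilities.
outcomes, probs = [-1.0, 0.0, 2.0], [0.5, 0.3, 0.2]
true_mean = sum(p * x for x, p in zip(outcomes, probs))
true_var = sum(p * x * x for x, p in zip(outcomes, probs)) - true_mean ** 2

# Empirical SD with each simulated hand as a data point.
hands = random.choices(outcomes, weights=probs, k=100_000)
n = len(hands)
mean = sum(hands) / n
emp_sd = (sum((x - mean) ** 2 for x in hands) / (n - 1)) ** 0.5

# emp_sd is close to the true SD even though no single hand is normal.
print(emp_sd, true_var ** 0.5)
```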
  #8  
Old 03-04-2005, 09:56 AM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Default Re: Computing weighted standard deviation for $/hr

[ QUOTE ]
I'm not sure what you mean here, but I see nothing wrong with using the result of each hand as a data point, as long as you have enough hands. You are not assuming normality when you use the empirical standard deviation to estimate the true standard deviation.

[/ QUOTE ]

I am assuming normality when I use the empirical variance as the maximum likelihood estimator.
  #9  
Old 03-04-2005, 10:46 AM
jason1990
Senior Member
 
Join Date: Sep 2004
Posts: 205
Default Re: Computing weighted standard deviation for $/hr

Okay, I see. For blank frank's sake, I'd just like to point out that one doesn't need to use a maximum likelihood estimator in order to have an accurate estimate of the variance. In fact, with a large number of hands, I believe that in most situations you're likely to get a better estimate by using the unbiased estimator with each hand as a data point than by breaking the hands into blocks of 100 or more and assuming normality.
  #10  
Old 03-04-2005, 11:17 AM
BruceZ
Senior Member
 
Join Date: Sep 2002
Posts: 1,636
Default Re: Computing weighted standard deviation for $/hr

[ QUOTE ]
In fact, with a large number of hands, I believe that in most situations you're likely to get a better estimate by using the unbiased estimator with each hand as a data point than by breaking the hands into blocks of 100 or more and assuming normality.

[/ QUOTE ]

Where do you get that your estimate is unbiased? You have only stated that your estimate approaches the true variance as n -> infinity. This is not the same as being unbiased, which means that the expected value of your estimator is equal to the true variance for any n. How do you know that your estimator is unbiased if you don't even know what your underlying distribution is? We get an unbiased estimator from the maximum-likelihood estimator by multiplying by n/(n-1). Proving this again requires the assumption of normality.

Dividing by n+1 instead of n minimizes the mean squared error of the estimate (over any estimate using n sessions, again assuming normality). All three of these estimates will be almost the same value for reasonable n. In what sense is your estimate "better", and how do you come to the conclusion that it is better in "most situations"? I suspect that proving this will depend on your exact distribution, or at least on the 4th moment.