Two Plus Two Older Archives  

  #1  
Old 05-28-2003, 04:47 AM
Poseidon65 Poseidon65 is offline
Junior Member
 
Join Date: Jan 2003
Posts: 13
Default Standard Deviation vs. Monte Carlo Simulation

I've read that many players think it's important to know your standard deviation, so you can tell whether your swings fall within the range that should be expected given your variance.

However, standard deviation doesn't seem very useful for this purpose, because the distribution of each individual hand is not even close to normal. I suppose you could argue (using the Central Limit Theorem) that if you play enough hands, the non-normality of the distribution stops mattering, but I'm not sure to what degree that's true. So I thought I'd try something that might be more robust than standard deviation here.

So I've been logging the result of every Paradise hand I've played for the last few days (about 450 hands). Once I play a few thousand hands, I should have enough data to determine with good accuracy what the distribution of an individual hand looks like, for me at least.

Once I have this, suppose I play 100 hands and get some result (for example, down 25BB), and I want to know the probability of that happening. I run a Monte Carlo simulation: draw 100 hands at random from my empirical distribution, repeat 1,000,000 times, and then simply see what percentage of those 1,000,000 results lie above -25BB for the 100 hands.
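The resampling procedure described above can be sketched in a few lines of Python. The per-hand results here are made up for illustration (the real ones would come from the hand log), and the simulation count is reduced from 1,000,000 to keep memory modest; the idea is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-hand results in big bets, standing in for the logged
# Paradise hands: mostly 0BB (folds), plus smaller and larger swings.
hand_results = np.concatenate([
    np.zeros(300),                    # hands folded early
    rng.normal(-1.0, 2.0, size=100),  # small wins/losses
    rng.normal(3.0, 4.0, size=50),    # hands taken further
])

n_hands = 100      # session length to simulate
n_sims = 100_000   # repetitions (1,000,000 works too, chunked to save memory)

# Draw n_sims sessions of n_hands each, with replacement, from the
# empirical distribution, and total each simulated session.
sessions = rng.choice(hand_results, size=(n_sims, n_hands), replace=True)
session_totals = sessions.sum(axis=1)

# Fraction of simulated 100-hand sessions finishing above -25BB.
p_above = (session_totals > -25).mean()
print(f"P(result > -25BB over {n_hands} hands) = {p_above:.4f}")
```

A session result down at -25BB would then be judged by how small (1 - p_above) comes out, rather than by counting standard deviations.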

I wonder if anyone's ever thought of doing this, or if it sounds like a waste of time. I definitely want to see how normal the distribution of an entire session is, and this seems like a pretty good way to find out. It also seems this would be more accurate for judging just how good or bad a particular session really was.
  #2  
Old 05-28-2003, 11:22 AM
ResidentParanoid ResidentParanoid is offline
Senior Member
 
Join Date: Nov 2002
Location: MidWest USA
Posts: 562
Default Re: Standard Deviation vs. Monte Carlo Simulation


Estimating all of the points in a distribution is a very hard problem: hard in the sense that you need an overwhelmingly large amount of data to estimate all of those probabilities precisely. For example, I'm guessing you would break your results down into outcomes like -20BB, -19BB, -18BB, ..., +20BB, so you have something like 40 or 50 (or more) probabilities to estimate. You need an awful lot of data to estimate that many quantities precisely. For example, to get a 90% confidence interval with a precision of +/- 0.01 when estimating a probability around 0.05, you need around 1300 data points. It turns out that probabilities close to 0 and 1 are easier to estimate than ones close to 0.5.
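The 1300 figure follows from the standard normal-approximation formula for a binomial proportion, n = z^2 p(1-p) / e^2, where e is the desired half-width. A quick check with only the standard library:

```python
from math import ceil
from statistics import NormalDist

def sample_size(p: float, half_width: float, confidence: float = 0.90) -> int:
    """Data points needed so the normal-approximation confidence interval
    for a proportion p has the given half-width: n = z^2 p(1-p) / hw^2."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # two-sided critical value
    return ceil(z * z * p * (1.0 - p) / half_width ** 2)

print(sample_size(0.05, 0.01))  # roughly 1300, matching the figure above
print(sample_size(0.50, 0.01))  # several times larger: p near 0.5 is hardest
```

Since p(1-p) peaks at p = 0.5, the same formula also shows why probabilities near 0 or 1 are the easy ones to pin down.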

Also, if you use the same data to estimate all of the probabilities, your estimates end up correlated with one another.

Having said that, it still is interesting. And even for very non-normal data, averages of 100 observations are most often very close to normal, or well approximated by one. Recall that most of your results are going to be "0" if you are playing correctly, leading to a very skewed distribution for the outcome of an individual hand. You'd probably get better results if you focused on hands you were actually involved in. But then you'd have much less data...
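That CLT point is easy to check numerically. Using a made-up skewed per-hand distribution (lots of zeros, occasional large swings, standing in for real logged results), the skewness of 100-hand sums comes out far closer to zero than the skewness of the raw hands:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical skewed per-hand results: 80% folds (0BB), plus two
# mixture components for contested hands.
hands = np.concatenate([
    np.zeros(8000),
    rng.normal(-1, 3, 1000),
    rng.normal(4, 5, 1000),
])

def skewness(x: np.ndarray) -> float:
    """Sample skewness: third central moment over variance^(3/2)."""
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

# Total many 100-hand blocks; by the CLT these sums should look much
# more symmetric than the raw per-hand distribution.
sums = rng.choice(hands, size=(50_000, 100), replace=True).sum(axis=1)

print(f"per-hand skewness:     {skewness(hands):.2f}")
print(f"100-hand sum skewness: {skewness(sums):.2f}")
```

Roughly speaking, the skewness of a sum of n independent draws shrinks like 1/sqrt(n), so 100 hands already removes most of the asymmetry.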

Have fun...